2308.12832
Accurate numerical evaluation of systematics in the experiment for electron electric dipole moment measurement in HfF$^+$
The hyperfine structure of the ground rotational level of the metastable $^3\Delta_1$ electronic state of the $^{180}$HfF$^+$ ion is calculated in the presence of variable external electric and magnetic fields. The calculations are required for the analysis of systematic effects in the experiment searching for the electron electric dipole moment ($e$EDM). Different perturbations in molecular spectra important for $e$EDM spectroscopy are taken into account.
Alexander N. Petrov
2023-08-24T14:45:49Z
http://arxiv.org/abs/2308.12832v1
Accurate numerical evaluation of systematics in the experiment for electron electric dipole moment measurement in HfF\({}^{+}\)

###### Abstract

The hyperfine structure of the ground rotational level of the metastable \({}^{3}\Delta_{1}\) electronic state of the \({}^{180}\)HfF\({}^{+}\) ion is calculated in the presence of variable external electric and magnetic fields. The calculations are required for the analysis of systematic effects in the experiment searching for the electron electric dipole moment (\(e\)EDM). Different perturbations in molecular spectra important for \(e\)EDM spectroscopy are taken into account.

## I Introduction

Measuring the electron electric dipole moment (\(e\)EDM) serves as a highly sensitive probe for testing the boundaries of the Standard Model of electroweak interactions and its extensions [1; 2; 3]. The current constraint on the \(e\)EDM, \(|d_{\rm e}|<4.1\times 10^{-30}\) \(e\cdot\)cm (90% confidence), was obtained using trapped \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) ions [4] with the spinless \({}^{180}\)Hf isotope. The measurements were performed on the ground rotational level, \(J=1\), of the metastable electronic \({}^{3}\Delta_{1}\) state. In essence, the \(e\)EDM measurement is highly accurate spectroscopy of the \(J=1\) level in the presence of rotating electric and magnetic fields. Clearly, accurate evaluation of systematic effects becomes increasingly important as the statistical sensitivity grows. A large part of the success achieved in solving this problem in the HfF\({}^{+}\) experiment is due to the existence of close levels of opposite parity, the so-called \(\Omega\)-doublets. In Ref. [5] possible systematic shifts in the experiment were considered in detail and the corresponding analytical formulas were obtained. In turn, in Refs. [6; 7] a numerical method for calculating the \(J=1\) hyperfine energy levels in rotating fields was developed. The method demonstrated very high accuracy in comparison with the latest experimental data [8]. The goal of the present work is to study selected systematics numerically, taking into account different perturbations in the molecular spectra.

The \(e\)EDM-sensitive levels of \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) are described in detail in Refs. [5; 9; 10]. The \({}^{180}\)Hf isotope is spinless; the \({}^{19}\)F isotope has a non-zero nuclear spin \(I=1/2\). The hyperfine energy splitting between the levels with total momentum \(F=3/2\) and \(F=1/2\), \({\bf F}={\bf J}+{\bf I}\), is several tens of megahertz. In the absence of external fields, each hyperfine level has two parity eigenstates known as the \(\Omega\)-doublets. In an external _static_ electric field the \(F=3/2\) states form two Stark doublet levels, with the absolute value of the projection of the total momentum on the direction of the electric field, \(m_{F}\), equal to one half and three halves. Below, the levels in the doublets are called upper and lower according to their energies. The upper and lower levels in each doublet are doubly degenerate: the two Zeeman sublevels connected by time reversal, \(m_{F}\rightarrow-m_{F}\), have the same energy. The levels \(m_{F}=\pm 3/2\) are of interest for the \(e\)EDM search experiment. The corresponding energy scheme is depicted in Fig. 1 of Ref. [8]. The picture above is for a static electric field. Now let us take into account the fact that the fields in the experiment are rotating. The rotation of the electric field causes the degenerate sublevels \(m_{F}=+3/2\) and \(m_{F}=-3/2\) to interact [9].
Therefore, in the case of a _rotating_ electric field the eigenstates have slightly different energies and are equal mixtures of the \(m_{F}=\pm 3/2\) sublevels, which are insensitive to the \(e\)EDM. Note that in the case of a rotating electric field \(m_{F}\) is the projection on an axis (coinciding with the rotating electric field) that itself rotates in space. In turn, the rotating magnetic field, which in the experiment is parallel or antiparallel to the rotating electric field, gives opposite energy shifts to \(m_{F}=+3/2\) and \(m_{F}=-3/2\); for a sufficiently large magnetic field \(m_{F}\) becomes a good quantum number (as in static fields) and the corresponding eigenstates again become sensitive to the \(e\)EDM. We see that the magnetic field, in contrast to experiments in static fields, is not only an auxiliary tool: it must ensure a nonzero energy shift due to a possible nonzero value of the \(e\)EDM [7; 10]. To completely polarize the molecule and to access the maximum \(e\)EDM signal, both the rotating electric and magnetic fields should be large enough, see e.g. Fig. 2 in Ref. [7]. For these fields the energy splitting, \(f\), between the \(m_{F}=\pm 3/2\) sublevels is dominated by the Zeeman interaction, with a smaller contribution coming from the fact that rotating fields are used.

The measurement of \(f\) is repeated under different conditions which depend on the binary switch parameters \(\tilde{\mathcal{B}}\), \(\tilde{\mathcal{D}}\), \(\tilde{\mathcal{R}}\) being switched from \(+1\) to \(-1\) (see Refs. [4; 10] for details). \(\tilde{\mathcal{B}}=+1(-1)\) means that the rotating magnetic field, \({\bf B}_{\rm rot}\), is parallel (antiparallel) to the rotating electric field \({\bf E}_{\rm rot}\); \(\tilde{\mathcal{D}}=+1(-1)\) means that the measurement was performed on the lower (upper) Stark level; and \(\tilde{\mathcal{R}}\) defines the direction of rotation of the fields around the laboratory \(z\) axis: \(\vec{\omega}_{\rm rot}=\tilde{\mathcal{R}}\omega_{\rm rot}\hat{z}\), where \(\vec{\omega}_{\rm rot}\) is the angular velocity. The measured value of \(f\) can be expanded as

\[f(\tilde{\mathcal{D}},\tilde{\mathcal{B}},\tilde{\mathcal{R}})=f^{0}+\tilde{\mathcal{D}}f^{\mathcal{D}}+\tilde{\mathcal{B}}f^{\mathcal{B}}+\tilde{\mathcal{R}}f^{\mathcal{R}}+\tilde{\mathcal{B}}\tilde{\mathcal{D}}f^{\mathcal{BD}}+\tilde{\mathcal{D}}\tilde{\mathcal{R}}f^{\mathcal{DR}}+\tilde{\mathcal{B}}\tilde{\mathcal{R}}f^{\mathcal{BR}}+\tilde{\mathcal{D}}\tilde{\mathcal{B}}\tilde{\mathcal{R}}f^{\mathcal{BDR}}, \tag{1}\]

where the notation \(f^{S_{1}S_{2}\dots}\) denotes a component which is odd under the switches \(S_{1},S_{2},\dots\) and can be calculated by the formula

\[f^{S_{1}S_{2}\dots}=\frac{1}{8}\sum_{\tilde{\mathcal{B}},\tilde{\mathcal{D}},\tilde{\mathcal{R}}}S_{1}S_{2}\dots f(\tilde{\mathcal{D}},\tilde{\mathcal{B}},\tilde{\mathcal{R}}). \tag{2}\]

The \(e\)EDM signal manifests itself as a contribution to the \(f^{\mathcal{BD}}\) channel according to

\[f^{\mathcal{BD}}=2d_{e}E_{\mathrm{eff}}, \tag{3}\]

where \(E_{\mathrm{eff}}\) is the effective electric field, which can be obtained only from precise calculations of the electronic structure. The values \(E_{\mathrm{eff}}=24\) GV/cm [11; 12], \(22.5(0.9)\) GV/cm [13] and \(22.7(1.4)\) GV/cm [14] were obtained. According to Eq. (2),

\[f^{\mathcal{BD}}=\frac{1}{8}\sum_{\tilde{\mathcal{B}},\tilde{\mathcal{D}},\tilde{\mathcal{R}}}\tilde{\mathcal{B}}\tilde{\mathcal{D}}f(\tilde{\mathcal{D}},\tilde{\mathcal{B}},\tilde{\mathcal{R}}). \tag{4}\]
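As a concrete illustration of Eqs. (1)-(4), the following minimal sketch (ours, not part of the original analysis; the function names and toy numbers are invented) recovers the switch-odd channels from eight splittings measured under all switch states:

```python
# Minimal sketch (ours): recover the switch-odd channels of Eq. (2) from
# eight measurements f(D, B, R), one per binary switch state.
from itertools import product

PARITIES = {"0": (), "D": ("D",), "B": ("B",), "R": ("R",),
            "BD": ("B", "D"), "DR": ("D", "R"), "BR": ("B", "R"),
            "BDR": ("B", "D", "R")}

def switch_components(f):
    """f maps (D, B, R) tuples of +/-1 switch values to splittings in Hz."""
    comps = {}
    for label, switches in PARITIES.items():
        total = 0.0
        for D, B, R in product((+1, -1), repeat=3):
            values = {"D": D, "B": B, "R": R}
            sign = 1
            for s in switches:
                sign *= values[s]
            total += sign * f[(D, B, R)]
        comps[label] = total / 8.0   # Eq. (2)
    return comps

# Toy data: a 77 Hz Zeeman splitting plus a small BD-odd (eEDM-like) shift.
f = {(D, B, R): 77.0 + 1.0e-4 * B * D for D, B, R in product((+1, -1), repeat=3)}
print(switch_components(f)["BD"])   # -> 1e-4, the eEDM channel of Eq. (3)
```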
Beyond the \(e\)EDM there are many systematics which contribute to \(f^{\mathcal{BD}}\) and thus mimic the \(e\)EDM signal [5]. The point is that measuring the other components, \(f^{0}\) (even under all switches), \(f^{\mathcal{D}}\), \(f^{\mathcal{B}}\) and others, together with their theoretical analysis, can tell us the size of the systematic effects and perhaps suggest a way to take them into account [5; 8].

## II Theoretical methods

Following Refs. [6; 7; 15], the energy levels and wave functions of the \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) ion are obtained by numerical diagonalization of the molecular Hamiltonian (\(\mathbf{\hat{H}}_{\mathrm{mol}}\)) in the external variable electric \(\mathbf{E}(t)\) and magnetic \(\mathbf{B}(t)\) fields over the basis set of electronic-rotational wavefunctions

\[\Psi_{\Omega}\theta^{J}_{M,\Omega}(\alpha,\beta)U^{F}_{M_{I}}. \tag{5}\]

Here \(\Psi_{\Omega}\) is the electronic wavefunction, \(\theta^{J}_{M,\Omega}(\alpha,\beta)=\sqrt{(2J+1)/4\pi}D^{J}_{M,\Omega}(\alpha,\beta,\gamma=0)\) is the rotational wavefunction, \(\alpha,\beta,\gamma\) are the Euler angles, \(U^{F}_{M_{I}}\) is the fluorine nuclear spin wavefunction, \(M\) (\(\Omega\)) is the projection of the molecular angular momentum, \(\mathbf{J}\), on the lab \(\hat{z}\) (internuclear \(\hat{n}\)) axis, and \(M_{I}=\pm 1/2\) is the projection of the nuclear angular momentum on the same axis. Note that \(M_{F}=M_{I}+M\) is not equal to \(m_{F}\); the latter, as stated above, is the projection of the total momentum on the rotating electric field. The molecular Hamiltonian for \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) reads

\[\mathbf{\hat{H}}_{\mathrm{mol}}=\mathbf{\hat{H}}_{\mathrm{el}}+\mathbf{\hat{H}}_{\mathrm{rot}}+\mathbf{\hat{H}}_{\mathrm{hfs}}+\mathbf{\hat{H}}_{\mathrm{ext}}. \tag{6}\]

Here \(\mathbf{\hat{H}}_{\mathrm{el}}\) is the electronic Hamiltonian, \(\mathbf{\hat{H}}_{\mathrm{rot}}\) is the Hamiltonian of the rotation of the molecule, \(\mathbf{\hat{H}}_{\mathrm{hfs}}\) is the hyperfine interaction between the electrons and the fluorine nucleus as described in Ref. [6], and \(\mathbf{\hat{H}}_{\mathrm{ext}}\) describes the interaction of the molecule with the variable magnetic and electric fields as described in Ref. [7]. In this paper the time-dependent electric and magnetic fields lie in the \(xy\) plane. Depending on the particular form of the time dependence, the interaction with the fields is taken into account within two approaches. In the first one a transition to the rotating frame is performed, whereas in the second approach the rotating electromagnetic field is quantized. Only static fields parallel to \(\vec{\omega}_{\mathrm{rot}}\) (the \(\hat{z}\) axis) are allowed in the first scheme, whereas the second approach is valid for arbitrary static, rotating and oscillating fields with arbitrary directions and frequencies [7]. Following Ref. [6], we considered the low-lying electronic basis states \({}^{3}\Delta_{1}\), \({}^{3}\Delta_{2}\), \({}^{3}\Pi_{0^{+}}\) and \({}^{3}\Pi_{0^{-}}\). \(\mathbf{\hat{H}}_{\mathrm{el}}\) is diagonal in the basis set (5); its eigenvalues are the transition energies of these states, calculated and measured in Ref. [16]:

\[\begin{split}{}^{3}\Delta_{1}:T_{e}&=976.930\ \mathrm{cm}^{-1},\\ {}^{3}\Delta_{2}:T_{e}&=2149.432\ \mathrm{cm}^{-1},\\ {}^{3}\Pi_{0^{-}}:T_{e}&=10212.623\ \mathrm{cm}^{-1},\\ {}^{3}\Pi_{0^{+}}:T_{e}&=10401.723\ \mathrm{cm}^{-1}.\end{split} \tag{7}\]
Electronic matrix elements for the calculation of the molecular Hamiltonian were taken from Ref. [6], except for the hyperfine structure constant \(A_{\parallel}=-62.0\) MHz measured in Ref. [10].

## III Results

### Non-reversing magnetic field

In the experiment the rotating magnetic field, \(\mathbf{B}_{\mathrm{rot}}\), is parallel or antiparallel to the rotating electric field \(\mathbf{E}_{\mathrm{rot}}\). In the ideal case the absolute value of the magnetic field remains the same after reversal. In the presence of a non-reversing component the absolute values for the two directions are different. A non-reversing magnetic field makes an additional contribution to \(f^{\mathcal{BD}}\), which leads to a systematic effect, as well as to the \(f^{\mathcal{B}}\) component. Both shifts are proportional to the non-reversing component of \(\mathbf{B}_{\mathrm{rot}}\), and according to Ref. [5] their ratio is

\[\frac{f^{\mathcal{B}}}{f^{\mathcal{BD}}}=\frac{\mathrm{g}^{u}+\mathrm{g}^{l}}{\mathrm{g}^{u}-\mathrm{g}^{l}}. \tag{8}\]

Here \(\mathrm{g}^{u}\) and \(\mathrm{g}^{l}\) are the g-factors of the upper and lower Stark doublets in the external electric field. Thus, one can remove this systematic by monitoring the relatively large \(f^{\mathcal{B}}\) component and applying a correction to \(f^{\mathcal{BD}}\) on the basis of Eq. (8). For the numerical calculation of this effect we, following the first approach mentioned above, perform a transition to the rotating frame. In this case the rotating fields are replaced by static ones in the rotating frame:

\[\mathbf{E}_{\mathrm{rot}}(t)=\mathcal{E}_{\mathrm{rot}}\left(\hat{x}\cos(\omega_{\mathrm{rot}}t)+\hat{y}\sin(\omega_{\mathrm{rot}}t)\right)\rightarrow\mathcal{E}_{\mathrm{rot}}\hat{X}, \tag{9}\]

\[\mathbf{B}_{\mathrm{rot}}(t)=\mathcal{B}_{\mathrm{rot}}\left(\hat{x}\cos(\omega_{\mathrm{rot}}t)+\hat{y}\sin(\omega_{\mathrm{rot}}t)\right)\rightarrow\mathcal{B}_{\mathrm{rot}}\hat{X}, \tag{10}\]

and the perturbation

\[\hat{V}=-\vec{\omega}_{\mathrm{rot}}\cdot\mathbf{F}=-\omega_{\mathrm{rot}}\hat{F}_{Z} \tag{11}\]

due to the rotation is added to the Hamiltonian. Here \(X\), \(Y\), \(Z\) are the axes of the rotating frame. The calculated ratio \(f^{\mathcal{B}}/f^{\mathcal{BD}}\) as a function of \(f^{0}\) is presented in Fig. 1, together with the calculated ratio \(f^{\mathcal{D}}/f^{0}\) and the calculated value \((\mathrm{g}^{u}+\mathrm{g}^{l})/(\mathrm{g}^{u}-\mathrm{g}^{l})=-473\). In the calculation \(\omega_{\mathrm{rot}}/2\pi=+375\) kHz and \(\mathcal{E}_{\mathrm{rot}}=+58\) V/cm, which correspond to the values used in the experiment. For the values \(f^{0}=77\) Hz, \(105\) Hz and \(151\) Hz used in the experiment [4], we obtain \(f^{\mathcal{B}}/f^{\mathcal{BD}}=-481\), \(-473\) and \(-469\), respectively. The latter value corresponds to the solid (black) curve in Fig. 4 of Ref. [8]. The values are not identical to each other, or to \((\mathrm{g}^{u}+\mathrm{g}^{l})/(\mathrm{g}^{u}-\mathrm{g}^{l})\), because of the rotation perturbation (11). As the Zeeman splitting \(f^{0}\) increases, the ratios \(f^{\mathcal{B}}/f^{\mathcal{BD}}\) and \(f^{\mathcal{D}}/f^{0}\) approach their saturated value \(-465\), which differs from \((\mathrm{g}^{u}+\mathrm{g}^{l})/(\mathrm{g}^{u}-\mathrm{g}^{l})=-473\) by \(8\).
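The role of the rotation term (11) can be caricatured by a two-level model (ours; it is not the full diagonalization of Eq. (6) performed in Refs. [6; 7]): the \(m_{F}=\pm 3/2\) sublevels carry opposite Zeeman (and \(e\)EDM) shifts on the diagonal, while the rotation of the fields induces an effective coupling between them.

```python
# Schematic two-level model (ours, not the full diagonalization of Eq. (6)):
# opposite diagonal shifts for mF = +/-3/2 plus a rotation-induced coupling.
import numpy as np

def splitting(f_zeeman, delta_rot, f_edm=0.0):
    """Eigenvalue splitting f (Hz) of the 2x2 model."""
    H = np.array([[+0.5 * (f_zeeman + f_edm), 0.5 * delta_rot],
                  [0.5 * delta_rot, -0.5 * (f_zeeman + f_edm)]])
    e = np.linalg.eigvalsh(H)
    return e[1] - e[0]

# Weak magnetic field: the coupling dominates, the eigenstates are equal
# mixtures of mF = +/-3/2 and the eEDM contribution is quenched.
print(splitting(f_zeeman=1.0, delta_rot=10.0))
# Experimental regime (f0 ~ 77 Hz): mF is again a good quantum number and
# the eEDM shift survives in the BD channel.
print(splitting(f_zeeman=77.0, delta_rot=10.0))
```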
### The second and higher harmonics of \(\mathcal{E}_{\mathrm{rot}}\)

According to the theory of Ref. [5], an additional electric field oscillating in the \(xy\) plane at the doubled frequency \(2\omega_{\mathrm{rot}}\), together with a static magnetic field in the same plane, makes an additional contribution to \(f^{\mathcal{B}}\) but no contribution to \(f^{\mathcal{BD}}\), which formally does not lead to a systematic effect. However, applying the correction (8) on the basis of the observed \(f^{\mathcal{B}}\) does affect the measurement of the \(f^{\mathcal{BD}}\) component [5]. To calculate this effect we use variable fields which, in addition to the components rotating in the \(xy\) plane with frequency \(\omega_{\mathrm{rot}}\) (see Eqs. (9), (10)), consist of a static magnetic field component along the laboratory \(x\) axis and an electric field with components along the \(x\) and \(y\) axes which oscillate with frequency \(2\omega_{\mathrm{rot}}\) and have a phase \(\varphi\) relative to the rotating component:

\[\mathbf{E}(t)=\mathcal{E}_{\mathrm{rot}}\left(\hat{x}\cos(\omega_{\mathrm{rot}}t)+\hat{y}\sin(\omega_{\mathrm{rot}}t)\right)+\mathcal{E}_{x}\hat{x}\cos(2\omega_{\mathrm{rot}}t+\varphi)+\mathcal{E}_{y}\hat{y}\cos(2\omega_{\mathrm{rot}}t+\varphi), \tag{12}\]

\[\mathbf{B}(t)=\mathrm{B}_{x}\hat{x}+\mathcal{B}_{\mathrm{rot}}\left(\hat{x}\cos(\omega_{\mathrm{rot}}t)+\hat{y}\sin(\omega_{\mathrm{rot}}t)\right). \tag{13, 14}\]

Below we put \(\omega_{\mathrm{rot}}/2\pi=+375\) kHz, \(\mathcal{E}_{\mathrm{rot}}=+58\) V/cm and \(\mathcal{B}_{\mathrm{rot}}=\pm 6\) mG (corresponding to \(f^{0}=77\) Hz), which are the values used in the experiment [4], and \(B_{x}=14\) mG, \(\mathcal{E}_{x}=\mathcal{E}_{y}\), \(\mathcal{E}_{x}/\mathcal{E}_{\mathrm{rot}}=10^{-2}\). Note that \(\omega_{\mathrm{rot}}\) and \(\mathcal{E}_{\mathrm{rot}}\) are always positive. In this and the following subsections the time dependence of the external fields is accounted for through the interaction with the corresponding quantized electromagnetic fields, i.e. the second approach described in Ref. [7]. In Fig. 2 the calculated values of \(f^{\mathcal{BD}}\) and \(f^{\mathcal{B}}\) as functions of the phase \(\varphi\) are given. The calculated \(f^{\mathcal{B}}\) is in agreement with Fig. 3, panel B of Ref. [4]. The general behavior in the presence of a static magnetic field along the \(y\) axis is given by Eq. (37) of Ref. [5]. Our calculation also indicates a nonzero value of \(f^{\mathcal{BD}}\), with the ratio \(f^{\mathcal{B}}/f^{\mathcal{BD}}=-16000\). According to the theory of Ref. [5], a nonzero value of \(f^{\mathcal{BD}}\) can appear if \(\Delta\mathrm{g}/E\) depends on the external _static_ electric field \(E\). Here \(\Delta\mathrm{g}=\mathrm{g}^{u}-\mathrm{g}^{l}\). Fig. 3 presents the calculated values of \(\Delta\mathrm{g}/E\). We present results for the case when the magnetic interaction with both \({}^{3}\Pi_{0^{\pm}}\) and \({}^{3}\Delta_{2}\) is taken into account and for the case when the interaction with \({}^{3}\Pi_{0^{\pm}}\) is omitted. One can see that if the interaction with the \({}^{3}\Pi_{0^{\pm}}\) states is taken into account, the value \(\Delta\mathrm{g}/E\) depends on the external electric field. Over a small range of the static electric field the g-factor difference can be represented as

\[\Delta\mathrm{g}=\Delta\mathrm{g}_{0}+\Delta\mathrm{g}_{1}E. \tag{15}\]
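Given computed \(\Delta\mathrm{g}\) values on a grid of static fields, the coefficients of Eq. (15) follow from a linear fit; a minimal sketch with invented numbers:

```python
# Sketch (invented numbers): extract Delta_g0 and Delta_g1 of Eq. (15)
# from computed g-factor differences on a grid of static electric fields.
import numpy as np

E = np.array([10.0, 20.0, 30.0, 40.0, 58.0])      # V/cm
delta_g = 1.0e-6 + 2.0e-7 * E                      # stand-in for computed values
delta_g1, delta_g0 = np.polyfit(E, delta_g, 1)     # slope, intercept
print(delta_g0, delta_g1)
```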
If the interaction with \({}^{3}\Pi_{0^{\pm}}\) is omitted, \(\Delta\mathrm{g}_{0}=0\) to very high accuracy. Table 1 gives the calculated \(\Delta\mathrm{g}_{0}\) and \(\Delta\mathrm{g}_{1}\) for the case when the interaction with \({}^{3}\Pi_{0^{\pm}}\) is taken into account. The interaction with the \({}^{3}\Pi_{0^{\pm}}\) states ensures a nonzero \(\Delta\mathrm{g}\) value for the \(\Omega\)-doublet levels already at zero external electric field [6]. Note that one of the \(\Omega\)-doublet states has an admixture only of the \({}^{3}\Pi_{0^{+}}\) state, whereas the other has an admixture only of \({}^{3}\Pi_{0^{-}}\). As the electric field increases, the \(\Omega\)-doublet levels become Stark-doublet ones with a good \(\Omega\) quantum number and with equal admixtures of \({}^{3}\Pi_{0^{+}}\) and \({}^{3}\Pi_{0^{-}}\). Therefore, as the electric field increases, \(\Delta\mathrm{g}_{0}\) decreases, but it is nonzero for any finite electric field. As stated above, a nonzero \(\Delta\mathrm{g}_{0}\) leads to a nonzero \(f^{\mathcal{BD}}\). Since the effect is proportional to \(\Delta\mathrm{g}_{0}\), according to Table 1, for \(\mathcal{E}_{\mathrm{rot}}=20\) V/cm used in the first stage of the experiment [10] we have \(f^{\mathcal{B}}/f^{\mathcal{BD}}=-5500\).

Similarly to the second harmonic, an electric field oscillating in the \(xy\) plane at frequency \(3\omega_{\mathrm{rot}}\), together with a gradient of the magnetic field in the same plane, makes additional contributions to both \(f^{\mathcal{B}}\) and \(f^{\mathcal{BD}}\) with the same (as for the second harmonic) ratio \(f^{\mathcal{B}}/f^{\mathcal{BD}}=\Delta\mathrm{g}_{0}/(\mathrm{g}^{u}+\mathrm{g}^{l})\). The absolute value of \(f^{\mathcal{B}}\) is given by Eq. (39) of Ref. [5].

### Ellipticity of \(\mathcal{E}_{\mathrm{rot}}\)

According to the theory of Ref. [5], an ellipticity of \(\mathcal{E}_{\mathrm{rot}}\), together with a first-order magnetic field gradient, makes additional contributions to the \(f^{\mathcal{BD}}\) and \(f^{\mathcal{B}}\) components with the ratio

\[\frac{f^{\mathcal{B}}}{f^{\mathcal{BD}}}=\frac{3}{4}\frac{\mathrm{g}^{u}+\mathrm{g}^{l}}{\mathrm{g}^{u}-\mathrm{g}^{l}}. \tag{16}\]

To calculate this effect we use the variable fields

\[\mathbf{E}(t)=(\mathcal{E}_{\mathrm{rot}}+\mathcal{E}_{\epsilon})\hat{x}\cos(\omega_{\mathrm{rot}}t)+(\mathcal{E}_{\mathrm{rot}}-\mathcal{E}_{\epsilon})\hat{y}\sin(\omega_{\mathrm{rot}}t), \tag{17}\]

\[\mathbf{B}(t)=\mathcal{B}_{\mathrm{rot}}\frac{\mathcal{E}_{\mathrm{rot}}+\mathcal{E}_{\epsilon}}{\mathcal{E}_{\mathrm{rot}}}\hat{x}\cos(\omega_{\mathrm{rot}}t)+\mathcal{B}_{\mathrm{rot}}\frac{\mathcal{E}_{\mathrm{rot}}-\mathcal{E}_{\epsilon}}{\mathcal{E}_{\mathrm{rot}}}\hat{y}\sin(\omega_{\mathrm{rot}}t)+\mathcal{B}_{\epsilon}\frac{\mathcal{E}_{\mathrm{rot}}+\mathcal{E}_{\epsilon}}{\mathcal{E}_{\mathrm{rot}}}\hat{x}\cos(\omega_{\mathrm{rot}}t)-\mathcal{B}_{\epsilon}\frac{\mathcal{E}_{\mathrm{rot}}-\mathcal{E}_{\epsilon}}{\mathcal{E}_{\mathrm{rot}}}\hat{y}\sin(\omega_{\mathrm{rot}}t). \tag{18}\]

In the calculation we put \(\mathcal{E}_{\epsilon}=+1\) V/cm and \(\mathcal{B}_{\epsilon}=+0.1\) mG. Equation (17) is a rotating electric field with an ellipticity whose major axis lies along the \(x\) axis. The first two terms of Eq. (18) are the modification of the rotating magnetic field from Eq. (10) caused by the perturbation of the ion micromotion due to the acquired ellipticity of \(\mathcal{E}_{\mathrm{rot}}\). This modification, actually, does not affect the result. The last two terms of Eq. (18) give the additional magnetic field felt by the ion in the first-order magnetic field gradient [5]. The calculation gives \(f^{\mathcal{BD}}=-0.913\times 10^{-4}\) Hz and \(f^{\mathcal{B}}=0.332\times 10^{-1}\) Hz. The ratio is

\[\frac{f^{\mathcal{B}}}{f^{\mathcal{BD}}}=-364=0.757\cdot(-481), \tag{19}\]

where \(-481\) is the ratio \(f^{\mathcal{B}}/f^{\mathcal{BD}}\) for the systematic related to the non-reversing magnetic field for \(\mathcal{B}_{\mathrm{rot}}=\pm 6\) mG (\(f^{0}=77\) Hz). Note the difference between the coefficient 0.757 in Eq. (19) and the coefficient \(3/4=0.750\) in Eq. (16). This difference can be explained as follows. Looking at the derivation of Eq. (16) (see Eqs. (43), (44) in Ref. [5]), one notes that the coefficient 3/4 originates from the assumption that \(\mathrm{g}^{u}+\mathrm{g}^{l}\) is independent of the electric field, whereas \(\Delta\mathrm{g}=\mathrm{g}^{u}-\mathrm{g}^{l}\) depends linearly on the electric field. If \(\Delta\mathrm{g}\) were independent of the electric field, the coefficient in Eq. (16) would be equal to one. We know, however, from the calculation above that \(\Delta\mathrm{g}\) has a small fraction (2.7% for \(\mathcal{E}_{\mathrm{rot}}=58\) V/cm, as follows from Table 1) which is independent of the electric field. One can then calculate that \(1\cdot 0.027+0.750\,(1-0.027)=0.757\), in accordance with the coefficient in Eq. (19).
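The weighting argument is easy to verify numerically (our transcription of the arithmetic above):

```python
# Verify the coefficient of Eq. (19): the field-independent 2.7% of Delta_g
# enters with weight 1, the field-linear remainder with the analytical 3/4.
frac0 = 0.027
print(1.0 * frac0 + 0.750 * (1.0 - frac0))   # 0.75675 ~ 0.757
```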
## IV Conclusion

An accurate numerical calculation of some systematic effects in the experiment searching for the \(e\)EDM on the \({}^{180}\)HfF\({}^{+}\) cation is performed. A small deviation from the analytical formulas derived in Ref. [5] is discussed. The results can be used for testing experimental methods and in the next generation of experiments on the HfF\({}^{+}\) cation and on similar systems such as ThF\({}^{+}\).
2306.04524
ExoMol line lists -- L: High-resolution line lists of H$_3^+$, H$_2$D$^+$, D$_2$H$^+$ and D$_3^+$
New MiZo line lists are presented for the D$_2$H$^+$ and D$_3^+$ isotopologues of H$_3^+$. These line lists plus the existing H$_3^+$ MiZATeP and the Sochi H$_2$D$^+$ line lists are updated using empirical energy levels generated using the MARVEL procedure for H$_3^+$, H$_2$D$^+$ and D$_2$H$^+$, and effective Hamiltonian energies for D$_3^+$ for which there is significantly less laboratory data available. These updates allow accurate frequencies for far infrared lines for these species to be predicted. Assignments of the energy levels of H$_3^+$ and D$_3^+$ are extended using a combination of high accuracy variational calculations and analysis of transition intensities. All line lists are made available via www.exomol.com.
Charles A. Bowesman, Irina I. Mizus, Nikolay F. Zobov, Oleg L. Polyansky, Janos Sarka, Bill Poirier, Marco Pezzella, Sergei N. Yurchenko, Jonathan Tennyson
2023-06-07T15:33:36Z
http://arxiv.org/abs/2306.04524v1
ExoMol line lists - L: High-resolution line lists of H\({}_{3}^{+}\), H\({}_{2}\)D\({}^{+}\), D\({}_{2}\)H\({}^{+}\) and D\({}_{3}^{+}\).

###### Abstract

New MiZo line lists are presented for the D\({}_{2}\)H\({}^{+}\) and D\({}_{3}^{+}\) isotopologues of H\({}_{3}^{+}\). These line lists plus the existing H\({}_{3}^{+}\) MiZATeP and the Sochi H\({}_{2}\)D\({}^{+}\) line lists are updated using empirical energy levels generated using the MARVEL procedure for H\({}_{3}^{+}\), H\({}_{2}\)D\({}^{+}\) and D\({}_{2}\)H\({}^{+}\), and effective Hamiltonian energies for D\({}_{3}^{+}\), for which there is significantly less laboratory data available. These updates allow accurate frequencies for far infrared lines of these species to be predicted. Assignments of the energy levels of H\({}_{3}^{+}\) and D\({}_{3}^{+}\) are extended using a combination of high accuracy variational calculations and analysis of transition intensities. All line lists are made available via www.exomol.com.

keywords: molecular data - opacity - planets and satellites: atmospheres - stars: atmospheres - ISM: molecules.

## 1 Introduction

H\({}_{3}^{+}\) is known to form rapidly in H\({}_{2}\) gas following an ionisation event via the strongly exothermic reaction

\[\rm H_{2}+H_{2}^{+}\longrightarrow H_{3}^{+}+H \tag{1}\]

which occurs at essentially every collision. As H\({}_{2}\) is common in a variety of astronomical bodies, H\({}_{3}^{+}\) is often the dominant molecular ion. The first 30 years of H\({}_{3}^{+}\) astronomy have been comprehensively reviewed by Miller et al. (2020). H\({}_{3}^{+}\) formation is stimulated by cosmic ray ionisation of the interstellar medium and by collisions with fast electrons and other charged particles in planetary ionospheres. H\({}_{3}^{+}\) is also believed to form in the ionospheres of planets through the ionisation of H\({}_{2}\) by extreme ultraviolet radiation (Chadney et al., 2016). In gas giant ionospheres, H\({}_{3}^{+}\) acts as a coolant through efficient infrared (IR) emission (Miller et al., 2010); indeed, it is thought that H\({}_{3}^{+}\) emissions are key to determining the stability limits of hot Jupiter exoplanets (Koskinen et al., 2007). The infrared spectrum of H\({}_{3}^{+}\) has been extensively observed in giant planets in our solar system, such as Jupiter (Drossart et al., 1989; Ballester et al., 1994; Miller et al., 1997; Moore et al., 2017), Saturn (Geballe et al., 1993; Stallard et al., 2008) and Uranus (Trafton et al., 1993; Lam et al., 1997; Trafton et al., 1999; Melin et al., 2019), and is believed to be present in Neptune (Melin et al., 2011, 2018), although it is yet to be detected there. Its presence can be used as an effective temperature probe in these and other astrophysical settings (Gibbs & Fitzgerald, 2022). H\({}_{3}^{+}\) is similarly expected to be of importance in extrasolar giant planets (Chadney et al., 2016; Khodachenko et al., 2015), such as hot Jupiters (Lenz et al., 2016), and an even more prominent feature in the aurorae of brown dwarfs (Gibbs & Fitzgerald, 2022); however, it has so far defied observation in these objects. H\({}_{3}^{+}\) has also been observed in the interstellar medium (ISM) via absorption in the infrared light of a background star (Oka, 2006), where it forms through cosmic ray ionisation (Geballe & Oka, 1996). Hence it is also used in this setting to trace the cosmic ray ionisation rate (Indriolo & McCall, 2012; Harju et al., 2017), again relying primarily on IR transitions.
These IR bands lie well within the wavelength range of the instruments onboard JWST. In regions where the precursors to H\({}_{3}^{+}\) exist in deuterated forms, namely HD and D\({}_{2}\), reactions equivalent to that described in Eq. (1) occur, resulting in the formation of the deuterated isotopologues H\({}_{2}\)D\({}^{+}\), D\({}_{2}\)H\({}^{+}\) and D\({}_{3}^{+}\) (Merkt et al., 2022). At low temperatures fractionation drives the preferential formation of isotopically substituted H\({}_{3}^{+}\) (Hewitt et al., 2005); indeed, models by Walmsley et al. (2004) suggest that in certain very cold regions D\({}_{3}^{+}\) may be the dominant isotopologue of H\({}_{3}^{+}\)! Spectra of H\({}_{2}\)D\({}^{+}\) (Stark et al., 1999; Caselli et al., 2003) and D\({}_{2}\)H\({}^{+}\) (Vastel et al., 2004) have been observed in interstellar space through pure rotational transitions which lie in the far infrared / THz region. However, D\({}_{3}^{+}\) remains undetected, at least in part because its higher symmetry means that, like H\({}_{3}^{+}\), its pure rotational spectrum is very weak. Elsewhere, electrons provided by H\({}_{3}^{+}\) have been shown to play an important role in the atmospheres of cool white dwarfs (Bergeron et al., 1997). Line lists for H\({}_{3}^{+}\) (Kao et al., 1991; Neale et al., 1996) and the associated partition function (Neale & Tennyson, 1995; Ramanlal & Tennyson, 2004) have played a key role in the astronomical study of this important molecular ion. In this work we update the MiZATeP H\({}_{3}^{+}\) line list of Mizus et al. (2017) and the older ST1 H\({}_{2}\)D\({}^{+}\) line list of Sochi & Tennyson (2010). We do this using updated versions of the marvel (measured active rotation-vibration energy levels) studies of Furtenbacher et al. (2013a) and Furtenbacher et al. (2013b). We present new line lists for D\({}_{2}\)H\({}^{+}\) and D\({}_{3}^{+}\), for which we also use empirical energy levels to improve the accuracy of the transition frequencies between key levels. These line lists are produced as part of the ExoMol project (Tennyson & Yurchenko, 2012).

## 2 Method

The triatomic discrete variable representation (DVR) nuclear motion code DVR3D (Tennyson et al., 2004) was used, previously and here, to compute initial energy levels for H\({}_{3}^{+}\) and its deuterated isotopologues, as well as Einstein A-coefficients for each transition. This code, which is based on the use of an exact nuclear motion kinetic energy operator, has been shown to be capable of giving highly accurate results for the H\({}_{3}^{+}\) system (Polyansky & Tennyson, 1999; Pavanello et al., 2012a). It is important to note that in the absence of any absolute measurements of transition intensities for the H\({}_{3}^{+}\) system, all models rely on computed values, which are thought to be accurate (Farnik et al., 2002; Petrignani et al., 2014). It should be noted that DVR3D only provides assignments for the rigorous quantum numbers: \(J\), rotationless parity \(e/f\) and the interchange symmetry for two identical atoms. This means that most states in the existing versions of the MiZATeP H\({}_{3}^{+}\) (Mizus et al., 2017) and ST1 H\({}_{2}\)D\({}^{+}\) (Sochi & Tennyson, 2010) line lists do not have full ro-vibrational labels. We partially address this issue below. marvel (Furtenbacher et al., 2007; Furtenbacher & Császár, 2012; Tóbiás et al., 2018) takes assigned, high resolution spectra and uses them to construct empirical energy levels with spectroscopic accuracy and specified uncertainties.
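The essence of this construction can be sketched in a few lines (ours, not the marvel code): each assigned line supplies one linear relation \(\tilde{E}_{\rm up}-\tilde{E}_{\rm low}=\tilde{\nu}\), and a weighted least-squares solve with the lowest level pinned to zero yields the empirical energies.

```python
# Minimal sketch of a marvel-style inversion (ours, not the marvel code):
# each transition gives E[u] - E[l] = nu, solved by weighted least squares.
import numpy as np

# (upper, lower, wavenumber / cm-1, uncertainty / cm-1) -- toy network
lines = [(1, 0, 100.00, 0.01), (2, 1, 150.02, 0.01), (2, 0, 250.01, 0.02)]
n_levels = 3

A = np.zeros((len(lines) + 1, n_levels))
b = np.zeros(len(lines) + 1)
for i, (u, l, nu, unc) in enumerate(lines):
    w = 1.0 / unc                    # weight each equation by 1/uncertainty
    A[i, u], A[i, l], b[i] = +w, -w, w * nu
A[-1, 0] = 1.0e6                     # pin the zero-energy level: E[0] = 0

energies, *_ = np.linalg.lstsq(A, b, rcond=None)
print(energies)                      # empirical term values relative to level 0
```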
Use of these energy levels can greatly improve the accuracy with which a line list can predict transition frequencies; see Al-Derzi et al. (2021) for a recent example. For H\({}_{3}^{+}\), H\({}_{2}\)D\({}^{+}\) and D\({}_{2}\)H\({}^{+}\), marvel spectroscopic networks were constructed using the available laboratory spectra in order to obtain empirical energy levels for each species. These empirical energy levels were then used to improve our line lists, correcting for obs.\(-\)calc. shifts in levels where empirical energy levels are available for comparison. Such a refinement allows a subset of the energies to be provided with very high accuracy, as has been demonstrated in similar projects (Bowesman et al., 2022). This allows high-accuracy transition frequency predictions to be made (Al-Derzi et al., 2021), making the final line lists well suited for high-resolution studies (Bowesman et al., 2021; Owens et al., 2022). The marvel process determines an uncertainty for each energy level based on the uncertainties of the input transitions that define that level.

### Spectroscopic Networks

Transition data for a molecule can be aggregated to construct a spectroscopic network, where the transition frequencies represent the edges of the network and the energy levels the nodes (Furtenbacher et al., 2007; Császár & Furtenbacher, 2011). The marvel procedure (Furtenbacher & Császár, 2012) achieves this by inverting transition matrices, which yields a set of empirical energy levels with individual uncertainties. Spectroscopic networks have been constructed in the past for the molecules H\({}_{3}^{+}\) (Furtenbacher et al., 2013a), H\({}_{2}\)D\({}^{+}\) and D\({}_{2}\)H\({}^{+}\) (Furtenbacher et al., 2013b), but new transition data have since been published. New transition data allow us to expand the level coverage of the networks and, where new data remeasure existing transitions, to improve the accuracy to which term energies are known. The recent high-resolution experiments have provided new THz transition data with uncertainties on the order of MHz or kHz, well within the part-per-billion regime. When all of the transitions that determine a level's term energy are consistent within their experimental uncertainties, the uncertainty on the final term energy will generally be of the same order as, but not less than, the smallest uncertainty of the transitions that define the level. Hence, with the addition of these high-accuracy transition measurements we are able to significantly improve the accuracy of our final energies, in some cases by a few orders of magnitude.

The marvel procedure requires all transitions within a network to be identified by the same set of quantum numbers. For this purpose, the isotopologues can be divided into two groups: H\({}_{3}^{+}\) and D\({}_{3}^{+}\) are symmetric tops belonging to the D\({}_{3\text{h}}\)(M) molecular symmetry group, while H\({}_{2}\)D\({}^{+}\) and D\({}_{2}\)H\({}^{+}\) are asymmetric tops belonging to the C\({}_{2\text{v}}\)(M) group. These in turn dictate the sets of good quantum numbers used to define the levels of the networks. As symmetric tops, the molecules H\({}_{3}^{+}\) and D\({}_{3}^{+}\) are described by two primary vibrational modes, symmetric stretching (\(\nu_{1}\)) and bending (\(\nu_{2}\)). The bending mode is degenerate, however, and as such these species are also described by the vibrational angular momentum quantum number \(l_{2}=\nu_{2},\nu_{2}-2,\dots,-\nu_{2}+2,-\nu_{2}\).
The energy levels of these species are identified instead by \(L_{2}=|l_{2}|\) in this work. The rigorous rotational angular momentum quantum number \(J\) is used to define the rotational levels of these molecules. The projection of \(J\) along the molecular symmetry axis, \(k\), can be used to determine the parity of the energy levels, such that the total parity is the sign of \((-1)^{k}\) (Furtenbacher et al., 2013a). The quantum number \(k\) does not offer a complete description of the system, however, due to Coriolis coupling between it and \(l_{2}\). As such, the value \(G=|g|\) is used, where \(g=k-l_{2}\) (Hougen, 1962). This quantum number is important because, as members of the D\({}_{3\text{h}}\)(M) symmetry group, the \(A_{1}^{\prime}\), \(A_{1}^{\prime\prime}\), \(A_{2}^{\prime}\), \(A_{2}^{\prime\prime}\) rovibronic symmetries only exist for these molecules when \(G=3n\), where \(n\) is an integer. In H\({}_{3}^{+}\) the \(A_{1}^{\prime}\) and \(A_{1}^{\prime\prime}\) symmetries do not exist, however, as they are determined to have 0 nuclear spin statistical weight (Watson, 1984). This difference arises from the nuclear spin \(I\) of the constituent atoms, which is \(I=\frac{1}{2}\) for hydrogen and \(I=1\) for deuterium. Consequently \(\rm H_{3}^{+}\) consists of two nuclear spin isomers while \(\rm D_{3}^{+}\) has three (Watson et al., 1987); these are shown in Table 1. Levels with equivalent vibrational, \(J\) and \(G\) assignments are differentiated by the \((u|l|m)\) \(U\)-notation of Watson (1994): \(u\) and \(l\) identify the upper and lower energy levels of the same assignment, differing in their value of \(K=|k|\), and \(m\) is used when such a distinction is irrelevant because only one value of \(K\) exists that produces the same \(G\). As such, the energy levels of the species \(\rm H_{3}^{+}\) and \(\rm D_{3}^{+}\) are identified by the quantum number set \((\nu_{1},\nu_{2},L_{2},J,G,U,K,\Gamma_{\rm rve})\).

As asymmetric tops, \(\rm H_{2}D^{+}\) and \(\rm D_{2}H^{+}\) are described by the symmetric stretching, bending and anti-symmetric stretching vibrational quantum numbers \(\nu_{1}\), \(\nu_{2}\) and \(\nu_{3}\). The quantum number \(J\) and its projections onto the \(C_{2}\) axis and the axis perpendicular to the \(C_{2}\) axis in the plane of the molecule, \(K_{a}\) and \(K_{c}\), are the standard ones for labelling the rotational states of an asymmetric top. For both asymmetric tops, the total parity is the sign of \((-1)^{K_{c}}\). A similar expression is used to identify the spin isomers of each asymmetric top, such that they are labelled ortho when it is positive and para when negative: for \(\rm H_{2}D^{+}\) this expression is \((-1)^{\nu_{3}+K_{a}}\) and for \(\rm D_{2}H^{+}\) it is \((-1)^{\nu_{3}+K_{a}+K_{c}}\). The ortho and para nuclear spin isomers identify the rovibronic symmetry of the molecule, \(A_{1}\), \(A_{2}\), \(B_{1}\) or \(B_{2}\), as shown in Table 1. Hence, the energy levels of the species \(\rm H_{2}D^{+}\) and \(\rm D_{2}H^{+}\) are identified by the quantum number set \((\nu_{1},\nu_{2},\nu_{3},J,K_{a},K_{c},\Gamma_{\rm rve})\). Transitions between nuclear spin isomers are forbidden, meaning networks constructed from observed transitions will be split into separate components for each isomer.
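The bookkeeping described above is simple to state in code (a sketch; the function names are ours):

```python
# Quantum-number bookkeeping for the four isotopologues (a sketch; names
# are ours). Symmetric tops: G = |k - l2|, total parity = (-1)**k.
# Asymmetric tops: ortho/para from (-1)**(v3 + Ka) for H2D+ and
# (-1)**(v3 + Ka + Kc) for D2H+.
def G_value(k, l2):
    return abs(k - l2)

def symmetric_top_parity(k):
    return (-1) ** k

def spin_isomer(molecule, v3, Ka, Kc=0):
    sign = (-1) ** (v3 + Ka) if molecule == "H2D+" else (-1) ** (v3 + Ka + Kc)
    return "ortho" if sign > 0 else "para"

print(G_value(k=2, l2=1), symmetric_top_parity(2), spin_isomer("D2H+", 0, 1, 1))
```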
Since marvel determines the energies of each level relative to the lowest energy level in the network, which is defined as 0, this presents a problem: any levels belonging to a different nuclear spin isomer from that of the zero-energy level will not have their energies defined. This is avoided by the introduction of "magic" numbers: forbidden transitions are added to the networks to connect the lowest levels of each nuclear spin isomer to each other using calculated energies. This enables the determination of all energies for connected levels within the networks, relative to the zero-energy level. For D\({}_{3}^{+}\) this treatment was not needed, as the energies were determined through effective Hamiltonian calculations.

#### 2.1.1 H\({}_{3}^{+}\) network

[...] transitions which determine 703 empirical energy levels. Of the remaining components: two consist of 6 transitions; one of 5 transitions; two of 4 transitions; three of 3 transitions; 12 of 2 transitions; and the remaining 18 of single transitions. As these minor components are not connected to the primary component, the energies of the levels within them are not determined relative to the zero-energy level and they are hence not considered further.

#### 2.1.2 H\({}_{2}\)D\({}^{+}\) network

A marvel network for H\({}_{2}\)D\({}^{+}\) was originally produced by Furtenbacher et al. (2013b) and contained transition data from 13 sources. Since then, two new sets of transition measurements have been published by Jusko et al. (2016) and Jusko et al. (2017).

_16JuKoScAs_ (Jusko et al., 2016): Transition frequencies for 11 \(\nu_{1}\) band transitions are reported with MHz and sub-MHz accuracy. This source provides significantly higher accuracy measurements of transitions previously observed by Amano (1985).

_17JuToMuGh_ (Jusko et al., 2017): 3 pure-rotation transition measurements are provided with sub-MHz or kHz accuracy.

These two new sources and a "magic" forbidden transition to connect the nuclear spin isomers, derived from effective Hamiltonian calculations using molecular constants from Amano (2006), brought the total number of transitions for H\({}_{2}\)D\({}^{+}\) to 210; these are summarised in Table 4. 208 transitions were validated successfully by marvel, yielding a final network comprising 7 components, the largest of which contained 200 transitions that determine 109 unique energy levels. Of the remaining 6 components, one contained 3 transitions while the rest are single-transition components; these disconnected components are not considered further here.

#### 2.1.3 D\({}_{2}\)H\({}^{+}\) network

The D\({}_{2}\)H\({}^{+}\) marvel network was originally published by Furtenbacher et al. (2013b) and was constructed using transition data from 9 sources. Four new sets of transition data have subsequently been published (Jusko et al., 2016, 2017; Yu. et al., 2017; Markus et al., 2019) and have now been added to the existing network.

_16JuKoScAs_ (Jusko et al., 2016): This source provides 10 transition frequencies observed with sub-MHz accuracy. 8 of these transitions had been previously observed by Lubic & Amano (1984).

_17JuToMuGh_ (Jusko et al., 2017): 3 ground state pure-rotation transition measurements are provided with kHz accuracy.

_17YuPeAmMa_ (Yu. et al., 2017): This source reports 5 ground state pure-rotation transitions with sub-MHz accuracy. 4 of these transitions had not been reported in other sources, with the other also being observed by Jusko et al. (2017).
_19MaKoMc_ (Markus et al., 2019): Transition frequencies for 37 \(\nu_{1}\) band transitions are provided with MHz accuracy. 10 of the transitions in this source are new and had not been reported elsewhere, while the rest had been observed by Lubic & Amano (1984) and Jusko et al. (2016).

With the addition of 55 new transition measurements, the new network contains 210 transitions. Two existing transitions that had digitisation errors in their transition frequencies were updated to the correct values. A "magic" forbidden transition is included to connect the otherwise distinct ortho and para components of the network, using a frequency calculated by Yu. et al. (2017). The final set of D\({}_{2}\)H\({}^{+}\) transition data is summarised in Table 5. All of the transitions in the network were validated by marvel, yielding a primary network component containing 200 transitions which define 115 unique energy levels. The rest of the transitions are present in 7 disconnected components, one of which contains 3 transitions, another 2 transitions, and the remainder are single-transition components.

[Table: newly assigned H\({}_{3}^{+}\) transition wavenumbers with upper and lower state quantum number assignments and literature sources, including Morong et al. (2009).]

[Table: summary of the H\({}_{3}^{+}\) transition data sources: vibrational bands, \(J\) ranges, validated/available counts, energy ranges and mean/maximum uncertainties.]

### Effective Hamiltonians

There have been significantly fewer observations of D\({}_{3}^{+}\) spectra than of the three other isotopologues considered here. As such, there was insufficient data to build a well-connected network. In lieu of this, we used effective Hamiltonian constants from Watson et al. (1987) and Amano et al. (1994) to calculate energies for the states in the range \(J=0-15\), up to a maximum energy of 2676.387 cm\({}^{-1}\).
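Schematically, such effective Hamiltonian constants enter symmetric-top term-value expressions like the oblate-top formula below (a sketch of ours; the constants are placeholders, not the fitted values of Watson et al. (1987) or Amano et al. (1994), and the actual calculations, described next, used pgopher):

```python
# Oblate symmetric-top term values with quartic centrifugal distortion
# (a sketch; B, C, DJ, DJK, DK below are placeholders, not fitted constants).
def term_value(J, K, B=21.0, C=10.5, DJ=1.0e-3, DJK=-1.5e-3, DK=5.0e-4):
    """Rotational term value in cm-1."""
    return (B * J * (J + 1) + (C - B) * K**2
            - DJ * (J * (J + 1))**2
            - DJK * J * (J + 1) * K**2
            - DK * K**4)

for J in range(3):
    for K in range(J + 1):
        print(J, K, round(term_value(J, K), 3))
```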
These calculations were performed using the program pgopher (Western, 2017), which also provides full state assignments for the levels it computes. Hence full quantum number assignments were determined via this method for 282 levels within the bands for which constants were available: 188 levels in the 000 band (\(\nu_{1}\), \(\nu_{2}\), \(L_{2}\)); 2 levels in the 010 band; 105 in the 011 band; and 22 in the 100 band. The majority of the levels assigned through this method have \(J<10\). Further assignments were done manually for states at higher energies with the aid of the vibrational band origins published by Amano et al. (1994) and the assigned hot and overtone bands published by Alijah et al. (1995). A further 1045 states were assigned this way and a breakdown of the assigned bands is given in Table 6. These assignments were added to the calculated states file, given that DVR3D only provides assignments for the rigorous quantum numbers: \(J\), rotationless parity \(e/f\) and the interchange of two of the D atoms.

[Table: further transition data source summaries: vibrational bands, \(J\) ranges, validated/available counts, energy ranges and mean/maximum uncertainties.]

## 3 Line List Calculations

### Updated H\({}_{3}^{+}\) and H\({}_{2}\)D\({}^{+}\)

While the main line lists were computed using DVR3D, we took advantage of calculations by Sarka et al. (2021) and Sarka & Poirier (2022), performed using the variational nuclear motion programs ScalIT (Chen & Poirier, 2006a,b, 2010a,b; Petty & Poirier, 2014) and GENIUSH (Mátyus et al., 2009; Fábri et al., 2011), to give full quantum number designations. The approach applied is detailed by Sarka & Poirier (2022) but we provide a quick summary of the main steps here. First, using Jacobi coordinates, ScalIT calculations were carried out in the four blocks of the \(G_{4}\) permutation-inversion (PI) symmetry group. Due to the very high convergence accuracy, the full \(G_{12}\) PI group labels (\(\Gamma_{\rm rv}\)) were easily assigned unambiguously using the \(\Gamma(G_{12}/D_{3h})\downarrow G_{4}\) correlation table. Next, calculations were repeated with the code GENIUSH with slightly lower accuracy, but still sufficient to match the energy levels with the ScalIT ones. After the vibrational states were labelled (\(\nu_{1},\nu_{2},L_{2}\)), vibrational parent labels were semi-automatically assigned to rovibrational states using the rigid rotor decomposition (RRD) scheme implemented in GENIUSH (Mátyus et al., 2010). The RRD overlaps of GENIUSH also help to assign the rotational quantum numbers (\(J,G,U,K\)), as the symmetric top rigid rotor functions are labelled by \(K\). Using this method we were able to label an additional 1525 states in the MiZATeP H\({}_{3}^{+}\) line list by the quantum number set (\(\nu_{1},\nu_{2},L_{2},J,G,U,K,\Gamma_{\rm rv}\)).

The energies of the existing H\({}_{3}^{+}\) and H\({}_{2}\)D\({}^{+}\) line lists were updated with empirical energies where they were known. For levels with matching assignments in both the states files and the corresponding marvel network, the term energies and their uncertainties were set to the values determined by the marvel procedure. For the levels that did not exist within the molecules' marvel networks, the calculated term energies were retained and estimates for their uncertainties (in cm\({}^{-1}\)) were calculated as follows:

\[\Delta\tilde{E}=\begin{cases}0.1,&\tilde{E}<2000\\ \lfloor\tilde{E}/2000\rfloor/10,&\text{otherwise}.\end{cases} \tag{2}\]
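Eq. (2) is simple enough to transcribe directly (energies in cm\({}^{-1}\)):

```python
# Direct transcription of Eq. (2): estimated uncertainty for levels whose
# energies remain purely calculated ("Ca" levels).
import math

def estimated_uncertainty(E):
    return 0.1 if E < 2000.0 else math.floor(E / 2000.0) / 10.0

for E in (1500.0, 2500.0, 9000.0):
    print(E, estimated_uncertainty(E))   # 0.1, 0.1, 0.4
```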
[Table 6: The assigned vibrational bands of the D\({}_{3}^{+}\) states file, detailing the \(J\) range and maximum energy of each band.]

[Table: summary of the H\({}_{2}\)D\({}^{+}\) and D\({}_{2}\)H\({}^{+}\) transition data sources: vibrational bands, \(J\) ranges, validated/available counts, energy ranges and mean/maximum uncertainties.]

### New line lists

New line lists were computed for the D\({}_{2}\)H\({}^{+}\) and D\({}_{3}^{+}\) molecular ions. Both calculations used the highly accurate global _ab initio_ PES, together with the adiabatic and relativistic correction surfaces, computed by Pavanello et al. (2012a,b), which were used for the MiZATeP line list. To calculate transition intensities the high-accuracy DMS obtained for the H\({}_{3}^{+}\) system was used (Petrignani et al., 2014). This DMS was obtained by fitting 7 parameters to a polynomial form written in terms of effective charges, see Röhse et al. (1994). The DMS was centred on the centre-of-mass, ensuring the correct treatment of the centre-of-charge to centre-of-mass displacement which leads to D\({}_{2}\)H\({}^{+}\) (and H\({}_{2}\)D\({}^{+}\)) having a permanent dipole moment. The DVR3D program suite (Tennyson et al., 2004) was used to compute the final line lists. As part of this project a new module was added to this suite which converts DVR3D results to the ExoMol format (Tennyson et al., 2013), comprising a states and a trans file, and allowing spectra to be easily generated using ExoCross (Yurchenko et al., 2018).
The module reads the energy levels and Einstein coefficients from DVR3D, requiring as input from the user the molecular symmetry, C\({}_{s}\) or C\({}_{2v}\), and the nuclear spin statistical weight for each symmetry. The program, called LINELIST, is available as part of the DVR3D program suite from the ExoMol GitHub pages. As with H\({}_{3}^{+}\) and H\({}_{2}\)D\({}^{+}\), the calculated term energies of D\({}_{2}\)H\({}^{+}\) were updated with empirical values from the new marvel network where available. Likewise, the unchanged calculated energies of the new D\({}_{2}\)H\({}^{+}\) and D\({}_{3}^{+}\) line lists had uncertainties estimated using Eq. (2).

### D\({}_{2}\)H\({}^{+}\) nuclear motion calculations

DVR3D nuclear motion calculations were performed with 31, 31 and 50 grid points for the two radial and one angular scattering coordinates, respectively. The calculations used Morse-like oscillators with parameters 3.1, 0.1 and 0.006 for the \(r_{1}\) (D--D) radial coordinate, and spherical oscillators with parameters 0, 0 and 0.016 for the \(r_{2}\) (H--D\({}_{2}\)) scattering coordinate. The dimension of the final vibrational Hamiltonians was set to 5000. Calculations included all levels up to 15000 cm\({}^{-1}\) for \(J\leq 25\). 1500 vibrational basis functions calculated using DVR3DRJZ were passed to ROTLEV3 for the rotational step of the calculation. Following Polyansky & Tennyson (1999), nuclear masses \(m_{\text{H}}=1.007276\) Da and \(m_{\text{D}}=2.013553\) Da were used for the rotational motion, and effective masses intermediate between the nuclear and atomic masses, \(m_{\text{H}}=1.007537\) Da and \(m_{\text{D}}=2.013814\) Da, were used for the vibrational motion. This formulation has been shown to allow for non-adiabatic effects in the calculation. The resulting MiZo line list is complete up to a temperature of 2000 K.

### D\({}_{3}^{+}\) nuclear motion calculations

DVR3D calculations for D\({}_{3}^{+}\) were performed using the same size grids and Hamiltonians as those specified for D\({}_{2}\)H\({}^{+}\) above (31, 31 and 50 grid points for the two radial and one angular scattering coordinates, respectively, and final Hamiltonian dimensions equal to 5000). The calculations used Morse-like oscillators with parameters 2.6, 0.1 and 0.006 for the \(r_{1}\) coordinate, and spherical oscillators with parameters 0, 0 and 0.016 for the \(r_{2}\) coordinate. Nuclear masses equal to \(m_{\text{D}}=2.013553\) Da were used for all calculations. On the basis of the nuclear motion calculation results and the DMS of Petrignani et al. (2014), a line list covering the frequency region up to 15 000 cm\({}^{-1}\), complete for a temperature of 800 K (with the corresponding partition function value 118.7), was computed together with a list of the corresponding energy levels of D\({}_{3}^{+}\). This line list contains transitions between energy states with \(J\) values 0--15 and energies 0--15 500 cm\({}^{-1}\). The new MiZo D\({}_{3}^{+}\) line list is complete up to a temperature of 800 K. Statistics for all of the line lists presented here are given in Table 7.

### States files

New states files are provided for each of the four species considered here and are formatted using the standard outlined by Tennyson et al. (2013). H\({}_{3}^{+}\) and D\({}_{3}^{+}\) are identified using the same set of quantum numbers and hence their states files contain the same columns. Excerpts from the H\({}_{3}^{+}\) and D\({}_{3}^{+}\) states files are provided in Table 8.
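In practice a states file in this format can be loaded directly; the sketch below is ours, with column names following the H\({}_{3}^{+}\) excerpt in Table 8 and a hypothetical filename:

```python
# Sketch (ours): load an ExoMol-format .states file. The column names
# follow the H3+ excerpt in Table 8; the filename is hypothetical.
import pandas as pd

columns = ["i", "E", "g_tot", "J", "unc", "tau", "e_f", "Gamma_rve",
           "No", "isomer", "v1", "v2", "L2", "G", "U", "K", "source"]
states = pd.read_csv("H3p_MiZATeP.states", sep=r"\s+", names=columns,
                     na_values="nan")
print(states.head())
```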
Similarly, the states files for H\({}_{2}\)D\({}^{+}\) and D\({}_{2}\)H\({}^{+}\) contain the same set of quantum number columns, and excerpts from them are shown in Table 9. All entries in the new states files are marked with a source tag, indicating the method that was used to determine the final term energy for that level. Several values for the source tag can occur in ExoMol line lists (Bowesman et al., 2021) but only three are in use here: "Ma" for marvelised energies, "EH" for energies from effective Hamiltonian calculations and "Ca" for energies calculated using DVR3D. For all levels marked "Ca" in all four states files an uncertainty estimate was calculated using Eq. (2). All new and updated states files are ordered by increasing \(J\), \(\Gamma_{\text{rve}}\) and energy. The existing H\({}_{2}\)D\({}^{+}\) states file was already formatted this way, while the H\({}_{3}^{+}\) one was not and has been changed. Hence for H\({}_{3}^{+}\) the state counting number assigned to each level has been reassigned and the references to these values in the corresponding trans file have been updated accordingly. Excerpts from the trans files for the new D\({}_{2}\)H\({}^{+}\) and D\({}_{3}^{+}\) line lists can be seen in Table 10.

In general all molecular states should be characterised by three rigorous quantum numbers: \(J\), parity and the overall symmetry, which also determines the nuclear spin state (ortho/para/meta). All calculations have \(J\) and parity rigorously determined; due to the full treatment by DVR3D of the C\({}_{2v}\) symmetry, the symmetry and nuclear spin statistics are also correctly determined for H\({}_{2}\)D\({}^{+}\) and D\({}_{2}\)H\({}^{+}\). This is not the case for H\({}_{3}^{+}\) and D\({}_{3}^{+}\) however, as the program does not include a full treatment for a D\({}_{3h}\) Hamiltonian. The symmetry was initially output for all molecular species considered here under C\({}_{2v}\) symmetry, but was subsequently mapped to the appropriate D\({}_{3h}\) representation for H\({}_{3}^{+}\) and D\({}_{3}^{+}\) for states with quantum number assignments. Due to the nuclear spin statistical weight of the A\({}^{\prime}_{1}\) and A\({}^{\prime\prime}_{1}\) symmetries in H\({}^{+}_{3}\) being 0, it was possible to map all of the C\({}_{2v}\) symmetries to the appropriate D\({}_{3h}\) representation (Mizus et al., 2017). For D\({}^{+}_{3}\), where the D\({}_{3h}\) symmetry and quantum number assignment was not carried out, the entries retain the C\({}_{2v}\) symmetries output by DVR3D. These C\({}_{2v}\) symmetries are given in lower case in the states file, to avoid confusion between the A\({}_{1}\) and A\({}_{2}\) irreducible representations under C\({}_{2v}\) symmetry (formatted as "a1" and "a2") and the A\({}^{\prime}_{1}\), A\({}^{\prime\prime}_{1}\), A\({}^{\prime}_{2}\) and A\({}^{\prime\prime}_{2}\) irreducible representations under D\({}_{3h}\) symmetry.

| Species | \(J_{\rm max}\) | \(T_{\rm max}\) (K) | \(E_{\rm low}\) (cm\({}^{-1}\)) | \(E_{\rm high}\) (cm\({}^{-1}\)) | No. trans | No. marvelised trans |
| --- | --- | --- | --- | --- | --- | --- |
| H\({}_{3}^{+}\) | 37 | 5000 | 25189.70934 | 42000.74357 | 127542657 | 17147 |
| H\({}_{2}\)D\({}^{+}\) | 20 | 1750 | 6988.91426 | 18496.21669 | 22164810 | 895 |
| D\({}_{2}\)H\({}^{+}\) | 25 | 2000 | 8253.939824 | 34838.528613 | 2290235000 | 905 |
| D\({}_{3}^{+}\) | 15 | 800 | 2639.965597 | 17234.642274 | 36078183 | 0 |

Table 7: Statistics for the new line lists, detailing: the maximum \(J\) value for each; the maximum temperature up to which the line list is complete; the maximum lower state energy involved in a transition; the maximum upper state energy involved in a transition; the total number of transitions; and the total number of transitions between marvelised states.

| \(i\) | \(E\) (cm\({}^{-1}\)) | \(g_{\rm tot}\) | \(J\) | unc | \(\tau\) | e/f | \(\Gamma_{\rm rve}\) | No. | Isomer | \(\nu_{1}\) | \(\nu_{2}\) | \(L_{2}\) | \(G\) | \(U\) | \(K\) | Source |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.000000 | 0 | 0 | 0.000000 | nan | e | A1' | 1 | nan | 0 | 0 | 0 | 0 | m | 0 | Ma |
| 2 | 2521.410484 | 2 | 0 | 0.000133 | 8.4474e-03 | e | E' | 1 | p | 0 | 1 | 1 | 1 | m | 0 | Ma |
| 3 | 4998.052947 | 2 | 0 | 0.010004 | 2.4871e-03 | e | E' | 2 | p | 0 | 2 | 2 | 2 | m | 0 | Ma |
| 4 | 5554.061000 | 2 | 0 | 0.010000 | 7.3144e-03 | e | E' | 3 | p | 1 | 1 | 1 | 1 | m | 0 | Ca |
| 5 | 7005.974780 | 2 | 0 | 0.010000 | 3.1660e-03 | e | E' | 4 | p | 0 | 3 | 1 | 1 | m | 0 | Ca |
| 6 | 7870.229810 | 2 | 0 | 0.010000 | 2.5047e-03 | e | E' | 5 | p | 1 | 2 | 2 | 2 | m | 0 | Ca |
| 7 | 8488.013160 | 2 | 0 | 0.010000 | 6.6336e-03 | e | E' | 6 | p | 2 | 1 | 1 | 1 | m | 0 | Ca |
| 8 | 9113.041390 | 2 | 0 | 0.010000 | 2.0836e-03 | e | E' | 7 | p | 0 | 4 | 2 | 2 | m | 0 | Ca |
| 9 | 9663.699850 | 2 | 0 | 0.010000 | 1.1122e-03 | e | E' | 8 | p | 1 | 3 | 1 | 1 | m | 0 | Ca |
| 10 | 9997.183130 | 2 | 0 | 0.010000 | 1.0631e-03 | e | E' | 9 | p | 0 | 4 | 4 | 4 | m | 0 | Ca |

Table 8: Excerpts from the H\({}_{3}^{+}\) and D\({}_{3}^{+}\) states files, using the format defined by Tennyson et al. (2013). Note that the zero-energy level of H\({}_{3}^{+}\) with A\({}^{\prime}_{1}\) symmetry does not exist and hence has "nan" entries for its lifetime and isomer. Any row for which the \(\nu_{1}\), \(\nu_{2}\), \(L_{2}\), \(G\), \(U\) or \(K\) assignment is not known will likewise have "nan" entries in these columns.
marvelised Trans \\ \hline H\({}^{+}_{3}\) & 37 & 5000 & 25189.70934 & 42000.74357 & 127542657 & 17147 \\ H\({}_{2}\)D\({}^{+}\) & 20 & 1750 & 6988.91426 & 18496.21669 & 22164810 & 895 \\ D\({}_{2}\)H\({}^{+}\) & 25 & 2000 & 8253.939824 & 34838.528613 & 2290235000 & 905 \\ D\({}^{+}_{3}\) & 15 & 800 & 2639.965597 & 17234.642274 & 36078183 & 0 \\ \hline \end{tabular} \end{table} Table 7: Statistics for the new line lists, detailing: the maximum \(J\) value for each; the maximum temperature up to which the line list is complete; the maximum lower state energy involved in a transition; the maximum upper state energy involved in a transition; the total number of transitions and the total number of transitions between marvelised states. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c} \hline \(i\) & \(E\) & \(g_{\mathrm{tot}}\) & \(J\) & unc & \(\tau\) & \(\mathrm{e/f}\) & \(\Gamma_{\mathrm{rve}}\) & No. & Isomer & \(\nu_{1}\) & \(\nu_{2}\) & \(L_{2}\) & \(G\) & \(U\) & \(K\) & Source Tag \\ \hline 1 & 0.000000 & 0 & 0 & 0.000000 & nan & e & A1’ & 1 & nan & 0 & 0 & 0 & 0 & 0 & Ma \\ 2 & 2521.410484 & 2 & 0 & 0.000133 & 8.4474e-03 & e & E’ & 1 & p & 0 & 1 & 1 & 1 & m & 0 & Ma \\ 3 & 4998.052947 & 2 & 0 & 0.010004 & 2.4871e-03 & e & E’ & 2 & p & 0 & 2 & 2 & 2 & m & 0 & Ma \\ 4 & 5554.06100 & 2 & 0 & 0.010000 & 7.3144e-03 & e & E’ & 3 & P & 1 & 1 & 1 & 1 & m & 0 & Ca \\ 7 & 7005.974780 & 2 & 0 & 0.010000 & 3.1660e-03 & e & E’ & 4 & P & 0 & 3 & 1 & 1 & m & 0 & Ca \\ 6 & 7870.229810 & 2 & 0 & 0.010000 & 2.5047e-03 & e & E’ & 5 & P & 1 & 2 & 2 & 2 & 0 & Ca \\ 7 & 8488.013160 & 2 & 0 & 0.010000 & 6.6336e-03 & e & E’ & 6 & P & 2 & 1 & 1 & 1 & m & 0 & Ca \\ 8 & 9113.041390 & 2 & 0 & 0.010000 & 2.0836e-03 & e & E’ & 7 & P & 0 & 4 & 2 & 2 & m & 0 & Ca \\ 9 & 9663.699850 & 2 & 0 & 0.010000 & 1.1122e-03 & e & E’ & 8 & P & 1 & 3 & 1 & 1 & m & 0 & Ca \\ 10 & 9997.183130 & 2 & 0 & 0.010000 & 1.0631e-03 & e & E’ & 9 & P & 0 & 4 & 4 & 4 & m & 0 & Ca \\ \hline \end{tabular} \end{table} Table 8: Excerpts from the H\({}^{+}_{3}\) and D\({}^{+}_{3}\) states files, using the format defined by Tennyson et al. (2013). Note that the zero-energy level of H\({}^{+}_{3}\) with A\({}^{\prime}_{1}\) symmetry does not exist and hence has “nan” entries for its lifetime and isomer. Any row for which the \(\nu_{1}\), \(\nu_{2}\), \(L_{2}\), \(G\), \(U\) or \(K\) assignment is not known will likewise have “nan” entries in these columns. considered, as show in Eq. (3): \[z=\sum_{i}g_{\rm tot,i}exp\left(-\frac{E_{i}}{k_{B}T}\right), \tag{3}\] where \(g_{\rm tot}\) is the total degeneracy of the \(i\)th level, \(E_{i}\) its energy, \(k_{B}\) is the Boltzmann constant and T is the temperature. The total degeneracy of a level depends directly on the nuclear spin statistical weight, \(g_{\rm nn}\): \[g_{\rm tot}=g_{\rm ns}\left(2J+1\right). \tag{4}\] The nuclear spin statistical weights represent the number of nuclear spin functions that yield each symmetry. We follow the ExoMol and HITRAN convention of including the full nuclear spin degeneracy in our partition functions. In the case of H\({}_{3}^{+}\), where each atom consists of one proton with \(I=\frac{1}{2}\), Fermi statistics apply which results in no allowed configurations corresponding to the \(\Lambda_{1}^{\prime}\) or \(\Lambda_{1}^{\prime\prime}\) representations. Accordingly, the \(g_{\rm ns}\) values for these representations of H\({}_{3}^{+}\) are 0, as shown in Table 1. 
For D\({}_{3}^{+}\) however, each Deuterium atom has \(I=1\) and is hence a boson, meaning additional representations can occur that give rise to the "meta" nuclear spin isomer. Our mappings of the C\({}_{2\rm v}\) representations output by DVR3D to the full D\({}_{3\rm h}\) symmetry are complete for all states presented in the H\({}_{3}^{+}\) line list, but not for the states of D\({}_{3}^{+}\). Hence, we must also consider the appropriate statistical weights to use when determining the degeneracy of the D\({}_{3}^{+}\) states left with C\({}_{2\rm v}\) symmetries. As the D\({}_{3h}\) representations A\({}_{1}^{\prime}\), A\({}_{2}^{\prime\prime}\), A\({}_{2}^{\prime\prime}\), E\({}^{\prime\prime}\) correspond to A\({}_{1}\), A\({}_{2}\), B\({}_{2}\), B\({}_{1}\), A\({}_{1}\oplus\) B\({}_{2}\), A\({}_{2}\oplus\) B\({}_{1}\) respectively under C\({}_{2\rm v}\) symmetry, we can calculate population weighted average statistical weights for each C\({}_{2\rm v}\) representation. These are calculated knowing that approximately two thirds of all D\({}_{3}^{+}\) levels will be E\({}^{\prime}\) and E\({}^{\prime\prime}\), due to the constraint that A\({}_{1}^{\prime}\), A\({}_{1}^{\prime\prime}\), A\({}_{2}^{\prime}\) and A\({}_{2}^{\prime\prime}\) levels only occur when \(G=3n\), where \(n\) is an integer; see also Berthlinger et al. (1992). Accordingly, for the A\({}_{1}\) and A\({}_{2}\) representations in C\({}_{2\rm v}\) symmetry, one third will be A\({}_{1}^{\prime}\) or A\({}_{1}^{\prime\prime}\) under D\({}_{3h}\) symmetry while the remaining two thirds will be E\({}^{\prime}\) and E\({}^{\prime\prime}\). The equivalent population ratio is true for B\({}_{1}\) and B\({}_{2}\) states and the corresponding A\({}_{2}^{\prime}\) and A\({}_{2}^{\prime\prime}\) representations. These population ratios are then weighted using the weights from Table 1, \begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \(i\) & \(E\) & \(g_{\rm tot}\) & \(J\) & unc & \(\tau\) & +/- & \(\Gamma_{\rm rev}\) & No. 
& Isomer & \(\nu_{1}\) & \(\nu_{2}\) & \(\nu_{3}\) & \(K_{a}\) & \(K_{c}\) & Source Tag \\ \hline \multicolumn{11}{c}{H\({}_{2}\)D\({}^{+}\)} \\ \hline 1 & 0.000000 & 3 & 0 & 0.0000000 & inf & + & A1 & 1 & P & 0 & 0 & 0 & 0 & 0 & Ma \\ 2 & 2206.8771127 & 3 & 0 & 0.0050000 & 5.7143e-02 & + & A1 & 2 & P & 0 & 1 & 0 & 0 & 0 & Ma \\ 3 & 2992.5022357 & 3 & 0 & 0.0000050 & 1.8706e-02 & + & A1 & 3 & P & 1 & 0 & 0 & 0 & 0 & Ma \\ 4 & 4287.4732000 & 3 & 0 & 0.2000000 & 1.8812e-02 & + & A1 & 4 & P & 0 & 2 & 0 & 0 & 0 & Ca \\ 5 & 4602.6196400 & 3 & 0 & 0.2000000 & 3.1868e-03 & + & A1 & 5 & P & 0 & 0 & 2 & 0 & 0 & Ca \\ 6 & 5039.7666160 & 3 & 0 & 0.2000000 & 1.9520e-02 & + & A1 & 6 & P & 1 & 1 & 0 & 0 & 0 & Ca \\ 7 & 5877.2574100 & 3 & 0 & 0.200000 & 1.0870e-02 & + & A1 & 7 & P & 2 & 0 & 0 & 0 & 0 & Ca \\ 8 & 6287.6671127 & 3 & 0 & 0.0020000 & 6.4627e-03 & + & A1 & 8 & P & 0 & 3 & 0 & 0 & 0 & Ma \\ 9 & 6645.7008700 & 3 & 0 & 0.3000000 & 2.5443e-03 & + & A1 & 9 & P & 0 & 1 & 2 & 0 & 0 & Ca \\ 10 & 6991.5781127 & 3 & 0 & 0.0020000 & 9.4938e-03 & + & A1 & 10 & P & 1 & 2 & 0 & 0 & 0 & Ma \\ \hline \multicolumn{11}{c}{D\({}_{2}\)B\({}^{+}\)} \\ \hline 1 & 0.0000000 & 12 & 0 & 0.0000000 & inf & + & A1 & 1 & o & 0 & 0 & 0 & 0 & 0 & Ma \\ 2 & 1968.1622648 & 12 & 0 & 0.0050000 & 1.9317e-02 & + & A1 & 2 & o & 1 & 0 & 0 & 0 & Ma \\ 3 & 2736.9754969 & 12 & 0 & 0.0000040 & 1.1633e-02 & + & A1 & 3 & o & 1 & 0 & 0 & 0 & Ma \\ 4 & 3821.2081607 & 12 & 0 & 0.1000000 & 1.2272e-02 & + & A1 & 4 & o & nan & nan & nan & nan & Ca \\ 5 & 4042.7712648 & 12 & 0 & 0.0090000 & 1.2305e-02 & + & A1 & 5 & o & 0 & 2 & 0 & 0 & Ma \\ 6 & 4648.7587400 & 12 & 0 & 0.2000000 & 6.4298e-03 & + & A1 & 6 & o & nan & nan & nan & nan & Ca \\ 7 & 5385.3522270 & 12 & 0 & 0.2000000 & 6.0840e-03 & + & A1 & 7 & o & nan & nan & nan & nan & Ca \\ 8 & 5579.1928202 & 12 & 0 & 0.2000000 & 8.7066e-03 & + & A1 & 8 & o & nan & nan & nan & nan & Ca \\ 9 & 6008.5163100 & 12 & 0 & 0.3000000 & 4.8418e-03 & + & A1 & 9 & o & nan & nan & nan & nan & Ca \\ 10 & 6432.5596700 & 12 & 0 & 0.3000000 & 6.6492e-03 & + & A1 & 10 & o & nan & nan & nan & nan & Ca \\ \hline \multicolumn{11}{c}{\(i\): State counting number;} \\ \multicolumn{11}{c}{\(\hat{E}\): Term value (in cm\({}^{-1}\));} \\ \multicolumn{11}{c}{\(g_{\rm tot}\): Total state degeneracy;} \\ \multicolumn{11}{c}{\(J\): Total angular momentum quantum number;} \\ \multicolumn{11}{c}{unc: Estimated uncertainty of energy level (in cm\({}^{-1}\));} \\ \multicolumn{11}{c} \(\tau\): Radiative lifetime (in seconds);} \\ \multicolumn{11}{c}{+/-: Total parity;} \\ \multicolumn{11}{c}{\(\Gamma_{\rm rev}\): C\({}_{2\rm v}\) symmetry group;} \\ \multicolumn{11}{c}{No: Symmetry group block counting number;} \\ \multicolumn{11}{c}{Isomer: Nuclear spin isomer;} \\ \multicolumn{11}{c}{\(\nu_{1}\): Symmetric stretching quantum number;} \\ \multicolumn yielding the \(g_{\rm ns}\) values shown in Table 11. These adjusted weights were used to calculate the total degeneracies of the states of the D\({}_{3}^{+}\) line list for which C\({}_{2\rm v}\) symmetries are used and are included in the new states file. Partition functions are given in Table 12 for a series of temperatures up to the value of T\({}_{\rm max}\) for each line list. Partitions for D\({}_{3}^{+}\) had previously been calculated by Ramanall & Tennyson (2004) and their values are in close agreement with those presented here. For H\({}_{2}\)D\({}^{+}\) however, the partition functions differ from those given by Sochi & Tennyson (2010). 
This is due to the different nuclear spin statistical weights used in their earlier work that did not consider the spin of the Denterium atom. The degeneracies given in the H\({}_{2}\)D\({}^{+}\) line list have been updated to use the nuclear spin statistical weights quoted in Table 1 and are thus consistent with the partition function computed using Eq. (3) and the same convention. ## 4 Spectra ### Rotational Spectra Purely rotational transitions within the vibrational ground states are important for detections of H\({}_{3}^{+}\) and particularly its deuterated isotopologues in the interstellar medium. This is due to the low temperatures in such regions driving the majority of the level populations to low-energy states. Predicted transition frequencies for such transitions are given in Table 13 for H\({}_{3}^{+}\) and D\({}_{3}^{+}\) and in Table 14 for H\({}_{2}\)D\({}^{+}\) and D\({}_{2}\)H\({}^{+}\), based on the term energies derived from the marvel networks presented here. While the purely rotational transitions of H\({}_{3}^{+}\) are "forbidden" due to the lack of a permanent electric dipole moment, it is believed that a small, temporary dipole moment can be induced due to the distortion of the molecule from equilibrium geometry under rotation about the C\({}_{2}\) molecular axis (Pan & Oka, 1986; Miller & Tennyson, 1988). The D\({}_{3}^{+}\) states file contains 43 extremely long-lived states with radiative lifetimes greater than \(10^{10}\) s and can be considered meta-stable. In the extreme case, two states are calculated to have radiative lifetimes greater than \(10^{18}\) s, a duration greater than the current age of the universe. All of these meta-stable states are in the vibrational ground state and have \(G=J\) or \(G=J-1\). The same is true for the meta-stable states of H\({}_{3}^{+}\), though only one of their radiative lifetimes is in excess of \(10^{10}\) s. In both species, this can cause molecules to become "trapped" in these states in collision free environments with consequences for both laboratory measurements (Kreckel et al., 2004) and for possible maser action. ### Mid and Far Infrared Spectra Fig. 1 shows example stick spectra for H\({}_{3}^{+}\) and its deuterated isotopologues in the far infrared region from 10 - 1000 cm\({}^{-1}\). These spectra were generated using the program ExoCross(Yurchenko et al., 2018) and were computed for a temperature of 200 K. The predicted transitions frequencies listed in Tables 13 and 14 that are expected to be of importance in the ISM fall within this range. Likewise, spectra were computed for the mid infrared region between 1000 - 5000 cm\({}^{-1}\) for all species considered here and are shown in Fig. 2. These cover several wavelength ranges where H\({}_{3}^{+}\) has been observed in Jupiter's aurorae, such as the K band region centred at 2.2 \(\mu\)m (Uno et al., 2014) and the L band centred on 3.5 \(\mu\)m (Baron et al., 1991). This also covers the wavelength regions observed using the Jupiter infrared Auroral Mapper (JIRAM) instrument onboard NASA's Juno spacecraft (Dinelli et al., 2019; Migliorini et al., 2019) and those covered by the Near infrared Camera (NIRCam) long-wavelength channel onboard JWST. Comparisons between computed spectra and real observations cannot be made for H\({}_{2}\)D\({}^{+}\), D\({}_{2}\)H\({}^{+}\) and D\({}_{3}^{+}\), as no measured transition intensities have been published. 
McKellar & \begin{table} \begin{tabular}{c c c} \hline \hline \(\Gamma_{\rm vvo}\) & \(g_{\rm ns}\) \\ \hline A\({}_{1}\), A\({}_{2}\) & 6 \\ B\({}_{1}\), B\({}_{2}\) & 3 \\ \hline \end{tabular} \end{table} Table 11: Adjusted nuclear spin statistical weights for the levels of D\({}_{3}^{+}\) for which C\({}_{2\rm v}\) symmetries are used. \begin{table} \begin{tabular}{c c c c c} \hline \hline & D\({}_{3}^{+}\) & H\({}_{2}\)D\({}^{+}\) & D\({}_{2}\)H\({}^{+}\) & D\({}_{3}^{+}\) \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 12: The partition functions for each new line list, calculated for a range of temperatures up to each line list’s maximum temperature, given in Table 7. Watson (1998) presented infrared absorption spectra of the \(\nu_{2}\) fundamental band of H\({}_{3}^{+}\) and provided integrated intensities, which were well reproduced by the original line list (Mizus et al., 2017). ## 5 Conclusion We have updated three marvel networks for H\({}_{3}^{+}\), H\({}_{2}\)D\({}^{+}\) and D\({}_{2}\)H\({}^{+}\) to include all currently published spectroscopic data for these molecules. We have also performed variational nuclear motion calculations using the program DVR3D for D\({}_{2}\)H\({}^{+}\) and D\({}_{3}^{+}\) to produce the new MiZo line lists. The empirical energy levels derived from the marvel networks have been used to update the calculated levels of their respective molecules. This allows for the subset of transitions involving these marvelised energies to be determined to much higher accuracy, making the use of these line lists well suited for high-resolution spectroscopy. The D\({}_{3}^{+}\) calculations have been combined with a set of energies derived from effective Hamiltonian calculations computed using experimentally determined molecular constants. Given these effective Hamiltonian constants were derived from transitions in infrared bands, the new D\({}_{3}^{+}\) line list is best suited for infrared studies. Overall, the new D\({}_{3}^{+}\) line list is of lower resolution than the other three, due to the lack of marvelised energy levels. Hyperfine effects are known to be present in the spectra of H\({}_{3}^{+}\) and its deuterated isotopologues, due to the nuclear spin of the component protons and deuterons (Jensen et al., 1997). These hyperfine splittings have not yet been observed however in either an astrophysical setting or in the laboratory. Were experiments to be conducted to measure the hyperfine-resolved spectra of the species considered here, it would enable the construction of a hyperfine-resolved marvel network and subsequently line lists. Similarly, any further measurements of hyperfine-unresolved spectra could be added to the current marvel network to further constrain the resultant empirical energy levels and hence improve the accuracy of the line lists. If a sufficiently large number of spectra were observed for D\({}_{3}^{+}\) to form a well-connected spectroscopic network, it would enable us to further update the D\({}_{3}^{+}\) line list with marvel energies to a high-resolution standard. The four line lists presented here, each consisting of a states file and transitions file, are made available via www.exomol.com. ## Acknowledgements We thank Tibor Furtenbacher and Attila Csaszar for supplying the marvel4 code and for helpful discussions during the course of this work. 
This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme through Advance Grant number 883830 and the UK STFC under grant ST/R000476/1. BP acknowledges support from the US National Science Foundation (CHE-1665370), and the Robert A. Welch Foundation (D-1523). NFZ and OLP acknowledge support by State Project IAP RAS No. 0030-2021-0016. JS is grateful to NKFIH for support (PD142580). Figure 1: Far infrared spectra for H\({}_{3}^{+}\) and its deuterated isotopologues, computed using the program ExoCross(Yurchenko et al., 2018) at 200 K between 10 - 1000 cm\({}^{-1}\) (1 mm - 10 \(\mu\)m). ## Data Availability The updated MARVEL input files and resulting output energy levels are given as supporting material. All other data are available via the www.exomol.com website.
2308.04088
Suspending droplets beyond the Rayleigh limit: The interplay of acoustic and gravity forces
In this work, we experimentally investigate the suspension behavior of droplets subjected to standing acoustic waves. We focus on the droplet sizes beyond the Rayleigh limit, i.e., when the droplet size is comparable to the wavelength of the acoustic wave. We show that an acoustic field can disrupt the uniform motion of aqueous droplets in oil and cause them to either suspend or settle, depending on the interplay between acoustic and gravity forces. Remarkably, in contrast to droplets within the Rayleigh limit, the critical acoustic power or minimum pressure amplitude required to suspend droplets beyond the Rayleigh limit is dependent on droplet size. As the droplet size increases, the critical acosutic power increases significantly. Building upon this understanding, a novel sorting method is proposed based on critical acoustic power.
Jeyapradhap Thirisangu, E Hemachandran, Karthick Subramani
2023-08-08T07:07:42Z
http://arxiv.org/abs/2308.04088v1
# Suspending droplets beyond the Rayleigh limit: The interplay of acoustic and gravity forces ###### Abstract In this work, we experimentally investigate the suspension behavior of droplets subjected to standing acoustic waves. We focus on the droplet sizes beyond the Rayleigh limit, i.e., when the droplet size is comparable to the wavelength of the acoustic wave. We show that an acoustic field can disrupt the uniform motion of aqueous droplets in oil and cause them to either suspend or settle, depending on the interplay between acoustic and gravity forces. Remarkably, in contrast to droplets within the Rayleigh limit, the critical acoustic power or minimum pressure amplitude required to suspend droplets beyond the Rayleigh limit is dependent on droplet size. As the droplet size increases, the critical acoustic power increases significantly. Building upon this understanding, a novel sorting method is proposed based on critical acoustic power. ## I Introduction The seminal concept of manipulating matter using acoustic waves was first demonstrated by Kundt et al. [1] during the later part of the 19th century. To explain the above phenomenon, in 1934, King [2] conducted a detailed theoretical study and provided the expression of the radiation pressure exerted on a rigid sphere without considering the compressibility. Subsequently in 1955, Yosioka and Kawasima [3] expanded the King's theoretical framework by incorporating compressible spheres and achieved good agreement with the experimental observations of air bubbles in the water. Gor'kov [4] developed an elegant approach by formulating the acoustic radiation force as the gradient of a potential (now commonly referred to as the Gor'kov Potential) to replicate the results of Yosioka and Kawasima. This potential is dependent on the time-averaged kinetic and potential energies of the acoustic fields. Later, Eller [5] reported theoretical analysis and experimental results on trapping the air bubble in the liquid medium under the acoustic standing wave against the upward buoyancy force. Furthermore, Crum [6] conducted a comprehensive theoretical and experimental investigation of the acoustic force acting on small liquid droplets (paraldehyde, hexane, benzene, toluene, chlorobenzene, and carbon tetrachloride) introduced in water under the influence of a standing acoustic wave. Crum successfully suspended the small liquid droplets using acoustic force against gravity and established that the minimum pressure amplitude required to suspend the droplet is independent of the droplet size. Coakley et al. [7] demonstrated cell manipulation techniques and conducted theoretical analysis on the effects of acoustic pressure on the suspension position of the cells. Following the above fundamental works of acoustic radiation force (ARF) acting on particles, droplets, and cells, several practical applications have been demonstrated in the past two decades in various fields such as biological [8; 9; 10; 11], medical [12; 13; 14; 15], food [16; 17] and chemical sciences [18; 19; 20]. Recently, by extending Crum's work, Luo et al. [21; 22] experimentally investigated the effects of droplet size, acoustic pressure, frequency, and density ratio on the suspension characteristics of droplets. Despite the extensive literature on acoustic radiation force (ARF) experienced by the particles (beads/cells/droplets), existing works including those mentioned above are restricted to the Rayleigh limit (\(a<<\lambda\), i.e. 
particle size '\(a\)' much smaller than the wavelength '\(\lambda\)'). Thus, the behavior of particles above the Rayleigh limit remains largely unexplored. Following are the recent works, that address the acoustic radiation force acting on the large particles. Baasch et al. [23] theoretically investigated the acoustic radiation force acting on larger particles and droplets of size up to \(a/\lambda\leq 0.35\). Comparing their numerical results with ARF for small particles obtained from the Gor'kov potential [4], they showed that the ARF equation overestimates the force as the size of the droplets or particles departs from the Rayleigh limit. Ospina et al. [24] investigated the acoustic levitation of polystyrene particles in air using a symmetric concentric levitator and they experimentally found that the smaller particles, less than half the wavelength, are trapped along the axis around pressure nodes, while larger particles are trapped nearer to pressure antinodes. In this work, we experimentally investigate the suspension behavior of water droplets in an oil medium subjected to standing acoustic waves, where the droplet size is of the same order as the wavelength (\(a\sim\lambda\)). we particularly focus on the regime where the dynamics of the droplet is only governed by the interplay between acoustic force and the gravity force. For the droplet size beyond the Rayleigh limit, we show that the critical acoustic power (\(P_{cr}\)) required to suspend the droplet increases with the droplet size. This is in contrast to the case of droplets within the Rayleigh limit where the critical acoustic power required is independent of the droplet size. Furthermore, the average velocity and settling time of the droplet are also experimentally investigated by varying the acoustic power up to \(P_{cr}\). Finally, we demonstrate the novel sorting method for droplets based on critical power. Our study provides new insights into the suspension characteristics of droplets beyond the Rayleigh limit and can open up new avenes for the development of sophisticated droplet sorting methods using acoustic fields. ## II Physics of the problem When a particle/droplet of size \(a\ll\lambda\) is subjected to a standing acoustic wave along the \(y\)-direction as shown in the Fig. 1, the primary radiation force (\(F_{ac}\)) acting on the particle/droplet is expressed as [25, 26, 27, 28] \[F_{ac}=4\pi ka^{3}\varphi E_{ac}\sin(2ky), \tag{1a}\] \[\varphi=\frac{1}{3}\left(\frac{5\;\widetilde{\rho}-2}{2\widetilde{\rho}+1}- \frac{1}{\widetilde{\rho}\;\widetilde{c}^{2}}\right),\] (1b) \[E_{ac}=p_{a}^{2}/4\rho_{0}c_{0}^{2}. \tag{1c}\] Where \(a\) is the droplet radius, \(k=2\pi/\lambda\) is the wave number, \(\lambda\) is the wavelength of the wave, \(y\) is the position of the droplet relative to the pressure node, \(\varphi\) is the acoustic contrast factor, \(E_{ac}\) is the acoustic energy density, \(p_{a}\) is the acoustic pressure amplitude, \(\widetilde{\rho}\) is the ratio of the density of the particle (\(\rho_{p}\)) to the density of the continuous medium (\(\rho_{0}\)), and \(\widetilde{c}\) is the ratio of the speed of sound of particle (\(c_{p}\)) to the speed of sound of the continuous medium (\(c_{0}\)). When \(\varphi<0\), the droplet moves toward the pressure antinode; when \(\varphi>0\), it moves to the pressure node. Hereafter we refer to the pressure node and pressure antinode as simply node (N) and antinode (AN). 
The magnitude of the acoustic force is zero at nodes and anti-nodes within the acoustic field, while it reaches its maximum at the midpoint between the node and anti-node as illustrated in Fig. 1. The nature of the Eqs. (1) is clearly illustrated in Fig. 1. Along with the acoustic force, gravity also determines the dynamics of the droplet. Thus, the net effect of the gravity and buoyancy acting on a droplet is given by, \[F_{g}=-\frac{4}{3}\pi(\rho_{w}-\rho_{0})a^{3}g\;. \tag{2}\] Where \(F_{g}\) is the net gravity and g is the gravitational acceleration along the negative \(y\) direction. Unlike the gravity force given in Eq. (2) (which is uniform and always acts downward along the \(y\) direction), the acoustic force (Eqs. (1)) is non-uniform and its direction depends on the position of the particle. If the droplet moves in the continuous medium, it experiences the opposing drag force \(F_{\mu}\) given by Hadamard-Rybczynski, [29] \[F_{\mu}=-4\pi\left(\frac{1+3\beta/2}{1+\beta}\right)\mu_{0}aV\;, \tag{3}\] where \(\beta\) is the ratio of viscosity of the droplet (\(\mu_{\rm w}\)) to the viscosity of the continuous phase (\(\mu_{0}\)) and \(V\) is the velocity of the droplet. Under the assumption that the inertial force is negligible, the balance between the different forces described above can be expressed as follows: \[F_{g}+F_{ac}+F_{\mu}=0. \tag{4}\] In the presence of both gravity and an acoustic field, the dynamics of droplets are governed by the interplay between these forces, resulting in either the suspension or settling of the droplets within the medium. The velocity of the settling droplet under the influence of these forces can be calculated from Eq. (4). The velocity becomes zero when the droplet is suspended in the continuous medium, indicating a balance between the gravity force (\(F_{g}\)) and the acoustic force (\(F_{ac}\)). This can be mathematically represented by substituting \(F_{\mu}=0\) (\(V=0\)) in the Eq. (4), \[F_{g}+F_{ac}=0. \tag{5}\] The above equation clearly shows that droplet/particle suspends only in the positive (upward) acoustic force region in Fig. 1. By substituting Eqs. (1) and (2) into Eq. (5), the acoustic pressure amplitude (\(p_{a}\)) required for droplet suspen Figure 1: Migration of particles in the standing acoustic field given by Eq. 1. a) \(F_{ac}\) acting on the positive contrast particles placed at different locations causes them to move towards the nearest node. b) \(F_{ac}\) acting on the negative contrast particles placed at different locations causes them to move towards the nearest antinode. The dotted line indicates pressure variation in the \(y\) direction. sion can be obtained. The minimum pressure amplitude (\(p_{min}\)) necessary for suspension occurs at the position where the upward acoustic force is maximum (i.e. \(sin(2ky)=1\)), \[p_{\min}=\sqrt{\frac{2\lambda\rho_{\rm o}c_{0}^{2}(\rho_{\rm w}-\rho_{\rm o})g}{ 3\pi\varphi}}. \tag{6}\] The relationships between acoustic power (\(P_{ac}\)), acoustic energy density (\(E_{ac}\)), and pressure amplitude (\(p_{a}\)) can be expressed as [30; 31; 28] \[P_{ac}\propto E_{ac}\propto p_{a}^{2}. \tag{7}\] From the Eq. (6) and Eq. (7), the following result can be inferred, \[P_{min}\propto E_{min}\propto p_{min}^{2}\neq f(a). \tag{8}\] From the above equations, it is evident that the minimum acoustic power (\(P_{min}\)) or minimum acoustic energy density (\(E_{min}\)) or minimum pressure amplitude (\(p_{min}\)) required to suspend the droplet is independent of the droplet size. 
This is because any change in droplet size scales both the acoustic force (\(F_{ac}\)) and the force of gravity (\(F_{g}\)) in the same proportion (\(a^{3}\)). It is important to note that the above formula and discussions are valid only if the droplet size is within the Rayleigh limit (\(a<<\lambda\)). Now, we proceed to investigate the behavior of droplets beyond the Rayleigh limit under the acoustic fields and gravity. ## III Materials and methods In this study, mineral oil (SRL chemical, India) (\(\rho_{p}=857.5\)\(kg/m^{3}\), \(c_{p}=1440\)\(m/s\), and \(\mu_{\rm p}=26.5\)\(mP\)as) is employed as the continuous medium and Dyed DI water (\(\rho_{o}=1000\)\(kg/m^{3}\), \(c_{o}=1481\)\(m/s\), and \(\mu_{\rm o}=1\)\(mPa.s\)) as the droplet/dispersed medium. Experiments are performed by introducing the above fluids in a quartz rectangular channel of a cross-section: 8 mm width and 6 mm breadth. The height of the channel (\(H\)) along the \(y\) direction is 20 mm, the bottom of the channel is sealed with a piezoelectric transducer PZT SP-4 (Sparkler Cermatics, India) using Epoxy glue, and the top is opened to the atmosphere. The outer surface of the quartz glass channel is coated with Polydimethylsiloxane (PDMS) to improve its optical clarity. The transducer is actuated to introduce acoustic fields in the fluid domain by means of electrical excitation provided by a power amplifier (AR RF/Microwave Instrumentation 50U1000) and a function generator (Tektronix AFG1022). The power input (\(P\)) to the transducer is obtained from the voltage and current measurement using the Digital Storage Oscilloscope (Tektronix TDS 2024C) and current probe (Yokogawa 701933) respectively. The experiments are conducted at the frequency (\(f\)) of 720 \(kHz\). To capture the motion of droplets, a high-speed camera (Phantom VEO 746, USA) is used along with an LED light source (Phantom, USA) for illumination. For each test, mineral oil is initially introduced into the quartz glass channel followed by applying an acoustic wave field, and subsequently, a water droplet is introduced using the syringe for the study. The uncertainty of the distance measurement by the high-speed camera is \(\pm\) 0.031 mm (2 pixels). The input power (\(P\)) supplied to the square transducer of 25 mm X 25 mm is partially transferred (\(P\propto P_{ac}\)) to the fluid domain (8 mm X 6 mm) in contact as the acoustic power (\(P_{ac}\)) while the remaining input power is transferred to the glass. ## IV Results and discussion This section provides a comprehensive investigation of the trapping/suspension characteristics of aqueous droplets in an oil medium. The primary objective is to investigate how these droplets behave as their size approaches the order of the wavelength of the standing wave (beyond the Rayleigh limit). ### Interplay between gravity and acoustic fields The dynamics of the droplet is governed by the interplay between acoustic and gravity forces. The role of the interfacial tension force is neglected since both the gravity and acoustic forces applied are not enough to deform the droplets as observed in the experiments (Figs. 2 & 3). Figure 2a illustrates the droplet behavior under the influence of gravity and acoustic fields while Fig. 2b displays the experimental results of a droplet of a specific size exposed to the acoustic field of varying power. In the absence of an acoustic field (Fig. 
2b.i), a higher-density droplet (water) in a lower-density medium (mineral oil) undergoes a uniform downward motion due to the balance between gravity and drag forces. Whereas, it is observed that the addition of an acoustic field disrupts the droplet's uniform motion. If the applied input power is strong enough to overcome the gravity, the droplet suspends (Fig. 2a.iv and Fig. 2b.iii). When the applied input power is insufficient, the droplet settles at a delayed time (Fig. 2a.iii and Fig. 2b.ii) compared to the settling time of the droplet in the absence of acoustic fields (Fig. 2a.ii and Fig. 2b.i). The aforementioned results are clearly explained below. Since the wavelength (\(\lambda=c_{0}/f=2.057\) mm) of the acoustic fields is much lesser than the height (\(H=20\) mm) of the domain \(\lambda\ll H\), it produces a series of nodes and anti-nodes in the fluid domain (Fig. 2a.iii and Fig. 2a.iv). This results in two alternating force regions, one is a positive acoustic force region where the acoustic force acts upward (the region below the node and above its nearest anti-node, as depicted in red in Fig. 2a.iii and Fig. 2a.iv) and the other is negative acoustic force region where the acoustic force acts downward (above the node and below its nearest anti-node, as depicted in green in the Fig. 2a.iii). From Fig. 2a, it is clear that when a water droplet is placed in the negative acoustic region, it will be pushed to the node since the acoustic force and gravity force acts in the downward direction. Once it comes below the node or positive acoustic region, the acoustic force starts acting upward direction opposing the gravity force. At this position, if the applied force is not enough to overcome gravity, then Figure 2: Suspension characteristics of the identical size of droplets subjected to varying input power. a) Schematic representation of droplets with and without acoustic fields. b) Experimental results of suspension of the droplets i) without acoustics, ii) when subjected to input power of 1.2 W which is less than the \(P_{cr}\), and iii) when subjected to more than the critical input power of 1.5 W. the droplet settles by passing through the series of nodes and anti-nodes (Fig. 2b.ii). In this settling process, the downward velocity of the droplet becomes non-uniform as the droplet velocity is more in the negative acoustic force region and less in the positive acoustic force region. Consequently, the droplet spends more time in the positive acoustic force region and less time in the negative acoustic force region which results in delayed settling time compared to the settling time of the droplet in the absence of an acoustic field. The above settling time delay in the presence of an acoustic field is explained more clearly as follows: let's assume the average acoustic force magnitude (\(|F_{ac}|=|F|/2\)) acting on the droplet is half of the gravity force (\(F_{g}=-F\)), the net force (\(F_{g}+F_{ac}\)) acting on the negative and positive acoustic force region become -\(3/2F\) and -\(1/2F\). If -\(V\), -\(l\), and \(t\) are the downward velocity, downward displacement, and settling time of the droplet in the absence of acoustic force, then the velocity in the negative and positive acoustic regions are -\(3/2V\) and -\(1/2V\) respectively. 
Thus, the time the droplet spends on the negative acoustic region (-\(l/2\)) and positive acoustic region (-\(l/2\)) are \(t/3\) and \(t\), the total time taken by the droplet for settling becomes \(4t/3\) as compared to the time \(t\) taken by the droplet in the absence of acoustic fields.It is observed that as the power increases, settling time increases when \(F_{ac}\) approaches the dominant \(F_{g}\). When the applied input power (\(P\geq P_{cr}\)) generates sufficient acoustic force to overcome gravity, the droplet suspends in the positive acoustic force region (Fig. 2a.iv). The above discussion is clearly evident in the experimental results shown in Fig. 2b, the droplet with a size of 0.7 mm is introduced into the channel at time \(t=0\) s. In the absence of acoustic fields (Fig. 2b.i), the droplet settles at 9 s due to the balance between gravity and drag forces. When an input power of 1.2 W is applied which is below the critical power (\(P<P_{cr}\)) (Fig. 2b.ii), the motion of droplets in a series of nodes and antinodes results in a delayed settling time of 17 s. When the input power is more than the critical input power of 1.5 W (\(P\geq P_{cr}\)) for the 0.7 mm droplet size, the acoustic force becomes sufficiently strong to overcome gravity, leading to the suspension of the droplet at the positive acoustic force region. ### Beyond the Rayleigh limit Figure 3 shows the experimental results on droplets of different sizes (beyond the Rayleigh limit) subjected to the acoustics fields. The results are remarkable, as the droplet size increases, the critical power \(P_{cr}\) required to suspend the droplet increases exponentially. This is in contrast to Eqs. (6) and (8) which predicts that the minimum power required to suspend the droplet is not the function of the droplet size. In Fig. 3b, the relationship between droplet size and critical acoustic power demonstrates distinct trends across different size ranges. For droplet sizes up to a quarter of the wavelength (\(a<\lambda/4\)), a slight increase in the critical acoustic power is observed as shown in Fig. 3b. Whereas, for droplet sizes larger than a quarter of the wavelength (\(a>\lambda/4\)), the critical acoustic power follows an exponential-like curve. For example, a droplet size of 0.6 mm diameter requires 0.8 W of critical acoustic power, and a droplet size of 0.9 mm diameter demands critical power of 4 W. When attempting to suspend droplets with a size \(a>\lambda/2\), the input power required increases quite significantly. However, at 5 W, cavitation occurred and limited further power increment in our experiment. The behavior of the droplets beyond the Rayleigh limit observed in Fig. 3b can be explained qualitatively to a large extent by assuming the bigger droplet is a collection of Rayleigh particles/droplets. By adopting this assumption, we can apply the small particle acoustic field to every point of this larger droplet (Fig. 1). The immediate consequences of the assumption are as follows: the force acting on the smaller droplet at the node is zero by Eqs. (1). Whereas, for the larger droplet placed at the node, the acoustic force acting on the upper portion of the droplet is negative and the lower portion of the droplet is positive as shown in Fig. 4a (position A). Thus, the net acoustic force acting on the droplet becomes zero. By using the above assumption, first, we proceed to explain the migration of the larger droplets \(a<\lambda/4\) and followed by Figure 3: Suspending the droplet beyond the Rayleigh limit. 
a) Experimental results of critical input power required to suspend the droplets of different sizes, i) \(a=0.38mm\), ii) \(a=0.7mm\), and iii) \(a=1.05mm\). b) Characterization of suspending, transition, and settling zones. The critical input power curve separates the settling and suspending zones. \(a>\lambda/4\). Let's assume the center of the droplet size of \(a<\lambda/4\) is initially introduced at the AN as shown in Fig. 4a (position A). The net \(F_{ac}\) acting on the droplet at position A is zero as \(+F_{ac}\) experienced by the upper portion is counterbalanced by the \(-F_{ac}\) of the bottom portion. Thus, the dominant gravity force moves the droplet downwards. At position B, the droplet moves downwards with increased velocity as the acoustic force on the entire droplet acts downwards supporting gravity. At position C also droplet continues to move downwards, as the force balance scenario is similar to Position A. At position D, as the acoustic force acting opposite to the direction of gravity force, the droplet can be suspended, if the applied acoustic power \(\geq P_{cr}\). The reason for the increase in the \(P_{cr}\) along with the droplet size can be explained as follows: First for droplets \(a<\lambda/4\), as the droplet becomes bigger the droplet volume is distributed in the lesser force magnitude region as shown in the Fig.4a. For a given power, the average Figure 4: Schematic representation of the interplay between the acoustic and gravity forces at different droplet positions. a) \(a<\frac{\lambda}{4}\), b) \(\frac{\lambda}{4}<d<\frac{\lambda}{2}\), and c) \(\frac{\lambda}{2}<d<\frac{3\lambda}{4}\). Figure 5: Analysis of net volume responsible for acoustic force for a different droplet sizes, a) \(a<\frac{\lambda}{4}\), b) \(\frac{\lambda}{4}<d<\frac{\lambda}{2}\), and c) \(\frac{\lambda}{2}<d<\frac{3\lambda}{4}\). Note: The formula used to calculate the volume of the portion of the sphere is given by: volume \(=\pi h^{2}(3R-h)/3^{32}\), where \(h\) represents the height of the cap and \(R\) denotes the radius of the sphere. acoustic force acting on the bigger droplet is less compared to the smaller droplet as shown in Fig. 4a. Thus \(P_{cr}\) for a bigger droplet will be slightly more compared to the smaller droplet. This explains the marginal increases in \(P_{cr}\) for the droplet size \(a<\frac{\lambda}{4}\) as observed in Fig.3. For the case of \(a<\lambda/4\), the droplet can be completely accommodated in the positive acoustic force region as shown in Fig. 4a (at position D). Similar to the droplet size \(a<\lambda/4\), droplets of the size of \(a>\lambda/4\) are also suspended in Position D shown in Fig. 4b & Fig. 4c where the major portion of the droplet is present in the positive acoustic force region. However in the case of the droplet size \(a>\lambda/4\), the droplet can't be completely accommodated in a positive acoustic force region, some portion of the droplet is always present in the negative acoustic force region. Thus to suspend the droplet size of \(a>\lambda/4\), the positive acoustic force acting on a portion(s) of the droplet not only opposes the gravity but must also counteract the negative acoustic force acting on the other portion(s) of the droplet as shown in Fig. 4b & c. Because of this reason, \(P_{cr}\) rises exponentially when the droplet size increases more than \(\lambda/4\). 
For \(a>\lambda/4\), the net volume responsible for the resultant acoustic force is significantly less than the total volume of the droplet which can be explained by the volume distribution analysis given below. The volume distribution analysis of different droplet size ranges \(a<\frac{\lambda}{4}\), \(\frac{\lambda}{4}<d<\frac{\lambda}{2}\), and \(\frac{\lambda}{2}<d<\frac{3\lambda}{4}\) are shown in the Fig. 5. The volume analysis presented here is approximate since the acoustic force magnitude is assumed to be uniform. The net volume of the droplet (\(V_{net}\)) responsible for the acoustic force is \(V_{net}=\left|V_{pos}-V_{neg}\right|\), where \(V_{pos}\) is the volume portion of the droplet in the positive acoustic force region and \(V_{neg}\) is the volume portion of the droplet in the negative acoustic force region. The variation of the \(V_{net}\) and the direction of force acting on it with respect to the position of the droplet is shown (solid line) in Fig. 5. The variation is shown for a \(\lambda/2\) cycle (from one AN to consecutive AN) and the same pattern repeats throughout the domain. Red and green dotted lines indicate the percentage of droplet volume in the positive and negative acoustic force region respectively. From Fig. 5a it is clear that for \(a\leq\lambda/4\), the whole droplet (\(V_{net}=100\%\)) volume experiences the positive acoustic force at the position D. Whereas for droplets with sizes between \(\frac{\lambda}{4}<d\leq\frac{\lambda}{2}\), the maximum \(V_{net}\) is significantly less than \(100\%\). For instance, if the droplet size is \(a=\lambda/2\), the maximum \(V_{net}\) experiencing the upward acoustic force is \(37.5\%\) of the total volume at position D (Fig. 5b). Similarly, when the droplet size between \(\frac{\lambda}{2}<d<\frac{3\lambda}{4}\), the \(V_{net}\) is even smaller compared to the previous case. For example, if the droplet size is \(a=\frac{13\lambda}{20}\), the maximum \(V_{net}\) experiencing the upward acoustic force is \(9.69\%\) of the total volume at position D (Fig. 5c). From this analysis, it is clear that as the droplet size increases acoustic force becomes ineffective (requires more power) in suspending the droplet against gravity. ### Settling time and average velocity In this section, we study the average velocity (\(V_{avg}\)) and settling time (\(t_{s}\)) of the droplet when the input power applied is less than \(P_{cr}\). From Fig. 6a, it is evident that the presence of acoustic fields delays the setting time of the droplet, as the input power increases the settling time increases. The reason for the delayed settling time is clearly explained in Section IV.1. From Fig. 6a, it can be inferred that the influence of input power on the settling time gets weaker as the droplet size increases since the droplet covers multiple positive and negative acoustic force regions. The average velocity shown in Fig. 6b is in line with the settling time results. The acoustic field reduces the average velocity of the droplet as compared to the uniform velocity of the droplet due to gravity without acoustic fields. This is attributed to the fact that the droplet spends more time in the negative acoustic force region compared to Figure 6: Experimental results of settling time (a) and average velocity (b) of the droplets of different sizes subjected to varying input power. Note: Applied input power is less than critical input power for the above experimental results. the positive acoustic force region. 
It is also to be noted that as the droplet size increases, the settling time decreases and average velocity increases for any input power including the zero power (without acoustics). ### Sorting of the droplets based on the critical acoustic power method If the size of the droplets is beyond the Rayleigh limit, it offers a novel sorting method called sorting based on critical power. Before proceeding to understand this method, it is crucial to understand the two conventional sorting methods widely used for small particles based on Eqs. (1): Size-based sorting and contrast factor-based sorting. In contrast factor-based sorting [33], acoustic radiation force is utilized to direct particles of positive contrast factor to node and negative contrast factor to anti-node. On the other hand, in size-based sorting [34], since ARF given in Eqs. (1) is proportional to \(a^{3}\), the larger particles tend to move faster towards the node or anti-node compared to the smaller particles. Size-based sorting is achieved by the timely collection of large particle from the node or antinode before smaller particle reaches there. It is important to note that if enough time is given, both the smaller and larger particles will eventually converge towards the node or anti-node. The proposed sorting method based on critical power is illustrated in Fig. 7a. Both the droplets settle if the applied power (\(P\)) is less than the critical power of both the droplets (\(P<P_{cr,B}<P_{cr,A}\)) (Fig. 7a.i). Droplet B suspends and droplet A settles (Fig. 7a.iii) if \(P_{cr,B}<P<P_{cr,A}\). In the case of \(P>P_{cr,A}>P_{cr,B}\), both droplets suspended as shown in Fig. 7a.iv. Here we propose a sorting method based on the condition illustrated in Fig. 7a.iii). The above condition can also be stated as follows: For the given input power, there exists a critical diameter (\(d_{cr}\)), whereby droplets with a diameter less than the \(d_{cr}\) suspend and droplets with size more than the \(d_{cr}\) settle (Fig. 7a.iii). Fig. 7b & Fig. 7c experimentally demonstrate sorting in diluted w/o emulsion and dense w/o emulsion respectively. In Fig. 7b, sorting occurs at an acoustic power of 1.5 W which corresponds to a critical diameter of 0.7 mm, and droplets of size smaller than \(a<0.7\) mm are suspended, while droplets of size larger than \(a>0.7\) mm (marked in a dotted circle) are settled within a few seconds. Similarly, in Fig. 7c (dense w/o emulsion) the droplets of size less than 0.8 mm and droplets of size greater than 0.8 mm (marked in a dotted circle) are sorted by applying an acoustic power of 2.5 W (\(d_{cr}=0.8\) mm). Remarkably, this sorting method is robust and works even in dense suspension where the particle-particle interaction is significant. ## V Conclusion In this work, we experimentally investigated the suspension behavior of droplets beyond the Rayleigh limit when subjected to standing acoustic waves. We showed that if the droplet size exceeds the Rayleigh limit, the critical acoustic power required to suspend the droplets against gravity strongly depends on the droplet size. The suspension characteristics of the droplet under different regimes were explained qualitatively by adopting the assumption that the larger droplet can be considered as a collection of Rayleigh particles/droplets. In addition, we also demonstrated the novel sorting of droplets using the critical power method. 
Our study provides new insights into the suspension characteristics of the droplets beyond the Rayleigh limit which will aid the development of advanced droplet sorting techniques using acoustic fields. To enhance our understanding of the current experimental results and explore droplet behavior beyond our experimental constraints, we are conducting numerical simulations, to be discussed in our forthcoming research. ###### Acknowledgements. This work is supported by the Department of Science & Technology - Science and Engineering Research Board (DST-SERB) via Grant No: SRG/2021/002180 and the Department of Science & Technology - Fund for Improvement of Science & Technology Infrastructure (DST-FIST) via Grant Figure 7: Droplet sorting based on the critical power method. a) Schematic representation. Experimental result of droplet sorting: b) in a diluted w/o emulsion and c) in a dense w/o emulsion. No: SR/FST/ET-I/2021/815. We express our gratitude to Mr. Aswinraj M from IIITDM for his support with the electrical circuit and measurements. ## Author Declarations ### Conflict of Interest The authors have no conflicts of interest to disclose. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2303.11091
Quantum (in)stability of maximally symmetric space-times
Classical gravity coupled to a CFT$_4$ (matter) is considered. The effect of the quantum dynamics of matter on gravity is studied around maximally symmetric spaces (flat, de Sitter and Anti de Sitter). The structure of the graviton propagator is modified and non-trivial poles appear due to matter quantum effects. The position and residues of such poles are mapped as a function of the relevant parameters, the central charge of the CFT$_4$, the two $R^2$ couplings of gravity as well as the curvature of the background space-time. The instabilities induced are determined. Such instabilities can be important in cosmology as they trigger the departure from de Sitter space and in some regions of parameters are more important than the well-known scalar instabilities. It is also determined when the presence of such instabilities is unreliable if the associated scales are larger than the ``species" cutoff of the gravitational theory.
Jewel K. Ghosh, Elias Kiritsis, Francesco Nitti, Valentin Nourry
2023-03-20T13:25:23Z
http://arxiv.org/abs/2303.11091v3
# Quantum (in)stability of maximally symmetric space-times ###### Abstract: Classical gravity coupled to a CFT\({}_{4}\) (matter) is considered. The effect of the quantum dynamics of matter on gravity is studied around maximally symmetric spaces (flat, de Sitter and Anti de Sitter). The structure of the graviton propagator is modified and non-trivial poles appear due to matter quantum effects. The position and residues of such poles are mapped as a function of the relevant parameters, the central charge of the CFT\({}_{4}\), the two \(R^{2}\) couplings of gravity as well as the curvature of the background space-time. The instabilities induced are determined. Such instabilities can be important in cosmology as they trigger the departure from de Sitter space and in some regions of parameters are more important than the well-known scalar instabilities. It is also determined when the presence of such instabilities is unreliable if the associated scales are larger than the "species" cutoff of the gravitational theory. + Footnote †: preprint: CCTP-2023-3 ITCP-IPP 2023/3 ###### Contents * 1 Introduction * 1.1 Summary and results * 1.2 Discussion * 2 The theory * 2.1 Setup * 2.2 Constructing the renormalized action * 2.3 The induced stress tensor * 2.4 Background solutions * 3 Bulk metric perturbations * 4 The boundary scalar perturbation * 4.1 Gauge fixing * 4.2 Scalar equation of motion * 4.3 Scalar tachyonic instabilities * 5 The spin-two spectral equations * 5.1 Flat-slicing * 5.2 dS-slicing * 5.3 AdS-slicing * 5.3.1 Dynamical gravity on one side * 5.3.2 Symmetric boundary conditions * 5.4 Identifying ghosts from poles in the propagator * 6 Tensor instabilities in pure gravity * 6.1 When are tensor ghosts light? * 7 Poles of the Minkowski spin-two propagator and stability * 8 Poles of the dS spin-two propagator and stability * 8.1 Numerial results for two typical sets of parameters * 8.2 Analytic results for tensor tachyonic modes in dS at large \(|\nu|\) * 8.3 Tachyonic and ghost-like instabilities for dS in parameter space * 9 Poles of the AdS spin-two propagator and stability * 9.1 Results for two typical sets of parameters * 9.2 Analytic results for tensor tachyonic modes in AdS in the large-\(|\nu|\) regime * 9.3 Infinite series of stable solutions * 9.4 Tachyons and ghosts in parameter space for the AdS case ## Acknowledgements * A Ghosts and tachyons in Effective Field Theory * A.1 A simple model * A.2 EFT * A.3 The IR expansion * B Renormalized action * C Comparison with the Starobinsky model * D AdS slicing coordinates * E Schrodinger problem in the bulk * F Flat space tachyonic time scale * G Tachyonic tensor eigenmodes in de Sitter * H Tachyonic tensor eigenmodes in anti de Sitter * I Asymptotic behaviour of Legendre functions * J Quadratic action * J.1 Scalar * J.2 Tensor * K Unphysical scalar mode due to gauge fixing * K.1 Non-diagonal components of the scalar equation * K.2 Pure gauge scalar * L Comparison with previous results for a dS boundary * L M. Further plots M.1 More snapshots for de Sitter M.2 More snapshots for AdS ## 1 Introduction The interplay between semiclassical gravity and the quantum effects of Quantum Field Theory (QFT) is a topic of research that has been in the spotlight for several decades. The most important area of applicability motivating these issues arise from cosmology. In cosmology, we treat gravity as semiclassical1 and we couple it with QFTs. 
The class of semiclassical metrics relevant in this case is cosmological metrics, with the most prominent example being the maximally symmetric cosmological metric, ie, de Sitter. Footnote 1: Exceptions exist where gravity is treated in perturbation theory, [34] or where the curvature is very strong and a more fundamental theory (an example is string theory) can take over. Other contexts also are relevant, like semiclassical effective actions of string theory with asymptotically AdS or flat asymptotics. In this context, quantum effects in gravity and matter seem to go hand in hand as they are controlled by the same underlying parameter, the string coupling constant. However, we understand that there are two types of quantum effects associated with string loops or the \(\alpha^{\prime}\) expansion, although their separation is "duality-frame" dependent. There are however limits, (known as double scaling limits) in which gravitational quantum effects can be made subleading to "matter" quantum effects. In the context of holography, relevant for AdS or asymptotically AdS spaces typically (bulk) gravity and matter are treated semiclassically, but subleading corrections in N involve quantum effects in both sectors. The case of QFTs on de Sitter space is an especially hot issue, as we believe that our universe has been near de Sitter at least twice during its history. Defining a QFT on de Sitter space and, in particular, answering questions about QFT backreaction on the geometry is a subtle issue. Perturbative field quantization (and renormalization) on fixed classical curved space-times is text-book material [1]. However, answering concrete questions about the observable effects of quantum fields backreacting on classical geometry is not straightforward. This concerns particularly theories which are gapless in the infrared, due to the presence of infrared divergences (and in most cases strong IR dynamics). It is well known that in classical GR, both de Sitter space and Flat Minkowski space are non-linearly stable, [2]-[6]. However, in the presence of quantum effects from matter, instabilities can appear. For the case of flat space this was established in [7]-[17]. In (quasi) de Sitter space, there are several other issues arising when one considers QFTs. In [18]-[21] a divergence of scalar correlators was observed at large times. This was addressed in [22, 23, 24] using a stochastic approach. The case of interacting massive scalar fields has been treated thoroughly more recently in [25, 26]. A systematic approach to compute corrections in the massless case is lacking and the problem remains still open. The accumulation of long wavelength fluctuations in an expanding universe is another issue that has been studied, starting with references [27, 28]. Their analysis was extended further in [29]. Another issue concerned the fact that the two-point function of a massless scalar in de Sitter space had to break de Sitter symmetry, due to the presence of a zero mode, [30, 31]. This issue is of a different nature and is more similar to the fact that in two dimensions, a massless scalar is IR singular. The resolution of this issue may be therefore similar: massless scalars are not good acceptable fields on de Sitter space2 as argued in [32]. Footnote 2: We shall later conclude in this paper that in all the theories we examine and which are all gapless, there is no breaking of de Sitter invariance, and the de Sitter invariant vacuum is chosen. 
The expectation that QFTs in de Sitter space render the manifold unstable at the non-linear level has been entertained for a long time [33]-[37]. In particular, destabilizing effects were expected to be most important for massless particles, and a gravity 2-loop computation in [34] suggested such an instability. Similar calculations with massless scalars implied similar effects [27, 28]; see however [38].

The topic of quantum effects has been revived after cosmological (CMB) data became precise, and in [39, 40, 41] it was argued that large time-dependent logs from the quantum effects of quantum fields could give large corrections to inflationary observables. A different approach in [42] provided different results. Therefore, the question of the consequences of the secular terms (growing with time) which arise in perturbation theory of a massless scalar field in the cosmological patch of de Sitter remains controversial. Do these contributions indicate an instability of de Sitter space against quantum perturbations? Or is this conclusion an artefact of finite orders in perturbation theory, which is expected to disappear once an appropriate resummation is performed (as is the case for infra-red effects in thermal perturbation theory)? A review of these developments and additional references can be found in [43].

Furthermore, the persistent difficulty of constructing de Sitter vacua in string theory, see [44] for a review, has led to the conjecture that de Sitter space cannot be attained in a weakly curved/coupled quantum theory of gravity [45].

There is another issue where de Sitter instabilities induced by quantum dynamics can be important: they trigger an exit from the inflationary regime, an essential ingredient of any inflationary model. Indeed, scalar instabilities triggered by coupling gravity to a CFT\({}_{4}\) provided an exit from inflation [46, 47, 48], induced by the conformal anomaly [49]. However, this exit was not good enough, and the model was modified to what is now called the Starobinsky model, where inflation is triggered by a (rather large) \(R^{2}\) term. This is a very successful model when compared to current data [50]. In that model, too, the exit from inflation is triggered by the unstable scalar mode, which in this context is the scalaron of \(R^{2}\) gravity.

So far, the quantum effects studied, and their backreaction on de Sitter and Minkowski spaces, involved weakly-coupled QFTs. The holographic gauge/gravity duality provides a way to tackle non-perturbative (large-\(N\)) four-dimensional quantum field theories by mapping them to higher-dimensional semiclassical General Relativity. Moreover, one controls the dynamics of holographic QFTs even when the manifold they are defined upon is curved. Holography has been applied to cosmological issues already in several works [51, 52, 48], [53]-[67].

One important arena where this technique can provide valuable information is when _four-dimensional_ gravity (described by Einstein General Relativity or its higher-derivative extensions) is coupled to a strongly coupled QFT. Among the issues that arise are, for example, the stability under small (metric and matter) perturbations, as well as the non-perturbative stability of cosmological backgrounds. The holographic approach allows for recasting these questions in terms of a classical higher-dimensional gravity theory which, via holography, captures the effects of the QFT coupled to a classical four-dimensional gravitational theory.
To be more specific, the holographic setup consists of two coupled sectors:

1. The holographic sector describes the strongly coupled (holographic) CFT, whose dual is living in a higher-dimensional space-time (_the bulk_) with metric \(\mathcal{G}_{ab}\).
2. The four-dimensional gravity sector is defined in terms of a metric \(g^{(0)}_{\omega\sigma}\). This metric plays the role of a boundary condition for \(\mathcal{G}_{ab}\), and it has no bulk dynamics. From the CFT point of view, it corresponds to the source of the field theory stress tensor.

The action describing classical 4d gravity coupled to the holographic QFT has the form:

\[S=S_{\text{grav}}[g^{(0)}_{\omega\sigma}]+S_{bulk}[\mathcal{G}_{ab},\ldots] \tag{1}\]

The first term can be taken to be the usual Einstein-Hilbert action, plus possibly higher curvature terms, and it has the effect of making \(g^{(0)}_{\omega\sigma}\) dynamical. The second term describes the higher-dimensional holographic dual of the QFT, and the dots represent bulk fields other than the metric. Both terms are treated classically, but the bulk action encodes holographically the full quantum dynamics of the dual field theory. The two sectors are coupled by the requirement that on the conformal boundary \({\cal G}_{\omega\sigma}\) asymptotes to \(g^{(0)}_{\omega\sigma}\).

"Integrating out" the CFT consists in evaluating \(S_{bulk}[{\cal G}_{ab},\ldots]\) on shell. This results in an effective gravitational action for \(g^{(0)}_{\omega\sigma}\) alone,

\[S_{\rm eff}[g^{(0)}_{\omega\sigma}]=S_{\rm grav}[g^{(0)}_{\omega\sigma}]+S^{on-shell}_{bulk}[g^{(0)}_{\omega\sigma},\ldots] \tag{2}\]

The second term in (2) is now a functional of the boundary value \(g^{(0)}_{\omega\sigma}\), i.e. the four-dimensional metric. Varying the effective action results in the semi-classical Einstein equation:

\[E_{\omega\sigma}=\langle T_{\omega\sigma}\rangle_{CFT} \tag{3}\]

where \(E_{\omega\sigma}\) is the variation of the first term in (1) (i.e. the Einstein tensor if \(S_{\rm grav}\) is purely GR) and \(\langle T_{\omega\sigma}\rangle_{CFT}\) is obtained by varying \(S^{on-shell}_{bulk}\) with respect to \(g^{(0)}_{\omega\sigma}\).

In a previous work [66], some of the authors have used the setup described above to address the issue of the non-perturbative existence of 4d de Sitter space coupled to a (gap-less) holographic field theory. For this, attention was limited to boundary metrics \(g^{(0)}_{\omega\sigma}\) of maximal symmetry. In this case, the effective action (2) takes the form of an effective \(f(R)\) theory,

\[S_{\rm eff}[g^{(0)}_{\omega\sigma}]=\int d^{4}x\sqrt{g^{(0)}}f(R), \tag{4}\]

where \(R\) is the Ricci scalar of \(g^{(0)}_{\omega\sigma}\). It was shown in [66] that such a theory generically still admits de Sitter solutions (albeit with a smaller cosmological constant than that of the "bare" 4d gravity) after the full quantum effects of the field theory are taken into account. The results of [66] suggest that, at least in the holographic context, IR effects from a QFT do not always destroy de Sitter space-time. However, that analysis could only be applied to _constant curvature_ 4d space-times. Therefore, it says nothing about the stability of the solutions under non-homogeneous perturbations. The form (4) of the effective action is only good for obtaining maximally symmetric solutions, and it would be misleading to expand it in perturbations around one such background.
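To make the last statement concrete, recall the standard constant-curvature condition for a generic \(f(R)\) theory (a textbook step, included here for orientation and not specific to [66]): on a maximally symmetric background, \(R_{\omega\sigma}=\frac{\bar{R}}{4}g^{(0)}_{\omega\sigma}\) and all covariant derivatives of \(R\) vanish, so the \(f(R)\) field equations reduce to the algebraic condition

\[f'(\bar{R})\,\frac{\bar{R}}{4}\,g^{(0)}_{\omega\sigma}-\frac{1}{2}f(\bar{R})\,g^{(0)}_{\omega\sigma}=0\qquad\Longrightarrow\qquad\bar{R}\,f'(\bar{R})=2f(\bar{R})\,.\]

The de Sitter solutions of [66] correspond to roots \(\bar{R}>0\) of this (generically transcendental) equation; the form (4) is reliable precisely for locating such roots, but not for expanding around them.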
Rather, to study small perturbations of a holographic QFT coupled to gravity one has to go back to the original theory (1) and study its perturbation spectrum. This has been considered already in [52], albeit in a slightly different context (Randall-Sundrum cosmology), and more recently in [68] for the case of a two-derivative gravitational action coupled to a holographic CFT around de Sitter space. In [68], beyond the scalar instabilities that have been known for several decades [47], a spin-2 instability was found for small enough de Sitter curvatures.

Our goal in this paper is to extend previous results on the stability of maximally symmetric spacetimes due to quantum effects in several ways:

* We consider not only de Sitter but also flat space and Anti-de Sitter space.
* We consider a classical gravitational theory with all couplings necessary for renormalization which are relevant at low energies. This implies that we have a cosmological constant and Einstein terms, as well as the two independent \(R^{2}\) terms with (finite) dimensionless renormalized couplings \(\alpha,\beta\).
* In our case, the only non-trivial (i.e. in which the CFT degrees of freedom participate) quadratic action for the fluctuations is that in the spin-2 sector. We analyze not only the possible tachyonic poles that are responsible for the instabilities, but also the presence of negative residues that signal the presence of ghosts.
* Moreover, we investigate when such tachyons or ghosts are below the effective UV cutoff for the classical gravitational theory, which is given by the so-called species cutoff [69].

Concretely, we consider general perturbations around maximally symmetric four-dimensional space-times in a holographic setup, given by (1), where the matter content is a four-dimensional holographic conformal field theory. The coupling to gravity is entirely described by the exact one-point correlation function of the CFT stress-energy tensor, as in equation (3). This stress-tensor is obtained on a boundary of \(AdS_{5}\) from a holographic calculation and has the correct conformal Weyl anomaly (see [49] for a review of the conformal quantum anomaly, and [70] for the holographic CFT stress-tensor). The perturbation analysis is obtained by fluctuating the bulk and boundary metrics and then writing the linearized version of the effective Einstein equation (3) for the four-dimensional metric \(g^{(0)}_{\omega\sigma}\). We perform this analysis around maximally symmetric 4d space-times of positive curvature (dS), negative curvature3 (AdS) and vanishing curvature (Minkowski).

Footnote 3: The case of negative curvature is special. It corresponds to foliating AdS\({}_{5}\) by AdS\({}_{4}\) slices, which leads to a geometry with two connected boundaries, which is dual to two copies of the CFT with an interface between them (see e.g. [71] and the recent discussion in [72]). One has then different options on how to couple dynamical gravity to the system, the most general case being a bi-gravity theory with each metric coupled to one of the CFTs. Here, we discuss the special cases in which only one metric is dynamical.

In this work, we pursue two main objectives:

1. We first obtain analytic spectral equations (in terms of transcendental functions) for the boundary metric fluctuations of the 4d gravity + holographic CFT system, around a general maximally symmetric background;
2. We then perform a full numerical analysis of the spectrum in momentum space (defined by the eigenvalue of the Laplacian on the corresponding maximally symmetric space-time), and determine criteria for the presence of instabilities of both tachyonic and ghost-like type. This way, we obtain a detailed map of the stable and unstable regions of parameter space.

Our approach is similar in spirit to other works in the context of weakly coupled field theory: a similar analysis was performed with a matter content given by the quantum corrections of a free massless scalar CFT [47], for _homogeneous_ time-dependent metric perturbations around de Sitter. A similar perturbation analysis around flat space was carried out for a free scalar coupled to higher-derivative gravity in [14]. Here, we use the holographic setup to perform a full parameter-space analysis of the gravity+CFT system, and we establish stable and unstable regions of parameter space around background solutions with zero, negative and positive constant curvature.

Specifically, the parameters of the model are:

* The boundary gravity renormalized couplings \(\Lambda,G,\alpha,\beta\), corresponding to local covariant functions of \(g^{(0)}_{\omega\sigma}\) of dimension up to four, which we define as follows: \[\frac{\Lambda}{8\pi G}\sqrt{g^{(0)}},\quad\frac{1}{16\pi G}\sqrt{g^{(0)}}R,\quad\frac{\alpha}{384\pi}\sqrt{g^{(0)}}R^{2},\quad\frac{\beta}{64\pi}\sqrt{g^{(0)}}\left[R_{\omega\sigma}R^{\omega\sigma}-\frac{1}{3}R^{2}\right] \tag{5}\] Here \(\Lambda\) is the 4d cosmological constant, \(G\) is the 4d Newton constant, \(R_{\omega\sigma}\) is the Ricci tensor of \(g^{(0)}_{\omega\sigma}\), \(R\) the corresponding Ricci scalar, and \(\alpha,\beta\) are two dimensionless parameters.
* The parameter \(N\), counting the degrees of freedom of the CFT. This can be traded for the central charge of the CFT.
* The value \(R\) of the curvature of the background solution. This is not an independent parameter, as it is determined by the other parameters via the background solution. However, it is convenient to use this as an independent parameter instead of e.g. the 4d cosmological constant.
* An extra parameter is the _renormalization scale_ \(\mu\) of the CFT, which arises from the conformal anomaly. Since this is universal, its only effect is to shift the parameter \(\beta\) so that it enters only in the combination: \[\beta_{\rm eff}=\beta-\frac{N^{2}}{\pi}\log\left(4\mu^{2}GN^{2}\right). \tag{6}\]

Although the QFT we couple to gravity is a holographic CFT\({}_{4}\), it is important to stress that our result holds for _any_ generic CFT coupled to gravity: as we shall observe below, the spectral properties of the system are determined by the stress-tensor 2-point function, which for a CFT in any conformally flat space-time is completely fixed by the central charge. Therefore, although the method we use to compute the fluctuation spectra is specific to a large-\(N\) holographic CFT, to obtain the result for a generic CFT it is enough to trade the parameter \(N\) for the appropriate central charge.

In order to have semiclassical 4d gravity, we require that the 4d curvature \(R\) is small compared to the cutoff of the theory. Typically, this is assumed to be the Planck scale, or the string scale in the case of string theory. However, as argued in [69], if the matter theory has many degrees of freedom, the perturbativity condition imposes a lower cutoff (the so-called "species scale") than the Planck scale.
For example, if, as in our case, the matter theory has \(\mathcal{O}(N^{2})\) degrees of freedom, the species cutoff \(M_{\text{species}}\) is

\[M_{\text{species}}\equiv\frac{M_{\text{Planck}}}{N}\sim\frac{1}{\sqrt{G}N}. \tag{7}\]

In our case, we require \(N\gg 1\) in order to have semiclassical gravity in the bulk (dual to the large-\(N\) CFT). At the same time, we require that boundary curvatures are below the species cut-off,

\[R\lesssim M_{\text{species}}^{2}=\frac{M_{\text{Planck}}^{2}}{N^{2}} \tag{8}\]

The condition (8) is enough to neglect all higher curvature corrections which we have not included in (5): in general, the gravity effective action may contain higher derivative terms that we can write schematically as

\[S=M_{\text{Planck}}^{2}R+\alpha R^{2}+\sum_{n=1}^{\infty}a_{n}R^{n+2} \tag{9}\]

The coefficients will get renormalized by the CFT, and the estimate is that, in the large-\(N\) limit,

\[\alpha\sim\mathcal{O}(N^{2})\;,\qquad a_{n}\sim\frac{N^{2}}{M^{2n}} \tag{10}\]

At the cutoff (8) we can estimate the importance of the various terms:

\[M_{\text{Planck}}^{2}R\sim\frac{M_{\text{Planck}}^{4}}{N^{2}}\;,\qquad\alpha R^{2}\sim\frac{M_{\text{Planck}}^{4}}{N^{2}}\;,\qquad a_{n}R^{n+2}\sim\frac{M_{\text{Planck}}^{4}}{N^{2n+2}} \tag{11}\]

For this reason, we can neglect all powers of the curvature above \(R^{2}\).

The same considerations about the cut-off apply to the analysis of fluctuations, and in particular of instabilities. In effective field theory, any mode with a mass of the order of, or above, the cutoff is outside the reach of the theory. In particular, it is only when the ghost or tachyon mass is well below the EFT cut-off that one can unambiguously conclude that the theory truly has stability issues. Indeed, as is well known (and as we review in a simple example in Appendix A), an EFT originating from an otherwise healthy UV theory may display some unstable modes as an artefact of the low-energy expansion. In this case, the unstable modes will have masses of the order of the EFT cut-off (the scale of the fields that have been integrated out). Turning this around, we can say that one cannot conclude anything about the actual stability or instability of an EFT based on the occurrence of ghosts or tachyons whose mass scale is at or above the EFT cut-off: one would have to know the UV completion to reach a definite conclusion. For the same reason, within EFT one cannot reach any definite conclusion about the stability of spacetimes whose curvature scale (\(H\) or \(\chi\)) is at or larger than the cut-off.

Throughout this work, we shall encounter many instabilities (tensor and scalar ghosts and tachyons) and will be careful to compare their mass scale with the cut-off (7) to determine whether they can be considered physical.

We devote the rest of this introduction to an extended summary of these techniques, as well as a discussion of the results obtained in this work. Although they stem from a holographic calculation, we insist that they are valid for a generic CFT, and we describe the results mostly in the language of the field theory side. We leave the details of the holographic approach to the rest of the paper.

### Summary and results

The setup studied here consists of higher curvature gravity coupled to a Conformal Field Theory (CFT).
For the gravitational part, we consider the Einstein-Hilbert action with a cosmological constant plus quadratic curvature terms:

\[S_{\rm grav}=S_{EH}+S_{2} \tag{12}\]

where

\[S_{EH}=-\frac{1}{16\pi G}\int d^{4}x\sqrt{-g^{(0)}}(R-2\Lambda) \tag{13}\]

and

\[S_{2}=\frac{\alpha}{384\pi}\int d^{4}x\sqrt{-g^{(0)}}R^{2}+\frac{\beta}{64\pi}\int d^{4}x\sqrt{-g^{(0)}}\left(R_{\omega\sigma}R^{\omega\sigma}-\frac{1}{3}R^{2}\right) \tag{14}\]

Here, \(g^{(0)}_{\mu\nu}\) is the 4d space-time metric, \(g^{(0)}\) its determinant and \(R\) its curvature. The parameters \(G,\Lambda,\alpha,\beta\) are the (finite) renormalized parameters which already contain the contributions of the CFT4.

Footnote 4: These parameters are defined so that they are finite in an appropriate scaling limit after removing the UV cut-off. The detailed procedure is described in section 2.

Coupling to a large-\(N\) CFT is implemented via holography: we identify the 4d space-time with the conformal boundary of the AdS\({}_{5}\) bulk manifold, and \(g^{(0)}_{\mu\nu}\) with the leading term in the Fefferman-Graham expansion of the 5d bulk metric:

\[ds^{2}_{bulk}=L^{2}\frac{d\rho^{2}}{4\rho^{2}}+\frac{1}{\rho}\left[g^{(0)}_{\mu\nu}+O(\rho)\right]dx^{\mu}dx^{\nu}\qquad\rho\to 0 \tag{15}\]

where \(\rho\to 0\) corresponds to the AdS boundary and \(L\) is the AdS length. This way, one can obtain 4d maximally symmetric metrics \(g^{(0)}\) whose Ricci curvature \(\bar{R}\) satisfies the relation:

\[\Lambda=\frac{1}{4}\left(\bar{R}-\frac{GN^{2}\bar{R}^{2}}{48\pi}\right). \tag{16}\]

The Ricci curvature \(\bar{R}\) of the maximally symmetric background space-time can be positive, negative or vanishing. It is convenient to parametrize it in the various cases as follows:

\[\bar{R}=\left\{\begin{aligned} & 12H^{2},&&\text{de Sitter},\\ & 0,&&\text{Minkowski}\\ &-12\chi^{2},&&\text{Anti de Sitter}\end{aligned}\right. \tag{17}\]

The parameter \(N\), characterizing the number of degrees of freedom of the CFT, is related to the bulk Planck scale \(M\) and bulk AdS length \(L\) by \(N^{2}\propto(ML)^{3}\). The first term in equation (16) is the contribution from the vacuum Einstein equation, and the second term is the CFT contribution.

To conclude, the _independent_ parameters of the theory are the curvature of the background space5 \(\bar{R}\), the four-dimensional Newton constant \(G\), the two \(R^{2}\) couplings \(\alpha\) and \(\beta_{\text{eff}}\), where \(\beta_{\text{eff}}\) is defined in (6), and the number of colors6 \(N\) of the holographic CFT\({}_{4}\). It will be convenient to express quantities in terms of the following "reduced" parameters:

\[\tilde{\alpha}=\frac{\pi\alpha}{N^{2}},\qquad\tilde{\beta}_{\text{eff}}=\frac{\pi\beta_{\text{eff}}}{N^{2}}. \tag{18}\]

Footnote 5: Even if \(\bar{R}\) is not a parameter in the action, we can trade \(\Lambda\) for \(\bar{R}\) using (16). Although the relation between \(\Lambda\) and \(\bar{R}\) is not one-to-one, by scanning over all values of \(\Lambda\) we can obtain any value of \(\bar{R}\).

Our goal is to determine, as a function of the parameters of the model, the spectrum of gravitational fluctuations of the boundary metric around any maximally symmetric 4d boundary metric \(\bar{\zeta}_{\mu\nu}\). We use this information to determine the perturbative stability of the system.
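As a small illustration of how the background curvature is determined in practice, the following is a minimal numerical sketch (our own, not the authors' code) that solves the quadratic relation (16) for \(\bar{R}\) at fixed \(\Lambda\), \(G\) and \(N\); the function name and sample values are placeholders, in units where \(G=1\).

```python
import numpy as np

def background_curvatures(Lam, G, N):
    """Real roots Rbar of eq. (16), rewritten as
       (G N^2 / 192 pi) Rbar^2 - Rbar/4 + Lambda = 0.
    Generically there are two branches: one smoothly connected to
    Einstein gravity (G N^2 Rbar << 1) and one at large curvature."""
    a = G * N**2 / (192.0 * np.pi)
    b = -0.25
    disc = b**2 - 4.0 * a * Lam
    if disc < 0:
        return []  # no real constant-curvature solution for this Lambda
    sq = np.sqrt(disc)
    return sorted([(-b - sq) / (2 * a), (-b + sq) / (2 * a)])

# Example with illustrative numbers: G = 1, N = 10, small positive Lambda.
G, N, Lam = 1.0, 10, 1e-4
for Rbar in background_curvatures(Lam, G, N):
    print(f"Rbar = {Rbar:.4e},  G N^2 Rbar = {G * N**2 * Rbar:.4e}")
```

On the small-curvature branch \(GN^{2}\bar{R}\ll 1\), i.e. the background lies below the species cutoff of (7)-(8); the second branch sits at curvatures above the cutoff and is therefore outside the EFT.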
The perturbed boundary metric is taken to be:

\[g^{(0)}_{\mu\nu}=\bar{\zeta}_{\mu\nu}+\delta\zeta^{b}_{\mu\nu} \tag{19}\]

In an appropriate gauge, the boundary perturbation can be written as

\[\delta\zeta^{b}_{\omega\sigma}=\psi\bar{\zeta}_{\omega\sigma}+h^{(0)}_{\omega\sigma} \tag{20}\]

where \(\psi\) is a scalar degree of freedom, and \(h^{(0)}_{\omega\sigma}\) is a tensor perturbation which is transverse and traceless with respect to the boundary metric. The scalar is a pure boundary mode7, whereas the four-dimensional gravity tensor modes couple to tensor perturbations in the bulk.

Footnote 7: The scalar mode couples to the trace of the stress-energy tensor of the CFT\({}_{4}\). Since this theory is conformally invariant, the two-point function of the trace vanishes. Therefore the non-trivial action for the scalar mode is generated by the boundary \(R^{2}\) terms as well as the conformal anomaly of the CFT\({}_{4}\), [47]. If the theory is instead a QFT, extra contributions are expected for the dynamics of the scalar mode.

The metric perturbations are coupled to the CFT via the bulk dynamics: the boundary field \(h^{(0)}_{\omega\sigma}(x)\) is the leading term in a near-boundary expansion of the perturbation of the _bulk_ metric.

#### Spectral functions

The spectral analysis around the holographic background is tightly connected to the holographic two-point function of the boundary stress tensor. When working at linear order in fluctuations, both in the bulk and on the boundary, all one needs is the structure of the effective action (2) at quadratic order as a function of the boundary metric perturbation \(\delta\zeta^{b}\):

\[S^{(2)}_{\rm eff}=\int d^{4}x\frac{1}{2}\delta\zeta^{b}_{\mu\nu}O^{\mu\nu,\rho\sigma}_{\rm grav}\delta\zeta^{b}_{\rho\sigma}-\frac{1}{2}\int d^{4}x\int d^{4}y\,\delta\zeta^{b}(x)_{\mu\nu}\left\langle T^{\mu\nu}(x)T^{\rho\sigma}(y)\right\rangle_{CFT}\delta\zeta^{b}_{\rho\sigma}(y) \tag{21}\]

These two terms correspond to the quadratic-order approximation of each of the two terms in (2): \(O_{\rm grav}\) is the local kinetic operator of the quadratic term in the 4d gravity action \(S_{\rm grav}\) in (12); \(\langle T^{\mu\nu}T^{\rho\sigma}\rangle_{CFT}\) is the holographic two-point function of the stress tensor, which is by definition:

\[\langle T^{\mu\nu}(x)T^{\rho\sigma}(y)\rangle_{CFT}=-\frac{\delta}{\delta\zeta^{b}_{\mu\nu}(x)}\frac{\delta}{\delta\zeta^{b}_{\rho\sigma}(y)}S^{on-shell}_{bulk} \tag{22}\]

The stress tensor two-point function contains both local and non-local contributions. The local contributions simply renormalize the coefficients of local terms which are already present in \(O_{\rm grav}\). The non-local contributions are genuine new effects of the CFT which one cannot find in a local gravity theory.

Equation (21) shows that by computing the holographic two-point function we have access to the full propagator, which we denote by \(\mathcal{F}^{-1}\), of the boundary metric fluctuations: the inverse propagator is

\[\mathcal{F}^{\mu\nu\rho\sigma}\equiv O^{\mu\nu,\rho\sigma}_{\rm grav}-\langle T^{\mu\nu}T^{\rho\sigma}\rangle_{CFT} \tag{23}\]

and the spectrum of the system is given by the solutions of the integro-differential equation

\[\mathcal{F}^{\mu\nu\rho\sigma}\delta\zeta^{b}_{\rho\sigma}=0. \tag{24}\]

The linear equation (24) can be recast into two separate _scalar_ spectral equations for the scalar and tensor modes defined in (20), by going to the appropriate "momentum space" of the boundary coordinate.
This is done by decomposing the modes in eigenfunctions of the d'Alembert operator \(\nabla^{2}\) of the background boundary metric \(\bar{\zeta}_{\omega\sigma}\): in the positive, zero and negative curvature cases we take the fluctuation to satisfy

\[\left(\nabla^{2}-r\frac{\bar{R}}{12}\right)\delta\varphi(x)=\left\{\begin{array}{ll}-H^{2}\left(\nu^{2}-\frac{9}{4}\right)\delta\varphi(x)&dS\\ \\ -k^{2}\delta\varphi(x)&\text{Minkowski}\\ \\ \chi^{2}\left(\nu^{2}-\frac{9}{4}\right)\delta\varphi(x)&AdS\end{array}\right. \tag{25}\]

where \(\delta\varphi(x)\) stands for either \(\psi\) or \(h^{(0)}_{\mu\nu}\), \(r\) is the spin of the perturbation (\(r=0\) for \(\psi\) and \(r=2\) for \(h^{(0)}_{\omega\sigma}\)), \(H\) is the Hubble parameter in the case of a positive curvature boundary (de Sitter) and \(\chi\) is the inverse AdS length in the case of a negative curvature boundary, as in (17). For flat space, this is the usual Fourier decomposition, where \(k^{2}=k^{\mu}k_{\mu}\). In both curved cases, \(\nu\) is a dimensionless eigenvalue measuring the invariant "momentum" in units of the background curvature.

The values of \(\nu^{2}\) (or \(k^{2}\) in the flat case) are determined by the spectral equation (24), which in momentum space becomes a transcendental equation for \(\nu^{2}\) (or \(k^{2}\)) of the form:

\[\mathcal{F}(\nu)=0 \tag{26}\]

where the precise form of the function \(\mathcal{F}\) depends on the nature of the mode (scalar or tensor), on the background curvature, and on the parameters in the action. The expressions obtained from a holographic calculation can be found below.

**Scalar mode.** In this case, the inverse propagator is a polynomial in \(\nu^{2}\) (or \(k^{2}\)), because it results from a quadratic action which is local on the boundary. The expression of the inverse propagator is given by

* **Minkowski** \[\mathcal{F}_{scalar}(k)=-\frac{3}{16\pi G}\left(k^{2}-\frac{4}{\alpha G}\right) \tag{27}\]
* **de Sitter and Anti-de Sitter** \[\mathcal{F}_{scalar}(\nu)=-\frac{1}{64\pi G}\left[\alpha G\bar{R}-12+\frac{GN^{2}\bar{R}}{2\pi}\right]\left\{\frac{4}{G\alpha}-\frac{N^{2}\bar{R}}{6\pi\alpha}-\frac{\bar{R}}{12}\left(\nu^{2}-\frac{9}{4}\right)\right\}. \tag{28}\]

This is the "physical" scalar inverse propagator. For the details, see section 4.

**Tensor modes.** For tensor modes, the non-local contribution from the CFT stress-tensor correlator in (23) gives rise to non-polynomial expressions for the inverse propagators:

* **Minkowski** \[{\cal F}_{\rm tensor,Mink}(k)=\frac{N^{2}}{64\pi^{2}}k^{2}\left\{-\frac{2\pi}{GN^{2}}+\frac{k^{2}}{2}\left[\frac{1}{2}-2\gamma_{E}-\log\left(GN^{2}k^{2}\right)-\frac{\pi\beta_{\rm eff}}{N^{2}}\right]\right\} \tag{29}\]
* **de Sitter** \[{\cal F}_{\rm tensor,dS}(\nu)=\frac{N^{2}H^{2}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4}\right)\left\{1-\frac{2\pi}{GN^{2}H^{2}}+\frac{2\pi\alpha}{N^{2}}-\frac{1}{2}\left(\nu^{2}-\frac{1}{4}\right)\left[2\log\left(GN^{2}H^{2}\right)-\frac{1}{2}+2{\cal H}\left(\nu-\frac{1}{2}\right)+\frac{\pi\beta_{\rm eff}}{N^{2}}\right]\right\}. \tag{30}\] where \({\cal H}\) is the harmonic number function defined in (5.34). The expression (30) with \(\alpha=0\) was already obtained in [68]8. In this work, we rederive it in our setup and generalise it to negative and zero curvature and arbitrary values of the \(\alpha\) parameter. Footnote 8: In [68], \(\beta\) was fixed but the renormalization scale \(\mu\) (called \(E\) in that paper) was allowed to vary.
* **Anti-de Sitter** In this case, the bulk geometry is foliated by AdS\({}_{4}\) slices and has two connected boundaries (see footnote 3), corresponding to two - a priori independent - copies of the CFT.
Therefore, one has freedom in how to couple 4d gravity to the system. Here, we discuss two concrete cases:

**a) Dynamical gravity on one side:** In this case only one of the two CFTs is coupled to dynamical gravity, and the metric on the second boundary is frozen.

\[{\cal F}^{-}_{\rm tensor,AdS}(\nu)=\frac{N^{2}\chi^{2}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4}\right)\left\{1+\frac{2\pi}{N^{2}}\left(\frac{1}{G\chi^{2}}+\alpha\right)-\frac{1}{2}\left(\nu^{2}-\frac{1}{4}\right)\left[\frac{\pi\beta_{\rm eff}}{N^{2}}+\log\left(GN^{2}\chi^{2}\right)-\frac{1}{2}+{\cal H}\left(-\frac{1}{2}-\nu\right)+{\cal H}\left(-\frac{1}{2}+\nu\right)\right]\right\}. \tag{31}\]

**b) Symmetric boundary conditions:** In this case, there is effectively a single boundary (see [72] for a recent discussion), and there is again a single dynamical gravity theory coupled to a single 4d CFT on AdS. This leads to the following spectral function:

\[{\cal F}^{sym}_{\rm tensor,AdS}(\nu)=\frac{N^{2}\chi^{2}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4}\right)\left\{1+\frac{2\pi}{N^{2}}\left(\frac{1}{G\chi^{2}}+\alpha\right)-\frac{1}{2}\left(\nu^{2}-\frac{1}{4}\right)\left[\frac{\pi\beta_{\rm eff}}{N^{2}}+\log\left(GN^{2}\chi^{2}\right)-\frac{1}{2}+{\cal H}\left(\nu-\frac{1}{2}\right)+{\cal H}\left(-\nu-\frac{1}{2}\right)-\frac{\pi}{\cos\pi\nu}\right]\right\}. \tag{32}\]

#### Stability

Instabilities of the system are encoded in the properties of the zeros of \({\cal F}\). We perform a full analysis of the whole parameter space, which we summarize below. As a byproduct, by setting \(N=0\) we obtain the pure gravity spectral functions and study the corresponding zeros, which give indications about the stability of quadratic gravity around any constant curvature background.

When discussing the gravity + CFT system, we always compare the results with those of pure gravity theories with the appropriate renormalized parameters. This allows us to identify the new effects (if any) which arise specifically from the coupling to the CFT. For a CFT with parameter \(N\), the comparison should be made by choosing the pure gravity parameters \(\alpha\) and \(\beta\) such that \(\alpha=\tilde{\alpha}/\pi\) and \(\beta=\tilde{\beta}_{\text{eff}}/\pi\), in terms of the quantities defined in (18): these are the quantities which are expected to be of order unity after renormalization of the local terms by the CFT is taken into account.

Depending on the curvature, there are different criteria for instabilities. On any background, instabilities can be of two types:

* **Tachyonic instabilities** correspond to modes which grow exponentially in time and are related to the _position_ of the root \(\nu\) in the complex plane. Specifically, a root \(\nu\) of \({\cal F}(\nu)\) is _tachyon-stable_ in the following cases: \[\left\{\begin{array}{ll}|Re(\nu)|\leq\frac{3}{2}&dS\\ k^{2}\leq 0&\text{Minkowski}\\ Re(\nu)\neq 0&AdS\end{array}\right\}\quad\Rightarrow\quad\text{tachyon-stable} \tag{33}\] In all other cases, the mode is tachyonic.
* **Ghost instabilities** correspond to a mode with eigenvalue \(\nu_{0}^{2}\) (or \(k_{0}^{2}\)) developing a "wrong sign" kinetic term, and are related to the value of the residue of \({\cal F}^{-1}\) at the pole: \[\left\{\begin{array}{ll}Res{\cal F}^{-1}(\nu_{0}^{2})<0&dS\\ Res{\cal F}^{-1}(k_{0}^{2})<0&\text{Minkowski}\\ Res{\cal F}^{-1}(\nu_{0}^{2})>0&AdS\end{array}\right\}\quad\Rightarrow\quad\text{ghost-stable} \tag{34}\] The sign conventions are discussed in section 6.
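To make the root-hunting concrete, here is a minimal numerical sketch (ours, assuming the `mpmath` library; parameter values are illustrative). It evaluates the braces of the Minkowski and de Sitter tensor spectral functions (29)-(30), writing the harmonic number as \({\cal H}(x)=\psi(x+1)+\gamma_{E}\), and applies the tachyon criterion (33). For Minkowski, the rewriting of the root condition in Lambert-W form (of the type \(X\log X=a\), anticipated in the summary below) is our own.

```python
import mpmath as mp

mp.mp.dps = 30  # working precision

def harm(x):
    # Harmonic number H(x) = digamma(x+1) + gamma_E, valid for complex x.
    return mp.digamma(x + 1) + mp.euler

def F_dS_braces(nu, GN2H2, alpha_t, beta_t):
    """Braces of the dS spin-2 inverse propagator, eq. (30), with the
    positive prefactor and the (nu^2 - 9/4) factor stripped off.
    alpha_t, beta_t are the reduced couplings of eq. (18)."""
    return (1 - 2*mp.pi/GN2H2 + 2*alpha_t
            - (nu**2 - mp.mpf(1)/4)/2
            * (2*mp.log(GN2H2) - mp.mpf(1)/2 + 2*harm(nu - mp.mpf(1)/2) + beta_t))

def tachyon_stable_dS(nu):
    # Criterion (33) in de Sitter: |Re(nu)| <= 3/2.
    return abs(mp.re(nu)) <= mp.mpf(3)/2

def minkowski_tensor_roots(GN2, beta_t):
    """Non-trivial roots of eq. (29).  With X = G N^2 k^2 and
    c = 1/2 - 2 gamma_E - beta_t, the braces vanish when
    X (c - log X) = 4 pi, i.e. u e^u = -4 pi e^{-c} for u = log X - c,
    solved by the two Lambert-W branches."""
    c = mp.mpf(1)/2 - 2*mp.euler - beta_t
    ks = []
    for branch in (0, -1):
        u = mp.lambertw(-4*mp.pi*mp.exp(-c), branch)
        ks.append(mp.exp(u + c) / GN2)  # k^2
    return ks

# Illustrative usage: the two Minkowski roots come out complex conjugate
# (hence tachyonic by (33)); a dS root can be hunted with findroot, which
# may need several starting points in practice.
for k2 in minkowski_tensor_roots(GN2=1.0, beta_t=0.0):
    print("Minkowski k^2 root:", mp.nstr(k2, 6))
try:
    nu0 = mp.findroot(lambda nu: F_dS_braces(nu, 0.1, 0, 0), 3.0)
    print("dS root nu =", mp.nstr(nu0, 6), "| tachyon-stable:", tachyon_stable_dS(nu0))
except ValueError:
    print("findroot did not converge from this starting point")
```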
A heavy ghost can be tolerated if its mass is above the cut-off of the theory, because in this case it cannot be described in the context of the effective theory (and it may become healthy in the UV-completion). In this work we compare ghost masses with two cut-offs: the 4d Planck scale \(G^{-1/2}\) (the ultimate cut-off in the semiclassical approach) and the _species cutoff scale_ \(\Lambda_{\rm species}\),

\[\Lambda_{\rm species}\equiv(GN^{2})^{-1/2}\;, \tag{35}\]

which can be argued to be the true cut-off of a gravity theory coupled to \(N^{2}\) degrees of freedom [69]. In fact, it seems that the latter is the natural scale in which to measure the boundary curvature \(R\) in the present set-up: it always appears in the combination

\[GN^{2}R\sim\frac{N^{2}R}{M_{P}^{2}}\sim\frac{R}{\Lambda_{\rm species}^{2}}\;.\]

An unstable mode can be a ghost, a tachyon, or both. In what follows we summarize our results in the scalar and tensor sectors, for zero, positive and negative background curvature. One important point to which we have to pay attention is whether the unstable mode is within the limits of effective field theory, i.e. whether it is light in Planck units (in the case of pure gravity) or light in units of the species scale (7) (in the case of gravity coupled to the CFT).

#### Stability in the scalar sector

For the scalar mode, it is straightforward to read off the conditions (33)-(34) from equations (27)-(28): this leads to the following conclusions:

* In Minkowski space, the scalar spectral function (27) is the same in pure gravity and in the presence of the CFT, and does not depend on \(N\), as the conformal anomaly of the CFT is not relevant. The scalar mode is never a ghost, and it is tachyonic if \(\alpha>0\). This agrees with previous analyses (e.g. [14]). The tachyonic mode is within the bounds of the theory if its mass is below the cut-off, which in terms of the "reduced" \(\tilde{\alpha}\) parameter defined in (18) requires \(\tilde{\alpha}\gg 1\) (the same condition as in pure gravity, since \(\alpha=\tilde{\alpha}\)).
* In de Sitter space, scalar tachyon-stability requires: \[\frac{1}{\alpha}\left(1-\frac{GN^{2}H^{2}}{2\pi}\right)\leq 0. \tag{36}\] For consistency, \(GN^{2}H^{2}\ll 1\), and therefore the second factor is always positive in effective field theory. Therefore, tachyon stability implies \(\alpha<0\). Moreover, the scalar mode is a ghost if: \[\left(\frac{\pi\alpha}{N^{2}}+\frac{1}{2}\right)\frac{GN^{2}H^{2}}{12\pi}>1, \tag{37}\] i.e. when \(\alpha\gg 1\). For pure gravity, (36) with \(N=0\) gives the same condition (\(\alpha<0\)) as for Minkowski space. In pure de Sitter gravity, the scalar can also be a ghost, if \(\alpha\) is very large (at least of order \(1/(GH^{2})\gg 1\)). This mode is light in Planck units if \(|\alpha|\gg 1\). In the presence of the CFT, the tachyon stability condition is modified by the second term proportional to \(N^{2}\) in (36). However, note that this term is small if we insist the curvature is below the species cutoff, which requires \(GN^{2}H^{2}\ll 1\). If this is the case, the tachyon-stability condition is not affected much by the CFT in the context of low-energy EFT9. The scalar mode is below the species cut-off if \(|\tilde{\alpha}|\gg 1\) (the same condition as for pure gravity). Finally, both with and without the CFT, the time-scale \(\tau\) of the tachyonic instability is roughly the inverse tachyon mass, \(\tau\sim\sqrt{G|\alpha|}\). In effective field theory (\(GH^{2}\ll 1\)) this is much faster than the de Sitter Hubble rate \(H^{-1}\) (i.e.
the tachyon instability is very strong), unless \(|\alpha|\gg(GH^{2})^{-1}\). Therefore, for \(\tilde{\alpha}\) in the interval \[1\ll\tilde{\alpha}\ll\frac{1}{GH^{2}N^{2}} \tag{38}\] we have a strong instability (faster than one Hubble time) within effective field theory. This condition also applies to pure gravity, if we set \(N=1\) and \(\tilde{\alpha}=\alpha\). Footnote 9: A notable case in which this condition is violated is the Starobinsky realization of de Sitter (or more generally, inflation), in which the cosmological constant term is absent and the de Sitter curvature is fixed to \(GN^{2}H^{2}=4\pi\) [46, 47]. In this case, the tachyon stability condition is reversed to \(\alpha>0\). We comment on this case in Appendix C.
* The discussion is similar for Anti-de Sitter. Scalar tachyon-stability requires: \[\frac{9}{4}-\frac{4}{\alpha G\chi^{2}}\left(1+\frac{GN^{2}\chi^{2}}{2\pi}\right)\geq 0, \tag{39}\] and the scalar mode is a ghost if \[-\left(\frac{\pi\alpha}{N^{2}}+\frac{1}{2}\right)\frac{GN^{2}\chi^{2}}{12\pi}>1. \tag{40}\] Like before, to be in the effective field theory we must require that \(GN^{2}\chi^{2}\ll 1\). The condition for the scalar modes (whether a tachyon or a ghost) to be within the bounds of effective field theory is \(|\tilde{\alpha}|\gg 1\).

#### Stability in the tensor sector

Unlike the case of the scalar, exploring the roots of the tensor spectral function can only be done numerically, except in some corners where analytic approximations for the transcendental functions can be used (in particular the large-eigenvalue limit \(\nu\rightarrow\infty\)). Below we give the broad features of the stability results in the three cases (zero, positive and negative curvature). More details can be found in the main body of the paper. In each case, we emphasize what happens for two special parameter values: (a) \(N=0\), which corresponds to pure gravity with higher curvature terms; (b) \(\alpha=\beta_{\rm eff}=0\), which corresponds to setting the (renormalized) local quadratic curvature terms to zero. This gives a measure of the truly non-local contributions from the CFT.

* **Minkowski**
* In the special case of pure gravity (\(N=0\)), for \(\beta\neq 0\) (and independently of \(\alpha\)) the quadratic Ricci tensor term always generates a ghost, whose mass is [73] \[m_{ghost}^{2}=\frac{4}{\beta G}. \tag{41}\] For \(\beta<0\) this is also a tachyon. This mode is light compared to the cut-off \(G^{-1/2}\) when \(|\beta|\gg 1\). Therefore, the gravity theory is a good and stable effective theory only if \(\beta\) is positive and \(\beta\lesssim 1\).
* In the presence of the CFT, the spectral function is (29) and its non-trivial roots are the solutions of a transcendental equation of the type \(X\log X=a\), where \(X\) is proportional to \(k^{2}\) and \(a\) is a real constant (a Lambert-W rewriting of this condition appears in the numerical sketch below equation (34)). The analysis can be done semi-analytically, and it leads to the conclusion that for any value of \(\tilde{\alpha}\) and \(\tilde{\beta}_{\rm eff}\), _Minkowski space always contains two tachyonic tensor modes_. The theory eventually becomes tachyon-stable only in the extreme limit \(\tilde{\beta}_{\rm eff}\rightarrow+\infty\). In this limit, one also always finds a light ghost (light compared to the "species" scale \((GN^{2})^{-1/2}\)), as in the pure gravity case.
All in all, the masses of the unstable tensor modes are above the species cut-off for \(O(1)\) values of \(\beta_{\rm eff}\) (this includes the special case \(\alpha=\beta_{\rm eff}=0\)), while Minkowski space is unstable within EFT iff \(|\tilde{\beta}_{\rm eff}|\gg 1\), independently of \(\tilde{\alpha}\).

* **de Sitter**
* In the special case of pure higher curvature gravity (\(N=0\)), there are always two tensor modes, one of which is the massless graviton, while the other is massive. The massive mode is tachyonic if the following condition is violated: \[\frac{2}{\beta}\left(\alpha-\frac{1}{GH^{2}}\right)<1. \tag{42}\] Because \(GH^{2}\ll 1\), the condition (42) is violated if \(\beta<0\), for \(\alpha\) and \(\beta\) of order unity (this matches the Minkowski result). Whether or not (42) holds, one of the two modes is necessarily a ghost. If \(\beta-2\alpha<2(GH^{2})^{-1}\), the ghost is the massless spin-2 mode; otherwise it is the massive mode. For \(O(1)\) values of \(\alpha\) and \(\beta\), the ghost is the massive mode, and its mass is of order \(\mathcal{O}(M_{p})\). One can have a light ghost only if \(\alpha\gg(GH^{2})^{-1}\gg 1\) (in which case the ghost is the massless graviton) or if \(|\beta|\gg 1\) (in this case, which mode is the ghost depends on the sign of \(\beta\)). All in all, in pure gravity the theory is stable and ghost-free within EFT (i.e. below the cut-off \(M_{p}\)) if \(\alpha\) and \(\beta\) are both \(O(1)\).
* We now turn to the case of gravity coupled to the CFT. In de Sitter, the presence of tachyonic tensor modes depends on the curvature, on \(N\), and on the parameters \(\tilde{\alpha}\) and \(\tilde{\beta}_{\text{eff}}\). The dS curvature \(H\) always enters in the combination \(GN^{2}H^{2}\), i.e. the natural scale to which the curvature is compared is the "species" scale (7). For a de Sitter background, the presence or absence of tachyonic instabilities is illustrated in figure 10. For a fixed value of \(GN^{2}H^{2}\), tachyon-stability corresponds to values of \(\tilde{\beta}_{\text{eff}}\) larger than a certain critical value, which is typically of order unity. For fixed \(\tilde{\beta}_{\text{eff}}\), there are two regimes, depending on the value of \(\tilde{\alpha}\): for small \(\tilde{\alpha}\), and into negative values, the theory is tachyon-stable for \(GN^{2}H^{2}\) larger than a certain critical value (generically of order unity); for large and positive \(\tilde{\alpha}\) there are also intermediate regions of stability: the theory goes from unstable at small \(GN^{2}H^{2}\), to stable as \(GN^{2}H^{2}\) increases, to again unstable, and finally to stable at large \(GN^{2}H^{2}\). In the specific case \(\tilde{\alpha}=\tilde{\beta}_{\text{eff}}=0\) there is a critical value of \(GN^{2}H^{2}\) below which de Sitter space is tachyon-unstable, as was also shown in [68]. The critical value corresponds to \(GN^{2}H^{2}\approx 0.32\). For small curvatures, and for \(\tilde{\alpha}\) of order unity, the tachyon pole is generically located around the cut-off scale, unless one takes \(|\tilde{\beta}_{\text{eff}}|\gg 1\). For any values of the parameters, there are tensor ghosts (tachyonic or not). However, generically, these ghosts are heavy (in units of the "species" cut-off \((GN^{2})^{-1/2}\)) or they occur for curvatures of the order of the cut-off.
* In the special case \(\tilde{\alpha}=\tilde{\beta}_{\text{eff}}=0\), as in the generic case above, for any curvature (including zero-curvature flat spacetime), the mass of the ghost is always larger than, but comparable to, the species scale.

* **Anti-de Sitter**
* In the special case of pure gravity (\(N=0\)), the situation is similar to the one in de Sitter. There are two tensor modes, one of them massless and the other massive. For generic \(O(1)\) parameters \(\alpha\) and \(\beta\), the massive mode is a tachyon for \(\beta<0\) (up to small corrections). One of the two tensor modes is always a ghost, and it is light only when \(\alpha\) and/or \(\beta\) are very large. Therefore, as in de Sitter, for \(O(1)\) values of the parameters, the theory does not have instabilities within EFT. This is what happens in top-down string theory [74]. On the other hand, this analysis means that one has to be careful when taking \(\alpha\) and \(\beta\) too large. This is standard practice in order to obtain a qualitatively different behaviour from Einstein AdS gravity, and is common in phenomenological holographic models; some examples with commentary are [75, 76].
* In the presence of the CFT, as in de Sitter, tensor modes can be tachyonic or not depending on the parameters \(\tilde{\alpha}\), \(\tilde{\beta}_{\text{eff}}\). The situation is represented in figure 24. For fixed AdS curvature, there are tachyonic modes for large and negative values of \(\tilde{\beta}_{\text{eff}}\), up to a certain critical value (which depends on \(\tilde{\alpha}\) for large curvatures but is independent of \(\tilde{\alpha}\) for small curvatures), above which the theory is tachyon-stable. The critical value is generically \(\mathcal{O}(1)\). For a fixed \(\tilde{\beta}_{\text{eff}}\) there are different possibilities: the theory may be tachyon-stable (\(\tilde{\beta}_{\text{eff}}\) large and positive, \(\alpha\gtrsim 0\)), or be tachyon-stable only above a certain curvature (\(\tilde{\beta}_{\text{eff}}\) large and negative), or cross from tachyon-stability to instability to stability again for \(\tilde{\beta}_{\text{eff}}\sim O(1)\) and \(\alpha<0\). Unless \(|\tilde{\beta}_{\text{eff}}|\gg 1\), the tachyonic modes are above the species cut-off.
* In the special case \(\tilde{\alpha}=\tilde{\beta}_{\text{eff}}=0\), AdS space-time is tachyon-stable for any curvature below the species cut-off.

Finally, we note that until now it was the scalar instability in de Sitter (or near de Sitter) that was employed as a mechanism for exiting inflation. However, our results show that, depending on the parameters, the "fastest" instability may be in the scalar or the tensor sector. It should be stressed, though, that if the fastest instability is the spin-2 one, this is a disaster for cosmology. The reason is that this instability generates large transverse variations of the background metric, quickly destroying its homogeneity and therefore the main principle of cosmology. Consequently, for cosmology, spin-2 instabilities must be avoided.

Up to specific details which may vary depending on the parameters, the general features of the spectra discussed above can be summarised as follows:

**Pure gravity:**

* **Minkowski**
* scalar tachyon if \(\alpha>0\)
* if \(\beta\neq 0\), two tensor modes: one massless graviton and one massive ghost (tachyonic or not).
* **dS and AdS**
* scalar tachyon if \(\alpha>0\) (for \(GH^{2}\ll 1\)).
* scalar (light) ghost if \(\alpha\gg 1/(GH^{2})\).
* If \(\beta\neq 0\), two tensor modes, one massless and one massive.
One of them is necessarily a ghost. In all these cases, these ghosts/tachyons are below the cutoff \(M_{p}\) _only_ if \(|\alpha|\gg 1\) and/or \(|\beta|\gg 1\).

**Gravity coupled to the CFT:**

* The bounds on the ghost/tachyon regions vary, and there may be more massive tensor modes in the spectrum (in particular in AdS).
* The cut-off is now lowered to the species scale, \(M_{p}/N\).
* The presence of _light_ ghosts/tachyons still requires the effective coefficients of the \(R^{2}\) and \(R_{\mu\nu}R^{\mu\nu}\) terms to be large: \(|\tilde{\alpha}|\gg 1\) and/or \(|\tilde{\beta}_{\text{eff}}|\gg 1\).

### Discussion

Our findings show that there are whole regions of parameter space where the holographic matter + gravity theory suffers from both scalar and tensor instabilities, for all signs of the curvature. In particular, the unstable region contains the whole of flat space, except in the limit where we decouple the CFT. Even though ghosts and tachyons seem ubiquitous, as we argued earlier, only when the unstable modes are lighter than our EFT cut-off (7) do they signal an unequivocal instability.

From our analysis, it emerges that, at small curvatures (compared to the cut-off), if the renormalized coefficients of the local quadratic curvature terms, (18), are \(O(1)\), unstable modes generically have masses above the UV cut-off \(M_{p}/N\). On the other hand, the presence of light ghosts or tachyons requires very large values of the parameters (18). It turns out that, for large values of the higher curvature parameters (18), one _also_ finds light ghosts in pure quadratic-curvature gravity (in dS, AdS or Minkowski) without the CFT. Based on our analysis of parameter space, we can make the following statement:

_Within the validity of EFT, for parameter values for which pure gravity shows no pathologies, neither does the gravity+CFT system._

In other words, for background curvatures below the cut-off, light unstable modes in the gravity + CFT system are due essentially to (effective) large _local_ higher curvature terms, which would result in the same instabilities in pure gravity with the same parameters. It should be remembered, though, that in pure gravity the cutoff is taken to be the Planck scale, while in the gravity+CFT system the cutoff is taken to be the (renormalized) species cutoff in (35).

Note that if we insist instead on taking the EFT cut-off to be the Planck scale (rather than the species scale) even in the presence of the CFT, this conclusion changes, and we are led to the fact that coupling a healthy CFT to gravity _does_ introduce instabilities within EFT. This is another indication that the correct cut-off is indeed \(M_{p}/N\).

From the holographic standpoint, in the case of a CFT coupled to gravity, the scalar modes are the simplest, since they do not propagate in the bulk: the only dynamical scalars are boundary degrees of freedom whose dynamics is determined by the \(R^{2}\) terms and the conformal anomaly [52]. Therefore, any scalar instabilities can be traced purely to a local boundary gravity action. The unstable scalar is a pure-gauge mode in Einstein gravity, but it becomes dynamical thanks to the higher curvature terms and the conformal anomaly, and, depending on the coefficients, it may become tachyonic and/or ghost-like. In this context, scalar instabilities were studied in 4-dimensional higher-curvature gravity around flat space in [7].
Around de Sitter, scalar instabilities were investigated in [47] in the (original) Starobinsky model [46], and here we recover the results obtained in the linearized version of Vilenkin's analysis. It is worth mentioning that this model falls outside of the EFT description: indeed, in [46, 47] the 4d cosmological constant is set to zero, which fixes the dS curvature to satisfy \(GN^{2}H^{2}=4\pi\). The value of \(H\) is above the species cut-off \(1/(\sqrt{G}N)\) (although for large \(N\) it may still be sub-Planckian). In this case, the no-tachyon condition in the scalar sector is \(\alpha>0\) (see equation (36)). However, for phenomenological reasons, it is rather desirable to have a scalar tachyon in order to leave the de Sitter solution in the early universe, and one should choose \(\alpha<0\).

Similar considerations can be made if we want to make a comparison with what goes nowadays under the name of the Starobinsky model for inflation10,

\[S=-\int\sqrt{-g}\left(\frac{1}{16\pi G}R-\hat{\alpha}R^{2}\right). \tag{43}\]

Footnote 10: \(\hat{\alpha}\) in equation (43) is related to our \(\alpha\) as \(\hat{\alpha}=\frac{\alpha}{384\pi}\).

This may be thought of as a simplified version of the anomaly-driven realization of de Sitter in [46, 47], in which one neglects the non-local contribution from the conformal anomaly \(\sim R^{2}\log R\). For this model, one does not need a CFT: pure higher curvature gravity is enough. The model (43) does not admit de Sitter solutions, since the absence of the logarithmic term pushes this solution to infinite curvature. However, it admits quasi-de Sitter, slowly-rolling FRW solutions (in an appropriately defined Einstein frame) for \(\hat{\alpha}<0\), i.e. precisely where one expects a scalar tachyonic instability (see equation (36) for large curvature): it is this tachyon that eventually pushes the solution away from the near-de Sitter geometry, thus ending inflation. This is the choice made in phenomenological models of inflation, where the parameter

\[\alpha=384\pi\hat{\alpha}\sim-5.95\times 10^{11}\]

is chosen to reproduce the amplitude of the primordial perturbation spectrum. For \(\hat{\alpha}<0\), the scalar mode is not a ghost in pure \(R+R^{2}\) gravity, as can be seen by setting \(N=0\) in (37). More details about how our results compare to these models can be found in Appendix C.

A general discussion of higher-derivative gravity including the tensor modes can be found in [73], where it was pointed out that the \(R_{\mu\nu}R^{\mu\nu}\) term gives rise to a tensor ghost around flat space. Here, we extend this discussion to de Sitter and AdS. In the holographic context of gravity coupled to a large-\(N\) CFT, instabilities were already found in certain corners of parameter space by [68], and we agree with their results. A general analysis of the gravitational spectrum around de Sitter space was also performed in [52] for a specific value of the dS Hubble parameter (namely \(GN^{2}H^{2}=4\pi\)) for which the (renormalized) cosmological term is zero (this case, however, is outside the range of EFT, since \(R\sim H^{2}>M_{\rm species}^{2}\)). As we mentioned above, here we find that these tensor instabilities are either outside of the EFT validity, or they require large effective values of the higher curvature coefficients, which would make the pure gravity theory pathological as well. A more detailed comparison with [68] and [52] appears in appendix L.
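For orientation (our own arithmetic on the quoted number), the corresponding \(R^{2}\) coefficient in (43) is

\[\hat{\alpha}=\frac{\alpha}{384\pi}\simeq\frac{-5.95\times 10^{11}}{384\pi}\simeq-4.9\times 10^{8},\]

i.e. an \(R^{2}\) term with a coefficient of magnitude of order \(10^{8}\!-\!10^{9}\) in Planck units, as is standard in \(R^{2}\) inflation.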
When there are light tensor tachyons or ghosts, it is interesting to ask which direction in solution space the instability leads to. The scalar instability contains a homogeneous mode which can be understood as an instability of the de Sitter solution towards a more general FRW solution. These are instabilities of the type considered in [47]. However, non-homogeneous scalar instabilities and tensor instabilities break FRW. A related question is whether this analysis around maximally symmetric spacetimes persists in more realistic cosmological solutions such as FRW. The same holographic setup used here can in principle be applied to FRW boundary metrics, by generalizing the bulk solution along the lines of [54, 55, 57, 77].

This paper is organized as follows. The setup of our work is presented in section 2, where we start from a theory of gravity in \(AdS_{5}\) and obtain the boundary action with higher curvature terms induced on a regularized boundary. Metric perturbations are set up in section 3 from the bulk perspective, and in section 4 for the boundary theory. Section 4 also studies the dynamics of the pure boundary scalar perturbation. The five remaining degrees of freedom of the metric perturbations are contained in a transverse-traceless tensor studied in section 5, where its equation of motion is obtained. In section 6 we discuss tensor instabilities in pure gravity with quadratic curvature terms. Tensor instabilities in the general case of the CFT coupled to gravity are studied in section 7 for flat space-time, in section 8 for positive curvatures, and in section 9 for negative curvatures.

The appendices contain some of the technical details of this paper; we briefly review them here. In appendix A, we provide an explicit example of an effective field theory which develops instabilities (ghosts and tachyons) due to the IR expansion. We also find that the mass of these unstable modes is always above the EFT cut-off. Appendix B reviews the computation of the counterterms for the bulk renormalization procedure [70]. In appendix C, we relate our setup to Starobinsky's inflation [46]. The geometry of AdS-slicing coordinates is reviewed in appendix D, where we map them to the more usual _global coordinates_ of AdS. We also remark that AdS-slicing coordinates are _global_, in the sense that they cover the whole AdS manifold. Appendix E relates the bulk radial equation of the spin-2 perturbation to a Schrödinger problem, which allows us to study the normalizability of its solutions. In appendix F, we compute the decay rate of Minkowski spacetime in terms of the mass of a tachyonic pole. Appendices G and H derive the criteria (33) for the spin-2 modes in dS and AdS, respectively. Appendix I studies the asymptotic behaviour of the associated Legendre functions which enter into the solution for spin-2 perturbations in AdS-slicing coordinates of AdS. Appendix J computes the quadratic terms of the boundary action, which enter into the definition of the two-point functions for metric perturbations. Appendix K proves that one can discard an unphysical scalar mode appearing in the quadratic action, by showing that this mode is pure gauge. Appendix L compares our results to previous papers which have used a similar setup; we find the values of our parameters which reduce our setup to theirs. Finally, appendix M provides supplementary material concerning the poles of the spin-2 propagators in curved space.
The arXiv webpage of this paper contains supplementary material, including 5 animated gifs showing the poles of the spin-two propagator for different choices of parameters (\(GN^{2}\bar{R}\), \(\tilde{\alpha}\) and \(\tilde{\beta}_{\rm eff}\)). These gifs and their associated parameters are presented in the "animated_gifs.pdf" ancillary file.

## 2 The theory

We use the following notation for the various metrics: \(\mathcal{G}_{ab}\) denotes the five-dimensional bulk metric, \(\gamma_{\omega\sigma}\) the metric induced on the (regulated) boundary, and \(g^{(0)}_{\omega\sigma}\) the four-dimensional boundary metric.

### Setup

We consider a semi-classical theory of gravity in four dimensions, described by a 4d metric \(g^{(0)}_{\omega\sigma}\), including quadratic curvature terms and coupled to a 4d Conformal Field Theory (CFT). The total action is

\[S=S_{\rm grav}+S_{\rm CFT}. \tag{2.1}\]

The first term, \(S_{\rm grav}\), is the gravity action:

\[S_{\rm grav}=S_{\rm EH}+S_{\alpha}+S_{\beta}, \tag{2.2}\]

which includes the Einstein-Hilbert plus the cosmological constant term11,

Footnote 11: In our notation, the curvature tensors are understood to be those built from the metric \(g^{(0)}_{\omega\sigma}\), unless otherwise specified explicitly.

\[S_{\rm EH}=-\frac{1}{16\pi G}\int d^{4}x\sqrt{g^{(0)}}(R-2\Lambda), \tag{2.3}\]

as well as two quadratic curvature terms:

\[S_{\alpha}=\frac{\alpha}{384\pi}\int d^{4}x\sqrt{g^{(0)}}R^{2}, \tag{2.4}\]

\[S_{\beta}=\frac{\beta}{64\pi}\int d^{4}x\sqrt{g^{(0)}}\left(R^{\omega\sigma}R_{\omega\sigma}-\frac{1}{3}R^{2}\right). \tag{2.5}\]

Here, \(G\) is the Newton constant, \(\Lambda\) the cosmological constant, and \(\alpha\) and \(\beta\) are the dimensionless \(R\)-squared couplings. The second term \(S_{\rm CFT}\) in (2.1) is the quantum effective action of a CFT in the background metric \(g^{(0)}_{\omega\sigma}\), of which it is a functional. The action (2.1) is meant to be the renormalized action, in which all divergences have been removed. The parameters \(G,\Lambda,\alpha,\beta\) are therefore to be interpreted as finite, physical parameters left after the renormalization procedure (which will be described in detail in subsection 2.2 and reference [66]).

Variation of the action with respect to the boundary metric yields the following Einstein equation:

\[R_{\omega\sigma}-\frac{1}{2}Rg^{(0)}_{\omega\sigma}+\Lambda g^{(0)}_{\omega\sigma}+8\pi G\left({}^{(\alpha)}H_{\omega\sigma}+{}^{(\beta)}H_{\omega\sigma}\right)=8\pi G\langle T_{\omega\sigma}\rangle, \tag{2.6}\]

where

\[{}^{(\alpha)}H_{\omega\sigma}=\frac{\alpha}{96\pi}\left\{\nabla_{\omega}\nabla_{\sigma}R-RR_{\omega\sigma}-\left(\Box R-\frac{1}{4}R^{2}\right)g^{(0)}_{\omega\sigma}\right\}, \tag{2.7}\]

\[{}^{(\beta)}H_{\omega\sigma}=\frac{\beta}{32\pi}\left\{\frac{1}{2}\left(R_{\kappa\lambda}R^{\kappa\lambda}-\frac{1}{3}R^{2}+\frac{1}{3}\Box R\right)g^{(0)}_{\omega\sigma}-2R_{\omega\kappa\sigma\lambda}R^{\kappa\lambda}-\Box R_{\omega\sigma}+\frac{1}{3}\left(2RR_{\omega\sigma}+\nabla_{\omega}\nabla_{\sigma}R\right)\right\}. \tag{2.8}\]

In Eq. (2.6), the right-hand side is the renormalized CFT stress-energy tensor expectation value,

\[\langle T_{\omega\sigma}\rangle=\frac{2}{\sqrt{g_{(0)}}}\frac{\delta S_{CFT}}{\delta g^{(0)\omega\sigma}}. \tag{2.9}\]

Before we present the computation of the CFT stress tensor, we comment on the terms which are explicit on the left-hand side of (2.6). It is important to remark that \({}^{(\beta)}H_{\omega\sigma}\) is traceless. Furthermore, \({}^{(\alpha)}H_{\omega\sigma}\) is also traceless if the boundary has a constant curvature.
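The tracelessness of \({}^{(\beta)}H_{\omega\sigma}\) can be verified directly from (2.8) (a one-line check we include for convenience): using \(g^{(0)\omega\sigma}R_{\omega\kappa\sigma\lambda}=R_{\kappa\lambda}\) and \(g^{(0)\omega\sigma}\nabla_{\omega}\nabla_{\sigma}=\Box\),

\[g^{(0)\omega\sigma}\,{}^{(\beta)}H_{\omega\sigma}=\frac{\beta}{32\pi}\left\{2\left(R_{\kappa\lambda}R^{\kappa\lambda}-\frac{1}{3}R^{2}+\frac{1}{3}\Box R\right)-2R_{\kappa\lambda}R^{\kappa\lambda}-\Box R+\frac{1}{3}\left(2R^{2}+\Box R\right)\right\}=0.\]

Similarly, the trace of (2.7) equals \(-\frac{\alpha}{32\pi}\Box R\), which indeed vanishes on constant-curvature backgrounds.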
For convenience, in the following, we define the tensor \[E_{\omega\sigma}\equiv-\frac{16\pi G}{\sqrt{g^{(0)}}}\frac{\delta S}{\delta g^{(0)\omega\sigma}}. \tag{2.10}\] We then write Einstein's equations as: \[E_{\omega\sigma}=0. \tag{2.11}\] From now on, we shall assume the CFT is a large-\(N\) theory which has a holographic description in terms of a (semiclassical) five-dimensional gravity dual. We shall review how the renormalized stress tensor (2.9) is computed in this context [70], and how the renormalized parameters of the effective gravity theory arise. However, as we shall argue, our results do not depend on this assumption. Below we present the main results and give more details in appendix B.

### Constructing the renormalized action

To arrive at (2.1) we replace the CFT contribution with its dual description, namely an Einstein-Hilbert theory on a 5-dimensional manifold \(\mathcal{M}\) (the bulk, on which the metric will be denoted by \(\mathcal{G}\)) together with covariant boundary terms on the boundary, \(\partial\mathcal{M}\). However, this action is divergent; to regulate the divergences we move the boundary \(\partial\mathcal{M}\) to a regulated boundary \(\partial\mathcal{M}_{\epsilon}\) inside \(\mathcal{M}\), which also defines the regulated bulk space \(\mathcal{M}_{\epsilon}\). We denote by \(\gamma_{\omega\sigma}\) the induced metric on the regulated boundary. The bare regularized gravity dual action is: \[S_{\text{reg}}=S_{\text{bulk}}+S_{\text{grav}}^{0}. \tag{2.12}\] The first term is the usual Einstein-Hilbert action with a boundary \(\partial\mathcal{M}_{\epsilon}\), \[S_{\text{bulk}}=-M^{3}\left[\int_{\mathcal{M}_{\epsilon}}d^{5}x\sqrt{\mathcal{G}}(R[\mathcal{G}]-2\Lambda_{5})-2\int_{\partial\mathcal{M}_{\epsilon}}d^{4}x\sqrt{\gamma}K\right], \tag{2.13}\] where \(M\) is the 5-dimensional Planck mass, \(\gamma\) is the determinant of the induced metric on \(\partial\mathcal{M}_{\epsilon}\) and \(K\) is the corresponding extrinsic curvature 12. The second term in (2.12) is a boundary term which depends only on intrinsic tensors on \(\partial\mathcal{M}_{\epsilon}\), Footnote 12: Geometrical tensors follow the same conventions as Wald's book _General Relativity_. \[S_{\text{grav}}^{0}=S_{\text{EH}}^{0}+S_{\alpha}^{0}+S_{\beta}^{0}, \tag{2.14}\] where \[S_{\text{EH}}^{0}=-\frac{1}{16\pi G^{0}}\int d^{4}x\sqrt{\gamma}(R[\gamma]-2\Lambda^{0}), \tag{2.15}\] an \(R^{2}\) term \[S_{\alpha}^{0}=\frac{\alpha^{0}}{384\pi}\int d^{4}x\sqrt{\gamma}\left(R[\gamma]\right)^{2}, \tag{2.16}\] and an additional term proportional to the 4-dimensional Weyl anomaly13[49] Footnote 13: In our notation, the Latin letters will denote the bulk coordinates, and the Greek indices such as \(\omega,\sigma\) denote 4-dimensional slice coordinates. \[S_{\beta}^{0}=\frac{\beta^{0}}{64\pi}\int d^{4}x\sqrt{\gamma}\left(R[\gamma]^{\kappa\lambda}R[\gamma]_{\kappa\lambda}-\frac{1}{3}\left(R[\gamma]\right)^{2}\right). \tag{2.17}\] This action is similar in form to (2.2), and depends on a set of bare parameters \(\alpha^{0}\), \(\beta^{0}\), \(\Lambda^{0}\) and \(G^{0}\). Below we shall relate these bare parameters to the physical ones (\(\alpha\), \(\beta\), \(\Lambda\) and \(G\)) in the renormalized action (2.1). We also define a length \(L\) associated with the bulk cosmological constant, \[\Lambda_{5}=-\frac{6}{L^{2}}.
\tag{2.18}\] We consider asymptotically AdS solutions, for which the ansatz for the full metric is written using Fefferman-Graham coordinates [78], given by \[ds^{2}=\mathcal{G}_{ab}dX^{a}dX^{b}=L^{2}\frac{d\rho^{2}}{4\rho^{2}}+\frac{1}{\rho}g_{\omega\sigma}(x,\rho)dx^{\omega}dx^{\sigma}, \tag{2.19}\] where \(L\) and \(x^{\sigma}\) have the dimension of a length and \(\rho\) is dimensionless. This coordinate system is that of an asymptotically AdS space with a conformal boundary located at \(\rho\to 0\). We define the regulated boundary \(\partial\mathcal{M}_{\epsilon}\) as the hypersurface \(\rho=\epsilon\), on which the induced metric is: \[\gamma_{\omega\sigma}(\epsilon,x)=\frac{1}{\epsilon}g_{\omega\sigma}(\epsilon,x). \tag{2.20}\] The metric \(g_{\omega\sigma}\) is determined by solving the bulk Einstein equation order by order in \(\rho\) as \(\rho\to 0\), starting with an arbitrary metric \(g^{(0)}_{\omega\sigma}\) at lowest order [70]: \[g_{\omega\sigma}(x,\rho)=g^{(0)}_{\omega\sigma}+\rho g^{(2)}_{\omega\sigma}+\rho^{2}g^{(4)}_{\omega\sigma}+\hat{g}_{\omega\sigma}\rho^{2}\log\rho+\mathcal{O}(\rho^{3}). \tag{2.21}\] The leading term in this expansion, \(g^{(0)}_{\omega\sigma}\), is identified with the metric on the dual field theory side. The terms \(g^{(2)}_{\omega\sigma}\) and \(\hat{g}_{\omega\sigma}\) are given by14: Footnote 14: Recall that all the geometrical tensors are built from the metric \(g^{(0)}_{\omega\sigma}\) unless otherwise stated. \[g^{(2)}_{\omega\sigma}=-\frac{L^{2}}{2}\left(R_{\omega\sigma}-\frac{R}{6}g^{(0)}_{\omega\sigma}\right), \tag{2.22}\] \[\hat{g}_{\omega\sigma}=\frac{L^{4}}{16}\left\{2R_{\omega\kappa\sigma\lambda}R^{\kappa\lambda}-\frac{1}{3}\nabla_{\omega}\nabla_{\sigma}R+\nabla^{2}R_{\omega\sigma}-\frac{2}{3}RR_{\omega\sigma}+\left(\frac{1}{6}R^{2}-\frac{1}{6}\nabla^{2}R-\frac{1}{2}R_{\kappa\lambda}R^{\kappa\lambda}\right)g^{(0)}_{\omega\sigma}\right\}. \tag{2.23}\] These expressions are found by solving the bulk Einstein equation in a near-boundary expansion [70]. Note that, comparing Eqs. (2.8) and (2.23), we can write: \[{}^{(\beta)}H_{\omega\sigma}=-\frac{\beta}{2\pi L^{4}}\hat{g}_{\omega\sigma} \tag{2.24}\] (the factor \(L^{-4}\) compensates the overall \(L^{4}\) in (2.23), as required for consistency with equation (3.17) below). Unlike \(g^{(2)}_{\omega\sigma}\) and \(\hat{g}_{\omega\sigma}\), \(g^{(4)}_{\omega\sigma}\) is not fully determined from \(g^{(0)}_{\omega\sigma}\), except for its trace, which is given by15[70]: Footnote 15: As in [70], when matrix components are not written, it means that both matrix multiplication and trace operations are done using the metric \(g^{(0)}\). \[\mathrm{Tr}\left[g^{(4)}\right]=\frac{1}{4}\mathrm{Tr}\left[\left(g^{(2)}\right)^{2}\right]. \tag{2.25}\] Divergences of \(S_{\mathrm{bulk}}\), which arise when we remove the regulator and let \(\epsilon\to 0\), are made explicit when \(S_{\mathrm{bulk}}\) is written in terms of \(g^{(0)}_{\omega\sigma}\). The method to obtain these divergences is briefly reviewed in appendix B, resulting in: \[S_{\mathrm{bulk}}=\frac{M^{3}}{L}\int d^{4}x\sqrt{g^{(0)}}\left\{-\frac{6}{\epsilon^{2}}+\frac{L^{4}}{8}\log\epsilon\left(R^{\kappa\lambda}R_{\kappa\lambda}-\frac{1}{3}R^{2}\right)\right\}+\mathcal{O}(\epsilon^{0}). \tag{2.26}\] The terms in curly brackets contain all the divergent contributions to \(S_{\rm bulk}\). These can also be written covariantly in a series expansion involving curvature tensors of the induced metric on the boundary.
\[S_{\rm bulk}=\frac{M^{3}}{L}\int d^{4}x\sqrt{\gamma}\left\{-6-\frac{L^{2}}{2}R[\gamma]+\frac{L^{4}}{8}\left(\frac{1}{2}+\log\epsilon\right)\left(R^{\kappa\lambda}[\gamma]R_{\kappa\lambda}[\gamma]-\frac{1}{3}\left(R[\gamma]\right)^{2}\right)\right\}+\dots \tag{2.27}\] where \(\dots\) indicates higher curvature invariants. The explicit \(\epsilon\) dependence in (2.27) reflects the conformal anomaly. The quadratic curvature term in (2.26) can be shifted by a finite amount by redefining the cut-off. This scheme dependence is made explicit by introducing, as an extra parameter, a scale \(\mu\), and defining the divergent part of the action \(S_{\rm div}\) as follows: \[S_{\rm div}\equiv\frac{M^{3}}{L}\int d^{4}x\sqrt{g^{(0)}}\left\{-\frac{6}{\epsilon^{2}}+\frac{L^{4}}{4}\log(\sqrt{\epsilon}\mu L)\left(R^{\omega\sigma}R_{\omega\sigma}-\frac{1}{3}R^{2}\right)\right\}. \tag{2.28}\] We now turn to the bare "boundary" gravitational action (2.14). It is also divergent in the limit \(\epsilon\to 0\). This can be made manifest by expressing it in terms of curvature tensors of \(g^{(0)}_{\omega\sigma}\) using the expansion (2.21) and the expressions (2.22-2.23). The result for the Einstein-Hilbert part is \[S^{0}_{\rm EH}=-\frac{1}{16\pi G^{0}}\int d^{4}x\sqrt{\gamma}(R[\gamma]-2\Lambda^{0}) \tag{2.29}\] \[=-\frac{1}{16\pi G^{0}}\int d^{4}x\sqrt{g^{(0)}}\left\{-\frac{2\Lambda^{0}}{\epsilon^{2}}+\frac{1}{\epsilon}\left(1+\frac{\Lambda^{0}L^{2}}{6}\right)R+\frac{L^{2}}{4}\left(2+\frac{\Lambda^{0}L^{2}}{4}\right)\left(R^{\omega\sigma}R_{\omega\sigma}-\frac{1}{3}R^{2}\right)+{\cal O}(\epsilon)\right\}, \tag{2.30}\] while \(S^{0}_{\alpha}\) and \(S^{0}_{\beta}\) are finite. Note that additional finite quadratic terms appear when we expand \(S^{0}_{\rm EH}\) in powers of \(\epsilon\). Therefore, the full boundary action \(S^{0}_{\rm grav}\) written in terms of \(g^{(0)}_{\omega\sigma}\), including all divergent and finite terms, has the form: \[S^{0}_{\rm grav}=-\frac{1}{16\pi G^{0}}\int d^{4}x\sqrt{g^{(0)}}\left\{-\frac{2\Lambda^{0}}{\epsilon^{2}}+\frac{1}{\epsilon}\left(1+\frac{\Lambda^{0}L^{2}}{6}\right)R\right\}+\left[\frac{\beta^{0}}{64\pi}-\frac{1}{16\pi G^{0}}\frac{L^{2}}{4}\left(2+\frac{\Lambda^{0}L^{2}}{4}\right)\right]\int d^{4}x\sqrt{g^{(0)}}\left(R^{\omega\sigma}R_{\omega\sigma}-\frac{1}{3}R^{2}\right)+\frac{\alpha^{0}}{384\pi}\int d^{4}x\sqrt{g^{(0)}}R^{2}+{\cal O}(\epsilon). \tag{2.31}\] The renormalization procedure we adopt consists in taking the limit \(\epsilon\to 0\) while choosing the bare parameters (\(\Lambda^{0}\), \(G^{0}\), \(\alpha^{0}\) and \(\beta^{0}\)) appropriately as functions of the cut-off, such that the quantity \[S_{\rm grav}\equiv S^{0}_{\rm grav}+S_{\rm div} \tag{2.32}\] remains finite16 in the limit \(\epsilon\to 0\). Footnote 16: This is different from the standard holographic renormalization procedure [70], in which the bare parameters (\(\Lambda^{0}\), \(G^{0}\), \(\alpha^{0}\) and \(\beta^{0}\)) are independent of the cut-off and a counterterm action (whose coefficients are completely fixed) is introduced to cancel all divergences coming from \(S_{\rm bulk}\) (2.27). This leaves only finite quadratic curvature terms in the renormalized action and no Einstein-Hilbert term.
We write the resulting finite action in terms of new _physical_ parameters (\(\Lambda\), \(G\), \(\alpha\) and \(\beta\)), each corresponding to one of the boundary terms: \[\Lambda=\frac{1}{\epsilon}\left(1+\frac{\Lambda^{0}L^{2}}{6}\right)^{-1}\left[\Lambda^{0}-\frac{48\pi G^{0}M^{3}}{L}\right], \tag{2.33}\] \[G=\frac{\epsilon G^{0}}{1+\frac{\Lambda^{0}L^{2}}{6}}, \tag{2.34}\] \[\alpha=\alpha^{0}, \tag{2.35}\] \[\beta=\beta^{0}+16\pi M^{3}L^{3}\log(\sqrt{\epsilon}L\mu)-\frac{2L^{2}}{G^{0}}\left(1+\frac{\Lambda^{0}L^{2}}{8}\right), \tag{2.36}\] and we take \(\epsilon\to 0\) together with appropriate limits of (\(\Lambda^{0}\), \(G^{0}\), \(\alpha^{0}\) and \(\beta^{0}\)) so that the left-hand sides are finite. Finally, combining (2.12) and (2.32) we write the renormalized action as \[S=\lim_{\epsilon\to 0}\left[S_{\rm bulk}-S_{\rm div}\right]+S_{\rm grav} \tag{2.37}\] \[\equiv S_{\rm CFT}+S_{\rm grav}, \tag{2.38}\] i.e. equation (2.1). The bulk contribution inside the square brackets is interpreted as the renormalized effective action of the CFT.

### The induced stress tensor

The renormalized stress tensor is defined by \[\langle T_{\omega\sigma}\rangle=\lim_{\epsilon\to 0}\left[\frac{1}{\epsilon}\frac{2}{\sqrt{\gamma}}\frac{\delta S_{CFT}}{\delta\gamma^{\omega\sigma}}\right]=\frac{2}{\sqrt{g^{(0)}}}\frac{\delta S_{\rm CFT}}{\delta g^{(0)\omega\sigma}}. \tag{2.39}\] It can be shown that this definition leads to \[\frac{2}{\sqrt{\gamma}}\frac{\delta S_{\rm bulk}}{\delta\gamma^{\omega\sigma}}=2M^{3}(K_{\omega\sigma}-K\gamma_{\omega\sigma}). \tag{2.40}\] As shown in [70], the divergent pieces of (2.40) cancel those of \(S_{\rm div}\). We are then left with the renormalized stress tensor given by 17 Footnote 17: For notational simplicity, we make no distinction between subscripts and superscripts on the Fefferman-Graham expansion coefficients. \[\left\langle T_{\omega\sigma}\right\rangle=-\frac{2M^{3}}{L}\left\{2\left[2\log(\mu L)-1\right]\hat{g}-2g^{(4)}+\left(g^{(2)}\right)^{2}-\frac{1}{4}g^{(0)}{\rm Tr}\left[\left(g^{(2)}\right)^{2}\right]+\frac{1}{4}g^{(0)}\left({\rm Tr}\left[g^{(2)}\right]\right)^{2}-\frac{1}{2}g^{(2)}{\rm Tr}\left[g^{(2)}\right]\right\}_{\omega\sigma}, \tag{2.41}\] where \(g^{(2)}_{\omega\sigma}\), \(g^{(4)}_{\omega\sigma}\) and \(\hat{g}_{\omega\sigma}\) are the terms of the Fefferman-Graham expansion (2.21); the expressions for \(g^{(2)}_{\omega\sigma}\) and \(\hat{g}_{\omega\sigma}\) in terms of \(g^{(0)}_{\omega\sigma}\) are given in Eqs. (2.22,2.23), while \(g^{(4)}_{\omega\sigma}\) must be obtained by solving the bulk dynamics. The stress-tensor expectation value (2.41) is to be inserted into the right-hand side of the Einstein equation (2.6). Even if the stress tensor is not fully constrained by the boundary data, its trace is known using (2.25). It gives \[g^{(0)\omega\sigma}\left\langle T_{\omega\sigma}\right\rangle=\frac{(ML)^{3}}{4}\left(R^{\omega\sigma}R_{\omega\sigma}-\frac{1}{3}R^{2}\right). \tag{2.42}\] In a generic CFT with a 5d gravity dual, the parameter \(M^{3}L^{3}\) is large, and proportional to the central charge18 \(a\): Footnote 18: Recall that in holographic CFTs, the two central charges \(a\) and \(c\) are equal, up to \(1/N^{2}\) corrections. \[(ML)^{3}=\frac{a}{2\pi^{2}}. \tag{2.43}\] When the CFT is a large-\(N\) gauge theory in 4d, then \(a\propto N^{2}\). For example, in \(\mathcal{N}=4\) SYM we have, in the large-\(N\) limit: \[a=\frac{N^{2}}{4}.
\tag{2.44}\] In what follows we assume, for definiteness, the \(\mathcal{N}=4\) relation (2.44), and set: \[M^{3}L^{3}=\frac{N^{2}}{8\pi^{2}}. \tag{2.45}\] This will allow us to replace \(M^{3}L^{3}\) with \(N^{2}\) and write all equations which pertain to the field theory side purely in terms of 4d parameters. Readers can keep in mind that, for any other CFT (even one that is not large-\(N\)), they can substitute \[N^{2}\to 4a.\]

### Background solutions

In this section, we discuss the background (i.e. homogeneous) solutions of the equations of motion for the 5d theory (2.12). We take these solutions to be the \(AdS_{5}\) metric with three different maximally symmetric slicings, \[ds_{5}^{2}=L^{2}du^{2}+a^{2}(u)\bar{\zeta}_{\omega\sigma}dx^{\omega}dx^{\sigma}, \tag{2.46}\] where \(a(u)\) is a dimensionless scale factor and the slice metric \(\bar{\zeta}_{\omega\sigma}\) is a \(u\)-independent maximally symmetric 4d metric. This results in three possible coordinate systems for AdS\({}_{5}\), which correspond to the dual CFT on three distinct four-dimensional maximally symmetric metrics: AdS\({}_{4}\), dS\({}_{4}\) and M\({}_{4}\). * \(\bar{\zeta}\) being the Minkowski metric. In this case \[a(u)=e^{u},\] (2.47) where \(u>0\) and the AdS boundary is located at \(u\to+\infty\). * \(\bar{\zeta}\) being the de Sitter metric with Hubble curvature \(H\), in which case \[a(u)=LH\sinh u,\] (2.48) where \(u\in\mathbb{R}\). \(u=0\) is a horizon. From now on, we take \(u\) positive. Therefore, \(u\to+\infty\) is the \(AdS_{5}\) boundary. The curvature of dS is given by \[\bar{R}=12H^{2}.\] (2.49) * \(\bar{\zeta}\) being the Anti-de Sitter metric with radius \(\chi^{-1}\), in which case \[a(u)=L\chi\cosh u.\] (2.50) In this case, there are two asymptotic boundaries, located at \(u=\pm\infty\). These two boundaries are connected through the bulk. More details on the geometry of AdS-slicing coordinates are given in appendix D. In the field theory interpretation, these correspond to two independent copies of the CFT on AdS\({}_{4}\) that interact via their common AdS\({}_{4}\) boundary19. It is a matter of choice whether only one or both are coupled to dynamical metric perturbations, as we shall discuss in section 5.3. Since there is no horizon at \(u=0\), we shall see that both sides are reachable by the bulk metric perturbations. The boundary curvature is related to \(\chi\) by: Footnote 19: A conformal rescaling of such a setup corresponds to an interface between two copies of the same CFT in flat space, see the extended discussion in [64]. \[\bar{R}=-12\chi^{2}.\] (2.51) The bulk metric (2.46) can also be written in Fefferman-Graham coordinates as \[ds_{5}^{2}=L^{2}\frac{d\rho^{2}}{4\rho^{2}}+\frac{1}{\rho}f(\rho)ds_{4}^{2}.
\tag{2.52}\] where \(ds_{4}^{2}=\bar{\zeta}_{\omega\sigma}dx^{\omega}dx^{\sigma}\), and the function \(f\) for each slicing is given by the following table (referred to as (2.53) below):

| space-time | \(ds_{4}^{2}\) | \(\rho\) | \(f(\rho)\) |
| --- | --- | --- | --- |
| M\({}_{4}\) | \(ds_{\text{flat}}^{2}\) | \(\rho=e^{-2u}\) | \(1\) |
| dS\({}_{4}\) | \(ds_{dS}^{2}\) | \(\rho=\left(\frac{2}{LH}\right)^{2}e^{-2u}\) | \(f_{dS}(\rho)=1-\frac{(LH)^{2}}{2}\rho+\left(\frac{LH}{2}\right)^{4}\rho^{2}\) |
| AdS\({}_{4}\) | \(ds_{AdS}^{2}\) | \(\rho=\left(\frac{2}{L\chi}\right)^{2}e^{-2\,\text{sign}(u)u}\) | \(f_{AdS}(\rho)=1+\frac{(L\chi)^{2}}{2}\rho+\left(\frac{L\chi}{2}\right)^{4}\rho^{2}\) |

In these coordinates, \(\rho>0\) and the \(AdS_{5}\) boundary is located at \(\rho=0\). In de Sitter slicing, we are free to choose the sign of \(u\) (here we took positive \(u\)) because the horizon at \(u=0\) separates the two sides of the bulk. However, in AdS slicing there is no such horizon, and the bulk \(AdS_{5}\) needs two different Fefferman-Graham patches, one for each boundary \(u\to\pm\infty\), such that \(\rho\to 0\) is the \(AdS_{5}\) boundary in each patch. Hence the \(\text{sign}(u)\) in the expression for \(\rho\) in AdS slicing.20 Footnote 20: The global embedding of dS and AdS slices in AdS\({}_{5}\) is discussed in detail in [79] for dS, and in [72] for AdS. The background solutions (2.52) are then related to the general Fefferman-Graham expansion (2.19) by \[g_{\omega\sigma}(x,\rho)|_{\text{background}}=f(\rho)\bar{\zeta}_{\omega\sigma}, \tag{2.54}\] from which every term of the expansion (2.21) is fixed. In particular, we can read off the corresponding boundary theory metric \(g_{\omega\sigma}^{(0)}\) as the leading term as \(\rho\to 0\): \[g_{\omega\sigma}^{(0)}=\bar{\zeta}_{\omega\sigma}, \tag{2.55}\] and it is either the Minkowski metric, or the de Sitter metric with Hubble scale \(H\), or the anti-de Sitter metric with AdS length \(\chi^{-1}\). We denote the background curvature \(R[\bar{\zeta}]\equiv\bar{R}\), \[\bar{R}=\left\{\begin{aligned} & 12H^{2},&\text{de Sitter},\\ & 0,&\text{Minkowski},\\ & -12\chi^{2},&\text{Anti-de Sitter}.\end{aligned}\right. \tag{2.56}\] For a maximally symmetric background, the trace of the Einstein equation (2.6) gives \[\Lambda=\frac{1}{4}\left(\bar{R}-\frac{GN^{2}\bar{R}^{2}}{48\pi}\right), \tag{2.57}\] where \(N\) was defined in (2.45). Note that, for each value of the boundary parameters \(\Lambda\) and \(G\), there are either two values of the curvature \(\bar{R}\) satisfying equation (2.57), namely \[\bar{R}_{\pm}=\frac{24\pi}{GN^{2}}\left(1\pm\sqrt{1-\frac{GN^{2}\Lambda}{3\pi}}\right),\] or there are none (when \(GN^{2}\Lambda>3\pi\)). On the other hand, by scanning all values of \(\Lambda\), we can obtain any value of \(\bar{R}\). Therefore, it is convenient to trade \(\Lambda\) for \(\bar{R}\): in what follows we shall replace \(\Lambda\) in terms of \(\bar{R}\) using (2.57) in all equations. This leaves \(GN^{2}\bar{R}\) as the only dimensionless background curvature parameter. Equation (2.57) does not depend on \(\alpha\) or \(\beta\), since they multiply tensors that are traceless when evaluated on the constant-curvature background metric \(\bar{\zeta}_{\omega\sigma}\). The maximally symmetric backgrounds were discussed in detail (for holographic CFTs and holographic RG flows on de Sitter) in [66]. We now move on to the perturbations around these background solutions. These are described by turning on perturbations of both the bulk and the boundary metric and solving the corresponding Einstein's equations and boundary conditions.
This will be the subject of the next section.

## 3 Bulk metric perturbations

Equation (2.54) holds for the unperturbed, background metric. In this section, we study perturbations of the bulk metric, adopting the same gauge-invariant decomposition of metric perturbations as in [52]. In a perturbed geometry, the bulk metric reads \[ds_{5}^{2}=(\mathcal{G}_{ab}+\delta\mathcal{G}_{ab})dX^{a}dX^{b}. \tag{3.1}\] Using (2.46), one can relate the slice component perturbations to actual perturbations of the slice metric \(\delta\zeta_{\omega\sigma}\), defined as \[\delta\mathcal{G}_{\omega\sigma}=a^{2}(u)\delta\zeta_{\omega\sigma}, \tag{3.2}\] such that the full metric can now be written as \[ds_{5}^{2}=(\mathcal{G}_{uu}+\delta\mathcal{G}_{uu})du^{2}+2(\mathcal{G}_{u\sigma}+\delta\mathcal{G}_{u\sigma})dudX^{\sigma}+a^{2}(u)(\bar{\zeta}_{\omega\sigma}+\delta\zeta_{\omega\sigma})dx^{\omega}dx^{\sigma}. \tag{3.3}\] Although a \(5\times 5\) symmetric matrix contains 15 independent entries, only 10 degrees of freedom are invariant under the gauge transformation \[\delta\mathcal{G}_{ab}\rightarrow\delta\mathcal{G}_{ab}+2\nabla^{(\mathcal{G})}_{(a}\xi_{b)}. \tag{3.4}\] One can construct these 10 invariant quantities by decomposing the perturbation \(\delta\mathcal{G}_{ab}\) into transverse and traceless elements with respect to the slice covariant derivative \(\hat{\nabla}\) built with \(\bar{\zeta}_{\omega\sigma}\), as follows [52]: \[\delta\mathcal{G}_{uu}=A, \tag{3.5}\] \[\delta\mathcal{G}_{u\sigma}=B_{\sigma}+\hat{\nabla}_{\sigma}B, \tag{3.6}\] \[\delta\zeta_{\omega\sigma}=h_{\omega\sigma}+2\hat{\nabla}_{(\omega}\chi_{\sigma)}+\bar{\zeta}_{\omega\sigma}\psi+\hat{\nabla}_{(\omega}\partial_{\sigma)}\phi, \tag{3.7}\] where \(B_{\sigma}\), \(\chi_{\sigma}\) are transverse and \(h_{\omega\sigma}\) is transverse-traceless: \[\hat{\nabla}^{\sigma}\chi_{\sigma}=0=\bar{\zeta}^{\omega\sigma}h_{\omega\sigma}, \tag{3.8}\] \[\hat{\nabla}^{\omega}h_{\omega\sigma}=0. \tag{3.9}\] (The counting works out as follows: \(A\), \(B\), \(\psi\) and \(\phi\) carry one component each, the transverse vectors \(B_{\sigma}\) and \(\chi_{\sigma}\) three each, and \(h_{\omega\sigma}\) five, adding up to 15; the five gauge parameters \(\xi_{a}\) remove five combinations, leaving 10.) As is well known (and rederived in [52] in the present context), the only propagating degree of freedom in the bulk of \(AdS\) with pure gravity is the _tensor_ (transverse-traceless) perturbation \(h_{\omega\sigma}\), which contains 5 degrees of freedom. On top of this, there exists a scalar mode which has purely boundary dynamics, and which will be discussed in the next section. Therefore, here we set to zero all components of the perturbation except for the tensor mode. The next step is to obtain the equation of motion for this tensor mode. The bulk Einstein equation is: \[R_{ab}[\mathcal{G}]=-\frac{4}{L^{2}}\mathcal{G}_{ab}. \tag{3.10}\] When linearized with respect to \(h_{\omega\sigma}\), the above equation yields: \[(L^{2}\nabla^{(\mathcal{G})2}+2)a^{2}(u)h_{\omega\sigma}=0, \tag{3.11}\] where the differential operator in parentheses is known as the Lichnerowicz operator for AdS. This operator can be decomposed in the \((u,x^{\omega})\) slicing coordinates (2.46). Equation (3.11) then takes the following form: \[\left\{\partial_{u}^{2}+4\frac{a^{\prime}}{a}\partial_{u}+2\left[1-\left(\frac{a^{\prime}}{a}\right)^{2}\right]+L^{2}a^{-2}\hat{\nabla}^{2}\right\}h_{\omega\sigma}=0. \tag{3.12}\] This equation will be specialized to the different slicings and solved in section 5. Tensor perturbations \(h_{\omega\sigma}\) can be expanded in a similar way as in (2.21): \[h_{\omega\sigma}=h_{\omega\sigma}^{(0)}+\rho h_{\omega\sigma}^{(2)}+\rho^{2}h_{\omega\sigma}^{(4)}+\rho^{2}\log\rho\,\hat{h}_{\omega\sigma}+\mathcal{O}(\rho^{3}).
\tag{3.13}\] We shall now linearize the boundary Einstein equation (2.6) and obtain an equation which involves the various terms in the near-boundary expansion (3.13). To this end, we need to relate perturbations of the metric \(g_{\omega\sigma}\) defined in (2.19) to the slice perturbations \(h_{\omega\sigma}\) defined in (3.7). We introduce the following notation, for any tensor \(A\) of the slice metric: \[(\delta_{h}A)[\bar{\zeta}]\equiv\lim_{\varepsilon\to 0}\frac{A[\bar{\zeta}+\varepsilon h^{(0)}]-A[\bar{\zeta}]}{\varepsilon}. \tag{3.14}\] We identify term by term the expansion (2.21) with the expansion of the bulk metric (2.46) close to the boundary \(\rho\to 0\), where \(\rho(u)\) is given in the table (2.53). The result for both AdS and dS is given by \[\delta_{h}g_{\omega\sigma}^{(0)}=h_{\omega\sigma}^{(0)}, \tag{3.15a}\] \[\delta_{h}g^{(2)}_{\omega\sigma}=h^{(2)}_{\omega\sigma}-\frac{L^{2}R}{24}h^{(0)}_{\omega\sigma}, \tag{3.15b}\] \[\delta_{h}g^{(4)}_{\omega\sigma}=h^{(4)}_{\omega\sigma}-\frac{L^{2}R}{24}h^{(2)}_{\omega\sigma}+\left(\frac{L^{2}R}{48}\right)^{2}h^{(0)}_{\omega\sigma}, \tag{3.15c}\] \[\delta_{h}\hat{g}_{\omega\sigma}=\hat{h}_{\omega\sigma}. \tag{3.15d}\] The quantities \(h^{(2)}\) and \(\hat{h}\) can be written in terms of \(h^{(0)}\) and of boundary curvature tensors: as summarized in appendix B, \(g^{(2)}_{\omega\sigma}\) and \(\hat{g}_{\omega\sigma}\) are obtained in terms of \(g^{(0)}\) by solving perturbatively the bulk Einstein equation for small \(\rho\) [70]. By varying the solutions for \(g^{(2)}\) and \(\hat{g}\) given in (B.6, B.7) with respect to \(h^{(0)}_{\omega\sigma}\), we obtain using (3.15): \[h^{(2)}_{\omega\sigma}=\frac{L^{2}}{4}\left(\nabla^{2}-\frac{R}{6}\right)h^{(0)}_{\omega\sigma}, \tag{3.16}\] \[\hat{h}_{\omega\sigma}=-\frac{2\pi L^{4}}{\beta}\delta_{h}{}^{(\beta)}H_{\omega\sigma}=-\frac{L^{4}}{32}\left(\nabla^{2}-\frac{R}{6}\right)\left(\nabla^{2}-\frac{R}{3}\right)h^{(0)}_{\omega\sigma}, \tag{3.17}\] where the Laplacian operator \(\nabla^{2}\) is constructed with the Fefferman-Graham metric \(g^{(0)}_{\omega\sigma}\). By contrast, to find \(h^{(4)}_{\omega\sigma}\) we need to solve (3.11) in the whole bulk (with appropriate conditions in the interior). We postpone this to section 5. Using the equations above, all linearized quantities can be expressed purely in terms of \(h^{(0)}_{\omega\sigma}\) and \(h^{(4)}_{\omega\sigma}\), which for now are independent. The variation of the holographic stress tensor (2.41) in the presence of the tensor perturbation \(h^{(0)}_{\omega\sigma}\) is given by: \[\delta_{h}\left\langle T_{\omega\sigma}\right\rangle=\frac{N^{2}}{2\pi^{2}L^{4}}\left[\delta_{h}g^{(4)}_{\omega\sigma}-\left(\frac{L^{2}R}{24}\right)^{2}\delta_{h}g^{(0)}_{\omega\sigma}+(1-2\log(\mu L))\delta_{h}\hat{g}_{\omega\sigma}\right], \tag{3.18}\] where \(\delta_{h}g^{(4)}_{\omega\sigma}\), \(\delta_{h}g^{(0)}_{\omega\sigma}\) and \(\delta_{h}\hat{g}_{\omega\sigma}\) have to be written using equations (3.15a-3.15d) and (3.16-3.17). We now turn to the linearization of the left-hand side of Einstein's equation (2.6), in which the cosmological constant can be replaced by a function of the background curvature \(\bar{R}\) using (2.57). Note that the CFT stress tensor also contributes to the value of \(\Lambda\) through the trace of the background Einstein equation (2.57). By moving all the CFT contributions (i.e.
those proportional to \(N\)) to the right-hand side of the linearized Einstein equation, we find \[\left(-\nabla^{2}+\frac{R}{6}\right)h^{(0)}_{\omega\sigma}+8\pi G\delta_{h}({}^{(\alpha)}H_{\omega\sigma}+{}^{(\beta)}H_{\omega\sigma})=8\pi G\delta_{h}\left\langle T_{\omega\sigma}\right\rangle^{T}, \tag{3.19}\] where \[\left\langle T_{\omega\sigma}\right\rangle^{T}\equiv\left\langle T_{\omega\sigma}\right\rangle-\frac{1}{4}g^{(0)}_{\omega\sigma}\left\langle T^{\mu}_{\mu}\right\rangle. \tag{3.20}\] The curvature-squared terms \({}^{(\alpha)}H_{\omega\sigma}\) and \({}^{(\beta)}H_{\omega\sigma}\) are then linearized with respect to the tensor perturbation. Then, equation (3.19) is written as a sum of contributions from the Fefferman-Graham terms \(h^{(0)}_{\omega\sigma}\), \(h^{(4)}_{\omega\sigma}\) and \(\hat{h}_{\omega\sigma}\): \[h^{(4)}_{\omega\sigma}+\frac{L^{4}R}{24}\left\{\frac{3\pi}{GN^{2}R}-\frac{1}{4}-\frac{\pi\alpha}{4N^{2}}\right\}\left(\nabla^{2}-\frac{R}{6}\right)h^{(0)}_{\omega\sigma}+\left(1-2\log(\mu L)+\beta\frac{\pi}{N^{2}}\right)\hat{h}_{\omega\sigma}=0, \tag{3.21}\] where \(\hat{h}_{\omega\sigma}\) is to be expressed in terms of \(h^{(0)}_{\omega\sigma}\) using (3.17). Equation (3.21) is a linear equation relating \(h^{(4)}_{\omega\sigma}\) to \(h^{(0)}_{\omega\sigma}\). As usual in holography, however, \(h^{(4)}_{\omega\sigma}\) is determined by \(h^{(0)}_{\omega\sigma}\) by solving the bulk equation and imposing a regularity condition in the interior. This makes \(h^{(4)}_{\omega\sigma}\) a (non-local) linear functional of \(h^{(0)}_{\omega\sigma}\). Therefore, all in all, equation (3.21) takes the form of a dynamical equation for \(h^{(0)}_{\omega\sigma}\), of the form: \[{\cal F}^{\mu\nu\omega\sigma}(\nabla^{2},\bar{R})h^{(0)}_{\omega\sigma}=0. \tag{3.22}\] Determining the explicit form of the functional \({\cal F}\) will be the goal of section 5. We conclude with the remark that equation (3.22) can also be obtained by varying the quadratic part of the action (2.1) evaluated on-shell: indeed, as shown in appendix J, once it is evaluated on the solution of the linear bulk equation, the quadratic part of the action (2.1) is equal to the boundary expression: \[S^{(2)}[h^{(0)}]=\frac{N^{2}}{2\pi^{2}}\int d^{4}x\,\sqrt{\bar{\zeta}}\,h^{(0)\omega\sigma}\left\{h^{(4)}_{\omega\sigma}+\left(\frac{\pi\beta}{N^{2}}+1-2\log(\mu L)\right)\hat{h}_{\omega\sigma}+\frac{RL^{4}}{24}\left(\frac{3\pi}{GN^{2}R}-\frac{1}{4}-\frac{\pi\alpha}{4N^{2}}\right)\left(\nabla^{2}-\frac{R}{6}\right)h^{(0)}_{\omega\sigma}\right\}. \tag{3.23}\] Using (3.17) for \(\hat{h}\) and the determination of \(h^{(4)}\) in terms of \(h^{(0)}\) from the bulk solution, this expression can again be written as a quadratic functional of \(h^{(0)}\): \[S^{(2)}[h^{(0)}]=\int d^{4}x\,\sqrt{g^{(0)}}\int d^{4}y\,\sqrt{g^{(0)}}\,h^{(0)\mu\nu}(x)\,{\cal F}_{\mu\nu\omega\sigma}(\nabla^{2},\bar{R})[x,y]\,h^{(0)\omega\sigma}(y), \tag{3.24}\] where \({\cal F}\) is the same functional which gives the equation of motion (3.22), as is clear by varying (3.24) with respect to \(h^{(0)\mu\nu}\).
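As an illustration of how (3.21) turns into a spectral equation, consider (a short worked step we add here) the flat-space limit \(\bar{R}\to 0\): the middle term survives only through its \(3\pi/(GN^{2}\bar{R})\) pole, and (3.17) reduces to \(\hat{h}_{\omega\sigma}=-\frac{L^{4}}{32}\nabla^{4}h^{(0)}_{\omega\sigma}\), so that (3.21) becomes

\[h^{(4)}_{\omega\sigma}+\frac{\pi L^{4}}{8GN^{2}}\nabla^{2}h^{(0)}_{\omega\sigma}-\frac{L^{4}}{32}\left(1-2\log(\mu L)+\frac{\pi\beta}{N^{2}}\right)\nabla^{4}h^{(0)}_{\omega\sigma}=0.\]

Once the bulk solution supplies \(h^{(4)}\) (equation (5.14b) below), this reproduces the flat-space spectral condition (5.17).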
The quantity \({\cal F}_{\mu\nu\omega\sigma}\) is the inverse propagator of the induced boundary gravity tensor fluctuations \(h^{(0)}_{\omega\sigma}\): \[{\cal F}_{\mu\nu\omega\sigma}\equiv\frac{1}{\sqrt{g^{(0)}(x)}}\frac{1}{\sqrt{g^{(0)}(y)}}\frac{\delta^{2}S^{(2)}}{\delta h^{(0)\mu\nu}(x)\delta h^{(0)\omega\sigma}(y)}. \tag{3.25}\] Using the definition (2.38) in the quadratic action, \[S^{(2)}=S^{(2)}_{\rm grav}+S^{(2)}_{CFT}, \tag{3.26}\] the right-hand side of equation (3.25) can be seen as the sum of two contributions, one from \(S_{\rm grav}\) and one from \(S_{CFT}\). As \(S_{\rm grav}\) is local (it is quadratic in the boundary curvature), the first contribution is a _local_ 4-derivative differential operator, \[\frac{1}{\sqrt{g^{(0)}(x)}}\frac{1}{\sqrt{g^{(0)}(y)}}\frac{\delta^{2}S_{\rm grav}^{(2)}}{\delta h^{(0)\mu\nu}\delta h^{(0)\omega\sigma}}=\delta(x,y)O_{\mu\nu\omega\sigma}(\nabla^{2},\bar{R}). \tag{3.27}\] The part coming from the CFT is by definition the renormalized stress tensor correlator of the CFT: \[\frac{1}{\sqrt{g^{(0)}(x)}}\frac{1}{\sqrt{g^{(0)}(y)}}\frac{\delta^{2}S_{CFT}^{(2)}}{\delta h^{(0)\mu\nu}\delta h^{(0)\omega\sigma}}=-\langle T_{\mu\nu}(x)T_{\omega\sigma}(y)\rangle_{CFT}. \tag{3.28}\] Therefore, the full inverse graviton propagator (3.25) has the form: \[\mathcal{F}_{\mu\nu\omega\sigma}=\delta(x,y)O_{\mu\nu\omega\sigma}(\nabla^{2},\bar{R})-\langle T_{\mu\nu}(x)T_{\omega\sigma}(y)\rangle_{CFT}. \tag{3.29}\] The non-local part is fully contained in the term \(h^{(4)}\) in equation (3.23), and to determine it one has to solve the bulk radial equations. We make a final comment on the appearance of the bulk AdS radius \(L\) in equation (3.23). As \(L\) is _not_ a parameter of the 4d theory (only \(ML\) is, see equation (2.43)), this quantity should not enter the full spectral operator \(\mathcal{F}_{\mu\nu\omega\sigma}\). This is indeed the case: as will become obvious with the explicit computations in section 5, there is a similar logarithmic contribution to (3.23) coming from \(h^{(4)}\), which effectively replaces \(\log(\mu L)\to-2\log(2\mu\sqrt{G}N)\). These terms come from the variation of the Weyl anomaly in the CFT, which has the form of the term with coefficient \(\beta\) in the gravitational action (see equation (2.5)). This implies that \(\beta\) and \(\mu\) will effectively always appear in the combination: \[\beta_{\rm eff}=\beta-\frac{N^{2}}{\pi}\log(4\mu^{2}GN^{2}). \tag{3.30}\]

## 4 The boundary scalar perturbation

In this section, we focus on scalar perturbations21. This scalar mode decouples in pure Einstein gravity but reappears due to the higher curvature terms in the boundary action22. Footnote 21: Here by scalar we mean scalar with respect to the slice isometry group. Footnote 22: If the QFT is not conformal, then the coupling of the QFT to gravity will also contribute to the scalar dynamics via the two-point function of the trace of the energy-momentum tensor. In the present setup, where gravity lives on the boundary of \(AdS_{5}\), the scalar mode exists only on the boundary, because only tensor perturbations are dynamical in the bulk (see e.g. [52]) and the trace of the energy-momentum tensor has trivial dynamics in a CFT.
### Gauge fixing

To study the dynamics of this boundary scalar mode, we define boundary metric perturbations: \[ds_{4}^{2}=g^{(0)}_{\omega\sigma}dx^{\omega}dx^{\sigma}=(\bar{\zeta}_{\omega\sigma}+\delta\zeta^{b}_{\omega\sigma})dx^{\omega}dx^{\sigma}, \tag{4.1}\] where \(\bar{\zeta}_{\omega\sigma}\) is the background metric (flat, dS or AdS) defined in section 2.4 and \(\delta\zeta^{b}_{\omega\sigma}\) is a perturbation which, unlike the general perturbation in equation (3.3), depends only on the slice coordinates \(x^{\mu}\). The decomposition (3.7) still applies, and the boundary gauge transformations are: \[\delta\zeta^{b}_{\omega\sigma}\rightarrow\delta\zeta^{b}_{\omega\sigma}+2\hat{\nabla}_{(\omega}\xi_{\sigma)}. \tag{4.2}\] One can perform a gauge transformation to eliminate the transverse and longitudinal vector components, by choosing \(\xi_{\sigma}\) in (4.2) to be: \[\xi_{\sigma}=-\chi_{\sigma}-\frac{1}{2}\partial_{\sigma}\phi. \tag{4.3}\] Keeping only the scalar mode, one is left with: \[\delta\zeta^{b}_{\omega\sigma}=\psi\,\bar{\zeta}_{\omega\sigma}. \tag{4.4}\] Equation (4.4) is the definition of the scalar perturbation, and we study its dynamics in the following subsections.

### Scalar equation of motion

The classical equations of motion for \(\psi\) are obtained by linearizing the Einstein equation (2.6), which we rewrite here for convenience: \[0=E_{\omega\sigma}[g^{(0)}]\equiv-\frac{16\pi G}{\sqrt{g^{(0)}}}\frac{\delta S[g^{(0)}]}{\delta g^{(0)\omega\sigma}} \tag{4.5}\] \[=R_{\omega\sigma}-\frac{1}{2}Rg^{(0)}_{\omega\sigma}+\Lambda g^{(0)}_{\omega\sigma}+8\pi G({}^{(\alpha)}H_{\omega\sigma}+{}^{(\beta)}H_{\omega\sigma})-8\pi G\left\langle T_{\omega\sigma}\right\rangle, \tag{4.6}\] where the quadratic curvature terms \({}^{(\alpha)}H_{\omega\sigma},\ {}^{(\beta)}H_{\omega\sigma}\) are defined in (2.7), (2.8) and the CFT stress tensor is given in (2.41). The equation of motion for \(\psi\) is obtained by linearizing \(E_{\omega\sigma}[g^{(0)}]\) around \(E_{\omega\sigma}[\bar{\zeta}]\). Linear and quadratic curvature terms are linearized using the definitions (4.1-4.4). However, to obtain the CFT stress tensor, one needs to solve the bulk equations. Nevertheless, its trace (2.42) and its divergence (which is zero) are fully constrained by the boundary geometrical tensors. Hence, one can take a shortcut and perturb the trace of Einstein's equation. It will then be convenient to define the trace of the generalized Einstein tensor (4.5) as \[E[g^{(0)}]\equiv g^{(0)\omega\sigma}E_{\omega\sigma}[g^{(0)}]. \tag{4.7}\] Then, the full, non-linear, traced Einstein equation is given by \[0=E[g^{(0)}]=-R+4\Lambda-\frac{\alpha G}{4}\Box R-\frac{GN^{2}}{4\pi}\left(R^{\omega\sigma}R_{\omega\sigma}-\frac{1}{3}R^{2}\right), \tag{4.8}\] where \(\Box\) is the Laplacian operator \(\nabla^{2}\) applied to a scalar quantity. Equation (4.8) only contains scalar geometric quantities of the boundary. When evaluated on the background metric \(\bar{\zeta}\), equation (4.8) reduces to (2.57). The linearization of the geometrical quantities which appear in (4.8), for an arbitrary perturbation of the form (3.7) around \(\bar{\zeta}\), is given by \[\delta R=-\left(3\Box+\bar{R}\right)\psi,\qquad\delta\left(R^{\omega\sigma}R_{\omega\sigma}\right)=\frac{\bar{R}}{2}\delta R. \tag{4.9}\] We observe that these linearized scalar quantities depend on \(\psi\) only, due to \(h^{(0)}\) being traceless. This leads to the linearized version of equation (4.8): \[\left[1+\frac{\alpha G}{4}\Box-\frac{GN^{2}\bar{R}}{24\pi}\right](3\Box+\bar{R})\psi=0.
\tag{4.10}\] Equation (4.10) is a linear _local_ equation for \(\psi(x)\). We now turn to the computation of the scalar propagator from varying the action, as in (3.25). Because of a residual gauge variance of \(\psi\), the action cannot formally be differentiated with respect to \(\psi\). In appendix J, we nevertheless define the following quantity: \[\widetilde{\mathcal{F}}\equiv\frac{1}{\sqrt{\bar{\zeta}}}\left.\frac{\delta^{2}S[g^{(0)}]}{\delta\psi^{2}}\right|_{\bar{\zeta}}. \tag{4.11}\] If \(\widetilde{\mathcal{F}}\) were the scalar inverse propagator, the result would be \[\widetilde{\mathcal{F}}=-\frac{1}{16\pi G}\left[1+\frac{\alpha G}{4}\Box-\frac{GN^{2}\bar{R}}{24\pi}\right](3\Box+\bar{R}). \tag{4.12}\] By taking the inverse of (4.12), the scalar propagator one would obtain is then: \[\widetilde{\mathcal{F}}^{-1}=-64\pi G\left[\alpha G\bar{R}-12+\frac{GN^{2}\bar{R}}{2\pi}\right]^{-1}\left\{\frac{1}{\Box+\frac{4}{G\alpha}-\frac{N^{2}\bar{R}}{6\pi\alpha}}-\frac{1}{\Box+\frac{\bar{R}}{3}}\right\}, \tag{4.13}\] which describes two propagating modes. However, the second pole in (4.13) is unphysical, as we show in Appendix K. The reason it appears in (4.13) is that \(\psi\) is not a gauge-invariant quantity, and this propagator was derived by considering only the trace of Einstein's equation. To show that the second pole is pure gauge, one has to look at the non-diagonal components of the linearized Einstein equation. This was discussed in [7] for flat space with quadratic curvature terms only, and in appendix K we extend the argument to non-zero curvature and to a holographic stress tensor on the right-hand side of (4.6); see also [80; 52] for similar discussions. Therefore, the only physical pole of the putative propagator (4.13) is the first term, and the true scalar propagator is: \[\mathcal{F}^{-1}=-64\pi G\left[\alpha G\bar{R}-12+\frac{GN^{2}\bar{R}}{2\pi}\right]^{-1}\left\{\frac{1}{\Box+\frac{4}{G\alpha}-\frac{N^{2}\bar{R}}{6\pi\alpha}}\right\}. \tag{4.14}\] Both the position of the pole and the sign of the residue depend on the parameters of the model: there are regions in parameter space where the scalar can be a ghost or a tachyon. The condition for this mode to be tachyonic changes depending on the sign of the background curvature \(\bar{R}\), and will be discussed in the next subsection. Here we discuss under which condition the scalar is ghost-like. In any background, the scalar mode is a ghost if the residue of the pole has the "wrong" sign, the "right" sign being that of the massless spin-2 pole in pure gravity, which in our conventions is: \[\mathcal{F}^{-1}_{\rm massless\ spin-2}=32\pi G\frac{1}{\nabla^{2}-\frac{\bar{R}}{6}}. \tag{4.15}\] Therefore, the scalar mode is a ghost if the residue of (4.14) is positive, i.e.: \[\left(\frac{\pi\alpha}{N^{2}}+\frac{1}{2}\right)\frac{GN^{2}\bar{R}}{12\pi}>1\quad\Rightarrow\quad\text{scalar mode is a ghost}. \tag{4.16}\] This condition is valid both for positive and negative \(\bar{R}\). In sections 8 and 9, we shall see that this inequality also appears in the context of the tensor sector, although not exactly for the same reasons. It is useful to compare the equation of motion (4.10) to other discussions in the literature. If either \(\bar{R}=0\) or \(N=0\), the equation of motion (4.10) agrees with the \(R+R^{2}\) modified gravity analysis of [73]. The first case, \(\bar{R}=0\), corresponds to flat space, in which the boundary scalar mode decouples even in the presence of the CFT; the second case, \(N=0\), corresponds to pure gravity with no CFT.
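The partial-fraction decomposition leading from (4.12) to (4.13) can be verified symbolically. The following sympy sketch (our addition, not part of the derivation) represents \(\Box\) by a commuting symbol `x`, which is legitimate here since every operator involved is a function of \(\Box\) alone:

```python
import sympy as sp

# Box operator represented by the symbol x; model parameters as symbols
x, G, N, alpha, Rb = sp.symbols('x G N alpha Rbar')

# Equation (4.12): the would-be inverse scalar propagator
Ft = -sp.Rational(1, 16)/(sp.pi*G) \
     * (1 + alpha*G/4*x - G*N**2*Rb/(24*sp.pi)) * (3*x + Rb)

# Claimed decomposition (4.13): two simple poles in Box
claimed = -64*sp.pi*G / (alpha*G*Rb - 12 + G*N**2*Rb/(2*sp.pi)) * (
    1/(x + 4/(G*alpha) - N**2*Rb/(6*sp.pi*alpha)) - 1/(x + Rb/3))

# The difference of the two rational functions simplifies to zero
print(sp.simplify(1/Ft - claimed))   # -> 0
```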
### Scalar tachyonic instabilities

Neglecting the unphysical pole in (4.13), we are left with a single scalar mode satisfying a massive Klein-Gordon equation: \[\Box\psi=\frac{4}{G\alpha}\left[\frac{GN^{2}\bar{R}}{24\pi}-1\right]\psi. \tag{4.17}\] It is useful to parametrize the eigenvalues in terms of a complex "total momentum eigenvalue" \(\nu^{2}\) (or \(k^{2}\) in flat space) as follows: \[\Box\psi=\left\{\begin{array}{ll}-\frac{\bar{R}}{12}\left(\nu^{2}-\frac{9}{4}\right)\psi,&\bar{R}\neq 0,\\ \\ -k^{2}\psi,&\bar{R}=0.\end{array}\right. \tag{4.18}\] Equation (4.17) then translates into \[\nu^{2}-\frac{9}{4}=\frac{48}{G\alpha\bar{R}}\left(1-\frac{GN^{2}\bar{R}}{24\pi}\right),\quad dS\,\mathrm{or}\,AdS, \tag{4.19}\] \[k^{2}=\frac{4}{G\alpha},\quad\text{Minkowski}. \tag{4.20}\] To determine whether the solution corresponds to a tachyonic mode, we need to specify the geometry of the boundary. * **Minkowski.** The theory is tachyon-stable if the invariant four-momentum \(k^{2}\) is timelike or null, \(k^{2}\leq 0\). From equation (4.20) we conclude that: \[\alpha>0\quad\Rightarrow\quad\text{scalar mode is tachyonic}.\] (4.21) Note that for \(\alpha=0\), the scalar mode is decoupled (at quadratic order). * **de Sitter.** We choose to use Poincare coordinates, which cover the expanding patch of de Sitter. In these coordinates, the de Sitter metric is given by \[ds_{dS}^{2}=\frac{1}{(H\tau)^{2}}\eta_{\omega\sigma}dx^{\omega}dx^{\sigma},\qquad\bar{R}=12H^{2},\] (4.22) where \(x^{0}\equiv\tau=\frac{e^{-Ht}}{H}\) is the conformal time. The solutions of Eq. (4.18) are given in terms of Bessel functions: \[\psi=(H\tau)^{\frac{3}{2}}J_{\pm\nu}(\tau p)\underset{\tau\to 0}{\sim}e^{Ht(\pm\nu-3/2)},\] (4.23) where \(p\equiv\sqrt{\delta_{ij}p^{i}p^{j}}\) is the norm of the 3-dimensional Fourier momentum in the spatial directions. Both solutions (4.23) are bounded as \(\tau\to 0\) if [68]: \[|\mathrm{Re}(\nu)|\leq 3/2.\] (4.24) Defining a tachyon as a mode which grows exponentially in time, equation (4.19) then translates into the statement23: Footnote 23: In an expanding background such a mode is still acceptable as long as its growth rate is much smaller than the Hubble rate, i.e. \(\left||\mathrm{Re}(\nu)|-3/2\right|\ll 1\). In this case one can say that de Sitter space is long-lived. \[\frac{1}{\alpha}\left(1-\frac{GN^{2}H^{2}}{2\pi}\right)>0\quad\Rightarrow\quad\text{scalar mode is tachyonic}.\] (4.25) By inserting equation (4.19) into the exponent of (4.23), we obtain the "decay rate" \(\Gamma\) of de Sitter due to the tachyonic instability: \[\Gamma=H\left[\sqrt{\frac{9}{4}+\frac{4}{G\alpha H^{2}}\left(1-\frac{GN^{2}H^{2}}{2\pi}\right)}-\frac{3}{2}\right],\] (4.26) which is real and positive if we are in the tachyonic regime (4.25). We now discuss a few special cases. As in flat space, if \(\alpha=0\), the scalar mode is non-propagating. * The special case \(GN^{2}H^{2}=4\pi\) corresponds to a vanishing 4d cosmological constant \(\Lambda=0\) (by equation (2.57)). This is the case studied in [52]. We find, in agreement with that work, that scalar tachyonic instabilities occur for \(\alpha<0\). This also includes the homogeneous scalar mode of the original Starobinsky model [46, 47]. * In the absence of the CFT, i.e. for \(N^{2}=0\), de Sitter is tachyon-unstable in the scalar sector for \(\alpha>0\). This is the opposite sign compared to the previous case. There is no contradiction here, since (2.57) shows that de Sitter is not a solution when both \(\Lambda\) _and_ \(N^{2}\) vanish.
* **anti-de Sitter**: In this case, a massive scalar mode is tachyonic if it violates the BF bound [81], which in 4 space-time dimensions means that the mass \(m\) satisfies \[m^{2}\chi^{-2}<-\frac{9}{4},\] (4.27) where \(\chi^{-1}\) is the AdS length. Comparing with equation (4.18) with \(\bar{R}=-12\chi^{2}\), violation of the BF bound is equivalent to: \[\nu^{2}<0.\] (4.28) Using (4.19) we then conclude: \[\frac{9}{4}-\frac{4}{\alpha G\chi^{2}}\left(1+\frac{GN^{2}\chi^{2}}{2\pi}\right)<0\quad\Rightarrow\quad\text{scalar mode is tachyonic}.\] (4.29) * For \(\alpha=0\) the scalar mode decouples, as in the other cases. * For pure gravity, in the absence of the CFT (\(N^{2}=0\)), the tachyonic condition (4.29) becomes \[\frac{16}{\alpha G\chi^{2}}>9.\] (4.30) In particular, this cannot be satisfied for \(\alpha<0\). This is the opposite of the de Sitter case.

## 5 The spin-two spectral equations

We now move to the tensor perturbations \(h^{(0)}_{\omega\sigma}\) defined in (3.13). The linearized Einstein equation (3.21) contains \(h^{(4)}\), which can only be specified by solving the perturbation equations in the bulk. This section is devoted to expressing \(h^{(4)}\) in terms of the boundary perturbation \(h^{(0)}\) by solving the bulk tensor equation (3.11). This has to be done separately for each slice geometry (flat, positive and negative curvature). We treat each case in a separate subsection.

### Flat-slicing

The bulk equation of motion for the tensor mode (3.11) simplifies significantly in flat slicing coordinates. First, one can write the bulk metric as a conformally flat space by defining the usual Poincare coordinate \(Z\) as \[Z\equiv e^{-u}=\sqrt{\rho}. \tag{5.1}\] The perturbed bulk metric (3.3), in which we only keep the propagating tensor \(h_{\omega\sigma}\), is then written as \[ds_{5}^{2}=\frac{1}{Z^{2}}[L^{2}dZ^{2}+(\eta_{\omega\sigma}+h_{\omega\sigma})dx^{\omega}dx^{\sigma}]. \tag{5.2}\] The bulk equation of motion (3.11) describes the dynamics of a massless graviton in AdS. In flat slicing coordinates (5.2), we can insert \(a=e^{u}\) into (3.12), which boils down to the massless scalar equation \[\Box_{5}h_{\omega\sigma}=0, \tag{5.3}\] where \(\Box_{5}\) is the \(AdS\) scalar Laplacian in Poincare coordinates (2.47), given by \[L^{2}\Box_{5}=Z^{2}(\partial_{Z}^{2}+L^{2}\eta^{\kappa\lambda}\partial_{\kappa}\partial_{\lambda})-3Z\partial_{Z}. \tag{5.4}\] Now the strategy is to search for separable solutions, which we write as \[h_{\omega\sigma}(Z,x)=F(Z,k)\tilde{h}_{\omega\sigma}^{(0)}(x,k), \tag{5.5}\] where \(\tilde{h}^{(0)}\) solves the eigenvalue equation parametrized by \(k^{2}\) as \[\partial^{\sigma}\partial_{\sigma}\tilde{h}_{\omega\kappa}^{(0)}=-k^{2}\tilde{h}_{\omega\kappa}^{(0)}\equiv(m_{2})^{2}\tilde{h}_{\omega\kappa}^{(0)}, \tag{5.6}\] and \((m_{2})^{2}\) can in general be a complex number. The second equation is an ordinary differential equation for \(F(Z,k)\), \[\left(Z^{2}\partial_{Z}^{2}-3Z\partial_{Z}-k^{2}L^{2}Z^{2}\right)F(Z,k)=0. \tag{5.7}\] Note that, ultimately, we want to write an equation for the boundary tensor perturbation of the form (3.22) in Fourier space, with \(\nabla^{2}\) replaced by \(-k^{2}\). The solutions will be the physical masses squared of the propagating 4d modes. As the solutions of this equation may be complex, we must allow for complex values of \(k^{2}\), beyond the usual choices of timelike (\(k^{2}<0\)) and spacelike (\(k^{2}>0\)) momentum one obtains for real wavenumbers.
With this caveat, it is now convenient to write (5.7) as: \[\left[y^{2}\frac{d^{2}}{dy^{2}}-y^{2}-3y\frac{d}{dy}\right]F(y)=0,\qquad y\equiv LkZ, \tag{5.8}\] where we define \(k\), for any complex \(k^{2}\) outside of the negative real axis, as the complex root of \(k^{2}\) with _positive_ real part24; with a slight abuse of notation, we have replaced \(F(Z,k)\) by \(F(y)\). Footnote 24: This prescription is enough to identify tachyonic modes, for which \(\text{Re}(k)\neq 0\). Instead, real negative \(k^{2}\) corresponds to non-tachyonic propagating particles, and as usual, their propagator needs a further prescription. We use the analytic continuation of the results to purely imaginary values of \(k\), which corresponds to taking the retarded stress tensor 2-point function. We now solve the equations for \(\tilde{h}^{(0)}\) (5.6) and for \(F\) (5.8). First, the solutions of (5.6) are the Fourier modes: \[\tilde{h}^{(0)}_{\omega\kappa}(k^{\sigma})=e^{\pm ik^{\sigma}x_{\sigma}}=e^{\pm i(-\omega t+\mathbf{k}.\mathbf{x})},\ \mathbf{k}\in\mathbb{R}^{3},\ \omega\equiv\sqrt{-k^{2}+\mathbf{k}^{2}}. \tag{5.9}\] For a positive eigenvalue \(k^{2}\), modes with \(|\mathbf{k}|<|k|\) will necessarily feature an imaginary part in \(\omega\), and one of the two solutions in (5.9) will diverge with time. This is the usual tachyonic instability of flat space, which occurs for massive Klein-Gordon equations with negative mass squared, and more generally it persists also for a complex mass. Therefore, the condition that a mode characterised by \(k^{2}\) is non-tachyonic is: \[\text{Re}(k)=0, \tag{5.10}\] where, as above, we have defined \(k\) as the complex root of \(k^{2}\) with positive real part25. Footnote 25: This is simply a convention, since the equation has a symmetry in \(k\to-k\). Equation (5.8) is solved by modified Bessel functions, \[F(y)=y^{2}(\lambda_{1}K_{2}(y)+\lambda_{2}I_{2}(y)). \tag{5.11}\] We must impose that the solution (5.11) is regular at the horizon \(Z\to+\infty\). This requires \(\lambda_{2}=0\), because in this limit \(I_{2}(y)\sim e^{kLZ}\) and by definition \(\text{Re}(k)>0\). The remaining solution \(K_{2}(y)\) decays exponentially as \(Z\to+\infty\). We fix the remaining parameter \(\lambda_{1}\) by choosing the normalization at the AdS\({}_{5}\) boundary \(Z=0\) so that: \[F(Z=0)=1. \tag{5.12}\] This way, the solution (5.5) for the bulk tensor perturbation \(h_{\omega\sigma}(x,Z)\) coincides at \(Z=0\) with the boundary tensor mode \(h^{(0)}_{\omega\sigma}\) defined in the FG expansion (3.13). For this reason, the leading term \(\tilde{h}^{(0)}_{\omega\sigma}\) of \(h_{\omega\sigma}\) in (5.5) is identified with the Fourier mode of the leading term in the Fefferman-Graham expansion (3.13), which was defined as \(h^{(0)}_{\omega\sigma}\). We drop the tilde from now on. For small \(y\), the Bessel function \(K_{2}\) behaves as: \[K_{2}(2y)\underset{y\to 0}{=}\frac{1}{2}\left\{y^{-2}-1+\frac{3}{4}y^{2}-y^{2}\left[\gamma_{E}+\log(y)\right]\right\}+\mathcal{O}(y^{4}), \tag{5.13}\] where \(\gamma_{E}\) is the Euler-Mascheroni constant. Then, equation (5.12) fixes \(\lambda_{1}=1/2\) in (5.11). Having completely fixed \(F(Z,k)\), we can read off \(h^{(4)}_{\omega\sigma}\) and \(\hat{h}_{\omega\sigma}\) from its near-boundary expansion, using (5.13) and (5.5), and compare with the corresponding terms in equation (3.13), recalling that \(Z=\sqrt{\rho}\).
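As a quick numerical cross-check of this solution (a sketch we add here; it is not part of the derivation), one can verify with mpmath that \(F(y)=\frac{1}{2}y^{2}K_{2}(y)\) indeed solves the radial equation (5.8) and obeys the normalization (5.12):

```python
import mpmath as mp

def F(y):
    # regular solution of (5.8), normalized so that F -> 1 as y -> 0
    return 0.5 * y**2 * mp.besselk(2, y)

# residual of (5.8): y^2 F'' - 3 y F' - y^2 F should vanish
for y0 in [mp.mpf('0.3'), mp.mpf('1.0'), mp.mpf('2.5')]:
    res = y0**2*mp.diff(F, y0, 2) - 3*y0*mp.diff(F, y0) - y0**2*F(y0)
    print(float(y0), mp.nstr(res))    # residuals at numerical-precision level

# normalization at the boundary y -> 0, cf. (5.12)
print(mp.nstr(F(mp.mpf('1e-4'))))     # -> 1.0 to high accuracy
```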
We find: \[h^{(2)}_{\omega\kappa}=-\left(\frac{kL}{2}\right)^{2}h^{(0)}_{\omega\kappa}, \tag{5.14a}\] \[h^{(4)}_{\omega\kappa}=\left(\frac{kL}{2}\right)^{4}\left[\frac{3}{4}-\gamma_{E}-\log\left(\frac{kL}{2}\right)\right]h^{(0)}_{\omega\kappa}, \tag{5.14b}\] \[\hat{h}_{\omega\kappa}=-\frac{1}{2}\left(\frac{kL}{2}\right)^{4}h^{(0)}_{\omega\kappa}. \tag{5.14c}\] The terms \(h^{(2)}_{\omega\kappa}\) and \(\hat{h}_{\omega\kappa}\) agree with the perturbative solutions of the bulk Einstein equation (B.5) that are given in appendix B by (B.6) and (B.7). To perform this comparison and check that they agree, it is enough to linearize \(g^{(2)}_{\omega\kappa}\) (B.6) and \(\hat{g}_{\omega\kappa}\) (B.7) with respect to the transverse-traceless perturbation \(h^{(0)}_{\omega\sigma}\). The linearization of the stress tensor (2.41) around a flat background for the tensor perturbation is given by26: Footnote 26: The term \(h^{(2)}\) does not contribute to (5.15) because it always appears in the CFT stress tensor (2.41) multiplied by \(g^{(2)}[\bar{\zeta}]\), which vanishes on a flat background. \[\delta_{h}\left\langle T_{\omega\kappa}\right\rangle=\frac{N^{2}}{2\pi^{2}L^{4}}\left[h^{(4)}_{\omega\kappa}+\left(1-2\log\mu L\right)\hat{h}_{\omega\kappa}\right]. \tag{5.15}\] Inserting the bulk solutions (5.14) into (5.15), we obtain the perturbed stress tensor in terms of \(h^{(0)}\) alone: \[\delta_{h}\left\langle T_{\omega\kappa}\right\rangle=\frac{N^{2}}{2\pi^{2}}\left(\frac{k}{2}\right)^{4}\left[\frac{1}{4}-\gamma_{E}-\log\left(\frac{k}{2\mu}\right)\right]h^{(0)}_{\omega\kappa}. \tag{5.16}\] As a final step, inserting the expressions (5.14) into (3.21), we obtain the linearized Einstein equation specialized to a flat background, in the form of an equation for \(h^{(0)}\) alone: \[\frac{N^{2}}{64\pi^{2}}k^{2}Q_{\text{flat}}(k)h^{(0)}_{\omega\sigma}=0, \tag{5.17}\] where \[Q_{\text{flat}}(k)\equiv\left\{-\frac{2\pi}{GN^{2}}+k^{2}\left[\frac{1}{4}-\gamma_{E}-\log\left(\frac{k}{2\mu}\right)-\frac{1}{2}\frac{\pi\beta}{N^{2}}\right]\right\}. \tag{5.18}\] From (5.18), as anticipated in Section 3, we observe that the contributions from the renormalization scale \(\mu\) coming from the CFT and from the quadratic curvature term proportional to \(\beta\) combine into the parameter \(\beta_{\text{eff}}\) given in equation (3.30). For convenience, we also define \[\tilde{\beta}_{\text{eff}}\equiv\frac{\pi\beta_{\text{eff}}}{N^{2}}. \tag{5.19}\] In terms of \(\tilde{\beta}_{\text{eff}}\), the spectral function is rewritten as \[Q_{\rm flat}(k)=\left\{-\frac{2\pi}{GN^{2}}+\frac{k^{2}}{2}\left[\frac{1}{2}-2\gamma_{E}-\log\left(GN^{2}k^{2}\right)-\tilde{\beta}_{\rm eff}\right]\right\}. \tag{5.20}\] The quantity multiplying \(h^{(0)}\) in (5.17) is the inverse propagator (3.25) for a flat spacetime. Its expression is given by27: Footnote 27: For the overall coefficient in this expression, see Appendix J. \[{\cal F}_{\rm flat}(k)=\frac{N^{2}}{64\pi^{2}}k^{2}Q_{\rm flat}(k). \tag{5.21}\] Non-trivial solutions (\(h^{(0)}_{\omega\sigma}\neq 0\)) of the equation of motion (5.17) correspond to the propagating momentum modes \(k\) of the boundary perturbation \(h^{(0)}\) and are found by solving the spectral equation \[k^{2}Q_{\rm flat}(k)=0. \tag{5.22}\] Solutions of this equation are the poles of the propagator \({\cal F}_{\rm flat}^{-1}\). An obvious solution to this equation is the massless mode \(k^{2}=0\), which is present already in pure Einstein-Hilbert gravity.
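Away from the massless pole, the zeros of \(Q_{\rm flat}\) must be located numerically in the complex plane. The following minimal sketch (our addition; the value of \(\tilde{\beta}_{\rm eff}\) is an arbitrary illustration) implements (5.20) in the dimensionless variable \(z\equiv GN^{2}k^{2}\) together with the tachyon criterion \({\rm Re}(k)\neq 0\):

```python
import mpmath as mp

def Q_flat(z, beta_eff):
    # dimensionless form of (5.20), z = G N^2 k^2, principal branch of the log;
    # an overall factor 1/(G N^2) is dropped since it does not move the zeros
    return -2*mp.pi + (z/2)*(mp.mpf(1)/2 - 2*mp.euler - mp.log(z) - beta_eff)

beta_eff = 2.0                     # illustrative value of tilde-beta_eff
z0 = mp.findroot(lambda z: Q_flat(z, beta_eff), mp.mpc(-1, 1))  # seed may need varying
k = mp.sqrt(z0)                    # principal root, Re(k) >= 0 as in the text
print(z0, k)
print('tachyonic' if mp.re(k) > 1e-10 else 'non-tachyonic')
```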
"Exotic" Modes with \(k^{2}\neq 0\) satisfy: \[1=\frac{GN^{2}k^{2}}{4\pi}\left(\frac{1}{2}-2\gamma_{E}-\log\left(GN^{2}k^{2} \right)-\tilde{\beta}_{\rm eff}\right). \tag{5.23}\] The only solutions which are non-tachyonic are those for which \(k^{2}<0\). The absence of tachyon-instabilities of flat space is then equivalent to the absence of solutions \(k\) to (5.23) with a non-zero real part. We study the existence of such unstable solutions in section 7. ### dS-slicing We now consider the CFT on de Sitter and we turn to equation (3.12) applied to dS slicing coordinates (2.48). As a result, we obtain \[\left\{\partial_{u}^{2}+4\coth u\partial_{u}+\frac{H^{-2}\tilde{\nabla}^{2}-2 }{\sinh^{2}u}\right\}h_{\omega\sigma}=0. \tag{5.24}\] The operator inside curly brackets is similar to the expression of the Laplace operator of \(AdS_{5}\) acting on scalars, in which case the numerator of the last term would be replaced by the 4-dimensional slice scalar Laplacian. Similarly to the flat slicing case (5.5), we search for separable solutions of the form: \[h_{\omega\sigma}(x,u)=F(u,\nu)\tilde{h}^{(0)}_{\omega\sigma}(x,\nu). \tag{5.25}\] This results in two equations: the first one is an eigenvalue problem on the slice, which we write as: \[(\hat{\nabla}^{2}-2H^{2})\tilde{h}^{(0)}_{\omega\sigma}=-H^{2}\left(\nu^{2}-\frac {9}{4}\right)\tilde{h}^{(0)}_{\omega\sigma}\equiv(m_{2})^{2}\tilde{h}^{(0)}_{ \omega\sigma}. \tag{110}\] The second equation is an ODE in the radial direction: \[\left\{\frac{d^{2}}{du^{2}}+4\coth u\frac{d}{du}-\frac{\nu^{2}-\frac{9}{4}}{ \sinh^{2}u}\right\}F(u,\nu)=0. \tag{111}\] The information about tachyonic instabilities is contained in the value of \(\nu\). As shown in Appendix G, and as it is pointed out in [52; 68], modes with \[|\text{Re}(\nu)|>3/2 \tag{112}\] are tachyonic because (110) contains a solution which diverges with time (see appendix G for the details). In pure 4d gravity, the only propagating mode would be the transverse-traceless graviton, which is a zero eigenvalue for the Lichnerowicz operator of de Sitter (the left-hand side of equation (110) and corresponds to \(\nu=\pm 3/2\). Turning on the CFT matter content and the quadratic curvature terms will allow for modes with different values of \(\nu\). These will be determined by solving the boundary spectral equation, which we derive below. As in the flat case, to obtain the boundary spectral equation we have to solve the radial equation (111). The most general solution of (111) is a linear combination of two hypergeometric functions given by \[F(u,\nu)=C_{+}\tanh u^{\nu-\frac{3}{2}}{}_{2}F_{1}\,\left[\frac{1 }{2}\left(\nu-\frac{3}{2}\right),\frac{1}{2}\left(\nu-\frac{1}{2}\right);1+ \nu;\tanh^{2}u\right]+\] \[+C_{-}\tanh u^{-\nu-\frac{3}{2}}{}_{2}F_{1}\,\left[\frac{1}{2} \left(-\nu-\frac{3}{2}\right),\frac{1}{2}\left(-\nu-\frac{1}{2}\right);1-\nu ;\tanh^{2}u\right], \tag{113}\] where \(C_{\pm}\) are integration constants. This solution may have a singularity at the horizon \(u=0\), depending on the real part of \(\nu\). As we show in appendix E, requiring the solution (113) to be normalizable at \(u=0\) gives the following constraints : * If \(\text{Re}(\nu)>0\), we need to set \(C_{-}=0\) for normalizability at \(u=0\). * If \(\text{Re}(\nu)<0\), we need to set \(C_{+}=0\) for normalizability at \(u=0\). * If \(\text{Re}(\nu)=0\), both solutions oscillate at the horizon \(u=0\). 
Since the problem (111) is symmetric in \(\nu\leftrightarrow-\nu\), we can choose \(\text{Re}(\nu)\geq 0\) without loss of generality. In this case, the most general regular solution at \(u=0\) is the one with \(C_{-}=0\). The case where \(\nu\) is imaginary lies in the stable region (112) and needs a further prescription (e.g. infalling boundary conditions). We shall define the spectral function by analytic continuation to purely imaginary \(\nu\). We fix the normalization of \(F\) by imposing that in the UV, at \(u\to+\infty\): \[F(u,\nu)\underset{u\to+\infty}{\longrightarrow}1. \tag{112}\] This condition ensures that \(\tilde{h}^{(0)}_{\omega\sigma}\) defined in (109) identifies with the leading term \(h^{(0)}_{\omega\sigma}\) of the Feffeman-Graham expansion (105). From now on, we drop the tilde on \(h^{(0)}\) although it only represents a single mode \(\nu\) (108). Using Gauss' hypergeometric theorem \[{}_{2}F_{1}\;[a,b;c;1]=\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}, \tag{113}\] valid for \(\mathrm{Re}(c)>\mathrm{Re}(a+b)\), the boundary condition (112) fixes the value of the integration constant to \[C_{+}=\frac{\Gamma\left(\frac{5}{2}+\nu\right)\sqrt{\pi}2^{-\nu-\frac{3}{2}}} {\Gamma(1+\nu)}. \tag{114}\] The near-boundary expansion of \(F(u,\nu)\) can be obtained using a hypergeometric transformation (page 49 of [82]). It allows us to transform \(F(u,\nu)\) into a power series of \(e^{-u}\) instead of \(\tanh^{2}u\) (hypergeometric functions are defined as a power series of their last argument). The first few terms of the result are given by \[F(u,\nu)=1-e^{-2u}\left(\nu^{2}-\frac{9}{4}\right)-e^{-4u}\left(\nu^{2}-\frac {9}{4}\right)\left\{1+\right.\] \[\left.+\left(\nu^{2}-\frac{1}{4}\right)\left[-u-\frac{3}{4}+\mathcal{H}\left( \nu-\frac{1}{2}\right)\right]\frac{}{}\right\}+\mathcal{O}(e^{-6u}), \tag{115}\] where \(\mathcal{H}\) is the harmonic number function defined in terms of the Euler Gamma function, \(\Gamma\) as \[\mathcal{H}(z)=\frac{\Gamma^{\prime}(z+1)}{\Gamma(z+1)}+\gamma_{E}. \tag{116}\] The terms in (115) are enough to read all of the Fefferman-Graham expansion (105) using the relation between positive \(u\) and \(\rho\) in tabular 108: \[h^{(2)}_{\omega\sigma}=-h^{(0)}_{\omega\sigma}\left(\frac{LH}{2}\right)^{2} \left(\nu^{2}-\frac{9}{4}\right) \tag{117a}\] \[h^{(4)}_{\omega\sigma}=-h^{(0)}_{\omega\sigma}\left(\frac{LH}{2}\right)^{4} \left(\nu^{2}-\frac{9}{4}\right)\left\{1+\left(\nu^{2}-\frac{1}{4}\right) \left[\log\left(\frac{LH}{2}\right)-\frac{3}{4}+\mathcal{H}\left(\nu-\frac{1 }{2}\right)\right]\right\}\] (117b) \[\hat{h}_{\omega\sigma}=-\frac{h^{(0)}_{\omega\sigma}}{2}\left(\frac{LH}{2} \right)^{4}\left(\nu^{2}-\frac{9}{4}\right)\left(\nu^{2}-\frac{1}{4}\right). \tag{117c}\] As we already discussed for the flat slicing, \(h^{(2)}\) and \(\hat{h}\) can be found using an independent method discussed in appendix B. This method consists in solving perturbatively the bulk Einstein equation at small values of the Fefferman-Graham coordinate \(\rho\) for an arbitrary boundary metric \(g^{(0)}\). The solution for \(g^{(2)}\) and \(\hat{g}\), given in (111) and (112), can be linearized with respect to \(h^{(0)}\) to obtain the same result as (108a) and (108c) using the formulae (106). However, this alternative method does not determine \(h^{(4)}\) in terms of \(h^{(0)}\), but only its trace and divergence. 
Inserting (108b)-(108c) into the linearized Einstein equation for the tensor mode (104), we find the equation of motion of the boundary spin-2 perturbation in momentum space given by \[\frac{N^{2}H^{4}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4}\right)Q_{dS}(\nu)h^{(0)}_ {\omega\sigma}=0, \tag{109}\] where \[Q_{\rm dS}(\nu)\equiv 1-\frac{2\pi}{GN^{2}H^{2}}+2\tilde{\alpha}-\frac{ 1}{2}\left(\nu^{2}-\frac{1}{4}\right)\left[\log\left(GN^{2}H^{2}\right)-\frac {1}{2}+2\mathcal{H}(\nu-1/2)+\tilde{\beta}_{\rm eff}\right],\] \[{\rm Re}(\nu)>0, \tag{110}\] where, as in flat space, we have combined the contributions from \(\mu\) and \(\beta\) into a single parameter \(\tilde{\beta}_{\rm eff}\) defined via equations (105) and (106). We have also defined the parameter \(\tilde{\alpha}\) as \[\tilde{\alpha}\equiv\frac{\pi\alpha}{N^{2}}. \tag{111}\] From now on, we shall always refer to the new quadratic curvature coefficients \(\tilde{\alpha}\) and \(\tilde{\beta}_{\rm eff}\), except in subsection 6 where we set \(N=0\) in which case these new quantities become ill-defined. The inverse spin-2 propagator defined in (107) is then given by \[\mathcal{F}_{\rm dS}(\nu)=\frac{N^{2}H^{4}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{ 4}\right)Q_{dS}(\nu),\quad{\rm Re}(\nu)>0. \tag{112}\] The overall coefficient of (112) is obtained in appendix J. The expression for \(Q_{\rm dS}\) given in (110) is only valid for positive real parts of \(\nu\) because we chose \(C_{-}=0\) for normalizability of the bulk solution (101) at \(u=0\). By symmetry of the bulk equation (109) in \(\nu\leftrightarrow-\nu\), and in \(\nu\leftrightarrow\nu^{*}\), the propagator \(\mathcal{F}_{\rm dS}(\nu)\) must also obey the same symmetries. The combination of these two symmetries implies that both the real and imaginary axes of \(\nu\) are axes of symmetry for \(\mathcal{F}_{\rm dS}\). As a consequence, the inverse propagator for \({\rm Re}(\nu)<0\) is obtained by replacing \(\nu\rightarrow-\nu\) in (110). Each value of \(\nu\in\mathbb{C}\) solving equation (109) is a pole of the 2-point function \(\mathcal{F}_{\rm dS}\) and corresponds to a propagating mode. The positions and residues of these poles depend on the Hubble rate \(H\) of the boundary metric \(\bar{\zeta}_{\omega\sigma}\) (47), the quadratic curvature coefficient \(\tilde{\alpha}\) (111, 12), the scheme-dependent quadratic curvature coefficient \(\tilde{\beta}_{\rm eff}\) (106, 107) and the colour number \(N^{2}\). The existence of tachyonic modes \(|{\rm Re}(\nu)|>3/2\) will be studied in section 8. ### AdS-slicing Deriving the analogous spin-2 spectral equation (5.36) for AdS slicing follows the same steps as in the previous subsection, with the difference that now there are two UV boundaries, located at \(u\to\pm\infty\), corresponding to two CFTs, [72]. The equation of motion for bulk metric perturbations (3.11) in AdS slicing (2.50) is \[\left\{\partial_{u}^{2}+4\tanh u\partial_{u}+\frac{\chi^{-2}\hat{\nabla}^{2}+2 }{\cosh^{2}u}\right\}h_{\omega\sigma}=0, \tag{5.40}\] which is the analogue of equation (5.24). As it was done for dS slicing, we search for separable solutions of the form: \[h_{\omega\sigma}=F(u,\nu)\tilde{h}_{\omega\sigma}^{(0)}(x,\nu). 
\tag{5.41}\] We then separate equation (5.40) into an eigenvalue problem on the slice, \[(\hat{\nabla}^{2}+2\chi^{2})\tilde{h}_{\omega\sigma}^{(0)}=\chi^{2}\left(\nu^ {2}-\frac{9}{4}\right)\tilde{h}_{\omega\sigma}^{(0)}\equiv(m_{2})^{2}\tilde{h }_{\omega\sigma}^{(0)}, \tag{5.42}\] and a radial equation, \[\left\{\frac{d^{2}}{du^{2}}+4\tanh u\frac{d}{du}+\frac{\nu^{2}-\frac{9}{4}}{ \cosh^{2}u}\right\}F(u,\nu)=0. \tag{5.43}\] Before solving the radial equation, we first comment on the role the eigenvalues \(\nu\) play in the tachyonic instability. Note that the massless graviton is associated with the eigenvalue \(\nu=\pm 3/2\). Unlike de Sitter, where the eigenvalue for a massless spin-2 graviton separates between tachyonic and non-tachyonic modes, in AdS, some negative masses are non-tachyonic because they are allowed by the BF bound [81]. Thus, in AdS, the massless graviton does not saturate the stability bound. For general complex \(\nu\), we study the stability of metric perturbations in the Poincare patch of AdS in appendix H. To obtain a condition on the value of \(\nu\) in the complex plane, we study the existence of normalizable tachyonic modes of \(AdS_{4}\) for an arbitrary \(\nu\). As a result, such normalizable tachyonic modes exist if and only if \[\nu^{2}<0. \tag{5.44}\] When \(\nu^{2}\) is real, this statement reduces to the usual BF bound. Furthermore, we find that any complex \(\nu\) with a non-zero real part is not tachyonic. The most general solution of equation (5.43) is given by associated Legendre functions \[F(u,\nu)=(\cosh u)^{-2}\left(\lambda_{1}P_{\nu-1/2}^{2}(\tanh u)+\lambda_{2}Q_ {\nu-1/2}^{2}(\tanh u)\right). \tag{5.45}\] Since AdS-slicing coordinates (2.50) do not contain a horizon at \(u=0\), tensor perturbations \(h(u,\nu)\) can propagate in the whole bulk, between the two UV boundaries at \(u\to\pm\infty\). As a consequence, we are left with a choice of boundary conditions that we did not have for dS-slicing (in which case we imposed normalizability at the horizon). In AdS-slicing coordinates, different linear combinations of the two independent bulk solutions (5.45) correspond to different combinations of sources coupled to the CFT on each boundary. In our case, the boundary source is \(h^{(0)}_{\omega\sigma}\), the boundary metric perturbation. Therefore, generically, this setup corresponds to a bimetric theory. One possible choice is that only the boundary metric at \(u\to-\infty\) is chosen to be dynamical. Then, one should impose that on the other boundary, at \(u\to+\infty\), the source term of boundary metric perturbation vanishes. Another possibility to have a single dynamical metric is to identify the two boundaries, which corresponds to imposing a \(\mathbb{Z}_{2}\) symmetry \(u\to-u\) on the solution. We consider each of these cases in the following two subsections. The discussion above is relevant for a holographic CFT. For a generic CFT on AdS\({}_{4}\), the two-point function of the energy-momentum tensor depends on boundary conditions. For the simplest boundary conditions, Neumann or Dirichlet, the two-point function of the energy-momentum tensor can be calculated by mapping it to flat space by a conformal transformation and then using the method of images. We shall not pursue this further in the present paper. #### 5.3.1 Dynamical gravity on one side We choose to turn off leading (i.e. source-like) metric perturbations on the boundary at \(u\to+\infty\). 
This corresponds to the boundary conditions: \[F(u,\nu)\underset{u\to\pm\infty}{\longrightarrow}\left\{\begin{array}{ll}2 \lambda_{2}&=0,\\ \frac{4}{\pi}\lambda_{1}\cos(\pi\nu)-2\lambda_{2}\sin(\pi\nu)&=1,\end{array}\right. \tag{5.46}\] such that \(\tilde{h}^{(0)}\) (5.41) coincides with the leading term \(h^{(0)}\) of the Fefferman-Graham expansion (3.13) at the \(u\to-\infty\) side of the \(AdS_{5}\) boundary. These conditions fix the coefficients in (5.45) as \[\lambda_{2}=0, \tag{5.47}\] \[\lambda_{1}=\frac{\pi}{4\cos(\pi\nu)}, \tag{5.48}\] valid for \(\nu\neq n+\frac{1}{2}\), \(n\in\mathbb{Z}\). In the case where \(\nu=n+\frac{1}{2}\), we still have the requirement that \(\lambda_{2}=0\) but \(\lambda_{1}\) is unconstrained. The case of \(\nu=n+1/2\) is special because it corresponds to the spectrum of normalizable modes in \(AdS_{5}\), which can be seen from the asymptotic behaviour of Legendre functions in (I.21). Since \(\lambda_{1}\) diverges in the limit \(\nu\to n+1/2\), this discrete series of modes correspond to poles of the stress-tensor propagator or zeros of the propagator for tensor metric perturbations. Therefore, they cannot be the solution of the spin-2 spectral equation. As a consequence, we can ignore them for the rest of the paper. The unique solution (5.45) which satisfies the boundary conditions (5.46) can then be expanded near the dynamical boundary \(u\to-\infty\). This expansion is given by \[F(u,\nu)=1+\left(\nu^{2}-\frac{9}{4}\right)\left\{e^{2u}-e^{4u} \bigg{[}1+\left(\nu^{2}-\frac{1}{4}\right)\left(u-\frac{3}{4}\right.\right.\] \[\left.\left.+\frac{1}{2}\mathcal{H}\left(\nu-\frac{1}{2}\right)+ \frac{1}{2}\mathcal{H}\left(-\nu-\frac{1}{2}\right)\right)\right]\right\}+ \mathcal{O}(e^{6u}), \tag{5.49}\] where \(\mathcal{H}\) is again the harmonic-number function defined in (5.34). To read-off the Fefferman-Graham terms of the spin-2 perturbation from (5.49), we need to replace \(u\) by the Fefferman-Graham coordinate given in 2.53 by \[e^{2u}=\left(\frac{L\chi}{2}\right)^{2}\rho. \tag{5.50}\] Then, each term of the Fefferman-Graham expansion (3.13) can be identified from (5.49) as \[h^{(2)}_{\omega\sigma}=h^{(0)}_{\omega\sigma}\left(\frac{L\chi}{2}\right)^{2} \left(\nu^{2}-\frac{9}{4}\right) \tag{5.51a}\] \[h^{(4)}_{\omega\sigma}=-h^{(0)}_{\omega\sigma}\left(\frac{L\chi}{2}\right)^{4} \left(\nu^{2}-\frac{9}{4}\right)\left\{1+\left(\nu^{2}-\frac{1}{4}\right) \left[\log\left(\frac{L\chi}{2}\right)\right.\right.\] \[\left.\left.-\frac{3}{4}+\frac{1}{2}\mathcal{H}\left(-\frac{1}{2}- \nu\right)+\frac{1}{2}\mathcal{H}\left(-\frac{1}{2}-\nu\right)\right]\right\}\] (5.51b) \[\hat{h}_{\omega\sigma}=-\frac{h^{(0)}_{\omega\sigma}}{2}\left(\frac{L\chi}{2} \right)^{4}\left(\nu^{2}-\frac{9}{4}\right)\left(\nu^{2}-\frac{1}{4}\right). \tag{5.51c}\] The relation between the expansion of \(h_{\omega\sigma}(\rho,x^{\alpha})\) and \(\delta_{h}g_{\omega\sigma}(\rho,x^{\alpha})\) is given by (3.15). If we compare with the de Sitter slicing case (5.35), the analytic continuation \(H^{2}\to-\chi^{2}\) does not hold for \(h^{(4)}_{\omega\sigma}\) given in AdS by (5.51b) and in dS by (5.35b). The only difference between the two resides in the combination of harmonic functions \(\mathcal{H}\). For AdS (5.51b), the combination is symmetric in \(\nu\leftrightarrow-\nu\), which is not the case in de Sitter because the singularity at the horizon forced us to pick a sign for \(\text{Re}(\nu)\) and break the \(\mathbb{Z}_{2}\) symmetry in \(\nu\) for the bulk solution (5.29). 
Inserting the relations (5.51) into the linearized Einstein equation for the tensor perturbation (3.21), we obtain the spectral equation \(h^{(0)}\): \[\frac{N^{2}\chi^{4}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4}\right)Q_{(\cdot)}(\nu )h^{(0)}_{\omega\sigma}=0, \tag{5.52}\] where \[Q_{(\cdot)}(\nu)=1+2\left(\frac{\pi}{GN^{2}\chi^{2}}+\tilde{\alpha} \right)-\frac{1}{2}(\nu^{2}-1/4)\left[\tilde{\beta}_{\rm eff}\right.\] \[\left.+\log\left(GN^{2}\chi^{2}\right)-\frac{1}{2}+\mathcal{H} \left(-\frac{1}{2}-\nu\right)+\mathcal{H}\left(-\frac{1}{2}+\nu\right)\right]. \tag{100}\] We have used \(\tilde{\alpha}\) defined in (101) and \(\tilde{\beta}_{\rm eff}\) defined in (102). The inverse propagator for tensor perturbations is then given by28: Footnote 28: The overall coefficient is determined in Appendix J \[\mathcal{F}_{(\cdot)}=\frac{N^{2}\chi^{4}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4} \right)Q_{(\cdot)}(\nu). \tag{101}\] #### 5.3.2 Symmetric boundary conditions As an alternative way to couple AdS boundary gravity to the holographic sector, here we impose that the bulk tensor perturbation \(h_{\alpha\beta}\) has equal sources \(h^{(0)}\) on both boundaries \(u\to\pm\infty\). This is implemented by the boundary condition: \[F(u,\nu)\underset{u\to\pm\infty}{\longrightarrow}\left\{\begin{array}{ll}2 \lambda_{2}&=1,\\ \frac{4}{\pi}\lambda_{1}\cos(\pi\nu)-2\lambda_{2}\sin(\pi\nu)&=1.\end{array}\right. \tag{102}\] As for the previous boundary conditions, the case where \(\nu=n+1/2\) where \(n\) is an integer leaves \(\lambda_{1}\) unconstrained. However, we need to distinguish between the two following cases : * If \(n\) is odd, \(\lambda_{2}=1/2\) solves (102) and \(\lambda_{1}\) is unconstrained. This constant can therefore be set to an arbitrary value while (102) still holds. * If \(n\) is even, there is no solution for (102). Such modes are then forbidden in the symmetric case. If \(\nu\) is not a half-integer then the integration constants are given by \[\lambda_{2}=\frac{1}{2}, \tag{103}\] \[\lambda_{1}=\frac{\pi}{4}\left(\tan(\pi\nu)+\frac{1}{\cos(\pi\nu)}\right). \tag{104}\] One can observe from (104) that the limit \(\nu\to n+1/2\) can possibly make \(\lambda_{1}(\nu)\) diverge. As already discussed in the asymmetric case 5.3.1, these modes are the discrete spectrum of normalizable modes in \(AdS_{5}\). We again discuss the two cases : * If \(\nu\to n+1/2\) with \(n\) even, then \(\lambda_{1}(\nu)\) diverges. This limit corresponds to a pole of the stress-tensor correlator. Therefore, half-integer \(\nu\) with even \(n\), which are forbidden as discussed below (102), cannot be a solution to the spectral equation. * If \(\nu\to n+1/2\) with \(n\) odd, then \(\lambda_{1}(\nu)\to 0\). As a reminder, \(\lambda_{1}(\nu=n+1/2)\) can be set to an arbitrary value when \(n\) is odd. We can therefore extend \(\lambda_{1}\) by continuity to half integers with odd \(n\). Then, the solution (108) for \(\lambda_{1}(\nu)\) is continuous at \(\nu=n+1/2\), \(n\) odd, and we can include these half integers into our analysis while working with (108). The solution for \(F(u,\nu)\) given by (109) and (108) is then symmetric under \(u\leftrightarrow-u\). Its behaviour near both sides of the boundary \(u\to\pm\infty\) is obtained in appendix I, and the result is given by equation (19). 
We observe that the terms of the Fefferman-Graham expansion are all identical to (107) except \(h^{(4)}_{\omega\sigma}\) which now reads: \[h^{(4)}_{\omega\sigma} =-h^{(0)}_{\omega\sigma}\left(\nu^{2}-\frac{9}{4}\right)\left[1+ \frac{1}{2}\left(\nu^{2}-\frac{1}{4}\right)\left(2\log\left(\frac{L\chi}{2} \right)+\right.\right. \tag{109}\] \[\left.\left.-\frac{3}{2}+\mathcal{H}(\nu-1/2)+\mathcal{H}(-\nu- 1/2)-\frac{\pi}{\cos\pi\nu}\right)\right].\] Inserting the bulk data (109) into the equation of motion (111) we obtain our final result for the spectral equation in AdS with symmetric boundary conditions: \[\frac{N^{2}\chi^{4}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4}\right)Q_{\rm sym}( \nu)h^{(0)}_{\omega\sigma}=0, \tag{110}\] where \[Q_{\rm sym}(\nu)=1+2\left(\frac{\pi}{GN^{2}\chi^{2}}+\tilde{\alpha}\right)- \frac{1}{2}(\nu^{2}-1/4)\left[\tilde{\beta}_{\rm eff}+\right.\] \[\left.+\log\left(GN^{2}\chi^{2}\right)-\frac{1}{2}+\mathcal{H}(\nu-1/2)+ \mathcal{H}(-\nu-1/2)-\frac{\pi}{\cos\pi\nu}\right]. \tag{111}\] This expression is similar to the one obtained in the asymmetric case (108), except for the last term \(\pi/\cos(\pi\nu)\). This term becomes negligible if \(\nu\) is far from the real axis, as it decreases exponentially with the imaginary part of \(\nu\). The inverse propagator for an AdS space-time with symmetric sources is then: \[\mathcal{F}_{\rm sym}=\frac{N^{2}\chi^{4}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4 }\right)Q_{\rm sym}(\nu). \tag{112}\] ### Identifying ghosts from poles in the propagator Ghost instabilities are determined from the residue of the poles of the propagator. Whether a mode is a ghost is determined by the sign of the residue of the pole in \(\nu^{2}\): if it has the same sign as for the massless graviton in Einstein GR theory (on the same background), then the mode is healthy, otherwise, it is a ghost. It is convenient to identify the residue from the derivative of \(\mathcal{F}(\nu)\) with respect to the real part of \(\nu\): indeed using the holomorphic property of \(\mathcal{F}\) on the complex half-plane with positive real part, we have: \[\mathcal{F}^{\prime}(a+ib)=\left.\frac{\partial\mathcal{F}}{\partial a}\right| _{a+ib}. \tag{113}\] By symmetry of \({\cal F}(\nu)\) under \(\nu\leftrightarrow-\nu\), and by subtracting the Taylor expansion of \({\cal F}\) close to a pole \(\nu_{0}\) with the expansion close to \(-\nu_{0}\), one finds: \[\frac{1}{{\cal F}(\nu)}\underset{\nu\to\nu_{0}}{=}\frac{\nu_{0}}{{\cal F}^{ \prime}(\nu_{0})}\frac{1}{\nu^{2}-\nu_{0}^{2}}+{\cal O}(1). \tag{108}\] Therefore, the residue of the pole in \(\nu^{2}\) can be obtained as: \[\text{Res}[{\cal F}^{-1}](\nu_{0}^{2})\equiv\frac{\nu_{0}}{{\cal F}^{\prime}( \nu_{0})}. \tag{109}\] In pure gravity, the sign of the residue of the massless spin-2 pole in de Sitter is negative in our conventions. Therefore, ghosts are defined to be poles with a residue which is not real and negative. It can be real and positive, or even complex. If \({\cal F}^{\prime}(\nu_{0})=0\), then \(\nu_{0}\) is a higher order pole. In AdS, however, the massless spin-2 pole of Einstein-Hilbert gravity has a positive residue. In flat space, \(\nu\) has to be replaced by \(k\) in equation (109). The residue of a pole \(k_{0}\) in the \(k^{2}\) plane is related to \({\cal F}^{\prime}_{\text{flat}}(k_{0})\) as \[\text{Res}[{\cal F}^{-1}_{\text{flat}}](k_{0}^{2})\equiv\frac{k_{0}}{{\cal F}^ {\prime}(k_{0})}. \tag{110}\] In Einstein-Hilbert gravity, our conventions lead to a negative residue of the massless spin-2 pole. 
Positive and complex residues (110) will then be associated with ghost-like poles. ## 6 Tensor instabilities in pure gravity Before we discuss the instabilities arising from tensor modes in the gravity coupled to the holographic CFT, we pause here to give a brief overview of the tensor instabilities in pure gravity with higher curvature terms, described by the action (2). The spectral functions in pure gravity can be obtained simply by taking \(N=0\) in the expressions obtained in the previous section29, namely equations (109-110) for flat space, (111-113) for de Sitter and (112-113) or (114-115) for Anti-de Sitter. Footnote 29: Even if the \(N\to 0\) limit cannot be treated in holography, taking \(N=0\) in our setup is a quick way to decouple the bulk gravity theory from the boundary, and retrieve the results one would have obtained in a 4d modified gravity theory with Einstein-Hilbert plus quadratic curvature terms \(\alpha\) and \(\beta\) given by the action (2). * In the flat case, setting the \(N=0\) in \({\cal F}_{\text{flat}}\) (110) leads to: \[{\cal F}_{\text{flat}}\underset{N=0}{=}-\frac{k^{2}}{64\pi}\left\{\frac{2}{G} +\frac{\beta}{2}k^{2}\right\}.\] (111) The propagator is then a sum of two simple poles given by \[\mathcal{F}_{\text{flat}}^{-1}=-32\pi G\left\{\frac{1}{k^{2}}-\frac{1}{k^{2}+ \frac{4}{\beta G}}\right\}.\] (104) Therefore, the poles of the 2-point functions are the massless solutions \(k^{2}=0\) and an additional massive solution, \[k^{2}=-\frac{4}{\beta G}.\] (105) If \(\beta\) is negative, the 4-momentum corresponding to this solution is space-like, which implies a tachyonic instability. Thus, when the CFT is removed, flat space is then tachyon-unstable for strictly negative \(\beta\) and tachyon-stable for positive \(\beta\). As \(\beta\to 0\) the massive mode decouples and one recovers Einstein gravity. These results agree with [14] concerning the stability of flat space with a quadratic curvature action. Such perturbations around Minkowski space were first derived in [73]. Note that, in our conventions, the residue of the massless mode is negative. As we have seen above, the massive pole could be tachyonic or not depending on the sign of \(\beta\). However, it always corresponds to a ghost because its residue is positive for any \(\beta\). This pole is the usual ghost of quadratic gravity theories [73; 7]. Note that, when the mass of the ghost is above the cut-off (which for pure gravity is the 4d Planck scale \(G^{-1/2}\)) our results are not trustworthy in the context of a low-energy effective field theory of gravity. This is the case for \(|\beta|\lesssim 1\). In conclusion, in pure gravity, flat space-time is tachyon-stable when \(\beta>0\) and ghost-unstable for any \(\beta\), with the caveat that for \(|\beta|\) small or of order unity the mass of the ghost is above the cutoff for the analysis to be trusted. * For de Sitter, the pure gravity spectral function is obtained by setting \(N=0\) in (104), which gives: \[\mathcal{F}_{dS}(\nu)\underset{N=0}{=}-\frac{H^{4}}{64\pi}\left(\nu^{2}-\frac{ 9}{4}\right)\left\{\frac{2}{GH^{2}}-2\alpha+\frac{\beta}{2}\left(\nu^{2}-\frac {1}{4}\right)\right\}.\] (106) One simply needs to replace \(H^{2}\rightarrow-\chi^{2}\) to obtain the result for AdS, so we treat positive and negative curvature together. 
The propagator can then be written as a sum of two poles, \[\mathcal{F}_{dS}^{-1}(\nu)\underset{N=0}{=}-\frac{64\pi}{H^{4}}\left[\frac{2} {GH^{2}}-2\alpha+\beta\right]^{-1}\left\{\frac{1}{\nu^{2}-\frac{9}{4}}-\frac{ 1}{\nu^{2}-\frac{1}{4}+\frac{4}{\beta}\left(\frac{1}{GH^{2}}-\alpha\right) \right\}\right\}.\] (107) The first pole is the massless graviton, which is the only propagating mode that remains for \(\beta=0\). If \(\beta\neq 0\), the second pole is located at \[\nu^{2}=\frac{1}{4}+\frac{4}{\beta}\left(\alpha-\frac{1}{GH^{2}}\right). \tag{101}\] This equation shows that the \(\beta\to+\infty\) limit (while keeping \(\alpha\) and \(GH^{2}\) fixed) always makes a solution converge to \(\nu=\pm\frac{1}{2}\). In the opposite limit \(\beta\to 0\) this massive mode disappears and only the massless graviton remains. Note that if \(\beta=0\), the factor in curly braces in (100) vanishes for: \[\alpha=\frac{1}{GH^{2}}. \tag{102}\] Therefore, in the \(\beta=0\) case, and for any value of \(\alpha\), there exists a special value of the curvature scale \(H\) such that the tensor mode has a vanishing quadratic kinetic term and therefore it is strongly coupled30. In other words, the theory is strongly coupled for \(\beta=0\) and \(\alpha\) and \(H\) related by (102). Footnote 30: In such cases, there is a possibility of a Vainshtein-like mechanism operating. We do not know whether this has been investigated in this context. From (100) we observe that zeros of \(\mathcal{F}\) correspond to real \(\nu^{2}\). The de Sitter tachyon-stability condition (100) becomes \(\nu^{2}\leq 9/4\), while the anti-de Sitter condition (101) becomes \(\nu^{2}>0\). In pure gravity, these conditions translate to \[\frac{2}{\beta}\left(\alpha-\frac{1}{GH^{2}}\right)<1\quad\Rightarrow\quad \text{dS tachyon stable}, \tag{103}\] and anti-de Sitter is tachyon-stable if \[\frac{4}{\beta}\left(\alpha+\frac{1}{G\chi^{2}}\right)>-\frac{1}{4}\quad \Rightarrow\quad\text{AdS tachyon stable}. \tag{104}\] We now turn to ghost instabilities, starting with de Sitter. When the prefactor in the square brackets of (100) is positive, then the massless pole located at \(\nu^{2}=9/4\) has the same sign as the massless pole in pure gravity (\(\alpha=\beta=0\)) and therefore it is not a ghost, whereas the massive pole represents a ghost. On the contrary, if the prefactor in square brackets is negative, then the massless pole is ghost-like and the massive pole becomes ghost-free. All in all, the higher derivative pure gravity theory always has a ghost, be it the massless graviton or the massive mode. For AdS the conclusions are opposite: in our conventions, the massless graviton in pure Einstein gravity has positive residue in AdS, as can be seen by setting \(\alpha=\beta=0\) and replacing \(H^{2}\to-\chi^{2}\) in (100). ### When are tensor ghosts light? The discussion above holds if we take the spectral functions at face value. However, these conclusions can be trusted only when the poles lie within the validity of effective field theory, i.e. when the masses of the unstable modes are below the cut-off, which in pure gravity can be taken to be the Planck scale \(M_{p}=(8\pi G)^{-1/2}\). For the same reasons, all the expressions above make sense in effective field theory if the curvature is sub-Planckian, i.e. if \(GH^{2}\ll 1\). We shall verify when the unstable mode mass is sub-Planckian in the various cases. * **Flat space.** By equation (6.3), the modulus of the massive pole in Planck units is roughly \(G|m^{2}|=4/|\beta|\). 
Therefore, we conclude that: \[|\beta|\gg 1\quad\Rightarrow\quad\text{flat space gravity has a light tensor ghost.}\] (6.10) If in addition \(\beta<0\), this is also a light tachyon. * **de Sitter.** In this case, we have to distinguish two situations, depending on the sign of the prefactor in (6.5): 1. If \(\beta-2\alpha<-2/(GH^{2})\), then the massless mode is a ghost, and it is by definition below the cut-off. Since \(GH^{2}\ll 1\), this requires either \(\beta\) very large and negative, or \(\alpha\) very large and positive. 2. \(\beta-2\alpha>-2/(GH^{2})\) then the massive mode is a ghost. This is the most "natural" situation, as it does not require extreme values of \(\alpha\) and \(\beta\). By equation (6.6) the modulus of its mass squared in Planck units is31: Footnote 31: Recall that the mass is \(H^{2}(\nu^{2}-9/4)\). \[G|m_{ghost}^{2}|=\left|-2GH^{2}+\frac{4}{\beta}\left(\alpha GH^{2}-1\right) \right|.\] (6.11) The ghost is sub-Planckian when the second term on the left-hand side is smaller than unity (since the first term \(GH^{2}\) is always small in effective theory). For \(\alpha\) not too large, this is the case if \(|\beta|\gg 1\): \[\alpha\sim O(1),\;|\beta|\gg 1\Rightarrow\quad\text{de Sitter gravity has a light tensor ghost.}\] (6.12) All in all, we observe that for large \(|\beta|\), there is _always_ a light ghost in de Sitter (it may be massive, massless, or tachyonic). * **anti-de Sitter.** For AdS, the situation is the same as for de Sitter (with \(H^{2}\rightarrow-\chi^{2}\)), except that the role of points 1 and 2 above are exchanged: 1. If \(\beta-2\alpha>2/(G\chi^{2})\) then the massless pole is a ghost. Note that this is the generic situation for \(O(1)\) values of \(\alpha\) and \(\beta\), since the right-hand side of that inequality is a small number. 2. If instead \(\beta-2\alpha<2/(G\chi^{2})\), then the ghost is the massive mode. Its mass in Planck units is, in modulus: \[G|m_{ghost}^{2}|=\left|2G\chi^{2}-\frac{4}{\beta}\left(\alpha G\chi^{2}+1 \right)\right|.\] (111) For the ghost to be sub-Planckian this again requires \(\beta\gg 1\), but now this must be accompanied by a fine-tuning \(\alpha\simeq\beta/2\gg 1\) to ensure the ghost is the massive mode. We conclude that _generically_, AdS higher-curvature gravity has a light tensor ghost, unless \(\beta-2\alpha<2/(G\chi^{2})\ll 1\). ## 7 Poles of the Minkowski spin-two propagator and stability In this section, we analyse the flat-space spectral function found in section 5.1 and determine for which values of the parameters flat space-time are unstable under tensor perturbations. All the information about the tachyonic instability is contained in the location of the poles of the propagator, i.e. the zeros of the spectral function (109). As we have explained in section 5.1, in our conventions, a zero of (109) at a value \(k\) with a non-zero real part corresponds to a tachyonic mode32. Footnote 32: Recall that, in terms of momenta of Fourier modes, \(k\) represents the square root of \(k^{2}=-(k^{0})^{2}+\mathbf{k}^{2}\) with positive real part. Ghost instabilities are determined by computing the residue of these poles. In particular, a pole is not a ghost if its residue is negative, as is the case for the massless pole of the propagator in pure gravity (100). The spectral equation for non-trivial modes on Minkowski is given in equation (100), which we rewrite here for convenience: \[1=\frac{GN^{2}k^{2}}{4\pi}\left[\frac{1}{2}-2\gamma_{E}-\log\left(GN^{2}k^{2} \right)-\tilde{\beta}_{\text{eff}}\right]. 
\tag{112}\] As in our conventions we are taking \(\text{Re}(k)>0\), we can use the identity \[\log(k^{2})=2\log k. \tag{113}\] Equation (100) can then be written in the simpler form: \[X\log X=-a, \tag{114}\] where \(X\) and \(a\) are defined as: \[X\equiv GN^{2}k^{2}\exp\left\{-\frac{1}{2}+2\gamma_{E}+\tilde{\beta}_{\rm eff} \right\},\quad a\equiv 4\pi e^{-\frac{1}{2}+2\gamma_{E}+\tilde{\beta}_{\rm eff}}. \tag{108}\] The new variable \(X\) contains all information about tachyonic instabilities, the same way that \(k\) did. The stability condition (106) translates into \[X<0. \tag{109}\] Our problem is now simply to solve (105) for \(X\), which is a \(W(-a)\) Lambert's function, containing two branches. As a result, one or at most two solutions exist for a complex \(X\). To determine this, we write \[X=xe^{i\theta} \tag{110}\] and inject this expression into (105). Then (105) is equivalent to the two real equations \[x\theta=a\sin\theta\ \ \,\ \ \theta\cot\theta=-\log x \tag{111}\] The stability condition (109) translates into \(\theta=\pm\pi\). If a solution for any other value for \(\theta\) exists, it corresponds to a tachyonic mode. * First, we study the existence of purely real tachyonic solutions \(k^{2}>0\), for which \(\theta=0\). This is the case which was considered in [52]. If \(0\leq a<e^{-1}\), equation (105) has two solutions (one with a larger mass than the other), which merge at \(a=e^{-1}\). If on the other hand \(a>e^{-1}\), there is no solution with \(\theta=0\). Using this property, we can write a condition on \(\tilde{\beta}_{\rm eff}\) such that flat space has two tachyonic modes with \(k^{2}>0\): \[\tilde{\beta}_{\rm eff}\leq\tilde{\beta}_{\rm eff}^{\rm merge}\equiv-\log \left(4\pi e^{\frac{1}{2}+2\gamma_{E}}\right)\quad\Rightarrow\quad\text{Two tachyonic modes.}\] (112) When (112) is an equality, we observe a double pole located at \(X=e^{-1}\). In \(k^{2}\) space, this double pole is located at \[GN^{2}k^{2}=4\pi.\] (113) * We now consider the general case \(\theta\neq 0\). The imaginary and real parts of equation (105) give (111), that can be rewritten as \[\log x=-\theta\cot\theta,\] (114a) \[e^{-\theta\cot\theta}=a\frac{\sin\theta}{\theta},\] (114b) The number of solutions of ( 114b ) depends on the values of \(a\). Since \(\theta\cot\theta<1\) and \(\frac{\sin\theta}{\theta}<1\) for \(\theta\neq 0\), we can conclude that solutions of ( 114b ) with \(\theta\neq 0\) exist only in the range: \[a>e^{-1}.\] (115) These solutions are tachyonic as long as they do not move to the negative real axis \(\theta=\pm\pi\). This occurs as \(\tilde{\beta}_{\rm eff}\to+\infty\): in this limit, the instability approaches the imaginary axis, corresponding to \(k^{2}<0\). This was already noted in [52], and can be shown as follows: The two extremal values \(\theta=\pm\pi\) can be reached only if we take \(a\to+\infty\). Indeed, using the fact that the left-hand side of (7.10b) is bounded: \[\left|e^{-\theta\cot\theta}\right|\leq 1, \tag{7.12}\] then \(a\sin\theta/\theta\) must also be bounded as \(a\to+\infty\). Therefore, \[\theta\underset{a\to\infty}{\to}\pm\pi. \tag{7.13}\] The limit \(a\to+\infty\) is reached by taking \(\tilde{\beta}_{\rm eff}\to+\infty\). From the definitions (3.30-5.19), one way of obtaining this limit is by setting \(N=0\) which corresponds to decoupling the CFT as we have discussed in subsection 6. 
The pure gravity case for flat space was also studied in [14], and the analysis we have presented here agrees with the conclusion of that work: in pure gravity, flat space is tachyon-stable for positive \(\beta\) and tachyon-unstable for negative \(\beta\). To summarize, there are 2 real tachyonic poles when \(0\leq a<e^{-1}\). They merge at \(a=e^{-1}\) and they become complex at \(a>e^{-1}\). The complex poles of the spin-2 propagator can be found numerically, and are shown in figure 1 for some illustrative values of \(\tilde{\beta}_{\rm eff}\). Each snapshot corresponds to a different \(\tilde{\beta}_{\rm eff}\). Poles correspond to the intersections of the blue lines (zeros of the real part of the inverse propagator) and of the orange lines (zeros of the imaginary part). The massless pole is not shown in this Figure, it is always located at \(k=0\) for any value of the parameters \(\tilde{\alpha}\) and \(\tilde{\beta}_{\rm eff}\). In figure 1, we start at large and negative values of \(\tilde{\beta}_{\rm eff}\) in the upper-left panel33. Of the two tachyonic solutions, only the lighter one close to the origin is visible in the upper-left panel, while the heavier one is far away along the real axis. As \(\tilde{\beta}_{\rm eff}\) is increased, the heavier solution comes closer to the lighter solution as shown in snapshot (b). Then, they merge in snapshot (c) where \(a=e^{-1}\), i.e where \(\tilde{\beta}_{\rm eff}\) is chosen such that (7.8) is an equality. The complex instability continues to travel along the fixed orange curve and then becomes closer to the imaginary axis as \(\tilde{\beta}_{\rm eff}\) is increased. Footnote 33: Solutions of (7.3) are always a pair of complex conjugates due to the symmetry under \(\theta\to-\theta\), which is explicit in equations (7.10a,7.10b). This, and the fact that we chose the square root with \(\text{Re}(k)>0\), are the reasons why we only display the top-right quarter of the complex plane in Figure 1. We now turn to the analysis of ghosts. In the \(a<e^{-1}\) regime, there are two tachyonic modes, the heavier one is a ghost and the lighter is not. The mass of the ghost is always comparable to \(GN^{2}\) in this regime. Figure 1: _These snapshots show the poles of the spin-2 propagator in Minkowski spacetime, for selected values of the parameter \(\tilde{\beta}_{\text{eff}}\). The poles correspond to zeros of the real (blue curve) and imaginary (orange curve) parts of \(\mathcal{F}_{\text{flat}}(k)\) (5.21).and are denoted by coloured dots. A green dot indicates a negative residue (ghost-free), while a red dot corresponds to a positive residue (ghost). A purple dot is for complex residue (also a ghost). As \(\tilde{\beta}_{\text{eff}}\) is increased going from upper left to lower right, two tachyons located on the real axis for negative merge in snapshot (c) to form a second order pole. The merging happens at \(\tilde{\beta}_{\text{eff}}=\tilde{\beta}_{\text{eff}}^{\text{merge}}\) (7.8). For \(\tilde{\beta}_{\text{eff}}>\tilde{\beta}_{\text{eff}}^{\text{merge}}\) there is a complex conjugate pair, which moves to the origin for large and positive \(\tilde{\beta}_{\text{eff}}\)._ When \(\tilde{\beta}_{\text{eff}}\) is increased, after the merging at \(a=e^{-1}\) has occurred in snapshot (c) of Figure 1, the tachyonic complex ghost pole moves on the complex plane and approaches towards the imaginary axis as \(\tilde{\beta}_{\text{eff}}\) becomes large and positive. The ghost becomes lighter and lighter in units of \(GN^{2}\). 
In the large-\(\tilde{\beta}_{\text{eff}}\) regime, the ghost sticks to the imaginary axis where its residue becomes real and positive. More precisely, the imaginary part of the residue becomes smaller and smaller compared to the real part. In the limit \(\tilde{\beta}_{\text{eff}}\to+\infty\), the mass squared of the spin-2 mode \((m_{2})^{2}\) defined in Figure 2: _In this plot, we show the real part of the two tachyonic spin-2 poles in flat space (which also gives the inverse time scale of the Minkowski space spin-2 tachyonic instability, see app. F) in units of the species cutoff, as a function of \(\tilde{\beta}_{\text{eff}}\). Red and green markers correspond respectively to the ghost-like (lighter) tachyon and a non-ghostly (heavier) tachyon. Purple markers correspond to two complex conjugate ghost-like tachyonic poles. The blue vertical line is the value of \(\tilde{\beta}_{\text{eff}}\) which saturates (7.8), where these two poles merge and move to the complex plane as two complex conjugate poles as \(\tilde{\beta}_{\text{eff}}\) is further increased. Large (positive and negative) values of \(\tilde{\beta}_{\text{eff}}\) correspond to a long-lived tachyon. For comparison, the black curve corresponds to the case of pure gravity with \(\beta=\tilde{\beta}_{\text{eff}}/\pi\), where the tachyonic pole is given by equation (6.3). For this curve, the vertical axis is \(\log(G^{\frac{1}{2}}Re(k))\) while the horizontal axis is \(\pi\beta\)._ (5.6) becomes negative. This limit can be taken in (5.23) to obtain the value \[GN^{2}(m_{2})^{2}\underset{|\tilde{\beta}_{\text{eff}}|\rightarrow+\infty}{\sim} \frac{4\pi}{\tilde{\beta}_{\text{eff}}}. \tag{7.14}\] This equation agrees with what is seen in Figure 1. In Figures 2 and 3 we show the behaviour, respectively, of the real part and the complex modulus of the spin-2 poles. As it is shown in appendix F, the real part of Figure 3: _In this plot, we show the modulus of the mass of the spin-2 tachyonic poles in flat space, defined in (5.6), in units of the species scale (1.7), as a function of \(\tilde{\beta}_{\text{eff}}\). Red and green markers correspond respectively to the light ghost-like tachyon and the heavy non-ghostly tachyon. The tachyon exists for large and negative \(\tilde{\beta}_{\text{eff}}\) but its mass is too large to appear in the window. Purple markers correspond to two complex conjugate ghost-like tachyonic poles. The black curves correspond to the massive ghost in pure gravity with \(\beta=\tilde{\beta}_{\text{eff}}/\pi\): the solid line is the ghost-like tachyon (\(\beta<0\)), and the dashed line is the non-tachyonic ghost (\(\beta>0\)) The blue vertical line is the value of \(\tilde{\beta}_{\text{eff}}\) which saturates (7.8), where these two poles merge and move to the complex plane as two complex conjugate poles as \(\tilde{\beta}_{\text{eff}}\) is further increased. The black horizontal line marks the species cut-off (or the Planck scale cut-off for pure gravity). For this curve, the vertical axis is \(\log(G^{\frac{1}{2}}Re(k))\) while the horizontal axis is \(\pi\beta\). There are unstable modes lighter than the species cutoff only for large values of \(|\tilde{\beta}_{\text{eff}}|\)._ corresponds to the typical inverse time scale of the tachyonic instability. From these figures, one can follow the trajectory of the poles as a function of \(\tilde{\beta}_{\rm eff}\). 
As one can observe, for \(O(1)\) values of \(\tilde{\beta}_{\rm eff}\), the poles are above the (species) cut-off, therefore they are outside of the regime of our EFT analysis. It is only for \(\tilde{\beta}_{\rm eff}\) very large and positive or very large and negative that at least one pole becomes lighter than the cut-off scale. However, this is the same regime in which even pure gravity (black curves in figures 2 and 3) has a light instability (although in the case of pure gravity, large positive \(\beta\) corresponds to a ghost which is not also a tachyon, unlike in the presence of the CFT). In any regime where pure gravity does not have light unstable modes, adding the CFT does not make the effective field theory unstable. The behaviour of the complex solutions for \(X\) right after the merging can be described analytically by performing an expansion for small \(\theta\) in (114): \[\log x=-1+\frac{\theta^{2}}{3}+{\cal O}(\theta^{3}), \tag{115a}\] \[x\left[\log x\left(1-\frac{\theta^{2}}{2}\right)-\theta^{2}+{\cal O}(\theta^{3 })\right]=-a. \tag{115b}\] We can therefore eliminate \(x\) to find a solution for \(\theta\) given by \[\theta^{2}\approx 2(ae-1). \tag{116}\] The two complex branches of \(X\) then start at \(a=e^{-1}\). The solution for \(X=xe^{i\theta}\) is then \[x=e^{-1+\frac{\theta^{2}}{3}+{\cal O}(\theta^{3})}, \tag{117}\] and \[\theta\approx\pm\sqrt{2(ae-1)}. \tag{118}\] ## 8 Poles of the dS spin-two propagator and stability We now consider the positive curvature case and study the stability under tensor perturbations of 4d gravity plus a holographic CFT on de Sitter spacetime. Both tachyonic and ghost instabilities will be determined numerically, but we also provide analytical insight into these results. Tachyonic instability is identified by studying the location of zeros of the de Sitter tensor inverse propagator (104) in the complex \(\nu\) domain: such instability is characterized by the condition \({\rm Re}(\nu)>3/2\) (107). Whether or not the mode is a ghost is determined by the sign of the residue of the pole, as explained in subsection 5.4 ### Numerial results for two typical sets of parameters Before performing a full analysis in the parameter space spanned by \((GN^{2}H^{2}\), \(\tilde{\alpha}\), \(\tilde{\beta}_{\text{eff}})\), in this subsection we present, as an illustrative example, the results for two distinct sets of parameters which give different results but are typical cases of the more general behaviour of the system. For each of these two sets of parameters, we fix \((GN^{2}H^{2}\), \(\tilde{\alpha})\) and solve numerically the equation (110) for several values of \(\tilde{\beta}_{\text{eff}}\). The first example, shown in Figure 4 is an example of the small curvature regime. For instance, we observe that in this figure, the theory is tachyon-unstable from the snapshot (a) to snapshot (e) because one (or two) solutions are tachyonic (\(\text{Re}(\nu)>3/2\)). The two tachyons merge in snapshot (c) to form a double pole, where \(\mathcal{F}^{\prime}_{\text{dS}}(\nu)\) vanishes34. Footnote 34: Theories with a double pole have been recently discussed in [83]. After the merging, the double pole separates into two complex conjugate solutions. We only display positive imaginary parts in this Figure. In snapshot (f) the tachyon with complex \(\nu\) enters the stability region \(|\text{Re}(\nu)|<3/2\) because its real part decreases as \(\tilde{\beta}_{\text{eff}}\) is increased. 
Therefore, the theory is tachyon-stable from snapshot (f) to snapshot (i) and will continue to be stable for even larger values of \(\tilde{\beta}_{\text{eff}}\). As \(\tilde{\beta}_{\text{eff}}\) is increased, the pole which became tachyon-stable in (f) goes to the imaginary axis and then forms a double pole at the intersection of the real and imaginary axes. When a tachyon is present, it is important to determine the time scale of the instability, which is fixed by the value of \(\nu\) in the same way for both scalars and tensors (see appendix G for details). For a tachyonic mode with \(Re(\nu)>3/2\), the solution of (109) which dominates at large \(t\) behaves as35: Footnote 35: For \(\text{Re}(\nu)<-3/2\), there would be a sign flip \(\nu\to-\nu\) in (104). \[\tilde{\theta}_{ij}\underset{H\to+\infty}{\propto}e^{-Ht(3/2-\nu)}. \tag{104}\] The characteristic rate \(\Gamma\) of the exponential divergence in (104) is therefore given by \[\Gamma=H(|\text{Re}(\nu)|-3/2). \tag{105}\] We now turn to the analysis of ghost instabilities. The sign of the residue of each pole is obtained by computing numerically \(\mathcal{F}^{\prime}(\nu)\) and applying the formula (111). The resulting sign is encoded in the colour of each dot in Figure 4. Green dots correspond to negative residues, indicating a ghost-free pole. Red dots correspond to ghosts with positive residue, and purple dots correspond to complex residues. One can observe that from Figure 4, a ghost (either positive or complex residue) is always present, for any value of \(\tilde{\beta}_{\text{eff}}\). For generic values of \(\tilde{\beta}_{\text{eff}}\), the mass of the ghost defined by (110) is large compared to the Hubble rate \(H\). For large values of \(\tilde{\beta}_{\text{eff}}\) (both positive and negative), the ghost pole approaches \(\nu=1/2\), matching the \(N=0\) case (110) in the limit \(\beta\to\infty\). The divergence rate \(\Gamma\) is computed numerically as a function of \(\tilde{\beta}_{\text{eff}}\), and the results are shown in Figure 5 for the set of parameters we have used in Figure 4. 
Figure 5 shows in green the tachyonic pole, in red the tachyonic ghost, in purple the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic 
pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the 
tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red, in the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red the tachyonic pole, in red, in the tachyonic pole, in red, the tachyonic pole, in red, the tachyonic, in red, the tachyonic, in red, the tachyonic, in red the tachyonic, in red tachyonic, in the tachyonic, in red the tachyonic, in red the tachyonic, in red the tachyonic, in red, the tachyonic, in red, the tachyonic, in red, the tachyonic, in red, the tachyonic, in red, red, in the tachyonic, tachyonic, in red the tachyonic, in red, the tachyonic, in red, the tachyonic, in red, red, in the tachyonic, in the tachy, tachyonic, in the tachyonic, in red, the tachyonic, in red, the tachyonic, in the tachy, tachyonic, in the tachy, tachyonic, in the tachy, tachyonic, in the tachy, tachyonic, in the tachy, tachyonic, in the tachy, tachyonic, in the tachyonic, in the tachy, tachyonic, in the tachyonic, in the tachy, tachyonic, in the tachy, tachyonic, in the tachy, tachyonic, in the tachy, tachy, in the tachy, tachy, in the tachy, tachy, in the tachy, tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in 
the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy in tachy, in the tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy in tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, the tachy, in tachy, in the tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in the tachy, in tachy, the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, in the tachy, in tachy, complex pole and in grey the massless pole which is neither a ghost nor a tachyon. The two poles of pure gravity, are also shown for comparison. The massive pole (6.6) is a blue dashed curve, while the massless pole is an orange line. As \(\tilde{\beta}_{\text{eff}}\) is increased, the tachyonic rate \(\Gamma\) (8.2) decreases, down to the point where the spin-2 mode becomes tachyon-stable around \(\tilde{\beta}_{\text{eff}}\sim 9.3\). The pure gravity massive pole for large and negative \(\beta\) coincides with the ghost pole for large and negative \(\tilde{\beta}_{\text{eff}}\) if we set \(\tilde{\beta}_{\text{eff}}=\pi\beta\), i.e \(N=1\) in the definition of \(\tilde{\beta}_{\text{eff}}\) (5.19). Large and positive \(\tilde{\beta}_{\text{eff}}\) also agree with the pure gravity result, even if this is not visible from this figure. Figure 5: _In this plot, we show the real part of the spin-2 poles in de Sitter with the same parameters as in Figure 4. The real part also gives the inverse time scale, or strength, (8.2) of the dS tachyonic instability, in units of the species cutoff, as a function of \(\tilde{\beta}_{\text{eff}}\). 
Red and green markers correspond respectively to the ghost-like (lighter) tachyon and a non-ghostly (heavier) tachyon. Purple markers correspond to two complex conjugate ghost-like poles. For comparison, the blue dashed line and the yellow line correspond to the case of pure gravity with \(\beta=\tilde{\beta}_{\text{eff}}/\pi\) (6.4). For pure gravity curves, the horizontal axis is \(\pi\beta\). The grey vertical line is the value of \(\tilde{\beta}_{\text{eff}}\) which saturates (8.11), (corresponding to snapshot (c) of Figure 4), where two poles merge and move to the complex plane as two complex conjugate poles as \(\tilde{\beta}_{\text{eff}}\) is further increased. Each pole above the green line at \(\text{Re}(\nu)=3/2\), is tachyonic. The complex pole crosses this green line around \(\tilde{\beta}_{\text{eff}}\approx 9.3\), it then becomes non-tachyonic for larger values of \(\tilde{\beta}_{\text{eff}}\)._ One would need to look at much higher values of \(\tilde{\beta}_{\rm eff}\) (a few thousand) to see that the complex pole goes to the real axis, as it is shown in snapshots (g,h,i) of Figure 4. In this case, both the massive pole of pure gravity and the ghost asymptote at \(\nu=1/2\), as expected by a naive \(\tilde{\beta}_{\rm eff}\to+\infty\) limit of (111). For very large and negative \(\tilde{\beta}_{\rm eff}\) the massless pole becomes a ghost. According to (108), one would need to have \(\tilde{\beta}_{\rm eff}\leq-\frac{2\pi}{GN^{2}H^{2}}\approx-628.3\). In the presence of the CFT, however, we find numerically that the massless pole is a ghost for \(\tilde{\beta}_{\rm eff}\lesssim-624.2\). This critical value of \(\tilde{\beta}_{\rm eff}\) corresponds to the merging of the ghost with the massless pole which were both present in snapshot (a) of Figure 4. The results of Figure 5 are compatible with the paper [68] because they have \(\alpha=0\) and study the \(GN^{2}H^{2}<<1\) regime where they also find a complex pole, which is also present in flat space (see Figure 3). In this paper, the authors have found an approximate value of \(\tilde{\beta}_{\rm eff}\) at which the complex pole crosses the massless line displayed in orange. Larger values of \(\tilde{\beta}_{\rm eff}\) then correspond to an absence of tachyonic instabilities. However, the pole is still complex up to \(\tilde{\beta}_{\rm eff}\approx 5026.43\) where it becomes a _real_ ghost. Figure 6 shows the modulus of the mass squared of the tensor modes in de Sitter, plotted in units of the species scale (7). This figure is a numerical evaluation for the same parameters as the ones chosen in Figures 4 and 5. As in Figure 5, the green, red and blue curves are respectively the tachyonic, the ghost and the massive mode of pure gravity. The modulus of the ghost mass agrees with the pure gravity massive mode for large values of \(|\tilde{\beta}_{\rm eff}|\). For generic values, both the ghost and the tachyon lie above the species cutoff. The complex pole, which appears for \(\tilde{\beta}_{\rm eff}\gtrsim-4.1838\), goes beyond the species scale for large and positive values of \(\tilde{\beta}_{\rm eff}\). Our second example corresponds to \(GN^{2}H^{2}\) of order unity (rather than \(GN^{2}H^{2}\ll 1\) as was the case in figure 4). Specifically, we take \(GN^{2}H^{2}=\pi/4\) and \(\tilde{\alpha}=10\). The spin-2 poles of de Sitter in this case are shown in Figure 7. 
For large and negative values of \(\tilde{\beta}_{\rm eff}\) we find a different behaviour than in Figure 4: in the present case there is only one tachyon, and it is not a ghost. However, the massless pole at \(\nu=3/2\) is now a ghost. As \(\tilde{\beta}_{\rm eff}\) is increased, the tachyon becomes lighter and lighter from snapshot (a) to (c). It stays on the positive real axis (another difference from Figure 4). Snapshots (a-f) correspond to a tachyon-unstable theory because the heaviest solution has a real part larger than \(3/2\). The tachyon merges with the massless graviton in snapshot (g) and becomes a ghost when \(\tilde{\beta}_{\rm eff}\) is increased. This ghost then moves towards \(\nu=1/2\) for large and positive values of \(\tilde{\beta}_{\rm eff}\) in snapshot (i). This matches the decoupling limit \(N=0\), as can be seen from (108). For large and negative \(\tilde{\beta}_{\rm eff}\), the massless pole \(\nu^{2}=9/4\) is a ghost whereas the \(\nu^{2}=1/4\) is not. For large and positive \(\tilde{\beta}_{\rm eff}\), the respective signs of their residues are switched. This is what is observed by comparing snapshots (a) and (i), where the red and green poles are interchanged. The characteristic rate of the tachyonic instability is plotted in Figure 8 as a function of \(\tilde{\beta}_{\text{eff}}\). This figure is obtained with the same parameters as Figure 7. Compared with 5, in Figure 8 there is no merging between two unstable massive poles (which is denoted by a grey vertical line in Figure 5). Here instead, we have a single tachyon moving along the real axis as \(\tilde{\beta}_{\text{eff}}\) is increased, it merges with the massless pole at \(\nu=3/2\) to form a safe massless pole and a ghost. The grey vertical line in Figure 8 marks the value of \(\tilde{\beta}_{\text{eff}}\) for above which the theory becomes tachyon-like. The black horizontal line in Figure 8 marks the values of \(\tilde{\beta}_{\text{eff}}\) for pure gravity. The black horizontal line in Figure 8 marks the values of \(\tilde{\beta}_{\text{eff}}\) for pure gravity. The black horizontal line in Figure 8 marks the values of \(\tilde{\beta}_{\text{eff}}\) for pure gravity. There are unstable modes lighter than the species cutoff only for large values of \(|\tilde{\beta}_{\text{eff}}|\). The characteristic rate of the tachyonic instability is plotted in Figure 8 as a function of \(\tilde{\beta}_{\text{eff}}\). This figure is obtained with the same parameters as Figure 7. Compared with 5, in Figure 8 there is no merging between two unstable massive poles (which is denoted by a grey vertical line in Figure 5). Here instead, we have a single tachyon moving along the real axis as \(\tilde{\beta}_{\text{eff}}\) is increased, it merges with the massless pole at \(\nu=3/2\) to form a safe massless pole and a ghost. The grey vertical line in Figure 8 marks the value of \(\tilde{\beta}_{\text{eff}}\) for above which the theory becomes tachyon-like. The black horizontal line in Figure 8 marks the values of \(\tilde{\beta}_{\text{eff}}\) for pure gravity. The black horizontal line in Figure 8 marks the values of \(\tilde{\beta}_{\text{eff}}\) for pure gravity. The black horizontal line in Figure 8 marks the values of \(\tilde{\beta}_{\text{eff}}\) for pure gravity. The black horizontal line in Figure 8 marks the values of \(\tilde{\beta}_{\text{eff}}\) for pure gravity. There are unstable modes lighter than the species cutoff only for large values of \(|\tilde{\beta}_{\text{eff}}|\). 
Figure 6: _In this plot, we show, for the same parameters as in Figure 4, the complex modulus of the mass of the spin-2 tachyonic poles in de Sitter, defined in (5.26), in units of the species scale (1.7), as a function of \(\tilde{\beta}_{\text{eff}}\). Red and green markers correspond respectively to the light ghost-like tachyon and the heavy non-ghostly tachyon. Purple markers correspond to two complex conjugate ghost-like poles. The species scale is shown by a horizontal solid black line. For comparison, different curves show the poles in pure gravity with \(\beta=\tilde{\beta}_{\text{eff}}/\pi\): the dashed blue and black curves are respectively tachyonic and non-tachyonic ghosts. The orange curve is safe. For pure gravity curves, the vertical axis is \(\log(G^{\frac{1}{2}}|m|^{2}+1)\) while the horizontal axis is \(\pi\beta\). The grey vertical line is the value of \(\tilde{\beta}_{\text{eff}}\) which saturates (8.11), (corresponding to snapshot (c) of Figure 4), where these two poles merge and move to the complex plane as two complex conjugate poles as \(\tilde{\beta}_{\text{eff}}\) is further increased. The black horizontal line marks the species cut-off (or the Planck scale cut-off for pure gravity). There are unstable modes lighter than the species cutoff only for large values of \(|\tilde{\beta}_{\text{eff}}|\)._ stable. This corresponds to the value in snapshot (g) of Figure 7. Figure 9 shows the modulus of the mass squared of the tensor modes in de Sitter, plotted in units of the species scale (7). This figure is a numerical evaluation for the same parameters as the one chosen in Figures 7 and 8. As in Figure 8, the green, Figure 7: _Zeros of the real (blue curve) and imaginary (orange curve) parts of the inverse spin-2 propagator of de Sitter (5.36) for different values of \(\tilde{\beta}_{\text{eff}}\), with fixed \(\tilde{\alpha}=10\) and \(GN^{2}H^{2}=\pi/4\). Solutions are therefore given by the intersection of blue and orange lines. The tachyon, which was outside the window, arrives from snapshot (e) and merges at the critical value (8.14) in snapshot (g). Increasing \(\tilde{\beta}_{\text{eff}}\) to large and positive values makes the ghost pole converge at \(\nu=1/2\)._ red and blue curves are respectively the tachyonic, the ghost and the massive mode of pure gravity. The modulus of the ghost mass agrees with the pure gravity massive mode for large values of \(|\tilde{\beta}_{\text{eff}}|\). For generic values, both the ghost and the tachyon lie above the species cutoff. One can observe in this Figure that the massless pole is a ghost for \(\tilde{\beta}_{\text{eff}}\lesssim 12\), whether the CFT is present or not. More precisely, the massless pole is a ghost for \(\pi\beta<12\) in pure gravity (6.4), whereas the actual value in the Figure 8: _In this plot, we show the real part of the spin-2 poles in de Sitter, with the same parameters as in Figure 7. The real part also gives the inverse time scale (or strength) of the dS tachyonic instability (8.2), in units of the species cutoff, as a function of \(\tilde{\beta}_{\text{eff}}\). Grey markers correspond to safe poles (one for large and negative \(\tilde{\beta}_{\text{eff}}\) disappearing around \(\sim-100\), the other is massless for \(\tilde{\beta}_{\text{eff}}>11.7415\)), while red and green markers correspond respectively to the non-ghostly tachyonic pole and the light ghost pole. 
For comparison, different curves show the case of pure gravity with \(\beta=\tilde{\beta}_{\text{eff}}/\pi\) (6.6): the solid blue line is a non-ghostly tachyon, the solid yellow line is non-ghostly-non-tachyonic, and the black dashed line is a non-tachyonic ghost. There is a gap at \(-90\lesssim\tilde{\beta}_{\text{eff}}<0\) for pure gravity because the massive pole (6.6) is purely imaginary in this interval. In the presence of the CFT, the safe pole disappears into negative (i.e not allowed) values of \(\text{Re}(\nu)\). For pure gravity curves, the horizontal axis is \(\pi\beta\). The vertical grey line is the value of \(\tilde{\beta}_{\text{eff}}\) corresponding to the transition from tachyonic to non-tachyonic (8.14), where the tachyon merges with the massless ghost at \(\nu=3/2\). This value is also displayed in snapshot (\(g\)) of Figure 7._ presence of the CFT is \(\tilde{\beta}_{\text{eff}}\lesssim 11.7415\) as one can observe in snapshot (g) of Figure 7. Qualitatively, the cases displayed in Figures 4 and 7 (which, we remind the reader, correspond to small \(GN^{2}H^{2}\) and \(O(1)\)\(GN^{2}H^{2}\) respectively) have a different behaviour as a function of \(\tilde{\beta}_{\text{eff}}\): in the first case, a complex tachyonic ghost becomes non-tachyonic through the complex plane when \(\tilde{\beta}_{\text{eff}}\) is increased; in the second case, a real tachyonic pole becomes tachyon-stable on the real axis as \(\tilde{\beta}_{\text{eff}}\) is increased. We chose to present only two different cases because they are paradigmatic of what happens in the whole parameter space. We span more values of \(GN^{2}H^{2}\) and \(\alpha\) in appendix M. As a result, any point choice of \((GN^{2}H^{2},\tilde{\alpha})\) space should be similar to Figure 9: _In this plot, obtained with the same parameters as in Figures 7 and 8, we show the complex modulus of the mass of the spin-2 tachyonic poles in de Sitter, defined in (5.26), in units of the species scale (1.7), as a function of \(\tilde{\beta}_{\text{eff}}\). Grey markers correspond to safe poles (one for large and negative \(\tilde{\beta}_{\text{eff}}\) disappearing around \(\sim-100\), the other is massless for \(\tilde{\beta}_{\text{eff}}>11.7415\)), while red and green markers correspond respectively to the non-ghostly tachyonic pole and the light ghost pole. The species scale is shown by a horizontal solid black line. For comparison, different curves correspond to the pure gravity modes with \(\beta=\tilde{\beta}_{\text{eff}}/\pi\): the blue curve is tachyonic, the black dashed curve is a non-tachyonic ghost, and the orange curve is safe. For pure gravity curves, the vertical axis is \(\log(G^{\frac{1}{2}}|m|^{2}+1)\) while the horizontal axis is \(\pi\beta\). The vertical grey line is the value of \(\tilde{\beta}_{\text{eff}}\) corresponding to snapshot (g) of Figure 7), where the tachyonic pole becomes massless, and therefore stops being tachyonic._ one of these two cases discussed above. The ArXiv webpage of this paper contains ancillary files, including animated gifs. Each snapshot of these gifs corresponds to a different value of \(\tilde{\beta}_{\rm eff}\) for fixed \((GN^{2}H^{2},\tilde{\alpha})\). In the next subsection, we present an analytic approximation which explains these two different behaviours. Our findings indicate that once we stay below the species cutoff, qualitatively the behaviour is similar to the case without the CFT if we rescale the parameters of the effective gravity theory with a factor of \(N^{2}\). 
We also find that a richer set of phenomena can happen above the cutoff, but we cannot trust our description. In previous works, [52], the analysis was done for situations that were in the general area of the cutoff or above. ### Analytic results for tensor tachyonic modes in dS at large \(|\nu|\) In this subsection, we provide approximate analytical results for the location of the tachyonic poles in the tensor propagator (114) on de Sitter. These analytics provide a better understanding of the qualitative picture presented in the previous subsection We focus on the "non-trivial" poles, i.e. away from the massless graviton pole \(\nu=9/4\). Therefore, we look for the zeros of \(Q_{\rm dS}(\nu)\) defined in (113). There is no simple analytic expression to this function for an arbitrary location in the complex plane. However, it can be approximated by a logarithm when \(|\nu|\) is large. This occurs in particular for small curvature, as it was the case in Figure 4: indeed, with \(GN^{2}H^{2}\ll 1\), and finite \(\tilde{\alpha}\) and \(\tilde{\beta}_{\rm eff}\), solving \(Q_{\rm dS}(\nu)=0\) requires cancelling the large value of \(\frac{2\pi}{GN^{2}H^{2}}\) against a large value of \(|\nu|\), as it is argued in [68]. With these considerations, in the rest of this subsection, we develop an analytic approximation for the poles in the limit of large \(|\nu|\). In this limit, we can use the Stirling formula \[\mathcal{H}(z)\underset{|z|\to\infty}{=}\log z+\gamma_{E}+\mathcal{O}(z^{-1}), \tag{115}\] where \(\gamma_{E}\) is the Euler-Mascheroni constant. The large \(\nu\) expansion of (113) is then given by \[Q_{\rm dS}=1-\frac{2\pi}{GN^{2}H^{2}}+2\tilde{\alpha}-\frac{\nu^{2}}{2}\left[ \tilde{\beta}_{\rm eff}-\frac{1}{2}+\log\left(GN^{2}H^{2}\right)+2\log\nu-2 \gamma_{E}+\mathcal{O}(|\nu|^{-1})\right]. \tag{116}\] We have \(2\log(\nu)=\log(\nu^{2})\), since we have chosen \({\rm Re}(\nu)>0\) in (111), insofar as the branch-cut of the log function in (115) is on the negative real axis. The equation of motion (112) then takes a similar form to the one of flat space (115), \[X\log X=-a, \tag{117}\] where now \(X\) and \(a\) depend on the curvature and are given by: \[X\equiv\nu^{2}GN^{2}H^{2}\exp\left\{\tilde{\beta}_{\rm eff}-\frac{1}{2}+2\gamma_{E }\right\}, \tag{111}\] \[a\equiv 2GN^{2}H^{2}\left[2\left(\frac{\pi}{GN^{2}H^{2}}-\tilde{\alpha}\right)- 1\right]\exp\left\{\tilde{\beta}_{\rm eff}-\frac{1}{2}+2\gamma_{E}\right\}. \tag{112}\] We can observe that there are similar definitions for \(X\) and \(a\) in flat spacetime (110). However, \(a\) can now be negative using specific combinations of \(\tilde{\alpha}\) and the curvature. Equations (111,112,113) hold for large \(|\nu|\) and any curvature. In particular, they can be used to understand the flat space limit: indeed, by comparing the eigenvalue equations for de Sitter (108) to the one for Minkowski (111), we observe that the flat limit can be taken by defining \[\nu^{2}H^{2}\underset{H^{2}\to 0}{\rightarrow}k^{2}, \tag{113}\] where the limit is taken by sending \(|\nu|\rightarrow\infty\) so that \(k^{2}\) is kept finite. In this limit, \(\tilde{\alpha}\) becomes negligible and we find: \[\mathcal{F}_{\rm dS}\underset{H\to 0}{\rightarrow}\mathcal{F}_{\rm flat}. \tag{114}\] which coincides with the result we have obtained by the direct flat space calculation, equation (110). We have therefore shown that the dS propagator matches continuously onto the flat space propagator when we take the curvature to zero. 
The de Sitter tachyon-stability condition (109) also becomes the flat space condition (108) in this limit. Indeed, taking the flat space limit of de Sitter condition \(|{\rm Re}(\nu)|<3/2\) we obtain \[{\rm Re}(k)\sim H|{\rm Re}(\nu)|<\frac{3H}{2}\to 0, \tag{115}\] i.e. the flat space tachyon-stability condition. We now turn to arbitrary curvatures, but still, search for solutions satisfying \(|\nu|\gg 1\). This allows us to use equation (111) to better understand the results we have found in the two typical examples in the last subsection. We keep \(GN^{2}H^{2}\) and \(\tilde{\alpha}\) finite, so \(X\) (111) and \(a\) (112) differ from their flat space analogs (110). In particular, \(a\) can be negative for finite curvature, unlike in flat space where \(a>0\). When looking for solutions of (111), one can distinguish three cases: * If \(a<0\), the unique solution \(X\) of (111) is real. * If \(0\leq a\leq e^{-1}\), there are two real solutions, one degenerate solution if \(a=e^{-1}\). * If \(e^{-1}<a\), there are two complex conjugate solutions. The second and third items describe Figure 4. Snapshots (a) and (b) correspond to \(0\leq a\leq e^{-1}\). As \(\tilde{\beta}_{\rm eff}\) increases, \(a\) increases up to \(e^{-1}\) where the two solutions merge in snapshot (c). Using the definition of \(a\) in (102), one can obtain approximately the critical value at the merging: \[\tilde{\beta}_{\rm eff}^{\rm merge}\equiv-\frac{1}{2}-2\gamma_{E}-\log\left(2 GN^{2}H^{2}\right)-\log\left[2\left(\frac{\pi}{GN^{2}H^{2}}-\tilde{\alpha} \right)-1\right]. \tag{105}\] Applying this formula to the parameters of Figure 4, we find \(\tilde{\beta}_{\rm eff}^{\rm merge}=-4.18386\), which corresponds to snapshot (c). After the merging, the complex solution travels in the complex plane up to crossing the green stability line. The value of \(\tilde{\beta}_{\rm eff}\) chosen to plot snapshot (h) corresponds to equation (104) obtained in the small \(H\) approximation. We now consider the case \(a<0\), which corresponds to a single real tachyonic solution. From equation (102), \(a<0\) is equivalent to \[\frac{\pi}{GN^{2}H^{2}}<\tilde{\alpha}+\frac{1}{2}. \tag{106}\] It is intriguing that the inequality (106) turns out to be the same as the condition for the scalar mode to be a ghost (103). Equation (106) holds in every snapshot of Figure 7, in which \(\tilde{\alpha}\) and \(GN^{2}H^{2}\) are fixed. Therefore in all these snapshots, we have \(a<0\). Even if we are not in the large-\(|\nu|\) regime, it is a remarkable fact the analysis above still gives an accurate qualitative description of the results: we have a single real tachyon which moves along the real axis towards the massless pole. If the transition from tachyonic to non-tachyonic does indeed happen on the real axis, then it must be at \(\nu=3/2\). If this is the case, it is sufficient to evaluate \(Q_{\rm dS}(\nu)\) at \(\nu=3/2\) to obtain a stability condition for the other parameters as follows. Using \(\mathcal{H}(1)=1\) in (104), we obtain \[Q_{\rm dS}(3/2)=-\frac{1}{2}-\log\left(GN^{2}H^{2}\right)-2\left[\frac{\pi}{ GN^{2}H^{2}}+\frac{\tilde{\beta}_{\rm eff}}{2}-\tilde{\alpha}\right]. \tag{107}\] The transition between stability and instability corresponds to \(Q_{\rm dS}(3/2)=0\), in which case we obtain \[\tilde{\beta}_{\rm eff}^{\rm massless}\equiv-\frac{1}{2}-\log\left(GN^{2}H^{ 2}\right)+2\left(\tilde{\alpha}-\frac{\pi}{GN^{2}H^{2}}\right). 
\tag{108}\] If the parameters of the theory satisfy (108), then \(\nu=3/2\) is a double pole of the tensor two-point function. But if \(\tilde{\beta}_{\rm eff}>\tilde{\beta}_{\rm eff}^{\rm massless}\), then the theory is tachyon-stable. Evaluating (108) for the parameters taken in Figure 7 gives the value chosen to plot snapshot (c). It is clear from this snapshot that the solution which was unstable in snapshot (b) crosses the stability line. Therefore, the assumption made for (108) that the transition would happen on the real axis is verified numerically for this particular set of parameters. In the \(a<0\) case, we were able to derive an exact formula for tachyonic stability as a function of \(\tilde{\beta}_{\text{eff}}\) in (106). This was obtained assuming that the tachyonic solution would cross the point \(\nu=3/2\). However, in the case of \(a>0\), the tachyonic solution is complex and can cross the stability line with a generic imaginary part, as was shown in Figure 4. If we make the further assumption that \(a\gg 1\), it is possible to perform an additional approximation to find \(X\). As argued also in [68] in the case \(\alpha=0\), a solution of equation (104) for large and positive \(a\) can be found using the ansatz \[X\underset{a\rightarrow+\infty}{\sim}=-\frac{a}{\log(-a)}. \tag{107}\] Injecting this ansatz in the original equation (104), we then find \[X\log X=-a\left[1-\frac{\log(\log(-a))}{\log(-a)}\right]\underset{a\to 0}{ \sim}-a, \tag{108}\] which is a "slow" convergence as \(\frac{\log(\log(-a))}{\log(-a)}\to 0\). We then have an imaginary part in the solution \(X\) (107) since \(\log(-a)=\pm i\pi+\log a\). The complex square root of the slowly converging solution (107) is then \[\sqrt{X}=i\sqrt{\frac{a}{\log a}}\left[\pm 1-\frac{i\pi}{2\log a}\right]+ \mathcal{O}\left(\frac{1}{\log^{3/2}a}\right). \tag{109}\] The branch with a positive real part has been taken because the real part of \(\nu\) is assumed to be positive in the bulk radial solution (108). Then, replacing \(a\) and \(X\) by their definitions (103,104), one gets the approximate solution for \(\nu\) \[\nu\approx\frac{2\left(\frac{\pi}{GN^{2}H^{2}}-\tilde{\alpha}- \frac{1}{2}\right)^{1/2}}{\left\{\tilde{\beta}_{\text{eff}}-\frac{1}{2}+2\gamma _{E}+\log\left[4GN^{2}H^{2}\left(\frac{\pi}{GN^{2}H^{2}}-\tilde{\alpha}- \frac{1}{2}\right)\right]\right\}^{\frac{1}{2}}}\times\] \[\times\left[\pm i+\frac{\pi/2}{\tilde{\beta}_{\text{eff}}-\frac{ 1}{2}+2\gamma_{E}+\log\left[4GN^{2}H^{2}\left(\frac{\pi}{GN^{2}H^{2}}-\tilde {\alpha}-\frac{1}{2}\right)\right]}\right]. \tag{110}\] For \(\alpha=0\), the solution (110) reduces to the results of [68] derived for small curvature. The stability condition (105) is then \[\tilde{\beta}_{\text{eff}}\geq\tilde{\beta}_{\text{eff}}^{\text{c}}\equiv \frac{1}{2}-2\gamma_{E}-\log\left[4GN^{2}H^{2}\left(\frac{\pi}{GN^{2}H^{2}}- \tilde{\alpha}-\frac{1}{2}\right)\right]+\left[\frac{4\pi^{2}}{9}\left(\frac{ \pi}{GN^{2}H^{2}}-\tilde{\alpha}-\frac{1}{2}\right)\right]^{1/3}. \tag{111}\] Applying this result to the case where \(\alpha=0\) and \(GN^{2}H^{2}=0.01\), we obtain that stability is reached for \(\tilde{\beta}_{\text{eff}}\geq 7.94268\) which is chosen for snapshot (f) in Figure 4. The approximation used to arrive at (110) cannot hold for large negative values of \(\tilde{\beta}_{\text{eff}}\): in this case there are two real tachyonic solutions, which cannot be described by (110). 
This is due to the fact that the large \(a\) approximation, equivalent to (107), cannot not hold for large negative values \(\tilde{\beta}_{\text{eff}}\) because \(a\) is proportional to \(e^{\tilde{\beta}_{\text{eff}}}\). ### Tachyonic and ghost-like instabilities for dS in parameter space After having addressed the qualitative features of the spectrum in the previous sections, we now present a full numerical scan of parameter space, identify the stability and instability regions (concerning ghosts and tachyons) and determine the characteristic scale of the instability. We do this first for tachyonic instabilities, then we move on to investigate ghost instabilities. Figure 10 shows the distinction between tachyon-stable and tachyon-unstable regions, as a function of the parameters \((GN^{2}H^{2},\tilde{\alpha},\tilde{\beta}_{\rm eff})\). In this figure, the critical value for \(\tilde{\beta}_{\rm eff}\) is given as a function of the curvature for several values of \(\tilde{\alpha}\). Each curve Figure 10: _Spin-2 tachyonic instability of de Sitter space depending on \(\tilde{\beta}_{\rm eff}\) and \(GN^{2}H^{2}\), for different values of \(\tilde{\alpha}\). Dotted lines with large dots, are the boundaries between the stable and unstable regions and have been computed numerically. Above each curve, we are in a non-tachyonic regime (\(\mbox{Re}(\nu)\leq 3/2\)), while the region below is tachyonic (\(\mbox{Re}(\nu)>3/2\)). The dotted lines with small dots are given by the exact formula (8.14) assuming that there is only one tachyon and it is located on the real axis. There is one such line for each positive \(\tilde{\alpha}\) but they are always very close to the exact boundary. The dashed lines are given by the large \(|\nu|\) and large \(a\) approximation (8.19)._ corresponds to a different \(\tilde{\alpha}\). For a given \(\tilde{\alpha}\), the region below the curve is unstable because it corresponds to lower values of \(\tilde{\beta}_{\rm eff}\), which are tachyonic. The region above the curve is stable because it corresponds to higher values of \(\tilde{\beta}_{\rm eff}\), for which the tachyon has entered the stability region \(|{\rm Re}(\nu)|<3/2\) exactly at the critical value. On each curve of figure 10, there is a regime (which roughly corresponds to small curvatures, and corresponds to the left part of the figure) in which the critical value of \(\tilde{\beta}_{\rm eff}\) decreases with increasing curvature, regardless of the value of \(\tilde{\alpha}\). For larger curvatures, one may observe a different regime: for large enough \(\tilde{\alpha}\) we observe that \(\tilde{\beta}_{\rm eff}\) starts increasing with the curvature to then decrease again. This behaviour sets in approximatively at \(\tilde{\alpha}\approx 0\). The larger \(\tilde{\alpha}\) is, the higher the increase in the critical value of \(\tilde{\beta}_{\rm eff}\). From the large-\(|\nu|\) approximation, we expect that the small curvature regime (left part of the figure) contains a complex tachyon, whereas the eventual _bump_ on the right part of the figure should contain a single tachyonic pole on the real axis. The boundary between these two regions should correspond to the value of the curvature when \(a=0\), i.e. where (111) is an equality. We have checked how well the analytic large-\(|\nu|\) approximation matches the numerical results: in the region where \(a>0\) (left part of Figure 10), the analytic approximation (110) is represented by dashed lines. 
On the right, where \(a<0\), the approximation (113) is represented by dotted lines. The two analytic regimes are separated by a critical value of the curvature given by the value that saturates (111). For curvatures above this value, there is a single tachyonic pole located on the real axis. This critical curvature exists only for \(\tilde{\alpha}>-1/2\). For \(\tilde{\alpha}<-1/2\), the large-\(|\nu|\) approximation (110) extends to all curvatures and leads to a monotonic behaviour of \(\tilde{\beta}_{\rm eff}^{\rm critical}\) as a function of the curvature. The analytical approximations do not exactly match the numerical results, especially the dashed lines when curvatures are not small. However, we can observe in Figure 10 that large curvatures are very well described by the exact formula (113), where (111) holds. In the large-\(|\nu|\) regime, a single tachyonic pole is located on the real axis, and we have assumed that it would stay on the real axis even for \(\nu=3/2\) while entering the stability zone. This hypothesis seems to be confirmed by the numerics because dotted lines (approximation) coincide with the large circles (numerics). Figure 11 shows the mass of the spin-2 tachyonic instability of de Sitter for the value of \(\tilde{\beta}_{\rm eff}\) at which it stops being tachyonic (\({\rm Re}(\nu)=3/2\)). The mass is plotted as a function of the curvature for different values of \(\tilde{\alpha}\). The remaining parameter is then \(\tilde{\beta}_{\rm eff}\), which is then fixed by the \({\rm Re}(\nu)=3/2\) requirement. At this transition between tachyonic and non-tachyonic, we measure the mass numerically and report it on the figure. We observe in this figure that the mass is below the cutoff for small curvatures. The mass starts to move above the cutoff at curvatures around \(GN^{2}H^{2}\approx 10^{-2.22}\) for \(\tilde{\alpha}=10\) and \(GN^{2}H^{2}\approx 10^{-2.25}\) for \(\tilde{\alpha}=-10\). The value of \(\tilde{\alpha}\) does not play an important role in the regime of such small curvatures because we are close to the flat space case in which the spin-2 pole locations do not depend on \(\tilde{\alpha}\). The tachyonic pole eventually goes back beyond the species cutoff for large curvatures if \(\tilde{\alpha}\) is not too negative. For example, for \(\tilde{\alpha}=-2/3\), the mass goes beyond the cutoff at \(GN^{2}H^{2}\approx 10^{1.2}\) which is itself above the species cutoff at \(GN^{2}H^{2}=1\). It is then possible to identify the points of Figure 10 which are above the species scale. This additional information is shown in Figure 12, which is similar to Figure 10, except that triangles correspond to poles with mass below the species cutoff whereas large dots have a mass larger than the cutoff. Figure 13 is a different representation of the critical value of \(\tilde{\beta}_{\rm eff}\) which separates the tachyon-stable from the tachyon-unstable regime: in this figure, the colour code corresponds to the critical value of \(\tilde{\beta}_{\rm eff}\) which separates between tachyon-stable and tachyon-unstable in the (\(\tilde{\alpha}\), \(GN^{2}H^{2}\)) parameter space. It also compares the value of \(\tilde{\beta}_{\rm eff}\) obtained numerically with the analytical approximations (8.14) and (8.19). Each row of this figure gives a different window for (\(\tilde{\alpha}\), \(GN^{2}H^{2}\)). 
The top row Figure 11: _Mass of the spin-2 tachyonic instability of de Sitter space when it crosses the \(\text{Re}(\nu)=3/2\) line, for a given set of parameters \((\tilde{\alpha},GN^{2}H^{2})\) while varying \(\tilde{\beta}_{\text{eff}}\). The mass is plotted in units of the species scale (1.7). Each coloured curve is a different choice of \(\tilde{\alpha}\)._ gives a more extensive view while the bottom row is a zoom on a space where the analytics are supposed to break down. The right panels correspond to the analytical approximations (8.19 - 8.14) with a larger number of pixels than the numerics given on the left panels. The apparent discontinuity in the right panels comes from a junction between approximation (8.14) for real-axis tachyon and (8.19) for complex tachyon. Around this junction, the large-\(|\nu|\) approximation is not valid anymore. The discontinuity which is visible on both panels on the right is an artefact of the large-\(|\nu|\) approximation and is absent from the numerics in the middle panels. Instead of a discontinuity, one can observe a valley of values for \(\tilde{\beta}_{\text{eff}}\) which are lower than expected by the analytics. This behaviour could also be observed in Figure 10 at the minima of \(\tilde{\beta}_{\text{eff}}\). Figure 10 separates tachyonic from non-tachyonic regions, but it does not contain any information on the location of the poles, which encodes the characteristic scale of Figure 12: _Spin-2 tachyonic instability of de Sitter space depending on \(\tilde{\beta}_{\text{eff}}\) and the curvature \(H\), for different values of \(\tilde{\alpha}\). Markers are placed on the boundary \(\text{Re}(\nu)=3/2\) between the tachyonic and non-tachyonic regions which have been computed numerically. Each colour corresponds to a given value of \(\tilde{\alpha}\). For each coloured curve, we are in a tachyonic regime (\(\text{Re}(\nu)>3/2\)) below the markers, while the region above is non-tachyonic (\(\text{Re}(\nu)\leq 3/2\))._ the tensor tachyonic instability. In what follows, we investigate this scale numerically. The instabilities of the tensor sector are studied in Figure 14. The colour coding corresponds to the real part of \(\nu\) for the tachyonic modes36. This is the quantity that controls the divergence rate of the mode, via equation (100). The red line \(\mathrm{Re}(\nu)=3/2\) separates tachyon-unstable from tachyon-stable regions. The four different sub Figure 13: _Values of \(\tilde{\beta}_{\text{eff}}\) such that de Sitter space goes from tachyonic-unstable to tachyonic-stable for the spin-2 mode, plotted in the (\(\tilde{\alpha}\), \(GN^{2}H^{2}\)) plane. For each pixel, values of \(\tilde{\beta}_{\text{eff}}\) lower than the given colour corresponds to instability, and higher values correspond to stability. The white vertical line on the top row panels is the separation between curvatures above and below the species cutoff (7). The right panels are obtained using large-\(|\nu|\) analytical approximations, while the left panels are numerical results. As it was done for Figure 10, approximations are split into two regimes: if the inequality (101) holds, then (102) is used. Otherwise, (100) is used. 
The bottom panels are zoomed on a smaller parameter space, where the analytical approximation is supposed to break down around \(a\approx 0\) (104), which corresponds to the contour of the area on the top right of each panel._ figures of Figure 14 show, using a colour code, the size of \(\text{Re}(\nu)\) as a function of two of the parameters (\(\tilde{\alpha}\), \(GN^{2}H^{2}\)), for fixed values of \(\tilde{\beta}_{\text{eff}}\). The four sub-figures correspond to different values of \(\tilde{\beta}_{\text{eff}}\). We observe that there are two tachyonic regions, one for low enough values of \(\log(GN^{2}H^{2})\), the other for large values of both \(\tilde{\alpha}\) and \(\log(GN^{2}H^{2})\). As \(\tilde{\beta}_{\text{eff}}\) increases, these two regions move out in parameter space. In this figure, the word "stable" refers exclusively to the absence of tachyonic instabilities: we remind the reader that there are always ghost-like spin-2 poles at all points in parameter space. We shall come back to these modes at the end of this section. Figure 15 shows the mass squared (5.26) of the lightest spin-2 tachyonic pole in Figure 14: _Regions and inverse time-scale of the tachyonic instability (defined in (8.2) for the spin-2 sector in the de Sitter case. The colour code of each of the figures above gives the value of the real part of \(\nu\) at the zero of the inverse propagator (5.39) for a given value of \(\tilde{\beta}_{\text{eff}}\). The black vertical line separates curvatures which are below the species cutoff defined in (1.7) from curvatures which are above. The unstable region (5.28) is delimited by the red lines where \(\text{Re}(\nu)=3/2\). The pink lines are placed at \(\text{Re}(\nu)=5/2\), where the inverse time scale of the tachyonic instability (8.2) has the value \(\Gamma=H\). As \(\tilde{\beta}_{\text{eff}}\) increases, the stability region becomes larger._ units of the species scale (1.7). The red curves obtained from the previous Figure 14 delimit the tachyonic regions in the plane \((\tilde{\alpha},GN^{2}H^{2})\). Whereas the green curve corresponds to the species scale \(GN^{2}|m_{2}|^{2}=1\). The darker regions which are delimited by the green curve are then below the species cutoff. Each panel of Figure 15 corresponds to a different value of \(\tilde{\beta}_{\rm eff}\). Negative values, such as \(\tilde{\beta}_{\rm eff}=-3\) plotted in panel (a) contain a large tachyonic region, but the tachyon is always above the cutoff. When \(\tilde{\beta}_{\rm eff}\) is increased, the non-tachyonic region becomes larger. The small Figure 15: \(|\text{Mass}|^{2}\) _(_5.26_) of the de Sitter spin-2 lightest tachyonic pole in units of the species scale (_1.7_), plotted in the_ \((\tilde{\alpha},GN^{2}H^{2})\) _plane. The red line separates the tachyonic region from the non-tachyonic region obtained from Figure_ 14_. The green curve corresponds to the species scale_ \(GN^{2}|m|^{2}=1\)_. The vertical white line separates curvatures that are below the species scale (on the left) from curvatures which are above (on the right). In the two first panels, the mass of the tachyonic pole is always larger than the species scale. The two bottom panels show that a larger_ \(\tilde{\beta}_{\rm eff}\) _allows for some tachyonic poles under the species scale but only for small curvatures or large_ \(\tilde{\alpha}\)_. In the last panel, the small curvature region is entirely below the species cutoff._ areas in panel (b) where the pole is below the cutoff are included in the non-tachyonic regions. 
Therefore, the tachyon is always above the cutoff in panel (b) too. For larger values of \(\tilde{\beta}_{\rm eff}\), such as \(\tilde{\beta}_{\rm eff}=10\) in panel (c), we finally observe some overlap between the tachyonic and light (below the cutoff) regions. It means that around \(\tilde{\beta}_{\rm eff}\approx 10\) and above, de Sitter can contain a tachyon which lies below the effective cutoff of the species scale. Increasing \(\tilde{\beta}_{\rm eff}\) even more, such as in panel (d), the light tachyonic regions increase in size, the small curvature tachyon is then always below the cutoff, whereas the larger curvatures necessitate a large \(\tilde{\alpha}\) to get a light tachyon. It seems from Figure 15, that around \(\tilde{\beta}_{\rm eff}\approx 10\) and above, the small curvature region goes below the cutoff. This can be understood from inserting equation (8.15) into \(|m_{2}|^{2}\approx GN^{2}H^{2}|\nu|^{2}\leq 1\), which would correspond to the relation between \(\tilde{\beta}_{\rm eff}\), \(\tilde{\alpha}\) and the curvature such that the complex pole (with \({\rm Re}(\nu)=3/2\)) is below the cutoff. For small curvatures (\(GN^{2}H^{2}<<1\)), this relation is \[\tilde{\beta}_{\rm eff}\geq\tilde{\beta}_{\rm eff}^{\rm species}\equiv\sqrt{16 \pi^{2}-1}-\log(4\pi)+\frac{1}{2}-2\gamma_{E}\approx 9.34. \tag{8.20}\] If the inequality (8.20) holds, the small curvature region lies below the species cutoff. This value agrees with panel (d) of Figure 15, where we observe that small curvatures are below the cutoff. However, in panel (c), \(\tilde{\beta}_{\rm eff}=10>9.34\) so small curvatures should lie below the cutoff. However, we observe that this is not the case. The value obtained in (8.20) does not only rely on a small curvature approximation but also on the validity of the ansatz (8.15) for large \(a\), which converges rather slowly with first corrections given in (8.16). In particular, for \(\tilde{\beta}_{\rm eff}=\tilde{\beta}_{\rm eff}^{\rm species}\approx 9.34\) and \(GN^{2}H^{2}=10^{-3}\), the \(log(log(-a))/log(-a)\) correction term in (8.16) is approximatively equal to \(0.2\). This error propagates into the value obtained for \(\tilde{\beta}_{\rm eff}^{\rm species}\), which could explain why the left of the panel (b) disagrees with (8.20). Moreover, the correction term in (8.16), does not go to zero when \(GN^{2}H^{2}\to 0\) but converges to a finite value around \(0.2\) for \(\tilde{\beta}_{\rm eff}=\tilde{\beta}_{\rm eff}^{\rm species}\). The only way to make this error vanish is the \(\tilde{\beta}_{\rm eff}\to+\infty\) limit. Figure 16 shows the analysis of the instabilities in the scalar sector. Plotting this figure does not require a numerical approach, since the scalar propagator (4.14) is directly written as a pole for the Laplacian operator \(\Box\). Formulae (4.16) and (4.19) are used to plot the criterion for tachyonic and ghost-like instabilities respectively. In this case (unlike for the tensor) we can display both ghost-like and tachyonic instabilities on the same figure because there is only one pole in the scalar case (4.13). On the left subfigure, we plot the real part of \(\nu\) as a function of the different parameters (in the scalar sector, the equation of motion does not depend on \(\tilde{\beta}_{\rm eff}\)). Tachyonic regions are delimited by red lines and ghost-like regions by blue lines. On the right subfigure, we plot the effective mass of the scalar mode in units of the species scale \((GN^{2})^{-1/2}\), see equation (1.7). 
From figure 16 we see that the scalar ghost is below the species scale (1.7) for large enough values of \(\tilde{\alpha}\). For reasonable values of \(GN^{2}H^{2}\) (below or comparable to the species scale), this ghost is also a tachyon. Therefore, we will focus on tachyonic stability for the scalar mode in the following. In Figure 17 we compare the "strength" of the tensor tachyonic instability with that of the scalar tachyonic instability. By strength, we mean the inverse time scale associated with the instability, defined by \(\Gamma\) in (8.2). The decay rate \(\Gamma\) of the scalar sector is given in (4.26), while it is computed numerically for the tensor sector in Figure 14. For fixed \(\tilde{\beta}_{\rm eff}\), the regions in the \((\alpha,GN^{2}H^{2})\) plane where the tensor instability is stronger than the scalar one are coloured in green; in blue regions, the scalar tachyon instability is stronger; in white regions, there are no tachyonic instabilities. It is interesting to compare our results with those obtained by Vilenkin in [47] for the (original) Starobinsky model. In this work, the renormalized cosmological constant was chosen to vanish, i.e. \(\Lambda=0\) in our equation (2.57). This choice corresponds to the red vertical line in the four subfigures of Figure 17. In [47], the value of \(\tilde{\beta}_{\rm eff}\) was irrelevant as this work concerned only the scalar mode. Vilenkin found that the scalar mode was unstable for large negative \(\tilde{\alpha}\). According to our results Figure 16: _Regions of instabilities for the spin-0 sector in the de Sitter case. The colour code on the left panel indicates the real part of the solution \(\nu\) of the scalar sector plotted in the (\(\tilde{\alpha}\), \(GN^{2}H^{2}\)) plane. Tachyonic regions (4.25) are delimited by red lines. They correspond to regions where \(\text{Re}(\nu)>3/2\). The blue line delimits the region given by (4.16) where the scalar solution becomes a ghost. A white vertical line separates curvatures which are above (on the right) and below (on the left) the species scale (1.7). The green curve corresponds to the species scale \(GN^{2}|m|^{2}=1\) On the right panel, we compare the mass of the scalar solution with the cutoff of the theory, given by the species scale. For most of the ghost regions, the ghost is very massive compared to the cutoff. For \(\alpha=0\), the kinetic term of the scalar mode vanishes. The tachyon is also heavier than the species cutoff, except in the top left region for small curvatures and large \(\tilde{\alpha}\)._ in Figure 17, in his case, the scalar instability was indeed the strongest instability for small values of \(\tilde{\beta}_{\text{eff}}\). However, our results extend also to other regions. We observe that for small values of \(\tilde{\beta}_{\text{eff}}\) and large and negative values of \(\tilde{\alpha}\), for sufficiently small \(GN^{2}H^{2}\), the tensor tachyonic instability dominates over the scalar one. There are other regions in which the spin-2 instability is the strongest. Moreover, there are Figure 17: _Regions of parameter space in de Sitter, for several values of \(\tilde{\beta}_{\text{eff}}\), showing whether the strength of the tensor tachyonic instability is larger or smaller than the strength of the scalar tachyonic instability. In green regions, the tensor tachyon instability dominates. In blue regions, the scalar tachyon instability dominates. In white regions, there are no tachyonic instabilities (if there is one, its mass is above the cutoff). 
In this figure, we only consider tachyonic poles which are below the species cutoff. The vertical black line separates curvatures which are below (left) and above (right) the species cutoff. The red line in the plots corresponds to the value of the curvature chosen in [47], which can be obtained by setting the renormalized cosmological constant \(\Lambda\) to zero, as it was done in (2.57). At this fixed curvature, the scalar instability dominates over the tensorial instability for negative \(\tilde{\alpha}\). The last panel shows the region which is tachyonic for large values of \(\tilde{\beta}_{\text{eff}}\) and small curvatures (see (d) of Figure 15). Increasing \(\tilde{\beta}_{\text{eff}}\) will make this region disappear from the selected window because it will move to even smaller curvatures._ smaller regions (in white) which are tachyon-stable. These regions grow in size as \(\tilde{\beta}_{\rm eff}\) becomes large and positive, and shrink as \(\tilde{\beta}_{\rm eff}\) becomes large and negative. As we have mentioned, in the regions denoted "stable" in Figure 14, all tensor modes are non-tachyonic. However, even in those regions, as we have seen in subsection 8.1, there is always one ghost pole37. Its residue can be positive or complex. This was already seen in the two examples given by Figures 4 and 7. Footnote 37: Except for some fine-tuned values of the parameters for which two poles merge to form a double pole. One important question is whether the ghost pole is above or below the UV cutoff scale (7): if the ghost mass is below the cutoff, then it must be regarded Figure 18: _This figure indicates the modulus of the mass squared defined in (110), for the ghost pole of the spin-2 propagator (110) in units of the species cutoff defined in (7), in the de Sitter case. The parameter space is spanned by the dimensionless curvature on the horizontal axis and the \(R^{2}\) parameter \(\tilde{\alpha}\) on the vertical axis. The vertical white line separates curvatures that are above and below the species cutoff. The green line is the boundary of the region where the ghost mass is below the cutoff, which corresponds to darker areas._ as a true instability of the low energy effective theory. The results of this analysis are shown in Figure 18, in which the colour code represents the mass squared for the spin-2 ghost in units of the species cutoff (7). The green line separates the regions where the ghost mass is below the cutoff (darker colours) from the regions where it is above (lighter colours). In this figure, we observe that the ghost becomes lighter and lighter as \(\tilde{\beta}_{\rm eff}\) becomes large and positive, as well as when the parameter \(GN^{2}H^{2}\) decreases. In conclusion, de Sitter spacetime is generally ghost-unstable for large and positive \(\tilde{\beta}_{\rm eff}\), whereas generic values of \(\tilde{\beta}_{\rm eff}\) are ghost-unstable only in the top-right corner of Figure 18, corresponding to large curvatures and/or large \(\tilde{\alpha}\). ## 9 Poles of the AdS spin-two propagator and stability We now turn to the negative curvature case, and repeat the same analysis as in the previous for AdS. First, we study two paradigmatic regions of parameters. Then, we provide analytical approximations to understand these two examples. Finally, we study numerically the stability of the system of gravity plus holographic CFT in AdS. A new feature we will find in this case is the presence of an infinite tower of stable solutions which only exists in AdS-slicing. 
These solutions are found near the poles of the stress-tensor two-point function, which in AdS appear in an infinite discrete set. ### Results for two typical sets of parameters In this subsection, we focus on two examples (with small and large curvature, respectively), solve the spectral equation (109) for tensor modes numerically and follow the evolution of the solutions as we change the parameters. In the following, when it is not specified, it will be understood that we are using the single-boundary condition (110), which leads to the inverse propagator \(\mathcal{F}_{(-)}\) given in (108). Recall that in the negative curvature case, a tachyon corresponds to a pole on the imaginary axis, see section 5.3 and in particular equation (110). The results of our first example are shown in Figure 19. In this case, we choose a small value of \(GN^{2}\chi^{2}\) (i.e. the AdS curvature in species-scale units) and \(\tilde{\alpha}=0\). As we can observe from figure 19, large and negative values of \(\tilde{\beta}_{\rm eff}\) always display two tachyons lying on the imaginary axis, with opposite signs for their residues: snapshot (b) shows that the lightest tachyon is also a ghost whereas the heavy tachyon is not. These two tachyon-unstable solutions merge for a value of \(\tilde{\beta}_{\rm eff}\) close to \(-4.18\), snapshot (c). For larger values of \(\tilde{\beta}_{\rm eff}\), there are no imaginary solutions and the theory is tachyon-stable. Following snapshots from (d) up to (g), a complex solution moves closer and closer to the real axis when \(\tilde{\beta}_{\rm eff}\) is increased. This solution merges with the lightest stable pole (close to \(\nu=3/2\)) and forms a double pole in snapshot (h). Figure 19: _Zeros of the real (blue curve) and imaginary (orange curve) parts of the AdS spin-2 inverse correlator \(\mathcal{F}_{(\cdot)}^{-1}(\nu)\) (5.52) for \(\alpha=0\), \(GN^{2}\chi^{2}=0.01\). Each panel is obtained for a different value of \(\tilde{\beta}_{\text{eff}}\). The colour of a pole represents the sign of its residue. Green is for positive, red for negative and purple for complex residue. Two tachyons on the imaginary axis merge around the value of \(\tilde{\beta}_{\text{eff}}\) given in equation (9.7), shown in snapshot (c). This merging is a second-order pole because \(\mathcal{F}_{\text{dS}}^{\prime}\) vanishes. The two complex conjugate poles move to the complex plane as \(\tilde{\beta}_{\text{eff}}\) is increased. Two poles in (g) merge in snapshot (h), where they form a massless second-order pole. Higher values of \(\tilde{\beta}_{\text{eff}}\) (i) have a massless ghost and a \(\nu=1/2\) stable pole._ If we continue to increase \(\tilde{\beta}_{\text{eff}}\), two single poles appear. First, a pole stays close to the massless \(\nu=3/2\), with a negative residue. A second pole moves towards \(\nu=1/2\) with a positive residue. The cloud of stable poles denoted by green points on the real axis is an infinite series of poles lying close to every half-integer: these half-integers are not poles, but zeros of the propagator, corresponding to poles of the stress-tensor two-point function, for which the inverse propagator \(\mathcal{F}_{(-)}\) defined in (110), diverges. These can be traced back to poles in the harmonic number \(\mathcal{H}\) appearing in \(Q_{(\cdot)}\) given in (111). The mass of every pole in the snapshots of Figure 19 (except the infinite series of safe poles on the real axis) is plotted in Figure 20. 
By plotting the masses in units of the species scale, this figure allows us to see directly which pole is above or below the cutoff given by \(GN^{2}|m|^{2}=1\). As a result, for generic values of \(\tilde{\beta}_{\text{eff}}\), the tachyon, the ghost and the complex ghost are all above the cutoff. Whereas large values of \(|\tilde{\beta}_{\text{eff}}|\) have a ghost-like pole lying below the cutoff, as is also the case for the pure gravity massive pole shown in dashed lines. This figure shows that the two tachyons (the ghost shown in red and the ghost-free in green) merge to form a pair of complex conjugate poles (non-tachyonic). This merging happens at masses that are much above the species cutoff. Our second example is displayed in Figure 21. In this case, we take the AdS curvature to be of the order of the species scale, specifically \(GN^{2}\chi^{2}=\pi/4\). We observe a very different approach to stability as \(\tilde{\beta}_{\text{eff}}\) is increased: we have a single tachyon on the imaginary axis, which we observe entering the region shown in snapshot (d), and moves from large imaginary values to small imaginary values until it reaches the origin \(\nu=0\) in snapshot (g). As we further increase \(\tilde{\beta}_{\text{eff}}\), the mode becomes stable because the pole moves off the imaginary axes. The tachyon-safe solution converges to \(\nu=1/2\) when \(\tilde{\beta}_{\text{eff}}\) is increased to high positive values. It is important to remark that a pole with negative residue (corresponding to a ghost) is present for all values of \(\tilde{\beta}_{\text{eff}}\). For large and negative \(\tilde{\beta}_{\text{eff}}\), it is close to \(\nu=1/2\), whereas for large and positive values of \(\tilde{\beta}_{\text{eff}}\), it approaches \(\nu=3/2\). This can be understood using equation (110) which is valid for asymptotically large values of \(\tilde{\beta}_{\text{eff}}\) since it was derived for \(N=0\). One can observe that two poles are present in this formula. For large and negative \(\tilde{\beta}_{\text{eff}}\), the massless \(\nu=3/2\) poles are healthy residue and the \(\nu=1/2\) is a ghost. This is verified in snapshot (a) where the lightest pole is a ghost, and the first pole after \(\nu=3/2\) is safe. For large and positive \(\tilde{\beta}_{\text{eff}}\) the sign of their residue switch. This is observed in the last snapshot where the \(\nu=1/2\) pole is safe whereas the massless \(\nu=3/2\) is a ghost. Figure 22 shows the mass of the poles that are found in Figure 21. The mass of the poles which are obtained in the presence of the CFT is shown in coloured dots, while the poles which were already present in pure gravity are shown using coloured curves. In this plot, we observe that all poles are above the species cutoff except a safe (non-tachyonic, non-ghost) pole for large and negative values of \(\tilde{\beta}_{\text{eff}}\) (as in pure gravity) which is massless in the \(\tilde{\beta}_{\text{eff}}\to-\infty\) limit. This pole becomes massive when \(\tilde{\beta}_{\text{eff}}\) is increased, while the ghost (in red) moves below the species cutoff. For large and positive values of \(\tilde{\beta}_{\text{eff}}\), only the ghost is below the cutoff. Its mass goes to zero in the \(\tilde{\beta}_{\text{eff}}\to+\infty\) limit. This analysis holds in the pure gravity case. 
Indeed, the only regime where the CFT plays a role is for generic values of \(\tilde{\beta}_{\text{eff}}\), where most of the poles have a mass higher than the species cutoff and must therefore be discarded from the EFT analysis. Qualitatively, the two cases displayed in Figures 19 and 21 represent quite different behaviours when \(\tilde{\beta}_{\text{eff}}\) is varied. We have chosen to discuss only these two cases Figure 20: _In this plot, obtained with the same parameters as in Figure 19, we show the modulus of the mass of the spin-2 tachyonic poles in AdS, defined in (110), in units of the species scale (7), as a function of \(\tilde{\beta}_{\text{eff}}\). Red and green markers correspond respectively to the light ghost-like tachyon and the heavy non-ghostly tachyon. Purple markers correspond to the complex conjugate pair of poles, which also have a complex residue. The species scale is shown by a horizontal solid black line. Black and blue curves correspond to the pure gravity modes with \(\beta=\tilde{\beta}_{\text{eff}}/\pi\): the blue line is tachyonic and the black line is non-tachyonic, both are non-ghostly. The vertical grey line is the value of \(\tilde{\beta}_{\text{eff}}\) at which the two tachyonic poles merge and move off the imaginary axis as \(\tilde{\beta}_{\text{eff}}\) is further increased. This merging is displayed in snapshot (c) of Figure 19._ because they turn out to be paradigmatic of what happens in the whole parameter space. More cases are shown in appendix M, where each point in \((GN^{2}\chi^{2},\alpha)\) space is a \((GN^{2}\chi^{2},\alpha)\) space. Figure 21: _Zeros of the real (blue curve) and imaginary (orange curve) parts of the AdS spin-2 inverse correlator \(\mathcal{F}_{\mbox{\tiny$(-)$}}^{-1}(\nu)\) (5.36) for different values of \(\tilde{\beta}_{\mbox{\scriptsize{eff}}}\), with fixed \(\alpha=-10\) and \(GN^{2}\chi^{2}=\pi/4\). The colour of a pole represents the sign of its residue. Green is for positive, red for negative and purple for complex residue From negative values of \(\tilde{\beta}_{\mbox{\scriptsize{eff}}}\), up to some critical value in snapshot (g), there is one tachyon which cannot be seen in snapshot (a) because it is outside the window and moves towards the origin. Snapshot (g) shows the transition from tachyonic instability to tachyonic stability, where the origin is a double pole._ corresponds to a set of snapshots. Each example turns out to have the same behaviour as either one of the two cases already discussed. The ArXiv webpage of this paper contains ancillary files, including animated gifs. Each snapshot of these gifs corresponds to a different value of \(\tilde{\beta}_{\text{eff}}\) for fixed \((GN^{2}\chi^{2},\tilde{\alpha})\). Figure 22: _In this plot, obtained with the same parameters as in Figure 21, we show the modulus of the mass of the spin-2 tachyonic poles in AdS, defined in (110), in units of the species scale (7), as a function of \(\tilde{\beta}_{\text{eff}}\). Grey markers show the mass of the lightest safe mode. Red and green markers correspond respectively to the light non-tachyonic ghost and the heavy non-ghostly tachyon. The species scale is shown by a horizontal solid black line. The black, blue and orange curves correspond to the pure gravity modes with \(\beta=\tilde{\beta}_{\text{eff}}/\pi\): the blue line is tachyonic, the black dashed line is the non-tachyonic ghost, and the orange line is safe. 
For pure gravity curves, the vertical axis is \(\log(G^{\frac{1}{2}}|m|^{2}+1)\) while the horizontal axis is \(\pi\beta\). The vertical green line is the value of \(\tilde{\beta}_{\text{eff}}\) at which the tachyonic pole forms a double zero at \(\nu=0\) and becomes real (non-ghostly) as \(\tilde{\beta}_{\text{eff}}\) is further increased. This double zero is displayed in snapshot (g) of Figure 21, where the transition between tachyonic and non-tachyonic happens. For larger values of \(\tilde{\beta}_{\text{eff}}\), the pole becomes safe. In pure gravity, the tachyonic pole becomes stable at \(\pi\beta=96\), which is slightly different from the value with the CFT given by the vertical green line, where the tachyonic pole stops being tachyonic, crosses the origin at \(\nu=0\) and becomes the lightest safe pole. Larger values of \(\tilde{\beta}_{\text{eff}}\) coincide with pure gravity._ In the following subsection, we present an analytical argument which explains why these two cases are typical of what happens more generally, and how we can distinguish between these two types of behaviour. ### Analytic results for tensor tachyonic modes in AdS in the large-\(|\nu|\) regime Tachyonic modes in AdS correspond to purely imaginary \(\nu\). In this section, we provide an analytical approximation which allows us to better understand the two examples given in the previous subsection. This approximation is the limit for large \(|\nu|\), in which case the pole mass is much larger than the AdS curvature scale (but it may still lie below the species cut-off). Interestingly, as was the case in dS, we shall observe that the approximation for large \(|\nu|\) turns out to be still valid for poles with values of \(|\nu|\) which may even be close to 1. We shall observe that the two cases studied in the previous subsection are paradigmatic: the whole parameter space may be separated into two regions, in which the behaviour of the poles is similar to the one shown in Figures 19 and 21, respectively. In the large-\(|\nu|\) regime, we can use Stirling's approximation (8.3) to replace the harmonic number \({\cal H}\) with a simpler log function. The validity of the large-\(|\nu|\) approximation will be checked afterwards, by comparing the analytical predictions with numerical evaluations of the inverse propagator. Using Stirling formula (8.3), equation (5.52) becomes \[Q_{(-)}(\nu) =1+2\left(\frac{\pi}{GN^{2}\chi^{2}}+\tilde{\alpha}\right)- \frac{\nu^{2}}{2}\left[\tilde{\beta}_{\rm eff}-\frac{1}{2}+\log\left(GN^{2} \chi^{2}\right)+\right. \tag{9.1}\] \[\left.+\log(\nu)+\log(-\nu)-2\gamma_{E}+{\cal O}(|\nu|^{-1}) \right].\] One can already see the difference with the de Sitter case (8.4): the log is split in a sum which is symmetric in \(\nu\leftrightarrow-\nu\). If we write \(\nu=|\nu|e^{i{\rm arg}(\nu)}\), then \[Q_{(\cdot)}(\nu) =1+2\left(\frac{\pi}{GN^{2}\chi^{2}}+\tilde{\alpha}\right)- \frac{\nu^{2}}{2}\left[\tilde{\beta}_{\rm eff}-\frac{1}{2}+\log\left(|\nu|^{2 }GN^{2}\chi^{2}\right)+\right. \tag{9.2}\] \[\left.+2i{\rm arg}(\nu)-i\pi{\rm sign}({\rm arg}(\nu))-2\gamma_{ E}+{\cal O}(|\nu|^{-1})\right].\] We now apply (9.2) it to tachyonic modes, \[\nu=ix,\quad x\ {\rm real}. \tag{9.3}\] The complex phases cancel in (9.2). 
Then, the equation of motion (5.52) can be written as \[X\log X=-a, \tag{9.4}\] where \[X\equiv x^{2}GN^{2}\chi^{2}\exp\left\{\tilde{\beta}_{\rm eff}-\frac{1}{2}+2\gamma_ {E}\right\}, \tag{9.5}\] \[a\equiv 2GN^{2}\chi^{2}\left[2\left(\frac{\pi}{GN^{2}\chi^{2}}+\tilde{\alpha} \right)+1\right]\exp\left\{\tilde{\beta}_{\rm eff}-\frac{1}{2}+2\gamma_{E} \right\}. \tag{9.6}\] This is similar to the corresponding equations we found in the de Sitter case, (8.5), (8.6) and (8.7), up to a few sign flips. The large \(x\) regime described by equations (9.4), (9.5) and (9.6) is valid both for the asymmetric condition in the bulk (5.46), and for the symmetric boundary condition (5.55). To see why this regime is independent of boundary conditions, we observe that the difference between \(Q_{(-)}\) (5.53) for single-boundary condition (5.46) and \(Q_{\rm sym}\) for the symmetric case (5.55) vanishes exponentially with \(x\). Similarly to the large \(|\nu|\) regime for de Sitter spacetime, to discuss equation (9.4) we distinguish three cases : * If \(0<a<e^{-1}\) there are two tachyonic solutions (\(x\) real). * If \(a>e^{-1}\), no real solution for \(x\), the theory is then tachyonic-stable. Therefore, large \(x\) solutions are always stable when \(a>e^{-1}\). This is equivalent to \[\tilde{\beta}_{\rm eff}>\tilde{\beta}_{\rm eff}^{\rm merge}\equiv-\frac{1}{2} -2\gamma_{E}-\log\left(2GN^{2}\chi^{2}\right)-\log\left[2\left(\frac{\pi}{GN^ {2}\chi^{2}}+\tilde{\alpha}\right)+1\right],\] (9.7) which is similar to (8.11) in de Sitter. However, the physics of the poles is different: in de Sitter, \(\beta_{\rm merge}\) does not correspond to a transition from instability to stability, but rather to the merging of two real solutions which then move off the real axis. In AdS on the other hand, (9.7) indicates the critical value at which solutions leave the imaginary axis, and therefore it represents a stability condition, valid for large \(x\) and \(a>0\). This condition is valid in the example of Figure 19, where the transition between tachyonic and non-tachyonic behaviour occurs around the value of \(\tilde{\beta}_{\rm eff}\) chosen for the snapshot (c) for which \(\tilde{\beta}_{\rm eff}=\tilde{\beta}_{\rm eff}^{\rm merge}\) (9.7). * If \(a<0\), there is a single tachyonic solution \(x\), whose value decreases when \(a\) is increased. In terms of the parameters, the condition \(a<0\) is equivalent to \[\frac{\pi}{GN^{2}\chi^{2}}<-\tilde{\alpha}-\frac{1}{2}.\] (9.8) The condition (9.8) for having only one solution \(\nu=ix\) is analogous to equation (8.12) in de Sitter, which in that case was the condition for having a single \(\nu\) on the real axis. However, unlike in dS, in the AdS case, this equation also gives information about the number of tachyons. If (9.8) is verified, we have one tachyon, and if it is not, then we have either none or two tachyons depending on the value of \(\tilde{\beta}_{\rm eff}\) through inequality (9.7). Interestingly the condition (9.8), like the analogous inequality for de Sitter (8.12), turns out to be the same as the condition (4.16) for the scalar to be a ghost38. Footnote 38: We do not know whether there is a deep reason for this. If we select the parameters such that (9.8) is verified, and if the tachyonic pole stays on the imaginary axis even for small values of \(x\) where the approximation above breaks down, then this pole should cross the origin \(\nu^{2}=0\) (and becomes stable) as \(\tilde{\beta}_{\rm eff}\) is increased. 
This corresponds to the usual BF bound, which is respected for positive \(\nu^{2}\). Increasing \(\tilde{\beta}_{\rm eff}\) increases \(a\), such that the unique solution for \(x\) decreases. At some point, \(x\) eventually crosses the origin at \(x=0\) and the pole becomes non-tachyonic. Therefore, the tachyon-stability condition for negative \(a\) corresponds to \[\tilde{\beta}_{\rm eff}\geq\tilde{\beta}_{\rm eff}^{\rm BF}\equiv\frac{1}{2}- \log\left(GN^{2}\chi^{2}\right)-2\mathcal{H}(-1/2)-8\left[1+2\left(\frac{\pi}{ GN^{2}\chi^{2}}+\tilde{\alpha}\right)\right],\] (9.9) where \(\tilde{\beta}_{\rm eff}^{\rm BF}\) corresponds to the value for which we have \(Q_{(-)}(0)=0\). The stability condition (9.9) is different from (9.7) because it applies to the \(a<0\) case. In the large-\(x\) approximation, the tachyon stays on the imaginary axis as we vary \(a\). If this statement continues to hold for small \(x\) down to \(x=0\) where this solution becomes non-tachyonic, then the formula (9.9) would give an exact stability condition. This is indeed what happens in the example of Figure 21: the formula (9.9) describes exactly the transition and gives an accurate condition for the onset of the tachyonic instability, as we chose \(\tilde{\beta}_{\rm eff}=\tilde{\beta}_{\rm eff}^{\rm BF}\) in snapshot (g) where the theory is at the transition from tachyon-unstable to tachyon-stable. Large-\(|\nu|\) solutions exist as long as a term in \(Q_{(\cdot)}(\nu)\) (5.53) (or \(Q_{\rm sym}(\nu)\)) (5.60) is large and positive, which is the case for example when \(\tilde{\beta}_{\rm eff}\) is large and negative or when \(GN^{2}\chi^{2}\) is small. In the small curvature regime (or in the large and negative \(\tilde{\beta}_{\rm eff}\) regime), the cancellation in the spectral equation can be done using the \(\nu\) dependent terms \(\mathcal{H}(-1/2\pm\nu)\). This includes two types of solutions. First, large-\(|\nu|\) type of solutions were studied in this subsection. Second, \(\nu\) can be close to a pole of the harmonic number \(\mathcal{H}\). All these poles are located on the real axis, for each half-integer. This second type of solution, which was not present in dS, will be studied in the next subsection. Before that, we first comment on the flat space limit of the AdS spin-2 propagator. Flat space limit of the AdS spin-2 propagatorIn the limit of vanishing curvatures \(GN^{2}\chi^{2}\to 0\), the curvature-dependent term of the propagator (5.53 or 5.60 depending on IR conditions) diverges, as it was the case in de Sitter. Indeed, the term \(\frac{\pi}{GN^{2}\chi^{2}}\) in \(Q_{(\cdot)}\) (109), must be cancelled by the harmonic numbers \(\mathcal{H}(-1/2\pm\nu)\). This can be done by taking \(|\nu|\) large as it was done for de Sitter [68]. However, in AdS, the bulk normalizable modes are present in the inverse-propagator in the form of poles of the harmonic number \(\mathcal{H}(-1/2-\nu)\). In the flat space limiting procedure, we exclude real-valued \(\nu\) because they would go to non-tachyonic poles in flat space. To see why we first ask that the flat space limit should be taken such that the eigenvalues of both Laplacians (the AdS\({}_{4}\) Laplacian and the Minkowski Laplacian) match. This requires \[\nu^{2}\chi^{2}\underset{\chi\to 0}{\sim}-k^{2}. \tag{111}\] We then directly observe that real-valued \(\nu\) corresponds to \(k^{2}<0\), which was excluded from the flat space propagator in (106). 
We can therefore ignore real-valued \(\nu\) and therefore avoid the poles of the harmonic number located on the real axis. Inserting the large-\(|\nu|\) limit into the asymmetric 2-point function of AdS (108) or the symmetric one (109) leads in both cases to \[\mathcal{F}_{(-)}\underset{\chi\to 0}{\rightarrow}\frac{N^{2}k^{2}}{64 \pi^{2}}\left\{\frac{2\pi}{GN^{2}k^{2}}+\right.\] \[\left.\frac{k^{2}}{2}\left[\tilde{\beta}_{\text{eff}}-\frac{1}{2 }+\log\left(|\nu|^{2}GN^{2}\chi^{2}\right)+2i\text{arg}(\nu)-i\pi\text{sign}( \text{arg}(\nu))-2\gamma_{E}\right]\right\}. \tag{112}\] Comparing this with the flat space propagator (106), we find that \[\mathcal{F}_{(-)}(\nu)\overset{k^{2}\rightarrow-\nu^{2}\chi^{2}}{\underset{ \chi\to 0}{\rightarrow}}\mathcal{F}_{\text{flat}}. \tag{113}\] The terms involving the complex phase \(\text{Arg}(\nu)\) in (112) coincide with the expression of the Minkowski propagator in this limit only if we require \[\chi\nu\sim\text{sign}(\text{Im}(\nu))ik. \tag{114}\] If \(\nu\) is imaginary, we recover the purely tachyonic modes where \(k\) is real, and therefore \(k^{2}>0\). On the other hand, when \(\nu\) is real, this constraint is ill-defined because \(\nu^{2}>0\) corresponds to \(k^{2}<0\). In Minkowski space, we have defined the propagator away from the real axis which contains all the healthy propagating modes. ### Infinite series of stable solutions As one can observe in Figure 21, there is an infinite set of massive solutions on the real axis. It is also remarkable that these poles are not displayed in every snapshot, and this is due to the lack of numerical precision when these poles are too close to a half-integer. There is a pole for every blue circle near each half-integer. In Figure 19, these poles can also be seen (albeit less clearly) for some regions on the real axis. These poles are present for every half-integer but most of them are too close to a pole at half-integers to be resolved by the numerics. Numerically, one finds that the solutions on the real axis are near each half-integer \(\nu=n+1/2\), where \(\mathcal{H}(-\nu-1/2)\) has poles. To understand this feature is then instructive to expand the harmonic-number function close to its poles: \[\mathcal{H}(\epsilon-n)=-\frac{1}{\epsilon}+\mathcal{H}(n-1)+\mathcal{O}( \epsilon). \tag{111}\] Using this expression in equation (109), the result for \(n>1\) is: \[\mathcal{F}_{(-)}(n+1/2+\epsilon)=\frac{N^{2}\chi^{4}}{64\pi^{2}}\left[a_{1} \epsilon^{-1}+a_{0}+\mathcal{O}(\epsilon)\right], \tag{112}\] where \[a_{1}=-\frac{1}{2}(n-1)n(n+1)(n+2), \tag{113}\] \[a_{0}=a_{1}\left\{\frac{2n+1}{(n-1)(n+2)}-2n(n+1)\left[2\left(\frac{\pi}{GN^{ 2}\chi^{2}}+\tilde{\alpha}\right)+\frac{1}{2}-n\right]+\right.\] \[\left.-\frac{1}{2}+\tilde{\beta}_{\text{eff}}+\log\left(GN^{2}\chi^{2}\right) +2\mathcal{H}(n)\right\}. \tag{114}\] In the vicinity of half integer \(\nu=n+1/2+\epsilon\), with \(\epsilon<<1\), solutions to the spectral equation (109) are then given approximately by choosing \(\epsilon\) small but finite and approximately given by \[\epsilon\simeq-\frac{a_{1}}{a_{0}}. \tag{115}\] A zero of \(\mathcal{F}_{(-)}(\nu)\) is then found at \[\nu_{0}\equiv n+\frac{1}{2}+\epsilon. \tag{116}\] Neglecting \(\mathcal{O}(\epsilon)\) terms in (112), it is then possible to conclude from (115) that there is always a solution near a half-integer value of \(\nu\) as long as \(a_{1}/a_{0}\ll 1\). If \(a_{1}/a_{0}\) turns out to be of order 1 or larger, then (115) cannot be trusted. 
Since we have a formula both for \(a_{1}\) and \(a_{0}\) in (114), we can check the value for \(\epsilon\) for every \(n\). The value of \(\epsilon(n)\) is plotted by the green curve in the left panels of Figure 23. The green curves are continuous (not a discrete set of values) because \(n\) is replaced by \(\text{Re}(\nu)-1/2\) in this plot. It turns out that most of the \(\epsilon(n)\) are small. There are some values of \(n\) however, for which \(\epsilon(n)\) is large. These values of \(n\) which correspond to a large \(\epsilon\) are centred around a particular value for which \(\epsilon^{-1}\sim 0\), where we observe a sign flip of \(\epsilon\). This region of the real axis is displayed in Figure 23. In Figure 23, we observe a small part of the real axis centred to the point where \(\epsilon^{-1}\sim 0\). The green lines in this figure show the expected value of \(\epsilon(n)\) (115) which can be compared to the size of the blue circles. The radius of these circles roughly corresponds to the distance between a half-integer and the closest pole of the propagator. The actual pole of the propagator found numerically, is the intersection between the blue circles and the real axis, where green dots are placed. The green line predicts well the size of the blue circles everywhere, except where it is above 1, as one could have expected. The place where \(\epsilon\) is supposed to diverge according to (9.18) corresponds to the place in Figure 23 where the unique open blue curve is the intersection between the blue and the real axis. The green line represents the intersection between the blue and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. 
The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. 
The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. 
The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and real axis. The green line represents the intersection between the red and the real axis. 
The green line represents the intersection between the red and the real axis. The green line represents the intersection between the red and real axis. The green line represents the intersection between the red and real axis. The green line represents the intersection between the red and the real axis. crosses the real axis. The exact place where it crosses lies exactly at a half-integer, and this half-integer corresponds to a value of \(n\) for which \(\epsilon(n)\) is maximum. It turns out that the breakdown of the small \(\epsilon\) expansion, roughly at \(\epsilon^{-1}\sim 0\), corresponds to the middle of the cloud of solutions in Figure 19 where the open blue curve crosses the real axis and \(\epsilon\) changes sign. This region of large values for \(\epsilon\) is the same as in Figure 19 where the numerics can find these zeros. Large values of \(\epsilon\) make these blue circles large enough to be resolved by the numerics. This analysis shows that there is always a solution near a half-integer \(\nu=n+1/2\). The small \(\epsilon\) expansion is valid for every \(n\), except in an interval where these solutions are not close enough (\(\epsilon\sim 1\)) to a half-integer. In this region where the approximation cannot be trusted, the numerics in Figure 23 confirm the existence of such solutions even if \(\epsilon\) is not small. We shall now investigate whether solutions corresponding to (9.19) are ghost-like. For this purpose, we need to expand the inverse propagator \(\mathcal{F}_{(-)}\) (5.54) close to a solution of the form (9.19), such that \[\nu=n+1/2-\frac{a_{1}}{a_{0}}+\varepsilon, \tag{9.20}\] where \(\varepsilon\) is a book-keeping parameter defined in order to expand \(\mathcal{F}_{(-)}\) close to a given zero. When \(\varepsilon=0\), we sit exactly at the zero of the inverse propagator found perturbatively in (9.15). The expansion of \(\mathcal{F}_{(-)}\) near the zero at \(\varepsilon=0\) then reads \[\mathcal{F}_{(-)}\left(n+\frac{1}{2}-\frac{a_{1}}{a_{0}}+\varepsilon\right)= \frac{N^{2}\chi^{4}}{64\pi^{2}}\left[-\frac{a_{0}^{2}}{a_{1}}\varepsilon+ \mathcal{O}(\varepsilon^{2})\right]. \tag{9.21}\] As a reminder of what was done in the dS case (5.63), the residue of the pole of \(\mathcal{F}_{(-)}^{-1}\) near \(\nu_{0}\) in the \(\nu^{2}\) plane is given by \[\text{res}[\mathcal{F}_{(-)}^{-1}](\nu_{0}^{2})=\frac{\nu_{0}}{\mathcal{F}_{( -)}(\nu_{0})}\frac{1}{\nu^{2}-\nu_{0}^{2}}. \tag{9.22}\] Therefore, applying this formula to (9.21), we find that the residue of a pole lying close to a half-integer is given by \[\text{Res}[\mathcal{F}_{(-)}^{-1}\left([n+1/2-a_{1}/a_{0}]^{2}\right)]=-\frac {64\pi^{2}}{N^{2}\chi^{4}}\frac{a_{1}}{a_{0}^{2}}\left(n+\frac{1}{2}-\frac{a_{ 1}}{a_{0}}\right). \tag{9.23}\] Since \(a_{1}<0\) the residue (9.23) is positive for the whole tower of massive particles close to half integers. They have the same sign as the massless graviton in AdS with pure gravity. The argument that the residue is positive near the real axis is verified numerically in Figure 23. This Figure shows some poles of the tensor propagators \(\mathcal{F}_{(-)}\) and \(\mathcal{F}_{\text{sym}}\) on the real axis near the region where the \(\epsilon\) expansion breaks down. This figure also confirms that \(\epsilon\) changes sign where the open blue curve (not the circles) crosses the real axis. This crossing happens at the position of the half-integer for which we have \(\epsilon^{-1}\sim 0\). 
For the symmetric boundary condition, the propagator \(\mathcal{F}_{\rm sym}\) (108) can also be expanded in \(\epsilon\) as in (107) but with an additional term on the right-hand-side coming from the \(\frac{\pi}{\cos\pi\nu}\) piece in (108). Using \[\frac{\pi}{\cos\pi\nu}=\frac{(-1)^{n}}{\epsilon}+\mathcal{O}(\epsilon), \tag{109}\] we obtain new expressions for \(a_{0}\) and \(a_{1}\) defined in (106) for asymmetric boundary conditions. For symmetric boundary conditions, we define \[\mathcal{F}_{\rm sym}(n+1/2+\epsilon)=\frac{N^{2}\chi^{4}}{64\pi^{2}}\left[a_{ 1}^{\rm sym}\epsilon^{-1}+a_{0}^{\rm sym}+\mathcal{O}(\epsilon)\right]. \tag{110}\] In the case of \(n\) odd, \[a_{1}^{\rm sym}=0, \tag{111}\] which does not allow for a solution near odd half integers. However, if \(n\) is even, then \[a_{1}^{\rm sym}=2a_{1}, \tag{112}\] which allows for a solution near an even half-integer, but where \(\epsilon\) is approximatively twice as big as in the asymmetric case (107). As a consequence, we do not find a linear solution near \(n+\frac{1}{2}\) if \(n\) is odd for the symmetric boundary condition. This perturbative result is confirmed numerically in the bottom part of Figure 23, where only even integers present a blue circle, which is twice the size of the same circles in the asymmetric case (top panels of Figure 23). ### Tachyons and ghosts in parameter space for the AdS case We first discuss the tensor sector. The regions of parameter space where tachyonic tensor modes occur in AdS can be read-off from Figure 24, which was obtained by solving the spectral equation numerically. This figure is the negative-curvature analogue of Figure 10. Figure 24 shows the value of \(\tilde{\beta}_{\rm eff}\) at which the tachyonic pole becomes non-tachyonic, for a given set of parameters (\(GN^{2}\chi^{2}\),\(\tilde{\alpha}\)). When \(\tilde{\beta}_{\rm eff}\) is above the curve shown in this figure, the theory is tachyon-free. As we have seen in the two typical examples in Figures 19,21, the would-be tachyonic pole leaves the imaginary axis at a particular value of \(\tilde{\beta}_{\rm eff}\) and never goes back to the imaginary axis as \(\tilde{\beta}_{\rm eff}\) goes to \(+\infty\). Therefore, the critical value of \(\tilde{\beta}_{\rm eff}\) shown in Figure 24 is the border in parameter space between tachyonic and non-tachyonic theories. The dashed coloured lines correspond to the large-\(|\nu|\) analytical approximation obtained in section 9.2 for the case \(a>0\) (106), whereas the dotted lines correspond to the case \(a<0\) (107). For large \(\tilde{\alpha}\) the interpolating curves are monotonic in the curvature, and as \(\tilde{\alpha}\) decreases they start displaying a maximum. From the large-\(|\nu|\) approximation we expect there to be a critical curvature, given by equation (114), above which \(a\) is negative. This is where we decide to start the dotted lines. In the \(a<0\) case, the large-\(|\nu|\) approximation suggests that there is only one single tachyon on the imaginary axis. We then make the further hypothesis that the transition from tachyon-instability to tachyon-stability occurs at the origin \(\nu=0\), where the large-\(|\nu|\) approximation cannot be valid. However, this hypothesis is verified numerically since the dotted lines agree perfectly with the numerics. An example of such transition was already shown in snapshot (g) of Figure 21. 
Figure 25 shows the mass of the spin-2 tachyonic pole of AdS corresponding to the circles of Figure 24, at the value of \(\tilde{\beta}_{\rm eff}\) corresponding to the transition between tachyonic and non-tachyonic regimes. Therefore, the mass plotted in this figure corresponds to a tachyonic pole, lying on the imaginary axis, which is about to merge Figure 24: _Tachyonic regions of the spin-2 AdS perturbations. Tachyonic regions correspond to the existence of a solution \(\nu\) to the spectral equation (109) such that \(\mbox{Re}(\nu)=0\) (102). Several values of \(\tilde{\alpha}\) are taken and stability is plotted in the space of (\(\tilde{\beta}_{\rm eff}\), \(GN^{2}\chi^{2}\)). Dashed lines are the analytical predictions obtained from the large \(|\nu|\) and large \(a\) approximation (113). Lines made of small squares are obtained assuming that the tachyonic pole stays on the imaginary axis and becomes stable at the origin (114). Large dots are numerical results._ with another tachyon and leave the imaginary axis for larger values of \(\tilde{\beta}_{\text{eff}}\). According to this figure, the transition between tachyonic and non-tachyonic regimes appears to happen always above the species scale, except for large and negative values of \(\tilde{\alpha}\), for which the masses are below the species scale in a small interval of curvatures. Figure 26 shows the occurrence of tachyon-instability in the tensor sector for a few fixed values of \(\tilde{\beta}_{\text{eff}}\), with additional information shown about the value of the real part of the pole which is closest to the imaginary axis (recall that a tachyon corresponds to a purely imaginary \(\nu\).) In the first panel (a) the value of \(\tilde{\beta}_{\text{eff}}\) corresponds to the critical value separating tachyon-stability and instability in the zero-curvature limit of equation (100). As \(\tilde{\beta}_{\text{eff}}\) increases above this value (panels (b), (c) and (d)) the small curvature region becomes tachyon-stable as expected from equation (100), and the size of the tachyon-stable region increases. The region marked "stable" in Figure 26 are such only concerning tachyonic instabilities: even in these regions there is always one ghost-like tensor mode. The Figure 25: _Mass of the spin-2 tachyonic pole of AdS space-time for the value of \(\tilde{\beta}_{\text{eff}}\) given by Figure 24. This value of \(\tilde{\beta}_{\text{eff}}\) corresponds to the merging of two tachyons on the imaginary axis \(\text{Re}(\nu)=0\), creating a complex pole, which is non-tachyonic (\(\text{Re}(\nu)\neq 0\)). The mass is plotted in units of the species scale (7). Each colored curve is a different choice of \(\tilde{\alpha}\)._ mass of the ghost (in units of the species scale) is represented by the colour code in Figure 28. Lighter colours correspond to heavier ghosts. The green lines indicate the boundary of the region beyond which the ghost is heavier than the species cut-off, and therefore is outside of the regime of validity of effective field theory. For small values of \(\tilde{\beta}_{\text{eff}}\), the ghost mass is always above the species scale except in a small region for negative values of \(\tilde{\alpha}\) (panels (a) and (b)). As \(\tilde{\beta}_{\text{eff}}\) is increased to large and positive values, the ghost becomes lighter and lighter. Ghost masses that are below the species scale appear for small curvatures in the last two panels of Figure 28. 
Increasing \(\tilde{\beta}_{\text{eff}}\) even more than 100 will not change the result since the Figure 26: _Regions of tachyon-stability of the spin-2 mode in the AdS case. The colour code of each of the subfigures above gives the minimum value of the real part of \(\nu\) among all the spin-2 poles, at a fixed value of \(\tilde{\beta}_{\text{eff}}\). Different panels correspond to different values of \(\tilde{\beta}_{\text{eff}}\). The vertical white line separates curvatures that are above and below the species cutoff. The red line separates tachyonic regions with \(\text{Re}(\nu)=0\) as shown in appendix H, from the tachyon-safe regions with \(\text{Re}(\nu)\neq 0\). The first panel takes the value corresponding to the zero-curvature limit of (110). Increasing \(\tilde{\beta}_{\text{eff}}\) moves the tachyonic regions to lower values of \(\tilde{\alpha}\) and larger values of \(GN^{2}\chi^{2}\)._ ghost stabilizes at \(\nu=3/2\). We now turn to the scalar sector which has a single excited mode given by the solution of equation (4.19). Therefore, the graphical representation of ghost-like and tachyon-like instabilities can be given in a single figure. Figure 29 shows the mass of the scalar solution (4.19) in units of the \(AdS_{4}\) scale \(\chi^{-1}\) (left panel) and in units of the \(AdS_{4}\) scale \(\chi^{-1}\) (left panel) and in units of the \(AdS_{4}\) scale \(\chi^{-1}\) (right panel). Figure 27: _The would-be tachyonic spin-2 mode in AdS. The colour code of each of the subfigures above represents the mass (5.42) of this pole in units of the species scale (1.7), at a fixed value of \(\tilde{\beta}_{\text{eff}}\). The different panels correspond to different values of \(\tilde{\beta}_{\text{eff}}\). The first panel takes the value for \(\tilde{\beta}_{\text{eff}}\) corresponding to the zero-curvature limit of (9.7), for which asymptotically small curvatures are critical between tachyon-stable and tachyon-unstable. The tachyonic region on the bottom right corner of each panel is delimited by the red line. The cut-off at \(GN^{2}|m|^{2}=1\) corresponds to the green line, which separates masses which are above the cutoff (bright colours) from masses which are below the cutoff (darker colours). The first panel presents a discontinuity in the bottom half (tachyonic region) because the bottom-right corner has only 1 heavy tachyonic pole (as in Fig. 22) while the bottom-left corner has a second tachyonic pole, which is lighter and ghost-like (as in Fig. 20)._ of the species scale (right panel). In the scalar sector, only \(\tilde{\alpha}\) and the curvature are relevant parameters since \(\tilde{\beta}_{\text{eff}}\) does not enter the scalar spectral equation. The red lines separate tachyon-unstable from tachyon-stable regions and correspond to the points where equation (4.29) is saturated. The blue lines separate the regions in which the scalar mode is a ghost from those in which the scalar is healthy, as prescribed by equation (4.16). ## Acknowledgements We would like to thank D. Anninos, M. Kleban, D. Mateos, M. Montero, V. Niarchos, A. Porfyriadis, C. Rosen and I. Valenzuela. E. Kiritsis, F. Nitti and V. Nourry are supported in part by CNRS grant IEA 199430. Figure 28: _This figure displays the modulus of the mass of the ghost tensor pole in AdS, in units of the species scale (1.7). The colour coding shows the mass of the ghost in units of the species scale in a log scale. Each panel of this figure corresponds to one particular value of \(\tilde{\beta}_{\text{eff}}\). 
The green line shows where the mass is equal to the species cutoff \(GN^{2}|m|^{2}=1\). Darker colours are below the cutoff._ ## Appendix A Ghosts and tachyons in Effective Field Theory It is well known that, when starting from a healthy UV theory, ghosts and/or tachyons can appear in effective field theories as an artefact of integrating out some degrees of freedom and performing the low-energy expansion. In these cases, the mass of the unstable mode is always of the order of, or above the cut-off (the mass of the states which were integrated out). We give an example of this phenomenon in a simple model based on free scalar fields. ### A simple model Consider two massive scalars coupled to each other. \[L=\frac{1}{2}\left[(\partial\varphi_{1})^{2}-m_{1}^{2}\varphi_{1}^{2}\right]+ \frac{1}{2}\left[(\partial\varphi_{2})^{2}-m_{2}^{2}\varphi_{2}^{2}\right]+g \varphi_{1}\varphi_{2}\] (A.1) This action can be diagonalized by an orthogonal transformation \[\varphi_{1}=\cos\theta\ \varphi_{+}+\sin\theta\ \varphi_{-}\ \,\ \ \ \varphi_{2}=-\sin\theta\ \varphi_{+}+\cos\theta\ \varphi_{-}\] (A.2) Figure 29: _Regions of instabilities for the spin-0 sector in the anti-de Sitter case. The colour code on the left panel indicates \(|\nu|^{2}\). It is plotted as a function of \(\tilde{\alpha}\) and \(GN^{2}\chi^{2}\). Tachyonic regions, where \(\text{Re}(\nu)=0\), are delimited by the red lines and the ghost region is delimited by the blue line using the inequality (4.16). The vertical white line separates the curvatures which are above (on the right) and below (on the left) the species cutoff (1.7). The green curve corresponds to the species scale \(GN^{2}|m|^{2}=1\) On the right panel, we compare the mass of the scalar solution with the species scale (1.7). These diagrams do not depend on \(\tilde{\beta}_{\text{eff}}\). On the left of the vertical white dashed line, the AdS scale is below the species cutoff._ with \[\tan(2\theta)=\frac{2g}{m_{1}^{2}-m_{2}^{2}}\] (A.3) and the action becomes \[L=\frac{1}{2}\left[(\partial\varphi_{+})^{2}-m_{+}^{2}\varphi_{+}^{2}\right]+ \frac{1}{2}\left[(\partial\varphi_{-})^{2}-m_{-}^{2}\varphi_{-}^{2}\right] \quad,\quad 2m_{\pm}^{2}=m_{1}^{2}+m_{2}^{2}\pm\sqrt{(m_{1}^{2}-m_{2}^{2})^{2 }+4g^{2}}\] (A.4) When \[g\leq m_{1}m_{2}\] (A.5) the theory contains two non-interacting scalars with positive kinetic terms and with \(m_{\pm}^{2}\geq 0\). Of course if \(m_{2}>m_{1}\)\(\varphi_{2}\) is unstable to decay (convert) to \(\varphi_{1}\), but \(\varphi_{\pm}\) are stable. This is a typical example that will cause oscillations like in the case of neutrinos. \(\varphi_{\pm}\) are the eigenstates of the Hamiltonian. So we have in terms of one-particle states, for example \[|\varphi_{1}(p=0,t)\rangle=\cos\theta e^{im_{+}t}|\varphi_{+}\rangle+\sin \theta e^{im_{-}t}|\varphi_{-}\rangle\] (A.6) ### Eft We now assume \(m_{2}\gg m_{1}\) and we integrate out39\(\varphi_{2}\). The two equations of motion are Footnote 39: We do this by solving the classical equation of motion, but since the theory is Gaussian this is the same as performing the path integral over \(\varphi_{2}\). 
\[(\square+m_{1}^{2})\varphi_{1}=g\varphi_{2}\quad,\quad(\square+m_{2}^{2}) \varphi_{2}=g\varphi_{1}\] (A.7) We solve for \(\varphi_{2}\) \[\varphi_{2}=g(\square+m_{2}^{2})^{-1}\varphi_{1}\] (A.8) and substitute in the equation for \(\varphi_{1}\) \[(\square+m_{1}^{2})\varphi_{1}=g^{2}(\square+m_{2}^{2})^{-1}\varphi_{1}\] (A.9) which is obtained from the effective action \[L_{\rm eff}=-\frac{1}{2}\varphi_{1}\left[(\square+m_{1}^{2})-g^{2}(\square+m _{2}^{2})^{-1}\right]\varphi_{1}\] (A.10) \(L_{\rm eff}\) is completely equivalent to \(L\) and the effective propagator is \[D_{L^{\prime}}^{-1}=\frac{\square+m_{2}^{2}}{(\square+m_{1}^{2})(\square+m_{ 2}^{2})-g^{2}}=\frac{R_{+}}{\square+m_{+}^{2}}+\frac{R_{-}}{\square+m_{-}^{2}} =\langle\varphi_{1}\varphi_{1}\rangle\] (A.11) with \[R_{\pm}=\frac{\pm(m_{1}^{2}-m_{2}^{2})+\sqrt{(m_{1}^{2}-m_{2}^{2})^{2}+4g^{2}} }{2\sqrt{(m_{1}^{2}-m_{2}^{2})^{2}+4g^{2}}}\] (A.12) Both residues are positive. Similarly \[\frac{R_{-}}{\square+m_{+}^{2}}+\frac{R_{+}}{\square+m_{-}^{2}}=\langle\varphi _{2}\varphi_{2}\rangle\] (A.13) ### The IR expansion We shall now evaluate the EFT by taking a low-energy approximation to our integrating-out procedure. We expand (A.8) in the IR \[\varphi_{2}=\frac{g}{m_{2}^{2}}\left(1-\frac{\Box}{m_{2}^{2}}+\frac{\Box^{2}}{m_ {2}^{4}}+\cdots\right)\varphi_{1}\] (A.14) Substituting in (K.22) we obtain \[(\Box+m_{1}^{2})\varphi_{1}=\frac{g^{2}}{m_{2}^{2}}\left(1-\frac{\Box}{m_{2}^ {2}}+\frac{\Box^{2}}{m_{2}^{4}}+\cdots\right)\varphi_{1}\quad\Rightarrow\] (A.15) \[\Rightarrow\quad\left[m_{1}^{2}-\frac{g^{2}}{m_{2}^{2}}+\left(1+\frac{g^{2}}{ m_{2}^{4}}\right)\Box-\frac{g^{2}}{m_{2}^{6}}\Box^{2}+\mathcal{O}(\Box^{3}) \right]\varphi_{1}=0\] (A.16) and the relevant action is \[L_{IR}=-\frac{1}{2}\varphi_{1}\left[m_{1}^{2}-\frac{g^{2}}{m_{2}^{2}}+\left(1 +\frac{g^{2}}{m_{2}^{4}}\right)\Box-\frac{g^{2}}{m_{2}^{6}}\Box^{2}\right] \varphi_{1}\] (A.17) with propagator \[D_{IR}^{-1}=\frac{1}{m_{1}^{2}-\frac{g^{2}}{m_{2}^{2}}+\left(1+\frac{g^{2}}{m _{2}^{4}}\right)\Box-\frac{g^{2}}{m_{2}^{6}}\Box^{2}}=\frac{\tilde{R}_{-}}{ \Box+\tilde{m}_{-}^{2}}-\frac{\tilde{R}_{+}}{\Box+\tilde{m}_{+}^{2}}\] (A.18) with \[\tilde{m}_{\pm}^{2}=-\frac{m_{2}^{2}}{2g^{2}}\left[m_{2}^{4}+g^{2}\pm\sqrt{(m _{2}^{4}+g^{2})^{2}+4g^{2}(m_{1}^{2}m_{2}^{2}-g^{2})}\right]\] (A.19) \[\tilde{R}_{\pm}=\frac{m_{2}^{4}}{\sqrt{(m_{2}^{4}+g^{2})^{2}+4g^{2}(m_{1}^{2} m_{2}^{2}-g^{2})}}\] (A.20) Using \(m_{2}\ll m_{1}\) and (A.5) we can simplify the expressions above as \[\sqrt{(m_{2}^{4}+g^{2})^{2}+4g^{2}(m_{1}^{2}m_{2}^{2}-g^{2})}\simeq m_{2}^{4} +g^{2}+2g^{2}\frac{m_{1}^{2}}{m_{2}^{2}}\] (A.21) \[\tilde{m}_{-}^{2}\simeq m_{1}^{2}\;\;\;,\;\;\;\tilde{m}_{+}^{2}\simeq-\frac{ m_{2}^{6}}{g^{2}}\;\;\;,\;\;\;\tilde{R}_{\pm}\simeq\mp 1\] (A.22) The pole associated \(\tilde{m}_{-}\) has the correct positive residue and the correct position corresponding to the slightly corrected light state \(\varphi_{1}\), The extra pole at \(\tilde{m}_{+}\) is ghost-like and tachyonic with a mass scale \[|m_{+}^{2}|=\frac{m_{2}^{6}}{g^{2}}\geq\frac{m_{2}^{4}}{m_{1}^{2}}\geq m_{2}^ {2}\] (A.23) that is above the cutoff of the theory that is the mass of the heavier state. This simple example shows that integrating out degrees of freedom in an IR expansion of the EFT may generically create ghosts and tachyons, even if the underlying UV theory is perfectly healthy. However, these ghosts/tachyons always have masses above the cutoff scale. 
In conclusion, in the context of EFT, only unstable modes whose masses are parametrically smaller than the UV cut-off can be considered as giving rise to true instabilities of the theory. Conversely, one cannot reach any conclusion about the stability of the theory based on the presence of ghosts or tachyons whose mass is at or above the cut-off. ## Appendix B Renormalized action In this appendix we briefly review the results of [70] in \(d=4\) for the computation of divergent terms of the bulk action (13) evaluated on-shell. These divergences are then cancelled by the bare gravity action defined in (14). The Gibbons-Hawking term of \(S_{\text{bulk}}\) contains the extrinsic curvature which is defined by \[K=G^{ab}\nabla_{a}n_{b}, \tag{114}\] where \(n^{a}\) is the unit vector normal to the boundary which points to the exterior. The induced metric and normal vector \(n^{a}\) on the \(\rho=\epsilon\) boundary are given by \[\gamma_{\omega\sigma}(\epsilon,x)=\frac{1}{\epsilon}g_{\omega\sigma}(\epsilon,x) \tag{115}\] \[n^{a}=\left.\frac{\partial^{a}\rho}{\sqrt{G_{cd}\partial^{c}\rho\partial^{d} \rho}}\right|_{\rho=\epsilon}=\frac{2\epsilon}{L}\delta^{\rho a} \tag{116}\] The bulk action (13) evaluated on-shell can then be written in terms of \(g_{\alpha\beta}(\rho,x)\) as \[S_{\text{bulk}}=\frac{1}{16\pi G_{N}L}\int d^{d}x\left[\int_{\epsilon}^{+ \infty}d\rho\frac{d}{\rho^{\frac{d}{2}+1}}\sqrt{g}+\frac{1}{\rho^{\frac{d}{2 }}}\left(-2d\sqrt{g}+4\rho\partial_{\rho}\sqrt{g}\right)_{\rho=\epsilon}\right] \tag{117}\] This action can be written as a power series of \(\epsilon\) by inserting the expansion of the metric (21) into (117). Furthermore, the first few terms of (21) are obtained in terms of \(g_{\omega\sigma}^{(0)}\) by solving perturbatively the bulk Einstein field equation [70] \[L^{2}R_{ab}[G]+dG_{ab}=0. \tag{118}\] The linear term is then given by \[g_{\omega\sigma}^{(2)}=-\frac{L^{2}}{d-2}\left(R_{\omega\sigma}-\frac{R}{2(d- 1)}g_{\omega\sigma}^{(0)}\right), \tag{119}\] where, in our notation \(R_{\omega\sigma}\equiv R_{\omega\sigma}[g^{(0)}]\). However, only the trace and the divergence of \(g^{(4)}\) are constrained by the near-boundary reconstruction of the bulk. We shall obtain \(g^{(4)}\) and its perturbation starting from the \(AdS_{5}\) bulk in section 5. The log-term \(\hat{g}\) is given by \[\hat{g}_{\omega\sigma}=\frac{L^{4}}{16}\left\{2R_{\omega\kappa\sigma\lambda}R^ {\kappa\lambda}-\frac{1}{3}\nabla_{\omega}\nabla_{\sigma}R+\nabla^{2}R_{ \omega\sigma}-\frac{2}{3}RR_{\omega\sigma}+(\frac{1}{6}R^{2}-\frac{1}{6} \nabla^{2}R-\frac{1}{2}R_{\kappa\lambda}R^{\kappa\lambda})g^{(0)}_{\omega \sigma}\right\},\] (B.7) which is traceless. 
Inserting (2.21) into the bulk action (B.4) gives a power series in \(\epsilon\), given in \(d=4\) by \[S_{\rm bulk}=\frac{1}{16\pi G_{N}L}\int d^{4}x\sqrt{g^{(0)}}\left(\epsilon^{- 2}a_{(0)}+\epsilon^{-1}a_{(2)}-\log\epsilon a_{(4)}\right)+{\cal O}(\epsilon^ {0}),\] (B.8) where \[a_{(0)}=2(1-d)=-6,\] (B.9) \[a_{(2)}=\frac{(4-d)(d-1)}{d-2}{\rm Tr}g^{(2)}=0,\] (B.10) \[a_{(4)}=\frac{1}{2}(({\rm Tr}(g^{(2)}))^{2}-{\rm Tr}((g^{(2)})^{2}))=-\frac{L^ {4}}{8}\left(R^{\omega\sigma}R_{\omega\sigma}-\frac{1}{3}R^{2}\right).\] (B.11) The divergent piece of the bulk action \[S_{\rm bulk}=S_{\rm div}+{\cal O}(\epsilon^{0})\] (B.12) is then given by \[S_{\rm div}=\frac{1}{16\pi G_{N}L}\int d^{4}x\sqrt{g_{(0)}}\left\{-6\epsilon^{ -2}+\frac{1}{8}\log\epsilon\left(R^{\omega\sigma}R_{\omega\sigma}-\frac{1}{3} R^{2}\right)\right\}.\] (B.13) Inverting perturbatively series for \(\sqrt{g}\) and \(R_{\omega\sigma}[g]\) in powers of \(\epsilon\) allows one to express \(\sqrt{g_{(0)}}\) and \(R_{\omega\sigma}=R_{\omega\sigma}[g_{(0)}]\) covariantly in a power series of curvature tensors of the induced metric \(\gamma_{\omega\sigma}\). These useful formulae are given by \[\sqrt{g_{(0)}}=\epsilon^{2}\sqrt{\gamma}\left[1-\frac{\epsilon}{2}{\rm Tr}g_ {(2)}+\frac{\epsilon^{2}}{8}({\rm Tr}g_{(2)}^{2}+({\rm Tr}g_{(2)})^{2})+{ \cal O}(\epsilon^{3})\right],\] (B.14) \[R=\frac{1}{\epsilon}\left\{R[\gamma]-\frac{L^{2}}{2}\left(R^{\omega\sigma}[ \gamma]R_{\omega\sigma}[\gamma]-\frac{1}{6}R[\gamma]^{2}\right)+{\cal O}(R[ \gamma]^{3})\right\}.\] (B.15) Using these expansions into (B.13) allows us to obtain the covariant counterterms written in the main text (2.27). C Comparison with the Starobinsky model In this appendix, we relate our analysis to the Starobinksy \(R+R^{2}\) model of inflation which is one of the most favoured single-field inflationary models by CMB observations [50]. This model is obtained from the original Starobinsky model of anomaly-driven inflation without a cosmological constant [46, 47], by neglecting the non-local anomaly terms and keeping only the local \(R^{2}\) term. This can be justified when the coefficient \(\alpha\) of the \(R^{2}\) term dominates. In our setup, this amounts to ignoring the CFT contribution (setting \(N=0\)) as well as setting \(\beta=0\), and keeping only the \(\alpha R^{2}\) pure gravity term. Dropping the non-local terms pushes the de Sitter solution to infinite curvature: in equation (2.57) with \(\Lambda=0\), the de Sitter solution is the non-trivial one with \(\bar{R}=48\pi/GN^{2}\), and in the limit \(N\to 0\) the curvature diverges. However, by writing the model as a scalar-tensor theory and performing a Weyl transformation to the Einstein frame, one obtains a single-field inflationary model with a quasi-de Sitter solution with a finite Hubble parameter. The action for the simplified \(R^{2}\) Starobinsky model is \[S=-\int d^{d}x\sqrt{-g}\left\{\frac{R}{16\pi G}-\hat{\alpha}R^{2}\right\}.\] (C.1) Identifying this action to the \(R^{2}\) action of our model (2.4) gives the relation between \(\hat{\alpha}\) and \(\alpha\): \[\hat{\alpha}=\frac{\alpha}{384\pi}\] (C.2) The favoured observational value is \[\alpha\approx-5.95\times 10^{11}\;,\] (C.3) obtained from the amplitude of the power spectrum of primordial curvature fluctuations [84]. As mentioned above, the model (C.1) corresponds to the \(H\to\infty\) limit of our analysis. 
We can still compare our results with the full model (including the CFT) [47, 46] with the same value of \(\alpha\) as the one favoured by data. In this case, the curvature is fixed to \[GN^{2}H^{2}=4\pi.\] (C.4) The large \(R^{2}\) term (2.4) makes the scalaron \(\psi\) light and tachyonic as we will see below. Notice that this model, due to (C.4), falls outside of the regime of effective field theory. Scalar sector in Starobinsky inflationWe first discuss the scalar mode (scalaron), which is the one that, in the pure \(R+R^{2}\) model, can be identified with the inflaton and in the presence of the CFT makes de Sitter unstable. Indeed, by inserting (C.4) into the condition (4.25), we conclude that \(\alpha<0\) is in the tachyonic regime. The characteristic decay rate \(\Gamma\) of the scalaron instability can be read off by substituting (C.4) into (4.26): \[\Gamma=H\left[-\frac{3}{2}+\sqrt{\frac{9}{4}-\frac{1}{\tilde{\alpha}}}\right],\] (C.5) where \(\tilde{\alpha}=\frac{\pi\alpha}{N^{2}}\), which agrees with the value found in [47] close to the de Sitter solution. For a long-lived de Sitter, we need \(|\tilde{\alpha}|\sim|\alpha|/N\gg 1\), which also implies \(|\alpha|\gg 1\). This model then matches qualitatively the features of the pure \(R+R^{2}\) model, with an unstable de Sitter replaced by a slowly-rolling FRW spacetime. Tensor sector in Starobinsky inflationAs explained above, our more general setup can retrieve the \(R+R^{2}\) model (C.1) by setting \(N=\beta=0\). In this case, the only propagating tensor mode is the massless graviton \(\nu=3/2\) as one can see from equation (6.4) applied to \(\beta=0\). Therefore, there is no ghost or tachyonic spin-2 mode in the Starobinsky model. We now turn on \(\beta\neq 0\) while keeping \(|\tilde{\alpha}|\gg 1\). Now the tensor sector acquires an additional propagating mode. In such a regime, the \(R^{2}\) term of the action dominates over the CFT. Therefore, the spin-2 propagator with the CFT (5.39) can be approximated by the pure (modified) gravity propagator (6.4). This propagator contains the usual massless pole and a massive one. The massless pole is a ghost if \[\frac{1}{2\pi}-2\tilde{\alpha}+\tilde{\beta}<0,\quad\Rightarrow\quad\text{ massless ghost}\] (C.6) otherwise, the massive pole is a ghost. The second case holds for large and negative \(\tilde{\alpha}\) and generic values of \(\tilde{\beta}\). In addition to ghost-like instabilities, the massive pole is a tachyon if \[\frac{2}{\tilde{\beta}}\left(\tilde{\alpha}-\frac{1}{4}\right)<1.\quad \Rightarrow\quad\text{tachyonic}\] (C.7) Thus, large and negative \(\tilde{\alpha}\) are associated with tachyonic spin-2 perturbations for positive and generic values of \(\tilde{\beta}\). However, the massive mode lies below the species scale when \(|\tilde{\beta}|\gg|\tilde{\alpha}|\gg 1\). If \(\beta\) is positive, the spin-2 pole is tachyonic and ghost-like. If \(\beta\) is negative, the spin-2 pole is only ghost-like. If we decide to take the CFT contribution into account, one must refer to Figure 17 instead of equation (C.7). This figure shows which sector (scalar or tensor) represents the strongest tachyonic instability. The vertical red line is for \(\Lambda=0\) as is the case in Starobinsky's model. This figure shows that negative \(\tilde{\alpha}\) are associated with scalar tachyonic instability, which is convenient for an inflationary scenario. However, small curvatures can be associated with tensor tachyonic instabilities dominating the usual scalaron. 
## Appendix D AdS slicing coordinates In this appendix, we describe the AdS\({}_{d+1}\) metric in AdS\({}_{d}\) slice coordinates. Lorentzian \(AdS_{d+1}\) is the hyperboloid \[\eta_{AB}X^{A}X^{B}=-L^{2}.\] (D.1) where \(A,B=-1,...,d+1\) and \(\eta_{AB}=\text{diag}(-1,-1,1,...,1)\). global \(AdS_{d+1}\) coordinates are obtained by choosing \[X^{-1}=L\cos t\cosh\rho,\] (D.2a) \[X^{0}=-L\sin t\cosh\rho,\] (D.2b) \[X^{\mu}=L\Omega^{\mu}\sinh\rho,\] (D.2c) where \(\mu=1,...,d\) and \[\delta_{\mu\nu}\Omega^{\mu}\Omega^{\nu}=1\] (D.3) AdS slicing is obtained by choosing \(u\) as a radial coordinate crossing Lorentzian \(AdS_{d}\) slices. Global coordinates can be chosen to describe the \(d-\)dimensional slice. The \(AdS\) slicing coordinates are then given by \[X^{-1}=L\cosh u\cos\tau\cosh r,\] (D.4a) \[X^{0}=-L\cosh u\sin\tau\cosh r,\] (D.4b) \[X^{i}=Ln^{i}\cosh u\sinh r,\] (D.4c) \[X^{d}=L\sinh u,\] (D.4d) where \(i,j=1,...,d-1\) and \[\delta^{ij}n^{i}n^{j}=1.\] (D.5) Using this coordinate system, we can reach the infinity of the embedding space \(X^{A}\) either by taking \(u\rightarrow\pm\infty\) or \(r\rightarrow+\infty\). Therefore, the boundary of the hyperboloid has two pieces (both infinities for \(u\)) which are connected by the common boundary \(r\rightarrow+\infty\) of the slice \(AdS_{4}\). To obtain a map between these two coordinate systems, we first rewrite \((X^{\mu})^{2}\) using both (D.2) and (D.4). It gives the relation \[\sinh^{2}\rho=\cosh^{2}u\sinh^{2}r+\sinh^{2}u.\] (D.6) This can be rewritten as \[\sinh\rho=\left(\cosh^{2}u\cosh^{2}r-1\right)^{\frac{1}{2}}.\] (D.7) For all \(u\) and \(r\) we have \(\cosh u\cosh r>1\) (global AdS is ill-defined when \(\rho=0\)), so we can use the formula \(\sqrt{x^{2}-1}=\sinh(\text{Arccosh}x)\). Therefore we obtain \[t=\tau\] (D.8a) \[\cosh\rho=\cosh u\cosh r.\] (D.8b) \[\Omega^{d}=\sinh u(\cosh^{2}u\cosh^{2}r-1)^{-\frac{1}{2}}\] (D.8c) \[\Omega^{i}=\cosh u\sinh r(\cosh^{2}u\cosh^{2}r-1)^{-\frac{1}{2}}n^{i}\] (D.8d) The inverse transformation can be obtained using \[\cosh r=\frac{\cosh\rho}{\cosh u}\] (D.9) and the expression for \(X^{d}\) in the two sets of coordinates which gives \[\sinh u=\Omega^{d}\sinh\rho,\] (D.10) \[\cosh u=\sqrt{1+(\Omega^{d}\sinh\rho)^{2}},\] (D.11) so that we can replace each \(u\) and \(r\) into every equation in (D.4). The transformation from AdS slicing to global AdS is then \[\tau=t\] (D.12a) \[u=\text{Arcsinh}\left(\Omega^{d}\sinh\rho\right),\] (D.12b) \[\cosh r=\cosh\rho\left(1+(\Omega^{d}\sinh\rho)^{2}\right)^{-\frac{1}{2}}\] (D.12c) \[n^{i}=\Omega^{i}\left(1-(\Omega^{d})^{2}\right)^{-\frac{1}{2}}\] (D.12d) One can easily check that \(\delta_{ij}n^{i}n^{j}=1\) is still true using (D.3). Indeed, \[\delta_{ij}n^{i}n^{j}=\frac{\delta_{ij}\Omega^{i}\Omega^{j}}{1-(\Omega^{d})^{2 }}=\frac{\delta_{\mu\nu}\Omega^{\mu}\Omega^{\nu}-(\Omega^{d})^{2}}{1-(\Omega^ {d})^{2}}=1.\] (D.13) We can also choose a parametrisation of the \((d-1)\)-sphere \(\Omega^{\mu}\) and specifically pick \(\Omega^{d}\) to be the polar axis (so that it is parametrized by only one angle \(\theta\in[0,\pi]\)): \[\begin{array}{rl}\Omega^{d}&=\cos\theta,\\ \Omega^{1}&=\sin\theta\cos\varphi_{1},\\ &\vdots\\ \Omega^{i}&=\sin\theta\sin\varphi_{1}...\sin\varphi_{i-1}\cos\varphi_{i},\\ &\vdots\\ \Omega^{d-1}&=\sin\theta\sin\varphi_{1}...\sin(\varphi_{d-2}).\end{array}\] (D.14) where \(\varphi_{i}\in[0,2\pi[\). 
Now the change of coordinates from AdS slicing to global AdS is written as \[\tau=t \tag{115a}\] \[u=\text{Arcsinh}\left(\cos\theta\sinh\rho\right),\] (115b) \[\cosh r=\cosh\rho\left(1+(\cos\theta\sinh\rho)^{2}\right)^{-\frac{1}{2}}\] (115c) \[n^{i}=\frac{\Omega^{i}}{\sin\theta} \tag{115d}\] The induced metric in AdS slicing is given by \[ds^{2}=L^{2}\left\{du^{2}+(\cosh u)^{2}ds_{d}^{2}\right\}=G_{ab}dx^{a}dx^{b}, \tag{116}\] where \(ds_{d}^{2}\) is the metric of unit \(AdS_{d}\). ## Appendix E Schrodinger problem in the bulk In this appendix, we write the bulk radial equation for the spin-2 perturbation (102) as a Schrodinger equation for each slicing (flat, de Sitter and anti-de Sitter). This procedure gives a physical interpretation for bulk solutions with different values of the slice momentum \(\nu\) (in dS and AdS) and \(k\) (in Minkowski) which is identified to the energy of this Schrodinger problem. Moreover, the Schrodinger problem provides a norm which we can use to check the normalizability of the solutions. In particular, we want to check the normalizability of solutions near the horizon \(u=0\) in de Sitter slicing coordinates. The easiest way to write the bulk equation for the spin-2 modes (102) as a Schrodinger equation is to write the background metric (46) \[ds_{d+1}^{2}=L^{2}du^{2}+a^{2}\bar{\zeta}_{\omega\sigma}dx^{\omega}dx^{\sigma} \tag{117}\] into conformal coordinates \[=a^{2}\left[\ell^{2}dr^{2}+\bar{\zeta}_{\omega\sigma}dx^{\omega}dx^{\sigma} \right], \tag{118}\] where \(\ell\) is the radius of the slice metric \(\bar{\zeta}_{\omega\sigma}\) which we write here in order to keep \(r\) dimensionless like \(u\). To find such a coordinate \(r\), we need to solve \[\ell^{2}dr^{2}=L^{2}a^{-2}du^{2}, \tag{119}\] where the conformal factors we consider are \(a=L\chi\cosh u\) (50) in AdS slicing, \(a=LH\sinh u\) (47) in dS slicing and \(a=e^{-u}\) (48) in flat slicing. We just need to find the appropriate conformal coordinate \(r\), compute \(a(r)\) and transform the bulk equation of motion (101) into a Schrodinger equation in this new coordinate. #### AdS slicing For AdS slicing (2.50), \(\ell=\chi^{-1}\), we find \[\tan\left(\frac{r}{2}\right)=\tanh\left(\frac{u}{2}\right).\] (E.4) The conformal boundary of AdS\({}_{5}\) located at \(u\to\pm\infty\) corresponds to \(r=\pm\pi\). From (E.4), we obtain \[\left\{\begin{array}{lll}\sinh u&=&\tan r\\ \cosh u&=&\frac{1}{\cos r}.\end{array}\right.\] (E.5) Using (E.5), it is then possible to write equation (5.43) in terms of the conformal coordinate \(r\). The equation is \[\left\{\partial_{r}^{2}+(d-1)\tan r\partial_{r}+\nu^{2}-\left(\frac{d-1}{2} \right)^{2}\right\}F=0.\] (E.6) Defining the rescaled field \(\Psi\) as \[\Psi=a^{\frac{d-1}{2}}F,\] (E.7) the equation becomes a Schrodinger problem \[\left\{-\frac{d^{2}}{dr^{2}}+V(r)\right\}\Psi=E\Psi,\] (E.8) where \[V(r)\equiv\left(\frac{d-1}{2}\right)\left[1+\left(\frac{d+1}{2}\right)\tan^{ 2}r\right],\] (E.9) \[E\equiv-\nu^{2}+\left(\frac{d-1}{2}\right)^{2}\] (E.10) #### dS slicing For dS slicing (2.48), \(\ell=H^{-1}\) and the conformal coordinate \(r\) is a solution of (E.3), which for positive \(u\) is given by \[e^{r}=\tanh\left(\frac{u}{2}\right).\] (E.11) The limit \(u\to+\infty\) which goes to the conformal boundary of AdS\({}_{5}\) then corresponds to taking \(r\to+\infty\). The horizon at \(u=0\) then corresponds to \(r=-\infty\). The following steps are identical to the ones done in AdS-slicing. 
Instead of (E.5), we have the following relations \[\left\{\begin{array}{lll}\sinh u&=&\frac{1}{\sinh r}\\ \cosh u&=&-\coth r.\end{array}\right.\] (E.12) The bulk radial equation of motion for spin-2 perturbations (5.27) is then written in terms of the conformal coordinate \(r\) as \[\left\{\partial_{r}^{2}-(d-1)\coth r\partial_{r}-\nu^{2}+\left(\frac{d-1}{2} \right)^{2}\right\}F=0.\] (E.13) Using the same redefinition as in (E.7) with \(a=LH\sinh u\), we find the Schrodinger equation (E.9,E.10) satisfied by \(\Psi=a^{\frac{d-1}{2}}F\). The potential and energy are respectively given by \[V(r)\equiv\left(\frac{d-1}{2}\right)\left[-1+\left(\frac{d+1}{2}\right)\coth^ {2}r\right],\] (E.14) \[E\equiv\nu^{2}-\left(\frac{d-1}{2}\right)^{2}.\] (E.15) The scalar product associated with this Schrodinger problem is defined as \[\langle f,g\rangle=\int_{\mathbb{R}_{-}^{*}}drf^{*}(r)g(r).\] (E.16) The two linearly independent solutions obtained in (5.29) should be normalizable according to the norm associated with the scalar product (E.16). The asymptotic behaviour of the Schrodinger field near \(u=0\) for \(d=4\) is given by \[\Psi(u)=(\sinh u)^{\frac{3}{2}}F(u)\underset{u\to 0}{\sim}u^{\pm\nu}.\] (E.17) The normalizable condition near the horizon at \(u=0\) for the Schrodinger field \(\Psi\) for \(d=4\) is then \[|\Psi|^{2}=\langle\Psi(r,\nu),\Psi(r,\nu)\rangle=\int_{\frac{-\pi}{2}}^{\frac {\pi}{2}}dr|\Psi(r)|^{2}\sim\int_{0}\frac{du}{u}u^{\pm\nu}<\infty.\] (E.18) This integral converges on the horizon \(u=0\) if \(\text{Re}(\nu)>0\) for the "\(C_{-}\)" solution in (5.29), which has a negative sign exponent in (E.18). Conversely, it converges if \(\text{Re}(\nu)<0\) for the "\(C_{+}\)" solution. In conclusion, the sign of \(\text{Re}(\nu)\) determines which solution \(C_{\pm}\) is normalizable. In section 5.2, we decide to take a positive real part of \(\nu\) and therefore need to choose \(C_{-}=0\) which is not normalizable because the norm (E.18) diverges near the horizon \(u=0\). ## Appendix F Flat space tachyonic time scale In this appendix, we study the time dependence of a tachyonic perturbation using the formalism of Green functions. We show that a tachyonic pole of the Minkowski propagator is associated with a runaway in the retarded Green function of the perturbation. Both the scalar (4.18) and the tensor (5.6) perturbations are decomposed into eigenmodes of the Minkowski Laplacian operator \(\partial^{2}\). Then, a single mode \(\varphi\) perturbation associated with the eigenvalue \(k^{2}\) is a solution of the Klein-Gordon equation \[(\partial^{2}-m^{2})\varphi(x)=0,\] (F.1) where \(x\) stands for the 4-dimensional coordinate vector \(x\equiv(t,\mathbf{x})\). Equation (F.1) can be separated into 3 different cases. First, we study the case \(m^{2}>0\), which corresponds to the usual Klein-Gordon equation for a positive mass squared. Second, we study the \(m^{2}<0\) case. Finally, we study the case where \(m^{2}\) is complex but away from the real axis. We shall observe that the retarded Green function contains a runaway in the two last cases, and obtain the characteristic time of this runaway. 
The spectral equation for (F.1) is obtained by performing a Fourier transform over the four space-time coordinates: \[\varphi(t,\mathbf{x})=\int d^{4}xe^{-ik.x}\tilde{f}(\omega,\mathbf{k})=\int dtd^{3}\bm {x}e^{-i\mathbf{k}.\mathbf{x}+i\omega t}\tilde{\varphi}(\omega,\mathbf{k}).\] (F.2) The spectral equation is then \[(\omega^{2}-\mathbf{k}^{2}-m^{2})\tilde{\varphi}=0\] (F.3) The most general solution of (F.3) is given by \[\tilde{\varphi}=\alpha(\mathbf{k})\delta(\omega-E_{k})+\beta(\mathbf{k})\delta(\omega +E_{k}),\] (F.4) where we have defined \(E_{k}\) as one of the square roots of \[E_{k}^{2}\equiv\mathbf{k}^{2}+m^{2}.\] (F.5) We can choose arbitrarily one of the two square roots. Taking one or the other would simply exchange \(\alpha(\mathbf{k})\) and \(\beta(\mathbf{k})\) in the solution (F.4). We specify which square root is chosen for each following subsection (\(m^{2}\) positive, negative or complex). The Green function \(G\) associated to equation (F.1) is defined as \[(\partial^{2}-m^{2})G(x)=\delta(x)\] (F.6) The most general solution for \(G\) is then given by \[G(x)=\varphi(x)+G_{p}(x),\] (F.7) where \(\delta(x)\) is a Dirac distribution centered at \(x=0\), \(\varphi\) is the homogeneous solution (F.4) and \(G_{p}\) is a particular solution. This particular solution can be obtained via the inverse 4-dimensional Fourier transform of (F.6). The result is \[G_{p}(x)=\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}e^{-i\mathbf{k}.\mathbf{x}}\int\frac{d \omega}{2\pi}e^{i\omega t}\frac{1}{\omega^{2}-E_{k}^{2}}.\] (F.8) #### Positive mass squared In that case, \(E_{k}\) is real, and we choose it to be the positive square root of (111). The integral over \(\omega\) is evaluated using the residue theorem, and the contour can be chosen arbitrarily (one can choose either the retarded, advanced or the Feynman prescription). For example, the retarded prescription circles the two poles \(E_{k}=\pm\omega\) for positive \(t\). The result is then \[G_{p}^{\rm R}(t>0,\mathbf{x})=-\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}e^{-i\mathbf{k}.\mathbf{x} }\frac{\sin(E_{k}t)}{E_{k}}. \tag{112}\] This is the usual retarded Green function of the Klein-Gordon operator. Its time dependence appears only in a sine function and therefore does not contain a runaway. Another prescription would have given another combination of complex exponentials with positive and negative signs. Therefore, all the different prescriptions are safe. For example, the Feynman prescription would have given \[G_{p}^{F}(x)=i\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}e^{-i\mathbf{k}.\mathbf{x}}\frac{e^{iE _{k}|t|}}{2E_{k}}. \tag{113}\] #### Negative mass squared \(m^{2}=-\mu^{2}\) In that case, it is necessary to separate the values of \(|\mathbf{k}|\) into two distinct regimes. * When \(|\mathbf{k}|\geq 0\), then \(E_{k}\) is real and we choose the positive square root of (111) as we did in the last subsection. * When \(|\mathbf{k}|<0\), then \(E_{k}\) is purely imaginary, and we choose the positive imaginary square root of (111). We can then write the integral (110) as a sum of two integrals : \[G_{p}(x)=\left[\int_{|\mathbf{k}|<\mu}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}+\int_{|\mathbf{k }|\geq\mu}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\right]e^{-i\mathbf{k}.\mathbf{x}}\int\frac{d \omega}{2\pi}e^{i\omega t}\frac{1}{\omega^{2}-E_{k}^{2}} \tag{114}\] \[\equiv G_{1}(x)+G_{2}(x) \tag{115}\] The first term of \(G_{p}(x)\), for which \(E_{k}\) is imaginary, is called \(G_{1}\). The second term, for which \(E_{k}\) is real, is called \(G_{2}\). 
The \(G_{2}\) integral has poles on the real axis of \(\omega\). The prescription for the contour can be chosen similarly as in the previous case with positive mass squared. In particular, the retarded prescription gives (110) with a UV cutoff at \(|\mathbf{k}|=\mu\). \(G_{1}\), however, has poles on the imaginary axis of \(\omega\). Therefore, the path along the real axis does not encounter any poles. As a consequence, the contour prescription is fixed. The residue theorem then gives \[G_{1}(x)=i\int_{|\mathbf{k}|<\mu}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}e^{-i\mathbf{k}.\mathbf{x} }\frac{e^{iE_{k}|t|}}{2E_{k}}. \tag{116}\] In this case, we already observe the absence of a runaway, because \(iE_{k}<0\), so the integrand of (110) decreases exponentially with time. However, \(G_{1}\) breaks the causality of \(G_{p}\) because it is not zero for negative times, and this cannot be cancelled by \(G_{2}\). Following the same idea as in [85], we remark that an acausality can be traded with a runaway by changing the prescription of the Green function. To retrieve causality and build a retarded propagator, we now add the homogeneous solution \(\varphi\) (109) to \(G_{p}\). First, we choose the Feynman prescription for \(G_{2}\) (106), such that the integrand of \(G_{2}\) coincides with the one of (110) at \(|\mathbf{k}|=k\), where \(E_{k}=0\). The most general Green function (108) is then given by \[G(x)=\left[\int_{|\mathbf{k}|<\mu}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}+\int_{|\mathbf{k}| \geq\mu}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\right]e^{-i\mathbf{k}.\mathbf{x}}\left[i\frac{ e^{iE_{k}|t|}}{2E_{k}}+\alpha(\mathbf{k})e^{iE_{k}t}+\beta(\mathbf{k})e^{-iE_{k}t}\right]\] \[=\int_{\mathbb{R}^{3}}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}e^{-i\mathbf{k}.\mathbf{x}}\left[i \frac{e^{iE_{k}|t|}}{2E_{k}}+\alpha(\mathbf{k})e^{iE_{k}t}+\beta(\mathbf{k})e^{-iE_{k }t}\right] \tag{111}\] The retarded Green function corresponding to \(G^{\rm R}(t<0)=0\) is obtained by setting \[\alpha(\mathbf{k})=0, \tag{112a}\] \[\beta(\mathbf{k})=-\frac{i}{2E_{k}}. \tag{112b}\] The result for positive \(t\) is given by \[G^{\rm R}(t>0,\mathbf{x})=-\int_{\mathbb{R}^{3}}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}e^{ -i\mathbf{k}.\mathbf{x}}\frac{\sin(E_{k}t)}{E_{k}}. \tag{113}\] Again, this integral can be written as a sum of two integrals as \(G^{\rm R}=G^{\rm R}_{1}+G^{\rm R}_{2}\), where \(G^{\rm R}_{2}\) is the same as (107) with a UV cutoff at \(|\mathbf{k}|=k\), and \(G^{\rm R}_{1}\) is different because \(E_{k}=i\sqrt{\mu^{2}-\mathbf{k}^{2}}\) is purely imaginary. More explicitly, \[G^{\rm R}_{1}(t>0,\mathbf{x})=-\int_{|\mathbf{k}|<\mu}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}e^ {-i\mathbf{k}.\mathbf{x}}\frac{\sin(E_{k}t)}{E_{k}}. \tag{114}\] Since the integrand only depends on the modulus of \(\mathbf{k}\), we define \(k=|\mathbf{k}|\) and \(r\equiv|\mathbf{x}|\) as new variables for our integral which is now written as \[G^{\rm R}_{1}(t>0,r)=-\frac{1}{4\pi^{2}}\int_{0}^{\mu}dkk^{2}\frac{\sin sr}{ sr}\frac{\sinh(t\sqrt{\mu^{2}-k^{2}})}{\sqrt{k^{2}-s^{2}}}. \tag{115}\] For \(r=0\), we have \[G^{\rm R}_{1}(t>0,r=0)=-\frac{1}{4\pi^{2}}\int_{0}^{\mu}dkk^{2}\frac{\sinh(t \sqrt{\mu^{2}-k^{2}})}{\sqrt{\mu^{2}-k^{2}}}\] \[=-\frac{1}{4\pi^{2}}\int_{0}^{\mu}dE_{k}\sqrt{E_{k}^{2}-k^{2}}\sinh(E_{k}t) \tag{111}\] For small times, it is easy to expand the \(\sinh\) in (111) and integrate over \(E_{k}\). 
The result is \[G_{1}^{\rm R}(t\sim 0^{+},r=0)=-\frac{\mu^{2}}{16\pi}t+\mathcal{O}(t^{2}) \tag{112}\] The large-time asymptotic behaviour of (111) is obtained by remarking that the integral is proportional to the Struve function denoted \(L_{1}(\mu t)\). As a result, \[G_{1}^{\rm R}(t>0,r=0)=-\frac{1}{8\pi}\frac{\mu}{t}L_{1}(\mu t). \tag{113}\] This function behaves as the modified Bessel \(K_{1}\) for large arguments. Therefore, we have \[G_{1}^{\rm R}(t>0,r=0)\underset{\mu t\to+\infty}{\propto}\frac{e^{\mu t}}{( \mu t)^{3/2}} \tag{114}\] This diverges exponentially with time, where \(\mu\) is the inverse time scale. #### Complex \(m^{2}\) In this subsection, we assume \(m^{2}\) is complex, and we write its real and imaginary parts as \[m^{2}=a+ib \tag{115}\] First, we relate \(E_{k}\) (110) to the real and imaginary parts of \(m^{2}\). We define the real and imaginary parts of \(E_{k}\) as \(A\) and \(B\) respectively. We find that \[A=\pm\frac{1}{2}\left(\tilde{a}+\sqrt{\tilde{a}^{2}+b^{2}}\right), \tag{116}\] \[B=\frac{b}{2A}, \tag{117}\] where \[\tilde{a}\equiv a+\mathbf{k}^{2}. \tag{118}\] As in the previous cases, we choose arbitrarily one of the two square roots for \(E_{k}\). We pick the sign in (116) such that \(B>0\). This is always possible since the case \(b=0\) was already studied in the previous two subsections. Therefore, the sign to pick in (116) should be the same sign as \(b\) such that we have \(B>0\). The two poles \(\pm E_{k}\) of \(G_{p}\) (110) are now located on the complex plane, away from the real axis. Therefore, the contour prescription is fixed, as in the negative mass squared case. We then obtain the particular solution \[G_{p}(x)=i\int_{\mathbb{R}^{3}}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}e^{-i\mathbf{k}.\mathbf{x }}\frac{e^{iE_{k}|t|}}{2E_{k}}. \tag{119}\] This does not contain a runaway since \(\text{Im}(E_{k})>0\). However, the retarded Green function is obtained the same way as in (F.14), by adding the homogenous solution to \(G_{p}\). By fixing \(\alpha\) and \(\beta\) to the same values as in (F.15), we obtain the retarded Green function \[G^{\text{R}}(t>0,\mathbf{x})=-\int_{\mathbb{R}^{3}}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}e^ {-i\mathbf{k}.\mathbf{x}}\frac{\sin(E_{k}t)}{E_{k}},\] (F.28) and \(G^{\text{R}}(t\leq 0)=0\). This retarded Green function now contains a runaway, coming from the imaginary part of \(E_{k}\). We would need to perform the integral over \(\mathbf{k}\) to obtain the exact time dependence of this runaway. However, we can observe directly that the strongest runaway for large times will come from the largest imaginary part of \(E_{k}\), \(B\), which was chosen to be positive. In order to maximize \(B\) (F.25) for a fixed \(a\) and \(b\), the only possibility is to minimize \(\tilde{a}\), and therefore take \(\mathbf{k}^{2}=0\). Therefore, the strongest runaway comes from the homogeneous mode \(\mathbf{k}=\mathbf{0}\). In this case, \(E_{k}^{2}=m^{2}\). \(\sin(E_{k}t)\) diverges exponentially with time, and the inverse time scale is given by \[B=\frac{|b|}{a+\sqrt{a^{2}+b^{2}}}.\] (F.29) Figure 2 shows the inverse time scale of the runaway corresponding to the worst possible mode \(\mathbf{k}=\mathbf{0}\), as a function of the parameters of the theory in flat space. The inverse time scale corresponds to the real part of \(k\). 
Indeed, for \(\mathbf{k}^{2}=0\), we have \[E_{k}=\pm m.\] (F.30) If \(k\) is the square root of \(k^{2}\) with the positive real part, then we have to take the \(+\) branch of (F.30) and it follows that \[B=\text{Im}(m).\] (F.31) In the main text, we are searching for poles of the propagator for the Laplacian eigenvalue defined by (5.6), where \(k^{2}\equiv-m^{2}\). Therefore, the strength of a tachyonic pole where \(k^{2}>0\) is then given by \[B=\text{Re}(k).\] (F.32) ## Appendix G Tachyonic tensor eigenmodes in de Sitter In this appendix, we determine which tensor perturbations are tachyonic. Since these perturbations are decomposed into eigenmodes of the covariant de Sitter Laplacian, we study the tachyon nature of the spin-two perturbation for a single eigenvalue \(\nu\in\mathbb{C}\) defined in (5.26). We then obtain a criterion on the value of \(\nu\). The perturbed expanding Poincare coordinates of \(d\)-dimensional de Sitter are given by \[ds_{dS}^{2}=g_{\omega\sigma}^{(0)}dx^{\omega}dx^{\sigma}=\left[(H\tau)^{-2} \eta_{\omega\sigma}+\delta\zeta_{\omega\sigma}^{b}\right]dx^{\alpha}dx^{ \beta}.\] (G.1) The equation of motion for metric perturbations follows from the linearization of the Einstein equation (2.6). One can substitute the cosmological constant using the background equation (2.57). The Einstein equation is then a sum of linear curvature terms, quadratic curvature terms and matter content in the stress tensor. The Lichnerowicz operator \(L[\delta\zeta^{b}]_{\omega\sigma}\) is defined by the variation of the linear curvature terms with respect to \(\delta\zeta^{b}_{\omega\sigma}\) by \[L[\delta\zeta^{b}]_{\omega\sigma}=\delta\left(R_{\omega\sigma}-\frac{1}{2}Rg^{ (0)}_{\omega\sigma}-H^{2}\frac{d(d-1)}{2}g^{(0)}_{\omega\sigma}\right).\] (G.2) If we restrict to the transverse traceless perturbations \(h^{(0)}_{\omega\sigma}\) (4.4), then it takes the very simple form \[L[h^{(0)}]_{\omega\sigma}=\left(H^{2}-\frac{1}{2}\nabla^{2}\right)h_{\omega \sigma},\] (G.3) where we recognize the last term of (5.24). The eigenvalue problem (5.26) for \(h_{(0)\alpha\beta}\) is then just written in terms of the Lichnerowicz operator as \[-2H^{-2}L[h^{(0)}]_{\omega\sigma}=-\left(\nu^{2}-\frac{(d-1)^{2}}{4}\right)h^ {(0)}_{\omega\sigma}.\] (G.4) In order to write this equation as a set of scalar equations acting on each component of \(h^{(0)}_{\omega\sigma}\), we define the new metric perturbation \(\gamma_{\omega\sigma}=(H\tau)^{2}h^{(0)}_{\omega\sigma}\) such that the metric (G.1) is now written \[ds^{2}_{dS}=(H\tau)^{-2}(\eta_{\omega\sigma}+\gamma_{\omega\sigma})dx^{\omega }dx^{\sigma}.\] (G.5) We now define a differential operator acting on the new metric perturbation \(\gamma_{\alpha\beta}\) as \[D[\gamma]_{\omega\sigma}\equiv-2\tau^{2}L\left[\frac{\gamma}{\tau^{2}}\right] _{\omega\sigma}\] (G.6) Equation (G.4) is then written as \[D[\gamma]_{\omega\sigma}=-\left(\nu^{2}-\frac{(d-1)^{2}}{4}\right)\gamma_{ \omega\sigma}.\] (G.7) We now try to write an explicit formula for \(D[\gamma]\) in terms of \(\gamma\). A direct computation using Poincare coordinates (G.1) gives \[D[\gamma]_{\omega\sigma}=H^{-2}\Box\gamma_{\omega\sigma}-2d\gamma_{0(\omega} \delta^{0}_{\sigma)}+4\tau\partial_{(\omega}\gamma_{\sigma)0}+2\eta_{\omega \sigma}\gamma_{00},\] (G.8) where \(\Box\) is the de Sitter Laplacian acting on scalars. Derivatives with respect to \(\tau\) are now labelled with the index \(0\). 
The expression (G.8) shows us that different components of \(\gamma_{00}\) are coupled to each other in equation (G.7). The strategy is now to further decompose (G.7) into 3 equations; a scalar equation for \(\gamma_{00}\), a vector equation for \(\gamma_{0i}\) and a tensor equation for \(\gamma_{ij}\)40. This splitting is done by using Footnote 40: Index \(i\) and \(j\) will refer to the spatial coordinates of the \(d\)-dimensional metric. \[D[\gamma]_{00}=\left\{H^{-2}\Box-2(d+1)+4\tau\partial_{0}\right\}\gamma_{00},\] (G.9a) \[D[\gamma]_{0i}=\left\{H^{-2}\Box-d+2\tau\partial_{0}\right\} \gamma_{0i}+2\tau\partial_{i}\gamma_{00},\] (G.9b) \[D[\gamma]_{ij}=H^{-2}\Box\gamma_{ij}+4\tau\partial_{(i}\gamma_{j)0}+2\delta_ {ij}\gamma_{00}.\] (G.9c) These three equations are still coupled because \(\gamma_{00}\) contributes to \(D[\gamma]_{0i}\), and \(\gamma_{0\alpha}\) contributes to \(D[\gamma]_{ij}\). Furthermore, these components are also related to each other through the transverse-traceless property of \(h_{\omega\sigma}\). Indeed, the tracelessness condition (3.8) gives \[\eta^{\omega\sigma}\gamma_{\omega\sigma}=0,\] (G.10) and transversality conditions (3.9) give \[\gamma_{0\omega}+\frac{\tau}{d}\partial^{\sigma}\gamma_{\sigma\omega}=0.\] (G.11) After taking these constraints into account, \(\gamma_{\omega\sigma}\) only propagates 5 degrees of freedom. We now solve equations (G.9). To do that, we first give the scalar Laplacian of the de Sitter background (G.1) as \[H^{-2}\Box=\tau^{2}(-\partial_{0}^{2}+\partial_{i}^{2})+\tau(d-2)\partial_{0}.\] (G.12) The first step is to diagonalize the \((d-1)\)-dimensional euclidean Laplacian \(\delta^{ij}\partial_{i}\partial_{j}=\partial_{i}^{2}\) using a Fourier transform 41. In Fourier space, the scalar de Sitter Laplacian acting on a Fourier mode \(\tilde{\gamma}_{\omega\sigma}\) is given by 42 Footnote 41: The convention of the Fourier transform is chosen to be \(\gamma_{\omega\sigma}=\int\frac{dk^{i}}{(2\pi)^{d-1}}e^{-ik^{i}y^{i}}\tilde{ \gamma}_{\omega\sigma}\) Footnote 42: a slight abuse of notation allows us to write the scalar Laplacian in momentum space the same way as we write it in real space. \[H^{-2}\Box=-\tau^{2}\left(\partial_{0}^{2}+\mathbf{k}^{2}\right)+(d-2)\tau\partial _{0},\] (G.13) where the momentum squared is defined as \[\mathbf{k}^{2}\equiv\delta_{ij}k^{i}k^{j}.\] (G.14) #### Scalar equation The only equation in (G.9) which involves only one component of the perturbation \(\gamma_{\alpha\beta}\) is the scalar equation (G.9a). The solution to the eigen-problem (G.7) is then any linear combination of the solutions \(\tilde{\gamma}_{00}^{\pm}\) written as \[\tilde{\gamma}_{00}^{\pm}=(k\tau)^{\frac{3+d}{2}}J_{\pm\nu}(k\tau)\underset{ \tau\to 0}{\sim}(k\tau)^{\frac{3+d}{2}\pm\nu},\] (G.15) where \(\lambda_{\pm}\) are integration constants, they do not depend on \(\tau\). We also defined \(k=\sqrt{\mathbf{k}^{2}}\). Since we are only interested in the eventual divergence of \(\tilde{\gamma}_{00}\) for large time (\(H\tau=e^{-2Ht}\to 0\)), then (110) shows that this scalar quantity is unstable if \[|\mathrm{Re}(\nu)|>\frac{3+d}{2}. \tag{111}\] The exact solution (110) for \(\tilde{\gamma}_{00}\) can then be used to find an exact solution for \(\tilde{\gamma}_{0i}\) (111b). #### Vector equation Instead of solving (111b), which is an inhomogeneous equation for \(\gamma_{0i}\) coupled to \(\gamma_{00}\), we can use the transversality constraint (110). We split \(\gamma_{0i}\) into a transverse and a longitudinal component. 
In Fourier space (of the 3-dimensional spatial coordinates) \[\tilde{\gamma}_{0i}=b_{i}+k_{i}b, \tag{112}\] such that \[k_{i}b_{i}=0, \tag{113}\] then transversality (110) fixes the longitudinal part in terms of \(\gamma_{00}\) which we already solved. It gives \[ik^{2}b=\left(-\partial_{0}+\frac{d}{\tau}\right)\tilde{\gamma}_{00}. \tag{114}\] We already know the solution (110) for \(\tilde{\gamma}_{00}\). Therefore, \(b\) is given by a linear combination of the two independent solutions \(b^{\pm}\) given by \[ikb^{\pm}=(k\tau)^{\frac{1+d}{2}}\left[\frac{d-3}{2}-(k\tau)\frac{d}{d(k\tau) }\right]J_{\pm\nu}(k\tau), \tag{115}\] which for large time evaluates to \[ikb^{\pm}\underset{\tau\to 0}{\sim}\left(\frac{d-3}{2}-\nu\right)(k\tau)^{ \frac{1+d}{2}\pm\nu}. \tag{116}\] The transverse part of (111b) in Fourier space is obtained by applying the transverse projection operator given by \[b_{i}=\left(\delta_{ij}-\frac{k_{i}k_{j}}{k^{2}}\right)\tilde{\gamma}_{0j}. \tag{117}\] This projection is applied to the "\(0i\)" component of equation (110) to obtain \[\left\{H^{-2}\Box+\nu^{2}-\frac{(d-1)^{2}}{4}-d+2\tau\partial_{0}\right\} \tilde{b}_{i}=0. \tag{118}\] Again, this equation is solved using Bessel functions as any linear combination of the two independent solutions \(b_{i}^{\pm}\) given by \[b_{i}^{\pm}=(k\tau)^{\frac{1+d}{2}}J_{\pm\nu}(k\tau). \tag{119}\] To conclude on the vector perturbation, both longitudinal and transverse parts of \(\gamma_{0i}\) are unstable if \[|\text{Re}(\nu)|>\frac{d+1}{2}. \tag{108}\] By comparing with the result from the scalar component, the scalar instability criterion (107) is stronger than the vector result (108). A scalar instability implies a vector instability. #### Tensor equation As we did for the vector perturbation, we decompose \(\tilde{\gamma}_{ij}\) into \[\tilde{\gamma}_{ij}=\theta_{ij}+k_{i}k_{j}\varphi+2k_{(i}\mathcal{V}_{j)}+ \delta_{ij}\Psi, \tag{109}\] such that \[\delta^{ij}\theta_{ij}=k_{i}\mathcal{V}_{i}=0 \tag{110}\] and \[k_{i}\theta_{ij}=0. \tag{111}\] Every quantity defined in (109) can be expressed as a projection of \(\tilde{\gamma}_{ij}\), where the projector is a function of \(k_{i}\), as it was the case for the vector transverse projection in (107). We now search for a solution for each quantity in (109). First, the tracelessness constraint (106) written in terms of the decomposition (109) is \[\gamma_{00}=(d-1)\Psi+k^{2}\varphi. \tag{112}\] The spatial components of the transversality constraint (109) are given by \[k_{i}\left[\left(\frac{d}{\tau}-\partial_{0}\right)b-i\Psi-ik^{2}\varphi \right]+\left(\partial_{0}-\frac{d}{\tau}\right)b_{i}-\partial_{j}^{2} \mathcal{V}_{i}=0. \tag{113}\] The longitudinal part of equation (113) is \[\left[\left(\frac{d}{\tau}-\partial_{0}\right)b-i(\Psi+k^{2}\varphi)\right]=0, \tag{114}\] and the transverse part is obtained using the same projection as in (107): \[-ik^{2}\tilde{\mathcal{V}}_{i}+\left(\frac{d}{\tau}-\partial_{0}\right)b_{i}=0. \tag{115}\] The three constraints in momentum space (112,131,132) are solved algebraicaly for \(\varphi\), \(\Psi\) and \(\mathcal{V}_{i}\) in terms of the known solutions \(\gamma_{00}\), \(b\) and \(b_{i}\). 
The result is \[\Psi=\frac{1}{d-2}\left[\tilde{\gamma}_{00}+i\left(\frac{d}{\tau}-\partial_{0 }\right)\tilde{b}\right], \tag{116a}\] \[k^{2}\tilde{\varphi}=-\frac{1}{d-2}\left[\tilde{\gamma}_{00}+i(d-1)\left(\frac{d}{ \tau}-\partial_{0}\right)\tilde{b}\right], \tag{116b}\] \[-k^{2}\tilde{\mathcal{V}}_{i}=\left(\partial_{0}-\frac{d}{\tau}\right)\tilde{b}_{i}. \tag{116c}\] From the solutions for \(\tilde{\gamma}_{00}\) (114), for \(b\) (115) and \(b_{i}\) (116), one can observe that each of these three perturbations scales for large time as \(\sim\tau^{\frac{d-1}{2}\pm\nu}\). We now solve the transverse-traceless part of the eigenproblem (114) which is simply given by \[\left\{\Box+\nu^{2}-\left(\frac{d-1}{2}\right)^{2}\right\}\theta_{ij}=0 \tag{117}\] Therefore, the solution to the spatial tensor part of (114) (which is also a Bessel equation) is \[\tilde{\theta}_{ij}=\lambda_{ij}^{\pm}(k^{i})(k\tau)^{\frac{d-1}{2}}J_{\pm}(k \tau). \tag{118}\] We therefore find that the instability conditions on \(\theta_{ij}\), \(\varphi\), \(\Psi\) and \(\mathcal{V}_{i}\) all read \[|\text{Re}(\nu)|>\frac{d-1}{2}, \tag{119}\] which is weaker than the vector criterion (116). Therefore, the existence of an instability relies only on the tensor component \(\gamma_{ij}\). The criterion (119) was also derived in [68] but using a different decomposition which made all 5 degrees of freedom equally unstable. ## Appendix H Tachyonic tensor eigenmodes in anti de Sitter In this appendix, we derive a criterion on the eigenvalue \(\nu\) which determines if a given mode is tachyonic or not. All the steps done in the previous appendix G can be adapted to AdS in Poincare coordinates by changing the metric (117) to \[ds^{2}_{AdS}=(\chi z)^{-2}(dz^{2}-dt^{2}+d\mathbf{x}^{2})+h_{\alpha\beta}dx^{ \alpha}dx^{\beta}, \tag{120}\] \[=\left[(\chi z)^{-2}\eta_{\omega\sigma}+h_{\omega\sigma}\right]dx^{\omega}dx^ {\sigma}. \tag{121}\] where \(\mathbf{x}\) is a (d-2)-dimensional vector, and we define \(\eta_{\omega\sigma}=\text{diag}(+1,-1,+1,...,+1)\). Greek indices \(\omega,\sigma\) are for the full set of \(d\)-dimensional coordinates. We also use the index "0" for the time coordinate \(t\), and roman letters such as \(i,j\) for \((t,\mathbf{x})\). As we did for de Sitter slicing, the eigenproblem (115) is studied using the rescaled perturbation \[\gamma_{\omega\sigma}=(\chi z)^{2}h_{\omega\sigma}. \tag{122}\] Similarly to de Sitter (114), we define a differential operator from the left-hand side of the eigenproblem (115). It is written as \[\bar{D}[\gamma]_{\omega\sigma}\equiv z^{2}(\chi^{-2}\nabla^{2}+2)(\gamma_{ \omega\sigma}z^{-2}), \tag{123}\] And the eigen-problem (5.42) is then written as \[\bar{D}[\gamma]_{\omega\sigma}=\left[\nu^{2}-\left(\frac{d-1}{2}\right)^{2} \right]\gamma_{\omega\sigma}.\] (H.5) Doing the same computation which resulted in (G.8), we obtain \[\bar{D}[\gamma]_{\omega\sigma}=\Box\gamma_{\omega\sigma}+2d\delta^{z}_{( \omega}\gamma_{\sigma)z}-4z\partial_{(\omega}\gamma_{\sigma)z}+2\eta_{\omega \sigma}\gamma_{zz}.\] (H.6) We are now going to solve this equation in momentum space as we did for de Sitter because it allows us to solve ordinary differential equations. The scalar AdS Laplacian appearing in (H.6) in Poincare coordinates is given by \[\chi^{-2}\Box=z^{2}(\partial_{z}^{2}-\partial_{t}^{2}+\partial_{\mathbf{x}}^{2})-(d -2)z\partial_{z}\] (H.7) In de Sitter, we diagonalized the \((d-1)\)-dimensional Laplacian \(\partial_{i}^{2}\) using Fourier modes. 
However, in AdS, the Laplacian we want to diagonalize is the \((d-1)\)-dimensional Minkowski space Laplacian as \[(-\partial_{0}^{2}+\partial_{\mathbf{x}}^{2})\gamma_{\omega\sigma}=\left[-(k^{0}) ^{2}+\mathbf{k}^{2}\right]\gamma_{\omega\sigma}\equiv-k^{2}\gamma_{\omega\sigma}.\] (H.8) which can contain complex eigenvalues \(k^{2}\) if the Laplacian is not self-adjoint. In particular, if we assume self-adjointness of the spatial part \(\partial_{\mathbf{x}}^{2}\) but allow for solutions which diverge with time, the imaginary part of \(k^{2}\) will be contained in \((k^{0})^{2}\). A frequency squared \((k^{0})^{2}\) has two square roots \(k^{0}\). Its imaginary part is then responsible for an exponentially growing solution of (H.8) corresponding to one of the two square roots \(k^{0}\). Using (H.8), the scalar Laplacian of AdS in Poincare coordinates is given by \[\chi^{-2}\Box=z^{2}(\partial_{z}^{2}-k^{2})-(d-2)z\partial_{z}.\] (H.9) We then decompose \(\gamma_{\omega\sigma}\) into as we did for de Sitter. We had seen that it was enough to solve the eigenproblem (5.42) for 3 quantities \(\gamma_{00}\), \(b_{i}\) and \(\theta_{ij}\) defined in (G.17) for the transverse vector and (G.26) for the tensorial decomposition. All the other quantities (namely \(b\), \(\varphi\), \(\Psi\) and \(\mathcal{V}\)) are constrained by the transverse-traceless properties of \(h_{\omega\sigma}\) (3.8,3.9), so they cannot represent new instabilities. In Poincare AdS (H.2), the constraints read \[\eta^{\omega\sigma}\gamma_{\omega\sigma}=0,\] (H.10) \[\frac{\tau}{d}\partial^{\sigma}\gamma_{\omega\sigma}=\gamma_{0\omega}.\] (H.11) It is therefore enough to study three equations obtained from (H.5). They are given by: * the \(zz\) component of (H.5) \[\left\{\chi^{-2}\Box-\nu^{2}+\left(\frac{d-1}{2}\right)^{2}+2(d+1)-4z\partial_{z} \right\}\gamma_{zz}=0,\] (H.12a) * the transverse part of the \(0i\) component of (H.5) \[\left\{\chi^{-2}\Box-\nu^{2}+\left(\frac{d-1}{2}\right)^{2}+d-2z\partial_{z} \right\}b_{i}=0,\] (H.12b) * and the transverse-traceless part of the \(ij\) component of (H.5) \[\left\{\chi^{-2}\Box-\nu^{2}+\left(\frac{d-1}{2}\right)^{2}\right\}\theta_{ij }=0.\] (H.12c) Since all these equations are solved by (modified) Bessel functions, we can bring them to the same form. We define the spin \(s=0\),\(1\) or \(2\) perturbation \(\mathcal{Z}_{s}\) being either equal to \(\tilde{\gamma}_{00}\) for \(s=0\), \(b_{i}\) for \(s=1\) or \(\theta_{ij}\) for \(s=2\). A rescaled function \(\psi_{s}\) is also defined as \[\psi_{s}(w,k^{i})=e^{-n_{s}w}\mathcal{Z}_{s}(w,k^{i}),\] (H.13) with the radial coordinate \(w\equiv\log z\). A direct computation using (H.9) into equations (H.12) shows that each \(\psi_{s}\) satisfies the same Schrodinger equation \[\left[-\frac{d^{2}}{dw^{2}}+k^{2}e^{2w}\right]\psi_{s}=-\nu^{2}\psi_{s}\] (H.14) if \[n_{s}=\frac{d+3}{2}-s.\] (H.15) The most general solution of (H.14) is \[\psi_{s}=\lambda_{s}^{\pm}I_{\pm\nu}(ke^{w}),\] (H.16) where \(k\) is the square root of \(k^{2}\) with positive real part. We now study its solutions depending on the sign of \(\mathrm{Re}(k)\). * If \(\mathrm{Re}(k)=0\), this corresponds to timelike \(k^{2}<0\) modes which possess real frequencies \(k^{0}\). These modes do not diverge with time and are timelike. They are therefore stable. 
The solution (H.16) is then evaluated at an imaginary argument, which gives Bessel functions (not modified) \[\psi_{s}=\lambda_{s}^{\pm}J_{\pm\nu}(|k|e^{w}).\] (H.17) The timelike solution which is regular at \(z\to 0\) is given by one of these two Bessel functions depending on the sign of the real part of \(\nu\). In conclusion, timelike solutions allow any eigenvalue \(\nu\in\mathbb{C}\). * We now turn to complex \(k^{2}\), with \(|{\rm Arg}(k^{2})|<\pi\) which are necessarily unstable. For example, \(k^{2}>0\) corresponds to a tachyonic mode. Since \(k^{2}\) contains two square roots, we take the \(k\) with a positive real part. The linear combination of (H.16) solutions which is regular at the horizon \(z\to+\infty\) is the Bessel \(K_{\nu}\): \[\psi_{s}=\lambda_{s}K_{\nu}(ke^{w})\underset{z\to+\infty}{\to}\sqrt{\frac{\pi }{2kz}}\exp\left\{-ke^{w}\right\}.\] (H.18) As one could already observe from (H.1), the radial coordinate of AdS is not time but \(z\). Furthermore, the eigenproblem (5.42) defines a momentum \(\nu\) which is dual to \(z\) in AdS. In dS, the value of \(\nu\) was fixing the characteristic time of the instability. In AdS, however, the boundary does not correspond to infinite time. In order to address stability in AdS, we turn to a viewpoint similar to the BF bound analysis [81]. The stability condition is following: For a given \(\nu\in\mathbb{C}\), AdS is unstable if there exist a regular solution \(\psi_{s}\) with \({\rm Re}(k)\neq 0\). To address the stability of a given \(\nu\), we then need to look if there is a \(\psi_{s}\) solution (H.18) which is regular at \(w\to-\infty\), since (H.18) is already exponentially decreasing at \(w\to+\infty\). * If \(Re(\nu)>0\), the leading behavior is then \[K_{\nu}(ke^{w})\underset{w\to-\infty}{\sim}\frac{\pi}{2\sin(\pi\nu)}\frac{2^{ \nu}k^{-\nu}e^{-w\nu}}{\Gamma(1-\nu)}\to\infty.\] (H.19) * If \(Re(\nu)<0\) \[K_{\nu}(ke^{w})\underset{w\to-\infty}{\sim}\frac{\pi}{2\sin(\pi\nu)}\frac{2^{ -\nu}k^{\nu}e^{w\nu}}{\Gamma(1+\nu)}\to\infty.\] (H.20) * If \({\rm Re}(\nu)=0\), or \(\nu=i\mu\) for \(\mu\in\mathbb{R}\), the solution is a combination of plane waves: \[\psi_{s}(ke^{w})\underset{w\to-\infty}{\to}\frac{1}{2i\mu}\left[\left(\frac{k }{2}\right)^{-i\mu}\Gamma(1+i\mu)e^{-i\mu w}-\left(\frac{k}{2}\right)^{i\mu} \Gamma(1-i\mu)e^{i\mu w}\right].\] (H.21) We therefore find plane-wave normalizable \(\psi_{s}(w)\) for imaginary \(\nu\) and spacelike (\(k^{2}>0\)) modes. We conclude that \(\nu=i\mu\) allows for a regular solution with non-zero \({\rm Re}(k)\). These modes are unstable eigenvalues of the operator \(\bar{D}[\gamma]\) defined for AdS slicing in (H.4). To conclude, the only possibility for a spacelike \(k^{2}>0\) mode to be regular both at the AdS boundary \(w\to-\infty\) and the horizon \(w\to+\infty\) is to have \({\rm Re}(\nu)=0\). It agrees with the usual BF bound \(\nu^{2}<0\)[81] in the limit where \(\nu^{2}\) is real. Indeed, the usual BF bound analysis was done for real mass squared, which is the eigenvalue of the scalar Laplacian. By looking at the equation for \(\theta_{ij}\) (H.12c), we can observe that the mass squared is identified to \(\nu^{2}-\frac{(d-1)^{2}}{4}\). ## I Asymptotic behaviour of Legendre functions This appendix is devoted to the asymptotic behaviour of associated Legendre functions of argument \(x\in]-1,1[\), which enter into the solution of tensor modes in the bulk with AdS\({}_{3}\) slicing. 
This appendix allows us to relate boundary conditions (see 5.3.1 and 5.3.2) to the value of constants of integrations \(\lambda_{1},\lambda_{2}\) in (5.45). In the AdS-slicing case (5.3), the bulk equation of motion for tensor perturbations (5.40) can be written as a Legendre function for the holographic coordinate \(u\) (5.43). Its solutions are given by the linear combination \[F(u,\nu)=(\cosh u)^{-2}\left(\lambda_{1}P_{\nu-1/2}^{2}(\tanh u)+\lambda_{2}Q_ {\nu-1/2}^{2}(\tanh u)\right).\] (I.1) In the single-boundary case 5.3.1, it is easy to fix the integration constants of (I.1) because Legendre functions are defined as hypergeometric series in \(\tanh u\) and \(\lambda_{1}(\cosh u)^{-2}P_{\nu-1/2}^{2}(\tanh u)\) is the only solution which vanishes at \(u\to+\infty\). Therefore, imposing \(F(u,\nu)\to 0\) at \(u\to+\infty\) sets \(\lambda_{2}=0\) and one can obtain the expansion of the remaining solution using the usual expression for associated Legendre functions in terms of hypergeometric series : \[P_{\nu}^{\mu}(x)=\frac{1}{\Gamma(1-\mu)}\left(\frac{1+x}{1-x}\right)^{\frac{ \mu}{2}}{}_{2}F_{1}\,\left(-\nu,\nu+1;1-\mu;\frac{1-x}{2}\right),x\in[-1,1].\] (I.2) The formula (I.2) is applied to (I.1) for \[x=\tanh u.\] (I.3) The formula (I.2) is convenient for an expansion close to \(x\to 1\) using the definition of the hypergeometric series (from now we use the short notation \(F={}_{2}F_{1}\) ) \[F(a,b;c;x)=\sum_{n=0}^{+\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}}\frac{x^{n}}{n!},\] (I.4) where \[(a)_{n}\equiv a(a+1)...(a+n-1).\] (I.5) Using (I.2) into (I.2) gives \[(1-x^{2})P_{\nu-1/2}^{2}(x)=2\left(\frac{1-x}{2}\right)^{2}\left(\nu^{2}- \frac{9}{4}\right)\left(\nu^{2}-\frac{1}{4}\right)+\mathcal{O}((1-x)^{3}).\] (I.6) Therefore, the only contribution to \(F(x=1)\) comes from \(Q_{\nu-1/2}^{2}\), which is fixed to zero by the boundary condition (5.46). However, for other boundary conditions, we may need to expand (I.1) close to \(x=-1\). The hypergeometric transformation adapted to the limit \(\mu\to 2\). This transformation is given on page 49 of [82], or (15.3.12) of Abramowitz and Stegun - Hypergeometric functions. We reproduce it here for consistency : \[F(a,b;a+b-\mu;y)=\frac{\Gamma(\mu)\Gamma(a+b-\mu)}{\Gamma(a)\Gamma(b)}\sum_{n=0}^{m -1}\frac{(a-\mu)_{n}(b-\mu)_{n}}{n!(1-\mu)_{n}}(1-y)^{n-m}\] \[-\frac{(-1)^{\mu}\Gamma(a+b-\mu)}{\Gamma(a-\mu)\Gamma(b-\mu)}\sum_{n=0}^{+ \infty}\frac{(a)_{n}(b)_{n}}{n!(n+\mu)!}(1-y)^{n}\left[\log(1-y)\right.\] \[\left.-\psi(n+1)-\psi(n+\mu+1)+\psi(a+n)+\psi(b+n)\right],\] (I.7) valid for \(|1-y|<1\) and \(|\text{Arg}(1-y)|<\pi\). This is the case because we take \[y=\frac{1-x}{2},\] (I.8) Therefore, the small parameter of our expansion at \(u\rightarrow-\infty\) is going to be \[1-y=\frac{1+x}{2}\in]0,1[.\] (I.9) Applying (I.7) to \[a=\frac{1}{2}-\nu,\] (I.10) \[b=\nu+\frac{1}{2},\] (I.11) we obtain \[\frac{1}{2}\left(\frac{1+x}{2}\right)^{2}\left(\nu^{2}-\frac{9}{4}\right) \left(\nu^{2}-\frac{1}{4}\right)\] \[\times\left[\log\left(\frac{1+x}{2}\right)-\frac{3}{2}+\mathcal{H}\left(\nu- \frac{1}{2}\right)+\mathcal{H}\left(-\nu-\frac{1}{2}\right)\right]\bigg{\}}+ \mathcal{O}\left((1+x)^{3}\right)\] (I.12) Now let's compute the expansions of \(Q_{\nu-1/2}^{\mu}(x)\) near the two boundaries. The first one can be obtained using the formula from page 170 of [82] \[P_{\nu}^{\mu}(-x)=\cos(\pi(\nu+\mu))P_{\nu}^{\mu}(x)-\frac{2}{\pi}\sin(\pi( \nu+\mu))Q_{\nu}^{\mu}(x),\] (I.13) valid for \(0<x<1\). 
Isolate \(Q_{\nu}^{\mu}(x)\) and take \(\mu=2\): \[Q_{\nu-1/2}^{2}(x)=-\frac{\pi}{2\cos(\pi\nu)}\left[\sin(\pi\nu)P_{\nu-1/2}^{2} (x)-P_{\nu-1/2}^{2}(-x)\right].\] (I.14) The expansion in powers of \((1-x)\) for the second term is simply given by (I.12) evaluated at \(-x\). The first term is (I.6). The result is \[(1-x^{2})Q_{\nu-1/2}^{2}(x)=-2\left\{1+\left(\nu^{2}-\frac{9}{4}\right)\left( \frac{1-x}{2}\right)-\frac{1}{2}\left(\frac{1-x}{2}\right)^{2}\left(\nu^{2}- \frac{9}{4}\right)\left(\nu^{2}-\frac{1}{4}\right)\right.\] \[\times\left.\left[\pi\tan(\pi\nu)+\log\left(\frac{1-x}{2}\right)-\frac{3}{2}+ \mathcal{H}\left(\nu-\frac{1}{2}\right)+\mathcal{H}\left(-\nu-\frac{1}{2} \right)\right]\right\}+\mathcal{O}\left((1-x)^{3}\right).\] (I.15) The last expansion we need is \(Q_{\nu-1/2}^{2}(x)\) close to \(x=-1\). This can be obtained using another formula from page 170 of [82]: \[Q_{\nu}^{\mu}(-x)=-\cos(\pi(\nu+\mu))Q_{\nu}^{\mu}(x)-\frac{\pi}{2}\sin(\pi( \nu+\mu))P_{\nu}^{\mu}(x)\] (I.16) valid for \(0<x<1\). Therefore, for \(-1<x<0\), we can use \[Q_{\nu}^{\mu}(x)=-\cos(\pi(\nu+\mu))Q_{\nu}^{\mu}(-x)-\frac{\pi}{2}\sin(\pi( \nu+\mu))P_{\nu}^{\mu}(-x)\] (I.17) where the first term expansion is given by (I.15) evaluated at \(-x\) and the second term is given by (I.6) evaluated at \(-x\) too. The result is given by \[(1-x^{2})Q_{\nu-1/2}^{2}(x)=2\sin(\pi\nu)\left\{1+\left(\nu^{2}-\frac{9}{4} \right)\left(\frac{1+x}{2}\right)-\right.\] \[\left.\frac{1}{2}\left(\frac{1+x}{2}\right)^{2}\left(\nu^{2}-\frac{9}{4} \right)\left(\nu^{2}-\frac{1}{4}\right)\left[\pi\tan(\pi\nu)-\pi\cot(\pi\nu)+\right.\right.\] \[\left.\left.\log\left(\frac{1+x}{2}\right)-\frac{3}{2}+\mathcal{H}\left(\nu- \frac{1}{2}\right)+\mathcal{H}\left(-\nu-\frac{1}{2}\right)\right]\right\}+ \mathcal{O}\left((1+x)^{3}\right).\] (I.18) Finally, the two different expansions for \(F(x)\) are given by \[F(x)\underset{x\to 1}{=}2\lambda_{1}\left(\frac{1-x}{2}\right)^{2}\left( \nu^{2}-\frac{9}{4}\right)\left(\nu^{2}-\frac{1}{4}\right)\] \[\times\left.\left[\pi\tan(\pi\nu)+\log\left(\frac{1-x}{2}\right)-\frac{3}{2}+ \mathcal{H}\left(\nu-\frac{1}{2}\right)+\mathcal{H}\left(-\nu-\frac{1}{2} \right)\right]\right\}+\mathcal{O}\left((1-x)^{3}\right)\] (I.19) \[F(x)\underset{x\to-1}{=}\left(\frac{4\lambda_{1}\cos\pi\nu}{\pi}-2\lambda_{2} \sin\pi\nu\right)\left\{1+\left(\nu^{2}-\frac{9}{4}\right)\left(\frac{1+x}{2 }\right)-\right.\] \[\left.\frac{1}{2}\left(\frac{1+x}{2}\right)^{2}\left(\nu^{2}-\frac{9}{4} \right)\left(\nu^{2}-\frac{1}{4}\right)\left[\log\left(\frac{1+x}{2}\right)- \frac{3}{2}+\mathcal{H}\left(\nu-\frac{1}{2}\right)+\mathcal{H}\left(-\nu- \frac{1}{2}\right)\right]\right\}\] \[+\frac{\pi\lambda_{2}}{\cos\pi\nu}\left(\frac{1+x}{2}\right)^{2}\left(\nu^{2}- \frac{9}{4}\right)\left(\nu^{2}-\frac{1}{4}\right)+{\cal O}\left((1-x)^{3}\right)\] (I.20) The limits of \(F(x)\) on both direction are \[F(u)\longrightarrow\left\{\begin{array}{ll}2\lambda_{2}&\mbox{when $u\longrightarrow+ \infty$}\\ \frac{\lambda_{1}}{\pi}\cos(\pi\nu)-2\lambda_{2}\sin(\pi\nu)&\mbox{when $u \longrightarrow-\infty$}\end{array}\right.\] (I.21) In order to read out the vacuum expectation value term \(h^{(4)}\) from the Fefferman-Graham expansion (3.13), we first take the coordinate transformation (I.3) to obtain \[\frac{1\pm x}{2}=e^{\pm 2u}(1-e^{\pm 2u})+{\cal O}(e^{\pm 4u}).\] (I.22) And the Fefferman-Graham coordinate \(\rho^{\pm}\) is related to \(u\) by \[e^{\mp 2u}=\left(\frac{L\chi}{2}\right)^{2}\rho^{\pm}.\] (I.23) In terms of \(u\), the expansions of \(F\) 
near each boundary is then given by \[F(u)\underset{u\rightarrow+\infty}{=}2\lambda_{1}e^{-4u}\left(\nu^{2}-\frac{ 1}{4}\right)\left(\nu^{2}-\frac{9}{4}\right)+\] \[2\lambda_{2}\left\{1+\left(\nu^{2}-\frac{9}{4}\right)e^{-2u}-e^{-4u}\left(\nu ^{2}-\frac{9}{4}\right)\left[1+\right.\right.\] \[\left.\left.\frac{1}{2}\left(\nu^{2}-\frac{1}{4}\right)\left(\pi\tan(\pi\nu)- 2u-\frac{3}{2}+{\cal H}\left(\nu-\frac{1}{2}\right)+{\cal H}\left(-\nu-\frac{ 1}{2}\right)\right)\right]\right\}\] (I.24) \[F(u)\underset{u\rightarrow-\infty}{=}\left(\frac{4\lambda_{1}}{\pi}\cos(\pi \nu)-2\lambda_{2}\sin(\pi\nu)\right)\left\{1+\left(\nu^{2}-\frac{9}{4}\right) e^{2u}-\right.\] \[\left.\left.\lambda_{2}\pi\cos(\pi\nu)e^{4u}\left(\nu^{2}-\frac{1}{4}\right) \left(\nu^{2}-\frac{9}{4}\right)\right.\] (I.25) Replacing in (I.24, I.25) the integration constants \(\lambda_{1}\) and \(\lambda_{2}\) by boundary conditions (5.46) gives the result (5.49). In the case of symmetric boundary conditions (5.55), the bulk radial solution is given by \[F_{\rm sym}(u)\underset{u\rightarrow\pm\infty}{=}1+e^{\mp 2u}\left(\nu^{2}- \frac{9}{4}\right)-e^{\mp 4u}\left(\nu^{2}-\frac{9}{4}\right)\left[1+\right.\] \[\left.\frac{1}{2}\left(\nu^{2}-\frac{1}{4}\right)\left(\mp 2u-\frac{\pi}{ \cos(\pi\nu)}-\frac{3}{2}+{\cal H}\left(\nu-\frac{1}{2}\right)+{\cal H}\left( -\nu-\frac{1}{2}\right)\right)\right].\] (I.26) From this solution, one can then identify equations (5.41) with (3.13) to obtain all the terms in (5.51). ## J Quadratic action In this appendix, we obtain the quadratic terms of the boundary action (2.1) for scalar and tensor perturbations. In the scalar case, the quadratic action is local and can therefore be expressed in terms of curvature invariants of the boundary and of the Laplacian operator applied to the scalar \(\Psi\). However, the CFT contribution for the quadratic action of spin-2 modes is obtained in momentum space, and is explicitly non-local, since it depends on logs or more complicated functions of the momentum \(k\) (for flat space) or \(\nu\) (in curved space). ### Scalar The linearization of the generalized Einstein tensor defined in (4.5) and traced in (4.8) is given by \[\dot{E}[\psi]\equiv\lim_{\varepsilon\to 0}\frac{E[(1+\varepsilon\psi)\bar{ \zeta}]-E[\bar{\zeta}]}{\varepsilon}=\left[1+\frac{\alpha G}{4}\Box-\frac{GN^ {2}\bar{R}}{24\pi}\right](3\Box+\bar{R})\psi=0.\] (J.1) Equation (4.10) is a linear _local_ equation for \(\psi(x)\). We now turn to the computation of the scalar propagator from varying the action as in (4.5). \[S[(1+\psi)\bar{\zeta}]=S[\bar{\zeta}]+\int d^{4}x\ \frac{\delta S[g^{(0)}]}{ \delta\psi}\bigg{|}_{\bar{\zeta}}\psi+\frac{1}{2}\int d^{4}x\ \frac{\delta^{2}S[g^{(0)}]}{\delta\psi^{2}}\psi^{2} \bigg{|}_{\bar{\zeta}}+...\] (J.2) From the definition of \(E_{\omega\sigma}\) in (4.5), the linear piece of the action is given by the background value of \[\frac{\delta S[g^{(0)}]}{\delta\psi}=-\frac{1}{16\pi G}\sqrt{g^{(0)}}\psi E[ g^{(0)}].\] (J.3) This is zero on-shell when Einstein's equation (4.8) holds. This is also true for the background metric \(\bar{\zeta}\) which solves (4.8) to give (2.57). To obtain the quadratic piece of (J.2), we take the functional derivative of (J.3) with respect to \(\psi\). 
The result, after evaluating it on the background metric \(\bar{\zeta}\) and removing another off-shell term, is given by \[\frac{\delta^{2}S[g^{(0)}]}{\delta\psi^{2}}\psi^{2}\bigg{|}_{\bar{\zeta}}=- \frac{1}{16\pi G}\sqrt{\bar{\zeta}}\psi\dot{E}[\psi],\] (J.4) where \(\dot{E}[\psi]\) was computed in (4.10). It would be tempting to define the inverse scalar propagator as \[\widetilde{\mathcal{F}}\equiv\frac{1}{\sqrt{\bar{\zeta}}}\left.\frac{\delta^ {2}S[g^{(0)}]}{\delta\psi^{2}}\right|_{\bar{\zeta}}.\] (J.5) Inserting (4.10) into (J.5), the result would then be \[\mathcal{F}=-\frac{1}{16\pi G}\left[1+\frac{\alpha G}{4}\Box-\frac{GN^{2}\bar {R}}{24\pi}\right](3\Box+\bar{R}).\] (J.6) ### Tensor We start from the action (2.38): \[S[g^{(0)}]=S_{\rm CFT}[g^{(0)}]-\frac{1}{16\pi G}\int\sqrt{g^{(0)}}\left\{R-2 \Lambda+\frac{\alpha G}{24}R^{2}+4\beta G\left(R^{\omega\sigma}R_{\omega\sigma}- \frac{1}{3}R^{2}\right)\right\}\] (J.7) Doing an expansion in the metric with respect to a spin-2 TT perturbation \(h^{(0)}_{\omega\sigma}\) as \[g^{(0)}_{\omega\sigma}=\bar{\zeta}_{\omega\sigma}+\epsilon h^{(0)}_{\omega \sigma},\] (J.8) where \(\epsilon\) is a book-keeping parameter. A Taylor expansion of the action (J.7) in \(\epsilon\) is written as \[S[g_{(0)}]=S[\bar{\zeta}]+\epsilon S^{(1)}[\bar{\zeta}]+\frac{\epsilon^{2}}{2 }S^{(2)}[\bar{\zeta}]+{\cal O}(\epsilon^{3}).\] (J.9) The linear order \(S^{(1)}\) is already given by the generalized Einstein tensor computed in (4.6). This has to be evaluated on the background metric \(\bar{\zeta}_{\omega\sigma}\), and the general metric perturbation is specialized to the TT mode \(h^{(0)}\), so \(\delta g^{(0)\omega\sigma}=h^{(0)\omega\sigma}\). We recall here the definition (4.5) of the generalized Einstein tensor: \[S^{(1)}[g_{(0)}]=-\frac{1}{16\pi G}\int\sqrt{g_{(0)}}h^{(0)\omega\sigma}E_{ \omega\sigma}[g^{(0)}].\] (J.10) Taking the second derivative of the action (J.7) is equivalent to taking the first derivative of \(E_{\omega\sigma}[g^{(0)}]\) with respect to \(\epsilon\). We give some useful linearization formulae in the following. First, we rewrite the generalized Einstein tensor where we replace \(\Lambda\) by its expression (2.57) which comes from the trace of the background Einstein equation (4.6). The result is \[E_{\omega\sigma}=\tilde{G}_{\omega\sigma}+8\pi G(^{(\alpha)}H_{\omega\sigma}+ ^{(\beta)}H_{\omega\sigma}-\langle T_{\omega\sigma}\rangle^{T}),\] (J.11) where \[\tilde{G}_{\omega\sigma}\equiv R_{\omega\sigma}-\frac{1}{2}Rg^{(0)}_{\omega \sigma}+\frac{R}{4}g^{(0)}_{\omega\sigma},\] (J.12) and \[\langle T_{\omega\sigma}\rangle^{T}\equiv\langle T_{\omega\sigma}\rangle- \frac{1}{4}\bar{\zeta}_{\omega\sigma}\left\langle T^{\kappa}_{\kappa}\right\rangle.\] (J.13) The linearization of each term in (J.11) is given by \[\delta_{h}\tilde{G}_{\omega\sigma}=\left(-\frac{\nabla^{2}}{2}+\frac{R}{12} \right)h^{(0)}_{\omega\sigma},\] (J.14) \[\delta_{h}{}^{(\alpha)}H_{\omega\sigma}=\frac{\alpha}{8\pi}\frac{R}{12}\left( \frac{\nabla^{2}}{2}-\frac{R}{12}\right)h^{(0)}_{\omega\sigma},\] (J.15) \[\delta_{h}{}^{(\beta)}H_{\omega\sigma}=\frac{\beta}{32\pi}\left(\frac{1}{2} \Box^{2}h_{\omega\sigma}-\frac{R}{4}\Box h_{\omega\sigma}+\frac{R^{2}}{36}h_{ \omega\sigma}\right)=-\frac{\beta}{2\pi}\frac{\hat{h}_{\omega\sigma}}{L^{4}},\] (J.16) where \(\hat{h}_{\alpha\beta}\) is defined in (3.13). And the linearization of the CFT stress tensor depends on the background choice (flat, dS or AdS). 
Since \(\delta_{h}\left\langle T_{\omega\sigma}\right\rangle\) is only given for a single Laplacian mode, we first need to specify the background geometry and then decompose the action (J.7) into a complete basis of Laplacian eigenmodes. Therefore, the quadratic part of the action is \[S^{(2)}[\bar{\zeta}]=-\frac{1}{16\pi G}\int d^{4}x\sqrt{\bar{\zeta}}h^{(0)\omega\sigma}\left\{\left(-\frac{\nabla^{2}}{2}+\frac{R}{12}\right)\left(1-\frac{\alpha GR}{12}\right)h^{(0)}_{\omega\sigma}-4G\beta\frac{\hat{h}_{\omega\sigma}}{L^{4}}-8\pi G\left\langle T_{\omega\sigma}\right\rangle^{T}\right\}. \tag{177}\] Replacing the stress tensor by its expression in terms of the expansion \(h^{(n)}\), we obtain \[S^{(2)}[\bar{\zeta}]=\frac{N^{2}}{2\pi^{2}}\int d^{4}x\sqrt{\bar{\zeta}}h^{(0)\omega\sigma}\left\{h^{(4)}_{\omega\sigma}+\left(\tilde{\beta}_{\rm eff}+1-2\log(\mu L)\right)\hat{h}_{\omega\sigma}+\frac{RL^{4}}{24}\left(\nabla^{2}-\frac{R}{6}\right)\left(\frac{3\pi}{GN^{2}R}-\frac{\pi\alpha}{4N^{2}}-\frac{1}{4}\right)h^{(0)}_{\omega\sigma}\right\}. \tag{178}\] **Flat space quadratic action.** Evaluating the quadratic action (177) at zero curvature gives \[S^{(2)}[\eta]=-\frac{1}{16\pi G}\int d^{4}xh^{(0)\omega\sigma}\left\{-\frac{\nabla^{2}}{2}h^{(0)}_{\omega\sigma}-\frac{4G\beta}{L^{4}}\hat{h}_{\omega\sigma}-8\pi G\delta_{h}\left\langle T_{\omega\sigma}\right\rangle^{T}\right\}. \tag{179}\] Since the last term is only known in momentum space (5.16), we first apply the usual Fourier transform to the metric perturbation \[h^{(0)}_{\omega\sigma}=\int\frac{d^{4}k}{(2\pi)^{4}}e^{-ik.x}\tilde{h}^{(0)}_{\omega\sigma}, \tag{180}\] in order to write the quadratic action (179) as \[S^{(2)}[\eta]=-\frac{1}{16\pi G}\int\frac{d^{4}k}{(2\pi)^{4}}\tilde{h}^{(0)\omega\sigma}(-k)\left\{\frac{k^{2}}{2}\tilde{h}^{(0)}_{\omega\sigma}(k)-\frac{4G\beta}{L^{4}}\hat{\tilde{h}}_{\omega\sigma}(k)-8\pi G\delta_{h}\left\langle\tilde{T}_{\omega\sigma}\right\rangle^{T}(k)\right\}. \tag{181}\] Using (5.16) and the fact that the trace anomaly is zero on a flat background, we obtain the result \[S^{(2)}[\eta]=\int\frac{d^{4}k}{(2\pi)^{4}}\tilde{h}^{(0)\omega\sigma}(-k)\tilde{h}^{(0)}_{\omega\sigma}(k)\mathcal{F}_{\rm flat}(k), \tag{182}\] where \[\mathcal{F}_{\rm flat}(k)\equiv\frac{N^{2}}{64\pi^{2}}k^{2}\left\{-\frac{2\pi}{GN^{2}}+\frac{k^{2}}{2}\left[\frac{1}{2}-2\gamma_{E}-\log\left(GN^{2}k^{2}\right)-\tilde{\beta}_{\rm eff}\right]\right\}. \tag{183}\] If we add a source to the action (182) as \[S^{(2)}[J_{\omega\sigma}]\equiv\int\frac{d^{4}k}{(2\pi)^{4}}\tilde{h}^{(0)\omega\sigma}(-k)\left\{\tilde{h}^{(0)}_{\omega\sigma}(k)\mathcal{F}_{\rm flat}(k)-\tilde{J}_{\omega\sigma}(k)\right\}, \tag{184}\] then the classical solution which cancels the functional derivative of the quadratic action (102) with respect to \(h^{(0)\omega\sigma}\) can be written as \[h^{(0)}_{\omega\sigma}(x)=\int d^{4}zJ_{\omega\sigma}(z)G(x-z), \tag{103}\] where \[G(x)\equiv\int\frac{d^{4}k}{(2\pi)^{4}}\frac{e^{-ik.x}}{\mathcal{F}_{\text{flat}}(k)}.
\tag{104}\] **Curved space-time quadratic action.** The spin-2 perturbation is decomposed into a basis of tensor (transverse-traceless) eigenmodes \(\theta_{\omega\sigma}(x)\) as \[h^{(0)}_{\omega\sigma}(x)=\int d\nu\theta_{\omega\sigma}(\nu,x)\tilde{h}(\nu), \tag{105}\] where the eigenmodes satisfy \[(\nabla^{2}-\frac{\bar{R}}{6})\theta_{\omega\sigma}=-\frac{\bar{R}}{12}\left(\nu^{2}-\frac{9}{4}\right)\theta_{\omega\sigma}, \tag{106}\] and eigenvectors \(\theta_{\omega\sigma}\) of different eigenvalues are orthogonal and normalized such that \[\int d^{4}x\sqrt{\bar{\zeta}}\theta^{\omega\sigma}(\nu,x)\theta_{\omega\sigma}(\mu,x)=\delta\left(\mu-\nu\right). \tag{107}\] This orthogonality relation allows us to write \[\int d^{4}x\sqrt{\bar{\zeta}}\theta^{\omega\sigma}(\nu,x)h^{(0)}_{\omega\sigma}(x)=\int d\mu\delta\left(\mu-\nu\right)\tilde{h}(\mu)=\tilde{h}(\nu), \tag{108}\] which shows that \(\tilde{h}\) is the scalar product between the eigenmode and \(h^{(0)}_{\omega\sigma}\). In momentum space (105), \(h^{(4)}\) can be written explicitly in terms of \(h^{(0)}\) if we specialize to a specific boundary geometry. * For AdS with single-sided boundary conditions, we use (101b) to replace \(h^{(4)}\) and write the action (102) only in terms of \(h^{(0)}\). Using the orthogonality (107), we obtain \[S^{(2)}[\bar{\zeta}]=\int d\nu\tilde{h}(\nu)^{2}\mathcal{F}_{(\cdot)}(\nu),\] (109) where \[\mathcal{F}_{(\cdot)}\equiv\frac{N^{2}\chi^{4}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4}\right)Q_{(\cdot)}(\nu)\] (109) and \[Q_{(\cdot)}(\nu)=1+\frac{2\pi}{N^{2}}\left(\frac{1}{G\chi^{2}}+\alpha\right)-\frac{1}{2}(\nu^{2}-1/4)\left[\tilde{\beta}_{\text{eff}}+\log\left(GN^{2}\chi^{2}\right)-\frac{1}{2}+\mathcal{H}\left(-\frac{1}{2}-\nu\right)+\mathcal{H}\left(-\frac{1}{2}+\nu\right)\right].\] (110) * For de Sitter, we replace \(h^{(4)}\) using (111) into (116). The result is \[\mathcal{F}_{\text{dS}}(\nu)=\frac{N^{2}H^{4}}{64\pi^{2}}\left(\nu^{2}-\frac{9}{4}\right)Q_{dS}(\nu),\quad\text{Re}(\nu)>0,\] (111) where \[Q_{\text{dS}}(\nu)\equiv 1-\frac{2\pi}{GN^{2}H^{2}}+2\tilde{\alpha}-\frac{1}{2}\left(\nu^{2}-\frac{1}{4}\right)\left[\log\left(GN^{2}H^{2}\right)-\frac{1}{2}+2\mathcal{H}(\nu-1/2)+\tilde{\beta}_{\text{eff}}\right],\quad\text{Re}(\nu)>0.\] (112) ## Appendix K Unphysical scalar mode due to gauge fixing The scalar quadratic action (110) contains an unphysical solution \((3\Box+\bar{R})\Psi=0\) because it was obtained only from the trace of the Einstein equation. In this appendix, we first derive the non-diagonal components of the Einstein equations for \(\Psi\) using bulk diffeomorphisms which asymptote to \(\Psi\) on the boundary [86]. Then, we solve these non-diagonal Einstein equations and show that the unphysical mode can be absorbed into a gauge transformation of the boundary metric perturbations. ### Non-diagonal components of the scalar equation The trace of the linearized Einstein equation given in (110) is not the whole story, because \(\psi\) also gets constraints from the non-diagonal components of the Einstein equation (6) linearized with respect to \(\psi\). To obtain this full set of equations, we need to know how \(g^{(4)}_{\omega\sigma}\) depends on \(\psi\) to linear order. Since \(\psi\) does not propagate in the bulk, we need to perform a diffeomorphism in the bulk as shown in [86, 70], which provides the full set of equations of motion for \(\psi\).
In order to obtain constraints on the variation of \(g^{(4)}_{\omega\sigma}\) under a conformal transformation \(\psi\), we construct, order by order in the Fefferman-Graham expansion, a particular class of bulk diffeomorphisms which asymptote to \(\psi\) on the boundary \(\rho=\epsilon\). In [86], such a diffeomorphism is defined as \[\left\{\begin{aligned} &\tilde{\rho}=\rho e^{-\psi(x)}\approx\rho(1-\psi(x)),\\ &\tilde{x}^{\sigma}=x^{\sigma}+a^{\sigma}.\end{aligned}\right. \tag{113}\] Under \(\rho\rightarrow\tilde{\rho}\) and \(x^{\sigma}\rightarrow\tilde{x}^{\sigma}\), the slice metric transforms as \[g_{\omega\sigma}(x,\rho)\to g_{\omega\sigma}(x,\rho)+\psi_{\omega\sigma}(x,\rho), \tag{114}\] where \(\psi_{\omega\sigma}(x,\rho)\) is given by \[\psi_{\omega\sigma}=\psi(1-\rho\partial_{\rho})g_{\omega\sigma}+2\nabla^{g}_{(\omega}a_{\sigma)}. \tag{115}\] The covariant derivative \(\nabla^{g}\) is the one compatible with \(g_{\omega\sigma}(x,\rho)\). The bulk diffeomorphism \(\psi_{\omega\sigma}(x,\rho)\) admits the Fefferman-Graham expansion \[\psi_{\omega\sigma}(x,\rho)=\psi g^{(0)}_{\omega\sigma}+\rho\psi^{(2)}_{\omega\sigma}+\rho^{2}\psi^{(4)}_{\omega\sigma}+\rho^{2}\log\rho\hat{\psi}_{\omega\sigma}+\mathcal{O}(\rho^{3}), \tag{100}\] where we have already used the leading order \(\mathcal{O}(\rho^{0})\) of (100), which gives \(\psi g^{(0)}_{\omega\sigma}\). One can already observe that the function \(\psi\) appearing here is indeed the same scalar variation as above: requiring that (100) preserves the tensorial structure of the Fefferman-Graham metric (such that cross-terms \(dx^{\omega}d\rho\) vanish), one can relate the near-boundary expansion of \(a^{\sigma}\) to that of \(g_{\omega\sigma}\) [86]. The first two terms are sufficient for our purposes; they read \[a^{\sigma}(x,\rho)=\rho\frac{L^{2}}{4}\partial^{\sigma}\psi(x)-\rho^{2}\frac{L^{2}}{8}g^{(2)\omega\sigma}\partial_{\omega}\psi(x)+\mathcal{O}(\rho^{3}), \tag{101}\] where indices are now raised and lowered using the boundary metric \(g^{(0)}_{\omega\sigma}\). Inserting (101) into (100) then leads to the result \[\psi_{\omega\sigma}=\psi g^{(0)}_{\omega\sigma} \tag{102}\] \[\psi^{(2)}_{\omega\sigma}=\frac{L^{2}}{2}\nabla_{\omega}\partial_{\sigma}\psi \tag{103}\] \[\psi^{(4)}_{\omega\sigma}=-\psi(g^{(4)}_{\omega\sigma}+\hat{g}_{\omega\sigma})+\frac{L^{2}}{4}\left[\partial^{\kappa}\psi\nabla_{\kappa}g^{(2)}_{\omega\sigma}-\nabla_{(\omega}\left\{g^{(2)}_{\sigma)\kappa}\partial^{\kappa}\psi\right\}+2g^{(2)}_{\kappa(\omega}\nabla_{\sigma)}\partial^{\kappa}\psi\right] \tag{104}\] \[\hat{\psi}_{\omega\sigma}=-\psi\hat{g}_{\omega\sigma} \tag{105}\] One can check the above formulae for \(g^{(0)}_{\omega\sigma}\), \(g^{(2)}_{\omega\sigma}\) and \(\hat{g}_{\omega\sigma}\) by starting with the solutions (102) and (103) and linearizing them in \(\psi\). However, the variation (104) can only be obtained from the bulk diffeomorphism (101).
Substituting all of these variation formulae into the CFT stress tensor, we find that it transforms under (100) as \[\left\langle T_{\omega\sigma}\right\rangle\left[(1+\psi)g^{(0)}\right]-\left\langle T_{\omega\sigma}\right\rangle\left[g^{(0)}\right]=-\psi\left\langle T_{\omega\sigma}\right\rangle-\frac{N^{2}}{4\pi^{2}L^{4}}\left\{2\psi\hat{g}_{\omega\sigma}-\frac{L^{2}}{2}\left[\partial^{\kappa}\psi\nabla_{\kappa}g^{(2)}_{\omega\sigma}-\nabla_{(\omega}\left\{g^{(2)}_{\sigma)\kappa}\partial^{\kappa}\psi\right\}+2g^{(2)}_{\kappa(\omega}\nabla_{\sigma)}\partial^{\kappa}\psi\right]+2(g^{(2)}\psi^{(2)})_{\omega\sigma}-\frac{1}{2}g^{(0)}_{\omega\sigma}\mathrm{Tr}\left[g^{(2)}\psi^{(2)}\right]+\frac{1}{2}g^{(0)}_{\omega\sigma}\mathrm{Tr}\left[g^{(2)}\right]\mathrm{Tr}\left[\psi^{(2)}\right]-\frac{1}{2}\left(\psi^{(2)}_{\omega\sigma}\mathrm{Tr}\left[g^{(2)}\right]+g^{(2)}_{\omega\sigma}\mathrm{Tr}\left[\psi^{(2)}\right]\right)\right\}. \tag{106}\] We now restrict to a variation around an (A)dS background with constant \(\bar{R}\). The background values of \(g^{(2)}_{\omega\sigma}\) and \(g^{(4)}_{\omega\sigma}\) are then given in table (53). The stress-tensor variation (106) then evaluates to the much simpler expression \[\left\langle T_{\omega\sigma}\right\rangle[(1+\psi)\bar{\zeta}]-\left\langle T_{\omega\sigma}\right\rangle[\bar{\zeta}]=-\psi\left\langle T_{\omega\sigma}\right\rangle-\frac{N^{2}\bar{R}}{192\pi^{2}}(\nabla_{\omega}\partial_{\sigma}-\bar{\zeta}_{\omega\sigma}\Box)\psi, \tag{107}\] where now all geometrical quantities such as \(\bar{R}\) and \(\nabla_{\omega}\) are built with the background metric \(\bar{\zeta}_{\omega\sigma}\). The first term corresponds to the classical scaling law for the stress tensor under a Weyl transformation, while the second term of (110) is a correction coming from the conformal anomaly of the quantum vacuum expectation value \(\langle T_{\omega\sigma}\rangle\). We now have all the ingredients to linearize the Einstein equation (6) under a conformal transformation \(\psi\). One can use the conformal variation of the Ricci tensor \[R_{\omega\sigma}[(1+\psi)\bar{\zeta}]-R_{\omega\sigma}[\bar{\zeta}]=-\frac{1}{2}\left(\bar{\zeta}_{\omega\sigma}\Box+2\nabla_{\omega}\partial_{\sigma}\right)\psi. \tag{112}\] Using (110) and linearizing all other terms with (112), we obtain the following equation \[\left\{\bar{\zeta}_{\omega\sigma}\Box-\nabla_{\omega}\partial_{\sigma}+\Lambda\bar{\zeta}_{\omega\sigma}+\frac{G\alpha}{4}\left[\left(\frac{\bar{R}}{4}+\Box\right)\bar{\zeta}_{\omega\sigma}-\nabla_{\omega}\partial_{\sigma}\right]\Box-\frac{GN^{2}\bar{R}}{24\pi}\left[\frac{\bar{R}}{8}\bar{\zeta}_{\omega\sigma}+\left(\bar{\zeta}_{\omega\sigma}\Box-\nabla_{\omega}\partial_{\sigma}\right)\right]\right\}\psi=0. \tag{113}\] Inserting the value of \(\Lambda\) (57) allows us to write (113) in a simple, factorized form given by \[\left\{\left(\Box+\frac{\bar{R}}{4}\right)\bar{\zeta}_{\omega\sigma}-\nabla_{\omega}\partial_{\sigma}\right\}\left[1+\frac{G\alpha}{4}\Box-\frac{GN^{2}\bar{R}}{24\pi}\right]\psi=0. \tag{114}\] Taking the trace of (114) gives back equation (10). The factor in curly braces is responsible for the solution we are now trying to discard. The square brackets in (114) are absorbed into an auxiliary field \(\Psi\) defined by \[\Psi\equiv\left[1+\frac{G\alpha}{4}\Box-\frac{GN^{2}\bar{R}}{24\pi}\right]\psi.
\tag{115}\] It is then possible to verify that the most general solution for \(\Psi\) in (114) is a pure gauge transformation. The idea is to find the infinitesimal conformal transformation that \(\Psi\) corresponds to. The proof of this statement and the corresponding gauge transformation are found in the next subsection. Therefore, \(\Psi\) can be consistently set to zero. As a consequence, the equation of motion for the physical \(\psi\) is \[\left[1+\frac{G\alpha}{4}\Box-\frac{GN^{2}\bar{R}}{24\pi}\right]\psi=0 \tag{116}\] ### Pure gauge scalar This subsection is devoted to showing that the only physical solution for \(\psi\) is the solution to the equation \(\Psi=0\), with \(\Psi\) defined in (115). First, we observe that imposing flat space (\(R=0\)) in (114) reduces to the case studied in [7]. More precisely, our equation (114) for \(R=0\) is their equation (8) for \(K_{2}=\frac{\alpha}{192\pi}\). We are going to follow their arguments and generalize them to spacetimes with non-zero curvature. In flat space, equation (106) reduces to \[(\Box\eta_{\omega\sigma}-\partial_{\omega}\partial_{\sigma})\Psi=0, \tag{107}\] where \[\Psi=\psi+\frac{G\alpha}{4}\Box\psi. \tag{108}\] We now solve (107). Taking its trace, we find \[\Box\Psi=0. \tag{109}\] Inserting this result back into (107), we find \[\partial_{\omega}\partial_{\sigma}\Psi=0. \tag{110}\] The most general solution is then \[\Psi=x^{\sigma}b_{\sigma}+a, \tag{111}\] where \(b_{\sigma}\) and \(a\) are constants. The most general solution of (108) is \[\psi=\Psi+\varphi, \tag{112}\] where \(\varphi\) is the homogeneous solution, i.e., the solution of the Klein-Gordon equation \[\varphi+\frac{G\alpha}{4}\Box\varphi=0. \tag{113}\] We now search for a conformal Killing vector \(\xi_{\sigma}\) such that \[\Psi\eta_{\omega\sigma}=\partial_{(\omega}\xi_{\sigma)}. \tag{114}\] Inserting (111) on the left-hand side and the ansatz \[\xi_{\sigma}=A_{\sigma}+Ax_{\sigma}+(x^{\omega}B_{\omega})x_{\sigma}+x^{2}\tilde{B}_{\sigma}, \tag{115}\] we find the following constraints: \[\left\{\begin{array}{rcl}A&=&a\\ B_{\omega}&=&b_{\omega}\\ \tilde{B}_{\omega}&=&-\frac{1}{2}b_{\omega}.\end{array}\right. \tag{116}\] Therefore, any vector of the form \[\xi_{\sigma}=A_{\sigma}+ax_{\sigma}+(b.x)x_{\sigma}-\frac{1}{2}x^{2}b_{\sigma}, \tag{117}\] can absorb \(\Psi\) into a gauge transformation, so \(\Psi\) is pure gauge. Its presence is due to the fact that we had fixed the gauge in (103). Such a gauge artefact would not be present if we were solving equations of motion of gauge-invariant quantities. Indeed, the gauge-invariant tensor perturbation \(h^{(0)}\) (104) does not contain such problematic solutions. The particular solution \(\Psi\) in (112) is then removed from our analysis, and we are left to study the homogeneous solution \(\varphi\), which cannot be removed by a gauge transformation [7]. In de Sitter, \(\Psi\) is a solution to the equation \[\left\{\left(\Box+\frac{\bar{R}}{4}\right)\bar{\zeta}_{\omega\sigma}-\nabla_{\omega}\partial_{\sigma}\right\}\Psi=0 \tag{102}\] By tracing this equation and inserting back the expression for \(\Box\Psi\) in it, we obtain \[\nabla_{\omega}\partial_{\sigma}\Psi+\frac{\bar{R}}{12}\bar{\zeta}_{\omega\sigma}\Psi=0. \tag{103}\] This equation is a little more complicated than its flat space analogue (101), due to the presence of a second term and the covariant derivative \(\nabla_{\omega}\).
It can be written using the Poincare coordinates (100), in which it takes the form \[\partial_{\omega}\partial_{\sigma}\Psi+\tau^{-2}\eta_{\omega\sigma}\Psi+\tau^{-1}(2\partial_{(\omega}\delta^{0}_{\sigma)}+\eta_{\omega\sigma}\partial_{0})\Psi=0. \tag{104}\] First, the \(\omega=0,\sigma=i\) component of this equation is \[(\partial_{0}+\tau^{-1})\partial_{i}\Psi=0, \tag{105}\] for which the most general solution is \[\Psi=f(\tau)+\tau^{-1}A(x_{i}). \tag{106}\] Inserting this result in the \(\omega=\sigma=0\) component of (104) gives the equation \[f^{\prime\prime}-\tau^{-2}f+\tau^{-1}f^{\prime}=0, \tag{107}\] which is solved by \(f=\frac{1}{\tau}\). Since this solution was already included in the second term of (106), we can just write \(\Psi\) as \[\Psi=\frac{A(x_{i})}{\tau}. \tag{108}\] Inserting this result in the \(\omega=i,\sigma=j\) component of (104), we obtain \[\partial_{i}\partial_{j}A=0, \tag{109}\] which is similar to the flat space equation (101) and is likewise solved by \(A=a+b^{i}x_{i}\). Therefore, the most general solution of (104) is \[\Psi=\tau^{-1}(a+b^{i}x_{i}). \tag{110}\] We now search for the existence of a vector field such that \[\nabla_{(\omega}\xi_{\sigma)}=\Psi\bar{\zeta}_{\omega\sigma}, \tag{111}\] which would then prove that \(\Psi\) is pure gauge in curved space as well. We start with the ansatz \[\left\{\begin{array}{l}\xi_{0}=\tau^{-2}(A+B_{i}x_{i}),\\ \xi_{i}=\tau^{-3}(C_{i}(\tau)+Cx_{i}+(D.x)x_{i}+x^{2}E_{i}).\end{array}\right. \tag{100}\] The covariant derivative \(\nabla_{0}\xi_{0}\) is then written (in the Poincare basis) as \[\nabla_{0}\xi_{0}=-\tau^{-3}(A+B_{i}x_{i}). \tag{101}\] Identifying it with (100), we obtain \[A=a, \tag{102}\] \[B_{i}=b_{i}. \tag{103}\] The spatial component of the left-hand side of (100) is \[\nabla_{(i}\xi_{j)}=\tau^{-3}((C+a)\delta_{ij}+D_{i}+\delta_{ij}((D+b).x)+2x_{(i}E_{j)}). \tag{104}\] Identifying it with the right-hand side then gives the constraints \[C=0=D_{i}=E_{i}. \tag{105}\] Therefore, we are left with \[\left\{\begin{array}{l}\xi_{0}=\tau^{-2}(a+b_{i}x_{i}),\\ \xi_{i}=\tau^{-3}C_{i}(\tau).\end{array}\right. \tag{106}\] The last equation which may constrain \(C_{i}\) is the \(\omega=0,\sigma=i\) component of (100). The cross-components of the covariant derivative read \[\nabla_{i}\xi_{0}=\tau^{-2}b_{i}+\tau^{-1}\xi_{i} \tag{107}\] \[\nabla_{0}\xi_{i}=\frac{1}{2}(b_{i}\tau^{-2}-\tau^{-1}\xi_{i}). \tag{108}\] Therefore, inserting them into (100), we obtain \[C_{i}(\tau)=\tau^{2}b_{i}. \tag{109}\] Finally, the infinitesimal vector which solves (100) from the ansatz (100) is \[\left\{\begin{array}{l}\xi_{0}=\tau^{-2}(a+b_{i}x_{i}),\\ \xi_{i}=\tau^{-1}b_{i}.\end{array}\right. \tag{110}\] Therefore, we arrive at the same conclusion as in flat space, that \(\Psi\) is a pure gauge quantity. It can thus be excluded from the analysis, because it can be removed from the dynamical scalar by performing a gauge transformation. It is possible to extend this analysis to the Poincare patch of AdS (100). By a computation similar to the dS one, with \(\tau^{2}\) replaced by \(-z^{2}\), the conclusion that \(\Psi\) is a pure gauge quantity holds for AdS. ## Appendix L Comparison with previous results for a dS boundary In this appendix, we map our parameters \((\tilde{\alpha},\tilde{\beta}_{\rm eff},GN^{2}R)\) onto those of previous papers which have used a similar setup, and compare some of our results to theirs. The first study of de Sitter stability with a non-perturbative CFT obtained holographically was done in [52].
This paper corresponds to the particular case where the (renormalized) cosmological constant is zero, so that the only contribution to the background curvature comes from the CFT. We also compare our results to the more recent paper [68], which also studies the stability of de Sitter with a holographic CFT. In their study, the \(R^{2}\) coefficient \(\alpha\) is set to zero. In our paper, stability conditions rely on the poles of the propagator of metric perturbations and on the residues of these poles. The variable of these propagators is the eigenvalue \(\nu^{2}\) of the Laplacian operator. Thus, we must compare our definition of \(\nu\) (and \(k\) for flat space) with those of previous papers. First, the definition of [68] for \(\nu\) is the same as ours (5.26). In addition, the radial part of the bulk equation of motion (5.27) coincides with eq. (49) of [68]. Since one must choose the sign of \({\rm Re}(\nu)\) (see discussion below (5.29)), their solution corresponds to taking \({\rm Re}(\nu)<0\), which is equivalent to \(C_{+}=0\) in our case. As discussed already in the paper, negative real parts of \(\nu\) are obtained by taking \(\nu\to-\nu\) in (5.37). To compare with the second paper [52], we relate \(\nu\) to their eigenvalue labeled by \(p\) as \[H^{-2}\nabla^{2}h^{(0)}_{\omega\sigma}=(2-p(p+3))h^{(0)}_{\omega\sigma}.\] (L.1) Comparing this equation with (5.26) leads to \[p=\pm\nu-\frac{3}{2}.\] (L.2) As with the choice of a positive or negative \({\rm Re}(\nu)\), the normalizability of the bulk solution depends on the choice of \(p\) in (L.2). In [52], they choose \({\rm Re}(p)>-3/2\). Therefore, one needs to use the replacement \(p=-\nu-3/2\) to retrieve the results of [68], which has negative real parts for \(\nu\), and \(p=\nu-3/2\) to compare with our results, for which \(\nu\) has positive real parts. We now compare the spin-2 equations of motion in these different papers. In [68], the equation of motion is given by \[\left(\nu^{2}-\frac{9}{4}\right)Q_{\rm C}(\nu)=0,\] (L.3) where \[Q_{C}(\nu)\equiv 256+\frac{8GN^{2}H^{2}}{\pi-GN^{2}H^{2}}\left\{16+(1-4\nu^{2})\left[3-4\mathcal{H}\left(-\frac{1}{2}-\nu\right)+4\log\left(\frac{2E}{H}\right)\right]\right\},\] (L.4) and \(E\) must be related to our parameter \(\beta\). On the other hand, the inverse propagator \(F_{H}(p)\) in eq. (3.82) of [52] is given by \[F_{H}(p,\beta)=\Psi(p)+\frac{4\pi R^{2}}{GN^{2}}(p^{2}+3p+6)+2\beta_{H}p(p+1)(p+2)(p+3)-4\alpha_{H}p(p+3),\] (L.5) \[\Psi(p)=p(p+1)(p+2)(p+3)[\psi(p/2+5/2)+\psi(p/2+2)-\psi(2)-\psi(1)]+p^{4}+2p^{3}-5p^{2}-10p-6\;,\] (L.6) where \(\alpha_{H}\) and \(\beta_{H}\) must be related to the parameters \(\alpha\) and \(\beta\) from our setup (2.4), (2.5). Using (L.2), we obtain an algebraic relation between \(Q_{C}(\nu)\) and \(F_{H}(p)\) given by \[F_{H}(p)=\frac{\pi-GN^{2}H^{2}}{64GN^{2}H^{2}}Q_{C}(\nu),\] (L.7) if the parameters of [52] and [68] are related by \[\alpha_{H}=0,\] (L.8) \[GN^{2}H^{2}=4\pi,\] (L.9) \[\log\left(\frac{H}{E}\right)=\beta_{H}+\frac{3}{4}.\] (L.10) First, (L.8) is due to the absence of an \(R^{2}\) term in the boundary action of [68]. This additional term is proportional to \(\alpha_{H}\) in [52]. Second, (L.9) is explained by the absence of a cosmological constant in the Einstein-Hilbert action of [52], which fixes the dimensionless curvature \(GN^{2}H^{2}\) to \(4\pi\), since the CFT is the only contribution to the background curvature. This is equivalent to setting \(\Lambda=0\) in (2.57), leading to (L.9).
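The relation (L.2) derived above can be checked symbolically: equating the eigenvalue of (L.1) with the one implied by (5.26) forces \(p=\pm\nu-3/2\). A minimal sympy check, where the only extra input is the de Sitter curvature value \(\bar{R}=12H^{2}\), so that (5.26) gives the eigenvalue \(2-(\nu^{2}-9/4)\) in units of \(H^{2}\):

```python
import sympy as sp

p, nu = sp.symbols('p nu')

# Eigenvalue convention of [52], eq. (L.1): H^{-2} nabla^2 h = (2 - p(p+3)) h
eig_52 = 2 - p*(p + 3)

# Eigenvalue from (nabla^2 - R/6) theta = -(R/12)(nu^2 - 9/4) theta,
# with the de Sitter value R = 12 H^2, written in units of H^2
eig_here = 2 - (nu**2 - sp.Rational(9, 4))

# Equating the two reproduces p = +/- nu - 3/2, i.e. eq. (L.2)
print(sp.solve(sp.Eq(eig_52, eig_here), p))  # [-nu - 3/2, nu - 3/2] (up to ordering)
```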
Before comparing our results with these two previous papers, recall that in our case the curvature is not fixed (as in [68]) and the coefficient of the \(R^{2}\) term is also arbitrary (as in [52]). We now compare our inverse propagator \(\mathcal{F}_{\text{dS}}(\nu)\) (5.39) with \(F_{H}(p)\) (L.7), which obey the algebraic relation \[-2\mathcal{F}_{\text{dS}}(\nu)=F_{H}(p),\] (L.11) valid under the following conditions \[\tilde{\alpha}=\alpha_{H},\] (L.12) \[GN^{2}H^{2}=4\pi,\] (L.13) \[\tilde{\beta}_{\text{eff}}=\frac{1}{2}+2\beta_{H}-\log(16\pi).\] (L.14) Our results agree with [52] if we set the curvature as in (L.13). Their analysis is similar to the one done in the discussion of Figure 7, which corresponds to the special case where the massless spin-2 pole is a ghost. Our results also agree with [68] for \(\alpha=0\). For instance, their final result amounts to (8.19) for \(\alpha=0\), which gives the value of \(\beta\) from which the spin-2 sector is non-tachyonic. However, they do not discuss the presence of ghosts or whether their tachyon is above or below the species cutoff. ## Appendix M Further plots In this appendix, we provide snapshots of the poles of the tensor propagator in the \(\nu\in\mathbb{C}\) plane, for a wide range of parameters \((\tilde{\alpha},\tilde{\beta}_{\text{eff}},GN^{2}R)\). ### More snapshots for de Sitter In the main text, we have given two examples of parameters \((\tilde{\alpha},GN^{2}H^{2})\) for which we numerically find the poles of the tensor propagator in de Sitter space-time: in Figure 4 (small curvature, zero \(\tilde{\alpha}\)) and in Figure 7 (generic curvature, negative \(\tilde{\alpha}\)). These two examples give different behaviours when \(\tilde{\beta}_{\text{eff}}\) is varied. One may ask whether these two examples are paradigmatic or whether another choice of \((\tilde{\alpha},GN^{2}H^{2})\) would lead to different results. In the following table, we give the links to snapshots for 9 different regimes. \begin{tabular}{|c|c|c|c|} \hline \(\tilde{\alpha}\backslash GN^{2}H^{2}\) & \(\ll 1\) & \(\sim\pi\) & \(\gg 1\) \\ \hline \(-\tilde{\alpha}\gg 1\) & Fig. 30 & Fig. 31 & Fig. 32 \\ \hline \(|\tilde{\alpha}|\leq 1\) & Fig. 4 & Fig. 37, Fig. 38 & Fig. 39, Fig. 40 \\ \hline \(\tilde{\alpha}\gg 1\) & Fig. 36, Fig. 35 & Fig. 34 & Fig. 33 \\ \hline \end{tabular} \(a\) (8.7) is either positive, if "Fig." is written in green, or negative, if it is written in red. Blue boxes correspond to regimes where the sign of \(a\) must be determined by the inequality (8.12): \[a<0\iff\frac{\pi}{GN^{2}H^{2}}<\tilde{\alpha}+\frac{1}{2}\] (M.1) The conclusion is that a given point in the \((\tilde{\alpha},GN^{2}H^{2})\) plane corresponds either to the behaviour of * **Type A**: Figure 4, where two tachyons merge on the real axis and form a pair of complex conjugate poles when \(\tilde{\beta}_{\text{eff}}\) is increased. * **Type B**: Figure 7, where all poles have real-valued \(\nu^{2}\) for any value of \(\tilde{\beta}_{\text{eff}}\) (no complex poles). In general, for a given point in the \((\tilde{\alpha},GN^{2}H^{2})\) plane, a positive \(a\) gives type A and a negative \(a\) gives type B, as predicted by the large-\(|\nu|\) approximation (8.5). However, this is not necessarily true for large \(GN^{2}H^{2}\) and \(a\) close to zero, where the large-\(|\nu|\) analysis breaks down.
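To make the type A / type B classification concrete, the snapshots below amount to locating the zeros of \(Q_{\text{dS}}(\nu)\) in the complex plane. A minimal numerical sketch with mpmath, using the generalized harmonic number \(\mathcal{H}(x)=\psi(x+1)+\gamma_{E}\) so that complex \(\nu\) is handled; the parameter values and starting guesses are illustrative choices, not values taken from a specific figure:

```python
import mpmath as mp

# Illustrative parameters (tilde alpha, tilde beta_eff, G N^2 H^2)
alpha_t, beta_eff, gnh2 = 0.0, 2.0, 0.01

def Q_dS(nu):
    """dS inverse-propagator factor, as transcribed from appendix J."""
    harm = mp.digamma(nu + mp.mpf(1)/2) + mp.euler   # H(nu - 1/2)
    return (1 - 2*mp.pi/gnh2 + 2*alpha_t
            - (nu**2 - mp.mpf(1)/4)/2
            * (mp.log(gnh2) - mp.mpf(1)/2 + 2*harm + beta_eff))

# Hunt for zeros from a few complex starting points
roots = set()
for guess in (1.5, 3.0, 5.0, 2 + 2j, 4 + 1j):
    try:
        z = complex(mp.findroot(Q_dS, mp.mpc(guess)))
        roots.add(complex(round(z.real, 6), round(z.imag, 6)))
    except (ValueError, ZeroDivisionError):
        pass  # no convergence from this starting point

for r in sorted(roots, key=abs):
    print(r)  # real roots signal tachyons; complex pairs signal type A behaviour
```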
Figure 33: _dS, \(\tilde{\alpha}=1000\), \(GN^{2}H^{2}=1000\)._ Figure 34: _dS, \(\tilde{\alpha}=1000\), \(GN^{2}H^{2}=2\pi\)._ Figure 35: _dS, \(\tilde{\alpha}=1000\), \(GN^{2}H^{2}=0.01\)._ Figure 37: _dS, \(\tilde{\alpha}=-1\), \(GN^{2}H^{2}=2\pi\)._ Figure 38: _dS, \(\tilde{\alpha}=0\), \(GN^{2}H^{2}=4\pi\)._ Figure 40: _dS, \(\tilde{\alpha}=0\), \(GN^{2}H^{2}=1000\)._ ### More snapshots for AdS As we did for dS in the previous subsection, we provide snapshots for some examples of parameters \((\tilde{\alpha},GN^{2}\chi^{2})\) while varying \(\tilde{\beta}_{\text{eff}}\). They are summarized in the following table. \begin{tabular}{|c|c|c|c|} \hline \(\tilde{\alpha}\backslash GN^{2}\chi^{2}\) & \(\ll 1\) & \(\sim\pi\) & \(\gg 1\) \\ \hline \(-\tilde{\alpha}\gg 1\) & Fig. 47, Fig. 46 & Fig. 45 & Fig. 44 \\ \hline \(|\tilde{\alpha}|\leq 2\) & Fig. 19 & Fig. 48, Fig. 49 & Fig. 50, Fig. 51 \\ \hline \(\tilde{\alpha}\gg 1\) & Fig. 41 & Fig. 42 & Fig. 44 \\ \hline \end{tabular} \(a\) (9.6) is either positive, if "Fig." is written in green, or negative, if it is written in red. Boxed entries correspond to regimes where the sign of \(a\) must be determined by the inequality (9.8): \[a<0\iff\frac{\pi}{GN^{2}\chi^{2}}<-\left(\tilde{\alpha}+\frac{1}{2}\right)\] (M.2) The conclusion is that a given point in the \((\tilde{\alpha},GN^{2}\chi^{2})\) plane corresponds either to the behaviour of * **Type A**: Figure 19, where two tachyons merge on the imaginary axis and form a pair of complex conjugate poles when \(\tilde{\beta}_{\text{eff}}\) is increased. * **Type B**: Figure 21, where all poles have real-valued \(\nu^{2}\) for any value of \(\tilde{\beta}_{\text{eff}}\) (no complex poles). Figure 44: _AdS, \(\tilde{\alpha}=-1000\), \(GN^{2}\chi^{2}=1000\)._ Figure 45: _AdS, \(\tilde{\alpha}=-1000\), \(GN^{2}\chi^{2}=2\pi\)._ Figure 46: _AdS, \(\tilde{\alpha}=-1000\), \(GN^{2}\chi^{2}=0.01\)._ Figure 47: _AdS, \(\tilde{\alpha}=-100\), \(GN^{2}\chi^{2}=0.001\)._ Figure 48: _AdS, \(\tilde{\alpha}=0\), \(GN^{2}\chi^{2}=2\pi\)._ Figure 49: _AdS, \(\tilde{\alpha}=-2\), \(GN^{2}\chi^{2}=2\pi\)._ Figure 50: _AdS, \(\tilde{\alpha}=0\), \(GN^{2}\chi^{2}=1000\)._ Figure 51: _AdS, \(\tilde{\alpha}=-1\), \(GN^{2}\chi^{2}=1000\)._
2305.15179
GAN-AE : An anomaly detection algorithm for New Physics search in LHC data
In recent years, interest has grown in alternative strategies for the search for New Physics beyond the Standard Model. One envisaged solution lies in the development of anomaly detection algorithms based on unsupervised machine learning techniques. In this paper, we propose a new Generative Adversarial Network-based auto-encoder model that allows both anomaly detection and model-independent background modeling. This algorithm can be integrated with other model-independent tools in a complete heavy resonance search strategy. The proposed strategy has been tested on the LHC Olympics 2020 dataset with promising results.
Louis Vaslin, Vincent Barra, Julien Donini
2023-05-24T14:13:37Z
http://arxiv.org/abs/2305.15179v2
# GAN-AE : An anomaly detection algorithm for New Physics search in LHC data ###### Abstract In recent years, interest has grown in alternative strategies for the search for New Physics beyond the Standard Model. One envisaged solution lies in the development of anomaly detection algorithms based on unsupervised machine learning techniques. In this paper, we propose a new Generative Adversarial Network-based auto-encoder model that allows both anomaly detection and model-independent background modeling. This algorithm can be integrated with other model-independent tools in a complete heavy resonance search strategy. The proposed strategy has been tested on the LHC Olympics 2020 dataset with promising results. ## Introduction The search for New Physics beyond the Standard Model is one of the main goals of high-energy physics. A fairly common strategy is to search for a localized deviation in an invariant mass spectrum that could correspond to a new heavy particle. This kind of search usually depends on accurate simulations of the Standard Model processes and also on several signal hypotheses. However, simulating data from experiments such as ATLAS [1] is computationally intensive and is limited by modelling uncertainties. Also, assuming a signal model without knowing what lies beyond the Standard Model can be a source of bias that reduces the generalizability of an analysis. To overcome these limitations, much effort has been put into defining generic search strategies that do not rely on specific theoretical models of New Physics. One possible solution is to use algorithms that don't need a specific signal model to train on, but still detect events that differ from the Standard Model predictions. Such unsupervised anomaly detection algorithms [2] can potentially identify anomalous events by evaluating an anomaly score, so that in the search for New Physics processes, signal events can be seen as an anomaly with respect to the Standard Model. A well-known class of anomaly detection algorithms using unsupervised machine learning is the auto-encoder (AE) and its derivatives [3, 4]. Such models can be trained directly on data with the only assumption that signal events are very rare. In the following sections, we present a GAN-AE algorithm inspired by AEs and generative models that allows for both anomaly detection and data-driven background modeling. This model is tested on the LHC Olympics 2020 challenge dataset [5] as a benchmark. For this search a complete strategy including the model independent BumpHunter algorithm [6] has been defined. The code used to build and train the GAN-AE algorithm on this dataset is accessible online1. Footnote 1: [https://github.com/lovaslin/GAN-AE](https://github.com/lovaslin/GAN-AE) ## 1 The GAN-AE algorithm The GAN-AE algorithm proposes to combine a vanilla auto-encoder together with a discriminator network in an adversarial manner similar to that of a Generative Adversarial Network (GAN) [7]. Other algorithms propose similar models, such as Outliers Exposure [8] and Self-Adversarial AE [9]. In these works, the goal is either to constrain the latent space of an AE or to improve the sensitivity to anomalies in a semi-supervised setting. With the GAN-AE algorithm, the objective is to construct an alternative measure of reconstruction error using a multilayer perceptron network trained to distinguish reconstructed and original events. Figure 1 shows a synoptic view of the GAN-AE architecture. 
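As a structural illustration of Figure 1, a minimal PyTorch sketch of the two networks is given below. The layer sizes are taken from Table 2 in Section 3 (a 42-dimensional input, matching the variable set used there), and the tied decoder weights anticipate equation (4) below; this is a sketch under those assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoEncoder(nn.Module):
    """Five-layer AE: encoder 42 -> 84 -> 14, decoder mirrors the encoder
    with transposed (tied) weights; hidden layers use LeakyReLU, the
    latent space and the output are linear."""
    def __init__(self, n_in=42, n_hid=84, n_lat=14, dropout=0.2):
        super().__init__()
        self.enc1 = nn.Linear(n_in, n_hid)
        self.enc2 = nn.Linear(n_hid, n_lat)
        # Only decoder biases are free parameters; weights are tied.
        self.b_dec1 = nn.Parameter(torch.zeros(n_hid))
        self.b_dec2 = nn.Parameter(torch.zeros(n_in))
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        h = self.drop(F.leaky_relu(self.enc1(x)))
        z = self.enc2(h)  # linear latent space
        h = self.drop(F.leaky_relu(F.linear(z, self.enc2.weight.t(), self.b_dec1)))
        return F.linear(h, self.enc1.weight.t(), self.b_dec2)

class Discriminator(nn.Module):
    """Fully connected MLP with 4 LeakyReLU hidden layers and sigmoid output."""
    def __init__(self, n_in=42, hidden=(300, 200, 100, 50), dropout=0.2):
        super().__init__()
        layers, d = [], n_in
        for h in hidden:
            layers += [nn.Linear(d, h), nn.LeakyReLU(), nn.Dropout(dropout)]
            d = h
        layers += [nn.Linear(d, 1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)
```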
Traditionally, auto-encoders are trained using a possibly regularized measure of the (Euclidean) distance between their input and output. A well-known metric for this task is the Mean Square Error (MSE). In this work, we propose an alternative metric based on a supervised discriminator network trained to classify reconstructed events (labeled 0) and original events (labeled 1). This binary classifier (bc) model is trained with the usual binary cross-entropy loss function: \[\mathrm{bc}\left(y^{(d)},y^{(l)}\right)=-\left[y^{(l)}\log\left(y^{(d)}\right)+\left(1-y^{(l)}\right)\log\left(1-y^{(d)}\right)\right]\,, \tag{1}\] where \(y^{(d)}\) is the output of the discriminator and \(y^{(l)}\) the associated label. In order to train this two-party GAN-AE network, we define a training procedure divided into two main phases. The first step is to train the discriminator network parameters \(\mathbf{\theta_{D}}\) with a mixture of original data and events reconstructed by the AE. The parameters \(\mathbf{\theta_{D}}\) are then updated for a few epochs while keeping the parameters \(\mathbf{\theta_{AE}}\) of the AE fixed. The second step is to train the auto-encoder parameters \(\mathbf{\theta_{AE}}\) using the discriminator output as a constraint. This training is done with a special loss function that combines both the usual distance metric and the information coming from the discriminator. The distance metric used is a modified Euclidean distance defined as: \[\mathrm{D}\left(y^{(o)},y^{(r)}\right)=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}^{(o)}-y_{i}^{(r)}\right)^{2}}\,, \tag{2}\] with \(y^{(o)}\) the input vector (original event), \(y^{(r)}\) the output vector (reconstructed event) and \(N\) the dimension of both vectors. The constraint of the discriminator is introduced by modifying the binary cross-entropy loss function defined in equation 1. In fact, while the goal of the discriminator is to correctly identify reconstructed events associated with the label '0', the goal of the AE is, on the contrary, to confuse the discriminator network. Thus, the AE must be trained so that the output of the discriminator comes closer to the label '1' corresponding to (real) original events. This can be achieved by computing the binary cross-entropy loss of the discriminator using reconstructed events associated with the label of the original events as the target. The two metrics are then combined to define the loss for a given event \(k\) as follows: \[\mathrm{L}_{k}\left(y^{(o)},y^{(r)},y^{(d)}\right)=\mathrm{bc}\left(y^{(d)},y^{(l)}=1\right)+\varepsilon\mathrm{D}\left(y^{(o)},y^{(r)}\right)\,, \tag{3}\] with \(\varepsilon\) a hyperparameter that balances the relative importance of the two terms. This loss is used to update \(\mathbf{\theta_{AE}}\) for a few epochs. Figure 1: Schematic of the global layout of the GAN-AE architecture. The auto-encoder network (AE) is trained to produce reconstructed events that closely resemble the original events. The discriminator network (D) is trained to discriminate between reconstructed and original events, with labels 0 and 1, respectively. The AE has an architecture composed of five layers: an encoder part, with the input layer, a hidden layer and the latent space, and a decoder part that is exactly symmetric to the encoder. The activation function used for the hidden layers is the LeakyReLU function, while the latent space and output are linear.
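A sketch of one training cycle implementing equations (1)-(3) with the networks above: the epoch counts per cycle and \(\varepsilon\) follow Table 2, while the optimizer choice and batching, which the text does not fix, are left out for brevity.

```python
import torch

bce = torch.nn.BCELoss()

def train_cycle(ae, disc, opt_ae, opt_d, x, eps=6.0, d_epochs=7, ae_epochs=5):
    # Phase 1: update theta_D on a mixture of original (label 1)
    # and reconstructed (label 0) events, with the AE frozen.
    for _ in range(d_epochs):
        with torch.no_grad():
            x_rec = ae(x)
        inputs = torch.cat([x, x_rec])
        labels = torch.cat([torch.ones(len(x)), torch.zeros(len(x))])
        opt_d.zero_grad()
        bce(disc(inputs), labels).backward()   # eq. (1)
        opt_d.step()

    # Phase 2: update theta_AE so that D is pushed towards label 1
    # on reconstructed events, plus the distance term.
    for _ in range(ae_epochs):
        opt_ae.zero_grad()
        x_rec = ae(x)
        dist = torch.sqrt(((x - x_rec) ** 2).mean(dim=1))   # eq. (2)
        fool = bce(disc(x_rec), torch.ones(len(x)))         # bc(y_d, label 1)
        (fool + eps * dist.mean()).backward()               # eq. (3), batch-averaged
        opt_ae.step()
```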
As an additional constraint, we used the tied weight trick discussed in [10] to impose that the weight tensors of the decoder are the transposes of those of the encoder: \[W^{(n-k)}=\left(W^{(k)}\right)^{\mathrm{T}}\,, \tag{4}\] where \(W^{(k)}\) is the weight tensor between layers \(k\) and \(k+1\) of the encoder and \(W^{(n-k)}\) is the weight tensor between layers \(n-k\) and \(n-k-1\) of the decoder. Dropout is applied to each hidden layer. The structure of the discriminator network is defined as a fully connected multilayer perceptron with 4 hidden layers using LeakyReLU activation. The output is one-dimensional with a sigmoid activation function, compatible with the binary cross-entropy loss function. Dropout is applied to the hidden layers of the discriminator. The main hyperparameters of the GAN-AE algorithm are reported in Table 2. In this architecture, the discriminator is used to enhance the training of the auto-encoder. However, in the application step, only the trained AE is actually used. The anomaly score is defined as the modified Euclidean distance (equation 2). Thus, the most anomalous events, here assimilated to the most signal-like events, can be identified as those with the highest anomaly score. The selected anomalous events can then be compared to a reference to test for the presence of an anomaly. The next section describes how to obtain this reference. ## 2 Background modelling and mass sculpting mitigation In order to integrate the GAN-AE algorithm into a complete and fully data-driven search strategy, we propose a method to extract a viable background model directly from the data. This method is based on the hypothesis that the signal we might expect to find in the data is a rare process, such that the data is dominated by the background. In this case, when performing a bump hunt in a relevant spectrum, such as an invariant mass, one would expect the signal to be invisible unless proper selections are made. Thus, the invariant mass spectrum prior to any selection could be taken as a background distribution. However, in order to use this distribution as a reference background, we must ensure that its shape is not affected by the selection based on the anomaly score described in the previous section. Even if the GAN-AE model is trained without using the invariant mass as an input variable, this condition is generally not met, as illustrated in Figure 2. Figure 2: Normalized histograms of the invariant mass. The blue histogram shows the spectrum before applying any selection to the anomaly score. The orange and green histograms show the spectra after selection at the 50th and 85th percentiles of the anomaly score distribution, respectively. The data used to obtain this figure is described in Section 3. To get rid of the mass sculpting induced by the selection process, we propose two mitigation techniques that can be combined. First, an event weight is applied in order to uniformize the invariant mass distribution (see the sketch below). This is done because otherwise events with a low invariant mass would be overrepresented in the data compared to others, leading to a bias in the reconstruction error. Then, to reduce the mass sculpting, the Distance Correlation regularization (DisCo) [11, 12] is added to the loss of the auto-encoder.
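A minimal numpy sketch of the first mitigation step, flattening the invariant mass spectrum with inverse-histogram event weights (the binning and toy spectrum are our illustrative choices); the DisCo term is detailed next.

```python
import numpy as np

def flatten_weights(mjj, n_bins=50):
    """Per-event weights that uniformize the invariant mass distribution:
    events in densely populated m_jj bins are down-weighted so that every
    bin carries the same total weight; weights are normalized to <w> = 1."""
    counts, edges = np.histogram(mjj, bins=n_bins)
    idx = np.clip(np.digitize(mjj, edges) - 1, 0, n_bins - 1)
    w = 1.0 / np.maximum(counts[idx], 1)   # inverse local density
    return w * len(mjj) / w.sum()

# Toy check: the weighted m_jj histogram comes out roughly flat
mjj = np.random.exponential(scale=1.5, size=100_000) + 2.0
w = flatten_weights(mjj)
hist, _ = np.histogram(mjj, bins=50, weights=w)
print(hist.round(1))  # approximately equal bin contents
```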
As it requires independent and identically distributed samples of the distributions to decorrelate, this term is defined for a batch of events. By combining the DisCo regularization term and the event weighting, we can define the modified loss function of the auto-encoder: \[\mathrm{L}\left(y^{(o)},y^{(r)},y^{(d)}\right)=\sum_{i=1}^{N_{b}}w_{i}\left[\mathrm{bc}\left(y_{i}^{(d)},y_{i}^{(l)}=1\right)+\varepsilon\mathrm{D}\left(y_{i}^{(o)},y_{i}^{(r)}\right)\right]\,+\alpha\mathrm{DisCo}\left(\mathrm{D}_{\Sigma},y^{(m)}\right) \tag{5}\] with \(w_{i}\) the weight associated to event \(i\), \(N_{b}\) the number of events in a batch, \(\alpha\) a new hyperparameter of the loss, \(y^{(m)}\) the vector of invariant mass values associated to a batch and \(\mathrm{D}_{\Sigma}\) the vector of anomaly score values associated to a batch. Note that the event weights should not be applied when computing the DisCo regularization: since the goal of this term is to decorrelate the invariant mass and anomaly score distributions, it is important to keep both distributions unchanged. With this new loss function, we can ensure that the invariant mass distribution prior to the selection on the anomaly score is a valid reference model for the background. Now we need to compare this reference with the distribution of selected events in order to look for a localized deviation. For this purpose we use the pyBumpHunter package [13]2, which provides an improved version of the BumpHunter algorithm [6] implemented in Python. This tool has the advantage of locating any deviation in a model-independent way, evaluating both the local and global significance, the latter accounting for the Look Elsewhere Effect [14]. Now we have all the tools needed to build a complete and model-independent strategy for resonant New Physics searches. The next section shows an example of application using a benchmark dataset. Footnote 2: GitHub - scikit-hep/pyBumpHunter at v0.4.0 ## 3 Application to LHC Olympics 2020 data In order to test and evaluate the performance of the techniques developed in the previous section, we use the public dataset proposed for the LHC Olympics 2020 challenge [5]. This dataset provides a good case study for testing and comparing anomaly detection algorithms in the context of model-independent New Physics searches. The strategy that we will use for this challenge is illustrated in Figure 3. The challenge proposes a so-called RnD dataset [15] to assist the development of anomaly detection algorithms. This dataset is composed of a background sample containing QCD multijet events and a benchmark New Physics signal model. The signal events consist of a Z' boson with a mass of 3.5 TeV (inspired by [16]) decaying into two heavy resonances X and Y with masses of 500 GeV and 100 GeV, respectively. Two types of signal signatures are considered, one where both X and Y decay to two quarks and form boosted jets with 2-pronged substructure, and another where both X and Y decay to three quarks, resulting in boosted jets with 3-pronged substructure. A total of 1M events were generated for the background model, along with 100k events for each signal hypothesis. The events are generated using Pythia [17] and Delphes 3.4.1 [18], with no pile-up or multiple parton interactions included, and with a detector architecture similar to the ATLAS experiment. Events are selected using a large radius (R=1) jet trigger with a \(p_{T}\) threshold of 1.2 TeV. Figure 3: Diagram representing the analysis flow applied for the LHC Olympics 2020 challenge.
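Before turning to the datasets in detail, here is a compact sketch of the per-batch DisCo term entering equation (5) above. It implements the standard squared distance correlation of Székely et al. via double-centered pairwise distance matrices; whether the squared or unsquared variant is used in the original analysis is our assumption here.

```python
import torch

def disco(a, b, eps=1e-12):
    """Squared distance correlation between two 1D batches, e.g. the
    anomaly scores D_Sigma and the invariant masses y_m of a batch.
    Differentiable, so it can be added to the auto-encoder loss."""
    A = torch.abs(a[:, None] - a[None, :])   # pairwise distance matrices
    B = torch.abs(b[:, None] - b[None, :])
    A = A - A.mean(0, keepdim=True) - A.mean(1, keepdim=True) + A.mean()
    B = B - B.mean(0, keepdim=True) - B.mean(1, keepdim=True) + B.mean()
    dcov2 = (A * B).mean()                   # squared distance covariance
    return dcov2 / torch.sqrt((A * A).mean() * (B * B).mean() + eps)
```

In the full loss of equation (5), this value would be scaled by the hyperparameter \(\alpha\) (65.0 in Table 2) and added to the weighted sum of the per-event terms.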
The anomaly detection algorithms are tested on three different Black Box datasets [19] containing unknown event samples. The only information given to the challenge participants is that the events contain at least two jets and that the background modelling differs from that of the RnD data. The goal is then to determine if there is a hidden signal in the Black Boxes and at what mass. For each event, a list of up to 700 hadron 4-vectors is provided. Jets are reconstructed using the anti-\(k_{t}\) algorithm implemented in the FastJet 3.3.3 library [20] with a large jet radius \(R=1\). A second clustering is performed within the large jets with a smaller radius \(r=0.2\) in order to characterize their substructure. The list of the variables computed in this preprocessing procedure is presented in Table 1. For a clustering in two large jets, we have a total of 45 variables. The code used to preprocess the data is publicly available3. Footnote 3: [https://gitlab.cern.ch/idinu/clustering-llheo](https://gitlab.cern.ch/idinu/clustering-llheo) ### Results on RnD data In order to evaluate the performance of the GAN-AE algorithm and validate the background modeling procedure, we use the RnD dataset. The results are presented for a clustering in two large jets. The GAN-AE model is trained on 100k background events and tested on a mixture of background and signal. All variables listed in Table 1 are used in the training, except for the di-jet invariant mass and the azimuthal angle \(\phi\) of the jets, for a total of 42 input variables. The set of hyperparameters used to produce the results is shown in Table 2. The anomaly scores obtained for the background and both signal test samples are shown in Figure 4(a). The corresponding ROC curves are shown in Figure 4(b). The Area Under the Curve (AUC) obtained on the test set is 0.82 for the first RnD signal (2-prong) and 0.74 for the second (3-prong). This result confirms that the auto-encoder trained using the GAN-AE algorithm is able to distinguish the signal from the background. Another point to check is the ability to remove mass sculpting. The modeling of the reference background distribution, after applying a selection on the anomaly score, is evaluated using background events of the testing set. Figure 5(a) shows the normalized distribution of the di-jet invariant mass, before and after selection at different thresholds. To quantitatively assess the deformation of the invariant mass spectra induced by the selection, we use the Jensen-Shannon divergence as a metric [22]. By continuously varying the selection threshold, we can evaluate this metric to produce the curve shown in Figure 5(b). \begin{table} \begin{tabular}{|c|c|} \hline 4-vectors & \(p_{T}\), \(\eta\), \(\phi\), \(E\) \\ \hline Jet mass and constituents & \(m_{jet}\), \(n_{c}\) \\ \hline Number of subjets [20] & \(N_{incl}\), \(N_{excl}\) \\ \hline N-subjettiness [21] & \(\tau_{1}\), \(\tau_{2}\), \(\tau_{3}\), \(\tau_{21}\), \(\tau_{31}\) \\ \hline Energy Rings & \(E_{ring,1}\), \(E_{ring,2}\),..., \(E_{ring,10}\) \\ \hline \hline Dijet invariant mass & \(mjj\) \\ \hline \end{tabular} \end{table} Table 1: Summary of the variables computed in the preprocessing of the LHC Olympics 2020 data for each large jet, except for the last variable, which is defined for pairs of jets.
\begin{table} \begin{tabular}{|c|c||c|c|} \hline Hyperparameter & Value & Hyperparameter & Value \\ \hline Latent space dimension & 14 & Number of training cycles & 100 \\ \hline AE hidden dimension & 84 & AE epochs per cycle & 5 \\ \hline D hidden dimensions & \(\{\)300, 200, 100, 50\(\}\) & D epochs per cycle & 7 \\ \hline \(\varepsilon\) (reconstruction term) & 6.0 & Pre-training of AE & True \\ \hline \(\alpha\) (DisCo term) & 65.0 & Dropout rate & 20\% \\ \hline \end{tabular} \end{table} Table 2: Hyperparameters of the GAN-AE algorithm and their values. The auto-encoder is pre-trained without the adversary, using only the reconstruction error, before the main training loop. Compared to the results shown in Figure 2, the invariant mass distribution is no longer modified when applying a selection based on the anomaly score. The fact that the Jensen-Shannon divergence stays below 0.1 up to a \(99^{th}\) percentile threshold indicates that the invariant mass distribution of the background before selection remains compatible with that after selection. By comparison, a GAN-AE model trained without the mass sculpting mitigation techniques results in the Jensen-Shannon divergence curve shown in Figure 6. This metric increases rapidly with the selection threshold, reaching more than twice the distance obtained with the mitigation techniques. This strong constraint on the mass sculpting can be realized simultaneously with the good ability to separate signal and background shown in Figure 4. This achievement is a clear improvement over classically trained auto-encoders, for which applying such constraints generally deteriorates the quality of the anomaly detection. ### Results on Black Box datasets After validating the GAN-AE algorithm and the mass sculpting mitigation procedure, we can apply the complete strategy chain to the Black-Box datasets. For each Black-Box, a GAN-AE model is trained on 100k events using the set of hyperparameters presented in Table 2. The trained model is applied to each dataset in order to evaluate the anomaly score distribution. A selection is applied at the \(99^{th}\) percentile of this distribution. Then, the di-jet invariant mass distribution of this subsample is compared to the di-jet invariant mass distribution of the pre-selection data, which serves as a reference background. The reference histogram is normalized to the selected data using a side-band normalization procedure. Figure 4: Results obtained with the RnD data of the LHC Olympics 2020 challenge showing the separation of background and signal: (a) anomaly scores for background and signal events; (b) ROC curves obtained from the test set of the RnD data. The labels signal 1 (orange) and signal 2 (green) correspond to 2-prong and 3-prong jet substructure, respectively. Figure 5: Results obtained with the RnD data of the LHC Olympics 2020 challenge showing the capacity to mitigate the mass sculpting: (a) di-jet invariant mass of background events before (blue) and after selection at the \(50^{th}\) (orange) and \(85^{th}\) (green) percentiles of the anomaly score distribution; (b) Jensen-Shannon divergence between the invariant mass distributions before and after selection for different thresholds. Results obtained with pyBumpHunter for Black-Box 1 are presented in Figure 7. The BumpHunter algorithm finds a deviation in the data, with respect to the data-driven reference background, around 3.97 TeV with a local significance of almost 3\(\sigma\) (Figure 7(a)).
No other significant excess, or deficit, is observed outside the selected interval. Figure 7(b) shows the distribution of the background-only test statistic, from which a global significance of 1.2\(\sigma\) is derived. The low overall significance is partly explained by the fact that the bump hunt is performed without assuming a prior signal and with a floating background normalization. After the end of the challenge, the content of each Black-Box was revealed by the organisers. Figure 8 shows the histograms of the di-jet invariant mass in Black-Box 1, along with the true labels corresponding to background and signal events. The region of the spectrum identified by the BumpHunter algorithm is indeed the location of the true signal. The signal generated for this dataset corresponds to a 3.8 TeV Z' boson decaying to two heavy resonances, with a 2-prong substructure jet signature similar to that of the RnD data. The initial signal over background ratio (S/B) is 0.08%. After applying the full strategy chain to this dataset, we obtain an improvement of the S/B ratio by a factor of 20. The signal efficiency after selection at the 99\({}^{th}\) percentile of the anomaly score distribution is over 15%, for a background rejection of almost 99%. Figure 6: Jensen-Shannon divergence obtained using the mass sculpting mitigation techniques (blue) and without using them (orange). Figure 7: Results obtained with the data of Black-Box 1 of the LHC Olympics 2020 challenge after applying the complete analysis chain. We also note that the data-driven reference background fits the true background distribution after selection quite well. The deviation identified by BumpHunter corresponds to the true signal, with a small bias on the mass of the Z\({}^{\prime}\) (less than 200 GeV). The same methodology has been applied to the two other Black-Boxes and the results are summarized below. Black-Box 2 did not contain any signal, as this dataset was actually provided for the purpose of testing the identification of spurious signals. Our algorithm successfully modeled the shape of the background and found no significant deviations. The third Black-Box contained a complex signal signature, as the generated resonance could decay into either two or three jets, with branching ratios of one third and two thirds, respectively. In the case of Black-Box 3 and with the 2-jet clustering, the GAN-AE algorithm was unable to distinguish between signal and background events. However, the process of modeling the background shape from the data still worked. ## Conclusion The development of alternative search strategies for New Physics beyond the Standard Model has gained much importance in recent years. Events such as the LHC Olympics challenge proposed in 2020 are part of this effort. In this context, we propose a model-independent analysis strategy based on unsupervised machine learning and data-driven background modeling. The GAN-AE algorithm offers an interesting alternative to the classical training of auto-encoders by defining a new measure of reconstruction error given by an adversary network. This algorithm offers good performance and stability, even when using strong constraints to reduce the mass sculpting, such as the DisCo regularization term. Thanks to this constraint, we can derive a reference background model directly from the data, with the only assumption that the signal is rare enough. The background model can then be used as a reference for the BumpHunter algorithm, which allows the evaluation of both local and global significance.
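As a concrete illustration of this last step, a stripped-down version of the BumpHunter scan can be written in a few lines. The analysis itself uses pyBumpHunter [13], which also handles the side-band normalization and the background-only pseudo-experiments that convert the local p-value into a global significance, both omitted here.

```python
import numpy as np
from scipy.stats import poisson

def bump_scan(data, bkg, widths=(2, 3, 4, 5, 6)):
    """Slide windows of several widths over the binned mass spectrum and
    return the smallest Poisson p-value of an excess and its bin window."""
    best_p, best_win = 1.0, None
    for w in widths:
        for i in range(len(data) - w + 1):
            d, b = data[i:i + w].sum(), bkg[i:i + w].sum()
            if d > b:
                p = poisson.sf(d - 1, b)     # P(N >= d | b)
                if p < best_p:
                    best_p, best_win = p, (i, i + w - 1)
    return best_p, best_win

# Toy usage: flat reference background with an injected bump
rng = np.random.default_rng(0)
bkg = np.full(40, 100.0)
data = rng.poisson(bkg).astype(float)
data[18:21] += rng.poisson(40, size=3)       # injected signal
p, win = bump_scan(data, bkg)
print(f"local p-value {p:.2e} in bins {win}")
```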
The strategy was tested using the LHC Olympics 2020 challenge datasets. The results on the RnD dataset as well as on the first black box are promising, allowing us to correctly identify the hidden signal with a local significance of 2.9\(\sigma\). This result is comparable to those obtained by other participants. Our strategy is also the only one to propose a built-in evaluation of the global significance, showing its completeness. A possible way to improve the method would be to include the GAN-AE algorithm in a weakly supervised setting, such as the Tag N'Train (TNT) algorithm [23], which obtained one of the best results in the LHC Olympics 2020 challenge. Figure 8: Histograms showing the true background (blue) and signal (orange) distributions for Black-Box 1, after selection on the 99\({}^{th}\) percentile of the anomaly score distribution. The reference background histogram used for BumpHunter is shown in green and the selected interval is represented by the vertical dashed lines. ## Acknowledgement The authors would like to thank Gregor Kasieczka, Ben Nachman, and David Shih, the organizers of the LHC Olympics 2020 Anomaly Detection Challenge, for providing the datasets used in this study and for the opportunity to develop and test the GAN-AE architecture. Louis Vaslin acknowledges the support received by the French government IDEX-ISITE initiative 16-IDEX-0001 (CAP 20-25).
2306.14737
Tidal heating as a direct probe of strangeness inside neutron stars
It has been discussed whether viscous processes in neutron star matter during a binary inspiral can damp out the tidal energy induced by the companion and heat up the star. Earlier investigations concluded that this tidal heating is negligible for normal neutron star matter. In this work, we suggest a novel effect of tidal heating involving strange matter in the neutron star interior, that can significantly heat up the star, and is potentially observable by current and future gravitational wave detectors. We propose that this could serve as a direct probe of strangeness in neutron stars.
Suprovo Ghosh, Bikram Keshari Pradhan, Debarati Chatterjee
2023-06-26T14:45:43Z
http://arxiv.org/abs/2306.14737v2
# Tidal heating as a direct probe of Strangeness inside Neutron stars

###### Abstract

The cores of neutron stars (NS) reach densities several times the nuclear saturation density and could contain strangeness-containing exotic particles such as hyperons. During the binary inspiral, viscous processes inside the NS matter can damp out the tidal energy induced by the companion and convert it to thermal energy that heats up the star. We demonstrate that the bulk viscosity originating from the non-leptonic weak interactions involving hyperons is several orders of magnitude higher than the standard neutron matter shear viscosity in the relevant temperature range of \(10^{6}-10^{9}\) K, and that for heavier NSs (\(M\geq 1.6M_{\odot}\)) that contain a significant fraction of hyperons in their core, the bulk viscosity can heat up the stars up to \(0.1-1\) MeV before the final merger. This "tidal heating" process also introduces a net phase shift of \(10^{-3}-0.5\) rad, depending on the component mass, in the gravitational wave (GW) signal that can potentially be detected using current and future generation GW detectors. Such a detection would be a direct confirmation of the presence of hyperons inside the NS core, of great significance for the study of dense matter under extreme conditions.

## I Introduction

Neutron stars are unique dense astrophysical objects for studying cold and dense matter under extreme conditions far beyond the reach of terrestrial experiments. At such high densities inside the NS core, strangeness-containing exotic particles, such as hyperons, kaons or even deconfined quark matter, which are observed only briefly in particle accelerators, can become stable components due to weak equilibrium and have important consequences for the equation of state (EoS), composition, structure and evolution of these compact objects [1; 2; 3]. Earlier, various astrophysical observations, e.g., the measurement of high-mass NSs [4; 5], radius measurements from low-mass X-ray binaries [6; 7], thermal evolution [8] and GW emission from hot and rapidly rotating newly born neutron stars due to unstable r-modes [9; 10; 11], have been used to constrain the strangeness content inside their interior [12]. The recent observation of the binary neutron star (BNS) merger event GW170817 [13; 14; 15] using the LIGO-Virgo-Kagra (LVK) global network of GW detectors [16; 17; 18; 19], along with the detection of its electromagnetic counterpart, has opened up the modern era of multi-messenger astronomy [20]. The measurement of the dimensionless tidal deformability [21] from the GW170817 merger, along with recent NICER estimates of mass-radius posteriors from two pulsars, PSR \(J0030+0451\) [22; 23] and PSR \(J0740+6620\) [24; 25], have significantly constrained the unknown neutron star EoS [26; 27; 28; 29]. Tidal interaction due to a companion in the binary system transfers mechanical energy and angular momentum to the star at the expense of the orbital energy, and the damping mechanisms inside the star convert this energy to heat, which we refer to as "tidal heating". Tidal heating could potentially cause mass ejection in a radiation-driven outflow prior to the merger [30] or spin the stars up to corotation before merger [31; 32], but those effects are not achievable for viscosities arising from nucleonic or leptonic weak processes. In particular, Lai (1994) [33] found that shear and bulk viscous damping of mode oscillations of NS matter could only heat the stellar core to \(\sim 10^{8}\) K. Recent work by Arras et al.
(2019) [34] considers heating due to direct Urca reactions during the inspiral and concludes that the stars are heated up to \(k_{B}T\sim 0.01\) MeV, which is one order of magnitude higher than the estimates of Lai (1994) [33]. Still, these estimates show that the NS remains cold during the inspiral and that tidal heating does not produce any detectable effect in the inspiral GW signal. Hyperon bulk viscosity has been extensively studied over the past two decades, mostly in the context of damping of \(r\)-mode oscillations in young, fast rotating pulsars [9; 10; 11; 35; 36; 37; 38; 39; 40]. It is found to strongly suppress the r-mode instability window below \(10^{10}\) K due to the high bulk viscosity produced by the non-leptonic weak-interaction channels involving hyperons [9; 38]. In this letter, we show that the high viscosity (\(\approx 10^{8}-10^{10}\) times the shear viscosity from \(e-e\) scattering in the temperature region of \(10^{6}-10^{8}\) K) due to the presence of hyperons inside the high-density core of the neutron star can heat up the star during the inspiral to a much higher temperature than estimated in earlier studies, and that these effects are potentially detectable as a deviation of the orbital decay rate from the general relativistic point-mass result using current and future generation GW detectors.

## II Hyperonic bulk viscosity

The EoS of nuclear matter inside NSs including hyperonic degrees of freedom can be described using different approaches [41; 12]. In this work we consider some standard parametrisations within the Relativistic Mean Field (RMF) model [42; 43], where the strong interactions among baryons are mediated by mesons. Such EoS models have been widely applied to reproduce both nuclear matter properties as well as NS astrophysical data. Depending on the hyperon-nucleon potentials and the nuclear saturation parameters, hyperons appear at different onset densities inside the NS core. A detailed description of the chosen EoS parametrisations, along with the hyperon potentials and onset densities of \(\Lambda\) hyperons, is given in Table 1. Bulk viscosity (BV) arises due to the pressure and density variations associated with the mode oscillations inside the NS that drive the system out of chemical equilibrium [35]. In addition to the leptonic hyperon processes, weak non-leptonic hyperon processes contribute more efficiently to the bulk viscosity [9; 35]. The standard non-leptonic reactions that contribute to the hyperon bulk viscosity are \[n+n\longleftrightarrow p+\Sigma^{-}, \tag{1}\] \[n+p\longleftrightarrow p+\Lambda, \tag{2}\] \[n+n\longleftrightarrow n+\Lambda. \tag{3}\] We do not consider the channel in Eq. (3), as this process has no simple \(W\)-exchange contribution [9]. Among the nucleon-hyperon potentials, the best constrained is that of the \(\Lambda\) hyperon, having a value of \(U_{\Lambda}=-30\) MeV [44; 45]. Although the potential depths for the \(\Sigma\) and \(\Xi\) hyperons are not known precisely, it has been concluded that the \(\Sigma\)-nucleon potential is repulsive [46; 47; 48], which restricts the appearance of \(\Sigma\) hyperons inside NS matter. So, we only consider the reaction (2), which contributes to the hyperon bulk viscosity.
For any weak interaction process, the real part of the bulk viscosity coefficient (\(\zeta\)) is calculated in terms of relaxation times of microscopic processes [10] \[\zeta=n_{B}\frac{\partial P}{\partial n_{n}}\frac{dn_{n}}{dn_{B}}\frac{\tau}{1+(\omega\tau)^{2}}, \tag{4}\] where \(P\) stands for the pressure, \(n_{B}\) is the total baryon number density, \(n_{n}\) the neutron density, \(\omega\) the oscillation frequency of the \((l,m)\) mode and \(\tau\) is the characteristic timescale of the reaction. The rates of these reactions can be calculated from the tree-level Feynman diagrams involving the exchange of a \(W\)-boson [9; 35]. For the weak non-leptonic hyperon process of Eq. (2), the temperature (\(T\)) dependence of the relaxation time is given as \(\tau\propto T^{-2}\) [9]. From the typical expression for the bulk viscosity, we see that it shows a resonant behaviour when \(\omega\tau\approx 1\), i.e., when the mode oscillation frequency matches the inverse of the relaxation time. In Fig. 1, we plot the bulk viscosity due to this weak-interaction channel for the chosen RMF parametrisations given in Table 1, as a function of temperature for characteristic mode frequencies: 1 and 2 kHz for typical \(f\)-modes [49] and \(\sim 100\) Hz for \(g\)-mode oscillations [50]. Depending on the mode oscillation frequency, the resonance occurs in the temperature range of \(10^{8}-10^{9}\) K, and the resonant maximum of \(\zeta\) can reach values of \(10^{31}-10^{32}\) gm cm\({}^{-1}\)s\({}^{-1}\), several orders of magnitude higher than the canonical shear and bulk viscosity [51] in this temperature region.

Figure 1: Relative strengths of various sources of viscosities inside NS matter as a function of temperature for \(n=2.5n_{0}\) (\(n_{0}\) = the nuclear saturation density). The bands correspond to hyperon bulk viscosities coming from different EoSs given in Table 1, with the black solid line corresponding to the HZTCS EoS. The standard neutron matter shear viscosity coming from \(e-e\) scattering and the bulk viscosity from m-URCA reactions at f = 1 kHz are also plotted along with their temperature dependence.

## III Tidal heating in binary NS

During the binary NS inspiral, the tidal energy transferred to the various oscillation modes will be damped out by viscous dissipation inside the NS matter. For normal nuclear matter inside an NS in a binary, the main source of viscosity is the shear viscosity (SV) originating from electron-electron scattering in the relevant temperature region of \(10^{6}-10^{8}\) K during the inspiral [34]. But in Fig. 1, we show that in this temperature range the hyperon bulk viscosity dominates the \(e-e\) scattering shear viscosity by several orders of magnitude and also peaks at a temperature between \(10^{8}-10^{9}\) K, depending on the mode oscillation frequency.
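To make the resonant structure of Eq. (4) concrete, the following minimal Python sketch evaluates \(\zeta(T)\) under the \(\tau\propto T^{-2}\) scaling quoted above. The constants `TAU_NORM` and `PRESSURE_TERM` are illustrative placeholders chosen to land in the quoted ballpark, not the EoS-dependent values computed in the paper.

```python
import numpy as np

# Illustrative placeholders, not fitted values: TAU_NORM sets the relaxation
# time tau = TAU_NORM / T^2, and PRESSURE_TERM stands in for the cgs value of
# n_B * (dP/dn_n) * (dn_n/dn_B) appearing in Eq. (4).
TAU_NORM = 4.0e13        # s K^2, hypothetical
PRESSURE_TERM = 1.0e35   # g cm^-1 s^-2, hypothetical

def bulk_viscosity(T, f_mode):
    """Eq. (4): zeta = PRESSURE_TERM * tau / (1 + (omega * tau)^2)."""
    omega = 2.0 * np.pi * f_mode
    tau = TAU_NORM / T**2
    return PRESSURE_TERM * tau / (1.0 + (omega * tau) ** 2)

# The resonance omega * tau = 1 selects T_res = sqrt(omega * TAU_NORM),
# where zeta attains its maximum PRESSURE_TERM / (2 * omega).
for f in (100.0, 1000.0, 2000.0):      # g-mode and f-mode frequencies in Hz
    T_res = np.sqrt(2.0 * np.pi * f * TAU_NORM)
    print(f"f = {f:6.0f} Hz: T_res ~ {T_res:.1e} K, "
          f"zeta_max ~ {bulk_viscosity(T_res, f):.1e} g/(cm s)")
```

With these placeholders the resonance temperatures fall in the \(10^{8}-10^{9}\) K window discussed above, with lower mode frequencies resonating at lower temperatures.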
To investigate the effect of this high viscous damping on the heating of the NS matter, let us first analyse the timescale for mode damping. The mode damping rate is given by \[\gamma_{\alpha}=\dot{E}_{visc,\alpha}/2E_{\alpha} \tag{5}\] where \(\alpha\) denotes the particular eigenmode, \(E_{\alpha}\) is the energy of the mode and \(\dot{E}_{visc,\alpha}\) is the energy dissipation rate. The rate of dissipated energy is given in terms of the viscous stress tensor \(\sigma_{ij}\) [33] \[\dot{E}_{visc}=\int d^{3}x\,\sigma_{ij}\mathbf{v}_{i,j} \tag{6}\] where \(\mathbf{v}\) denotes the perturbation velocity vector and the viscous stress tensor \(\sigma_{ij}\) can be written as [55] \[\sigma_{ij}=\eta\left(\mathbf{v}_{i,j}+\mathbf{v}_{j,i}-\frac{2}{3}\delta_{ij}\nabla.\mathbf{v}\right)+\zeta\delta_{ij}\nabla.\mathbf{v} \tag{7}\] where \(\eta\) and \(\zeta\) are the shear and bulk viscosity coefficients, respectively. In the adiabatic approximation, the effect of the tidal potential due to the companion star is measured in terms of the Lagrangian fluid displacement vector \(\mathbf{\xi}(r,t)\) from its equilibrium position. This displacement can be analysed in terms of the normal modes of the neutron star, \[\mathbf{\xi}(r,t)=\sum_{\alpha}\mathbf{\xi}_{\alpha}(r)e^{-i\omega_{\alpha}t} \tag{8}\] where \(\alpha\) denotes the normal mode index and \(\omega_{\alpha}\) is the angular frequency of the normal mode. For a particular mode, the eigenfunction can be written as a sum of radial and tangential components \[\mathbf{\xi}_{\alpha}(r)=\left[\xi_{nl}^{r}(r)\mathbf{e_{r}}+r\xi_{nl}^{\perp}(r)\mathbf{\nabla}\right]Y_{lm}(\theta,\phi) \tag{9}\] where \(\mathbf{e_{r}}\) is the radial unit vector and \(Y_{lm}(\theta,\phi)\) are the spherical harmonic functions. In this work, we will only consider the dominant fundamental (\(f\)) mode contribution to the tidal heating. We determine the \(f\)-mode frequency via the relativistic Cowling approximation [49], and the normalised mode eigenfunctions are used to calculate the tidal coupling [56]. From the expression for the displacement vector \(\mathbf{\xi}\) given in Eq. (8), the velocity field is given by \(\mathbf{v}=-i\omega\mathbf{\xi}\) (we drop the \(\alpha\) subscript as we are working only with the \(f\)-mode). If we consider only the bulk viscosity contribution to the energy dissipation given in Eq. (6), we can express the bulk viscous damping rate as \[\gamma_{bulk}=\frac{1}{2}\frac{(l+|m|)!}{(l-|m|)!}\int_{0}^{R}r^{2}dr\,\zeta\left(\frac{\partial\xi^{r}}{\partial r}+\frac{2}{r}\xi^{r}-l(l+1)\frac{\xi^{\perp}}{r}\right)^{2}. \tag{10}\] We should compare this viscous dissipation timescale (given by the inverse of the rate, \(\tau_{bulk}=1/\gamma_{bulk}\)) with the inspiral timescale to confirm whether the hyperon bulk viscous dissipation can effectively drain energy from the decaying orbit during the inspiral. We consider leading-order gravitational radiation, where the GW emission takes energy from the orbit at a rate of \[\dot{E}_{gw}=\frac{-32\mathcal{M}\Omega}{5c^{5}}(G\mathcal{M}\Omega)^{7/3}. \tag{11}\]
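As a rough numerical illustration of the quadrature in Eq. (10), the sketch below integrates toy radial profiles. The eigenfunction shapes, the \(\zeta(r)\) profile and the overall normalisation are all hypothetical stand-ins, since a real calculation would use the normalised Cowling-approximation \(f\)-mode eigenfunctions and the EoS-dependent bulk viscosity.

```python
import numpy as np
from math import factorial

# Toy radial profiles; only the structure of the quadrature is meaningful here.
R = 1.3e6                              # stellar radius in cm (13 km)
r = np.linspace(1.0e4, R, 4000)
xi_r = (r / R) ** 2                    # hypothetical xi^r(r)
xi_perp = 0.4 * (r / R) ** 2           # hypothetical xi^perp(r)
zeta = 1.0e31 * np.exp(-6.0 * r / R)   # hypothetical zeta(r), core-peaked

l, m = 2, 2                            # dominant f-mode used in the text
dxi_dr = np.gradient(xi_r, r)
expansion = dxi_dr + 2.0 * xi_r / r - l * (l + 1) * xi_perp / r

prefactor = 0.5 * factorial(l + abs(m)) / factorial(l - abs(m))
gamma_bulk = prefactor * np.trapz(r**2 * zeta * expansion**2, r)  # Eq. (10)
print(f"gamma_bulk ~ {gamma_bulk:.2e} (arbitrary units; unnormalised modes)")
```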
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline EOS & Max. mass & \(\Lambda\) onset & Mass(\(M_{\odot}\)) & Central & Radius & Hyperon core & f-mode & Max. temp(K) & \(\Delta\Phi=2\pi\Delta\mathcal{N}\) \\ & (\(M_{\odot}\)) & density & & density & (km) & radius(km) & Freq.(Hz) & at \(D/R=3\) & (rad) \\ \hline \hline NL3 & 2.10 & 1.90\(n_{0}\) & 1.6 & 2.07\(n_{0}\) & 14.7 & 3.42 & 1847 & 9.7\(\times 10^{8}\) & 0.001 \\ [43] & & & 1.8 & 2.48\(n_{0}\) & 14.6 & 6.19 & 1909 & 3.3\(\times 10^{9}\) & 0.08 \\ & & & 2.0 & 3.35\(n_{0}\) & 14.2 & 8.10 & 2009 & 6.2\(\times 10^{9}\) & 0.3 \\ \hline TM1 & 2.06 & 2.02\(n_{0}\) & 1.6 & 2.24\(n_{0}\) & 14.55 & 3.16 & 1873 & 8.7\(\times 10^{8}\) & 0.0008 \\ [52] & & & 1.8 & 2.77\(n_{0}\) & 14.37 & 6.18 & 1947 & 3.4\(\times 10^{9}\) & 0.09 \\ & & & 2.0 & 4.06\(n_{0}\) & 13.6 & 8.15 & 2092 & 6.7\(\times 10^{9}\) & 0.34 \\ \hline TMA & 2.12 & 2.09\(n_{0}\) & 1.8 & 2.54\(n_{0}\) & 14.2 & 5.13 & 1948 & 2.3\(\times 10^{9}\) & 0.02 \\ [52] & & & 2.0 & 3.35\(n_{0}\) & 13.89 & 7.36 & 1909 & 5.1\(\times 10^{9}\) & 0.16 \\ \hline HZTCS & 2.00 & 2.28\(n_{0}\) & 1.6 & 2.67\(n_{0}\) & 13.2 & 4.89 & 2108 & 2.3\(\times 10^{9}\) & 0.02 \\ [53] & & & 1.8 & 3.32\(n_{0}\) & 13.1 & 6.82 & 2171 & 4.7\(\times 10^{9}\) & 0.16 \\ & & & 2.0 & 5.32\(n_{0}\) & 12.25 & 8.17 & 2305 & 7.9\(\times 10^{9}\) & 0.44 \\ \hline FSU2 & 2.03 & 1.92\(n_{0}\) & 1.6 & 2.22\(n_{0}\) & 14.4 & 4.98 & 1898 & 2.1\(\times 10^{9}\) & 0.03 \\ [43] & & & 1.8 & 2.72\(n_{0}\) & 14.2 & 7.07 & 1968 & 4.5\(\times 10^{9}\) & 0.19 \\ & & & 2.0 & 3.82\(n_{0}\) & 13.6 & 8.54 & 2099 & 7.4\(\times 10^{9}\) & 0.47 \\ \hline Stiff EOS & 2.01 & 2.31\(n_{0}\) & 1.6 & 2.71\(n_{0}\) & 13.5 & 3.88 & 2047 & 1.4\(\times 10^{9}\) & 0.004 \\ from Ghosh et al. & & & 1.8 & 3.39\(n_{0}\) & 13.4 & 6.34 & 2119 & 4.1\(\times 10^{9}\) & 0.11 \\ (2022) [54] & & & 2.0 & 5.5\(n_{0}\) & 12.5 & 8.05 & 2256 & 7.2\(\times 10^{9}\) & 0.37 \\ \hline \end{tabular} \end{table} Table 1: Detailed list of NS properties corresponding to the different EoS models considered in this work. The potential depth of \(\Lambda\) hyperons in normal nuclear matter is taken as \(U_{\Lambda}^{N}=-30\) MeV [5]. Densities are given in terms of \(n_{0}\), the nuclear saturation density (\(\sim 0.15\) fm\({}^{-3}\) [41]). The phase difference \(\Delta\Phi\) in the last column denotes the total phase difference accumulated at the end of the inspiral around \(f\sim 500\) Hz.

Here the chirp mass is given by \[\mathcal{M}=M\left(\frac{q^{3}}{1+q}\right)^{1/5} \tag{12}\] where \(M\) is the primary star mass and \(qM\) the companion mass. \(\Omega\) denotes the orbital frequency, given by \[\Omega^{2}=\frac{GM(1+q)}{D^{3}} \tag{13}\] where \(D\) is the separation between the masses. In this Newtonian evolution dynamics, the orbital timescale is given as [56] \[\frac{1}{t_{D}}\equiv\frac{\dot{\Omega}}{\Omega}=\frac{96}{5c^{5}}(G\mathcal{M}\Omega)^{5/3}\Omega. \tag{14}\] In Fig. 2, we plot this inspiral timescale over the relevant LIGO frequency band of \(20-500\) Hz, together with the viscous dissipation and tidal heating timescales, for the case of an equal-mass binary of \(1.8M_{\odot}\) for all the EoSs given in Table 1. We see that both the hyperon bulk viscous dissipation and tidal heating timescales are smaller than the inspiral timescale, confirming that, unlike the case of shear viscous dissipation, the hyperon bulk viscous dissipation and heating happen faster than the orbital evolution and can efficiently damp out the tidal energy to heat up the star during the inspiral.
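For orientation, Eqs. (12)-(14) can be evaluated directly. The following minimal sketch (SI units, with the orbital frequency taken as \(\Omega=\pi f_{GW}\) for the dominant quadrupole emission) prints the inspiral timescale \(t_{D}\) across the band considered in Fig. 2; the masses and band edges are just the values used there.

```python
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def chirp_mass(M, q):
    """Eq. (12): chirp mass for primary mass M and companion mass q*M."""
    return M * (q**3 / (1.0 + q)) ** 0.2

def inspiral_timescale(M, q, f_gw):
    """Eq. (14): t_D = Omega / Omega_dot, with Omega = pi * f_gw."""
    Mc = chirp_mass(M, q)
    Omega = np.pi * f_gw
    inv_tD = (96.0 / (5.0 * c**5)) * (G * Mc * Omega) ** (5.0 / 3.0) * Omega
    return 1.0 / inv_tD

M, q = 1.8 * M_sun, 1.0                 # equal-mass 1.8 M_sun binary
for f in (20.0, 100.0, 500.0):          # Hz, the band shown in Fig. 2
    print(f"f_gw = {f:5.0f} Hz -> t_D ~ {inspiral_timescale(M, q, f):.3g} s")
```

At 20 Hz this gives \(t_{D}\) of a few hundred seconds, shrinking to a fraction of a second near 500 Hz, which is the scale against which the viscous timescales in Fig. 2 are compared.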
Now that we have established that bulk viscous dissipation will effectively damp out the mode oscillation during the inspiral, let us estimate the temperature the NSs will reach before they come into contact. Since we are only considering the dominant mode \(l=m=2\) contribution, the energy dissipation rate can be estimated from the mode amplitude [33] \[\dot{E}_{visc}=\frac{12\pi}{5}\frac{GM^{2}}{R}q^{2}(1+q)\omega_{0}^{-4}Q_{0}^{2}\left(\frac{R}{D}\right)^{9}2\gamma_{bulk} \tag{15}\] where \(R\) is the radius of the star, \(Q_{0}\) the tidal coupling strength of the \(f\)-mode and \(\omega_{0}\) the normalised frequency of the \(f\)-mode. The heat content of the star due to the degenerate fermionic gas in the core can be given as [33] \[U\approx 4.5\times 10^{45}T_{8}^{2}\,\mathrm{erg}=4.5\times 10^{22}T^{2}\,\mathrm{J}, \tag{16}\] where \(T_{8}=T/10^{8}\,\mathrm{K}\). During the inspiral the thermal evolution of the star can be written as \[\frac{dU}{dt}=\dot{E}_{visc}+\dot{E}_{cool}, \tag{17}\] where \(\dot{E}_{cool}\) denotes the rate of cooling due to neutrino emission and surface photon luminosity. But the timescales for both these cooling processes are very long compared to the binary inspiral timescale, as shown in Fig. 2, and therefore they can be neglected [30]. After integrating the thermal evolution equation (17) from \(D\rightarrow\infty\), when the stars were far apart and at a very low temperature (\(10^{5}-10^{6}\) K), we can get an estimate of the temperature reached as a function of their separation \(D\): \[\begin{split}\left[\frac{T^{4}}{4}+B\ln(T)\right]&=\frac{\pi}{21870}\omega_{0}^{-4}Q_{0}^{2}q\frac{A}{10^{22}}\\ &\times\left(\frac{c^{2}R}{GM}\right)\left(\frac{c^{3}R^{2}}{G}\right)\left(\frac{3R}{D}\right)^{5},\end{split} \tag{18}\] where \(A\) and \(B\) are parameters fitted to the functional dependence of \(\gamma_{bulk}\) on the temperature \(T\), \(\gamma_{bulk}=\frac{AT^{2}}{B+T^{4}}\), coming from the temperature dependence of the timescale for hyperon BV. In Table 1, we provide the estimates of the temperature reached at a separation of \(D=3R\), when the stars are about to merge. We see that the temperatures are \(\sim 10^{9}-10^{10}\) K, which is two orders of magnitude higher than the earlier estimates [33; 34].

Figure 2: Estimated timescales for the different processes as a function of GW frequency, compared against the inspiral timescale (\(t_{D}\)) for a NS binary of equal mass \(1.8M_{\odot}\). Shear viscous (SV) and hyperon BV dissipation correspond to the dominant \(f\)-mode dissipation by shear viscosity from \(e-e\) scattering [33] and BV from hyperons, respectively. The tidal heating corresponds to the heating timescale from the hyperon BV dissipation. The different bands indicate uncertainties due to the choice of different EoSs given in Table 1.
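As a schematic illustration of how the heating balance of Eq. (18) fixes the final temperature, the sketch below solves it by root finding. The constant `B` and the value of the right-hand side (which lumps together the \(A\)-dependent prefactor and the stellar-structure factors at \(D=3R\)) are purely illustrative; the actual numbers are EoS dependent, as listed in Table 1.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical inputs: B comes from the fit gamma_bulk = A*T^2/(B + T^4), and
# RHS stands in for the full right-hand side of Eq. (18) at D = 3R.
B = 1.0e36     # K^4, hypothetical fit parameter
RHS = 1.0e39   # K^4, hypothetical right-hand side of Eq. (18)

def heating_balance(T):
    """LHS of Eq. (18) minus the RHS; its root is the final temperature."""
    return T**4 / 4.0 + B * np.log(T) - RHS

# Bracket the root between the initial (~1e6 K) and a very hot (1e12 K) state.
T_final = brentq(heating_balance, 1.0e6, 1.0e12)
print(f"temperature reached at D = 3R: T ~ {T_final:.2e} K")
```

With these placeholder values the root lands near \(10^{10}\) K, the same order as the Table 1 estimates; the \(B\ln(T)\) term is only a small correction to the \(T^{4}/4\) balance.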
## IV Phase error estimation and detectability

The energy loss due to tidal heating during the binary inspiral will lead to a change in the number of wave cycles (\(\Delta\mathcal{N}\)), or equivalently a phase shift \(\Delta\Phi=2\pi\Delta\mathcal{N}\), in the observed frequency range of the GW detectors. Accounting for this is crucial for accurately predicting the phase of the signal, since a mismatched GW template can destroy a possible detection in a matched-filter search [57]. This additional energy loss from viscous dissipation will lead to a total change in the number of cycles given by \[\Delta\mathcal{N}=-\int_{f_{a}}^{f_{b}}t_{D}\left(\frac{\dot{E}_{visc}}{\dot{E}_{gw}}\right)df \tag{19}\] where \(f_{a}\) is the frequency when the signal first enters the detector band, and \(f_{b}\) when it dives into the noise again. Since \(f\)-modes are not resonantly excited, we need to perform the integration over the whole frequency range, unlike the cases of resonantly excited \(g\)-modes or \(r\)-modes, where the additional energy loss is associated with the particular mode frequency [56, 58]. Using the expressions given in Eqns. (11) and (15), the net change in the number of cycles is given by \[\Delta\mathcal{N}=\frac{15\pi}{384}\left(\frac{c^{2}R}{GM}\right)^{6}\omega_{0}^{-4}Q_{0}^{2}\frac{1}{q(1+q)}\left(\frac{R}{c}\right)^{2}\int_{f_{a}}^{f_{b}}\gamma_{bulk}df. \tag{20}\] In Table 1, we list the net phase difference accumulated at a GW frequency of 500 Hz for all the different EoSs and different equal-mass binaries. In Fig. 3, we display how this phase difference grows as a function of frequency, taking into consideration the uncertainties due to the different choices of EoS. For equal \(1.6M_{\odot}\) binaries, we see that the net phase difference is of the order of \(10^{-3}-10^{-2}\) rad, and for higher masses of \(1.8M_{\odot}\) or \(2M_{\odot}\), we get a net phase difference of the order of \(0.1-0.5\) rad. To be able to measure this phase difference using current or future generation GW detectors, the phase uncertainty of detected GWs must be smaller than the phase shift. Earlier estimates, based on a detection with a signal-to-noise ratio (SNR) of 10 and an approximate single-detector sensitivity, put the phase uncertainty at around \(\Delta\Phi\approx 1-3\) rad (\(\Delta\mathcal{N}=0.5\) is equivalent to \(\Delta\Phi=\pi\)) [59]. However, recent improved analyses [60] have shown the phase error to be around \(\Delta\Phi\sim\pm 0.1\) rad for \(f_{GW}\leq 300\) Hz, inclusive of calibration uncertainties, for the GW170817 signal analysed using the GW waveform model _IMRPhenomPv2_NRTidal_ [61]. More recently, Read (2023) [62] compares a number of GW waveform models and shows that the uncertainty due to waveform differences is \(\sim\pm 0.02\) rad for A+ [63] and \(\pm 10^{-3}\) rad for Cosmic Explorer (CE) [64]. From these estimates, we see that a BNS event with an SNR like GW170817 would produce enough tidal heating to be detectable using the current LVK detectors if it has a heavier component mass \(\geq 1.8M_{\odot}\). With third-generation (3G) detectors, we can measure evidence of tidal heating due to hyperons even for much lower mass NS components.
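An order-of-magnitude sketch of the cycle count in Eq. (20) is given below. The damping rate \(\gamma_{bulk}(f)\) and the stellar inputs (\(\omega_{0}\), \(Q_{0}\), \(R\)) are hypothetical placeholders, so the printed number only illustrates the mechanics of the integral, not the EoS-dependent \(10^{-3}-0.5\) rad range of Table 1.

```python
import numpy as np

# All inputs are illustrative placeholders, not the values behind Table 1.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
M, q, R = 1.8 * M_sun, 1.0, 1.3e4          # primary mass, mass ratio, radius (m)
omega0, Q0 = 1.2, 0.2                      # hypothetical normalised f-mode
                                           # frequency and tidal coupling

def gamma_bulk(f):
    """Hypothetical bulk viscous damping rate in s^-1 (cf. Eq. (10))."""
    return 10.0 * np.sqrt(f / 100.0)

f = np.linspace(20.0, 500.0, 4000)         # detector band f_a to f_b, in Hz
prefac = (15.0 * np.pi / 384.0) * (c**2 * R / (G * M)) ** 6 \
         * omega0 ** -4 * Q0 ** 2 / (q * (1.0 + q)) * (R / c) ** 2
delta_N = prefac * np.trapz(gamma_bulk(f), f)                    # Eq. (20)
print(f"Delta_N ~ {delta_N:.2e} cycles, Delta_Phi ~ {2*np.pi*delta_N:.2e} rad")
```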
## V Discussion

This work considers for the first time the effect of tidal heating in NSs during binary inspirals due to bulk viscosity originating from non-leptonic weak interaction processes involving hyperons inside the NS core. We consider several state-of-the-art EoSs including hyperonic degrees of freedom, consistent with multi-messenger observations, and calculate the bulk viscous dissipation of the dominant \(f\)-mode oscillations excited by the tidal interaction during the inspiral. This dissipated energy can be effectively converted to thermal energy within the inspiral timescale and can heat up the stars up to \(0.1-1\) MeV during the last orbits before coalescence. For systems with mass ratio \(q>1\), such as neutron star-black hole (NSBH) systems, it is evident from Eqns. (18) and (20) that the temperature estimate will be higher by a factor of \(q^{1/4}\) and the net phase shift will decrease by a factor of \(q(1+q)\), respectively. Although these estimates are higher by orders of magnitude than the earlier estimates of Lai (1994) [33] and Arras et al. (2019) [34], they are not sufficiently high to demand the inclusion of thermal effects in the EoS during the inspiral. Recent studies [65] have shown that the thermal correction to the tidal deformability and radius is negligible for temperatures below 1 MeV. For neutron stars with lower mass \(\sim 1.4M_{\odot}\), the central density may be too low to support a sufficient hyperon fraction to produce significant tidal heating. Recent analysis of populations of galactic binary NSs by Farrow et al. (2019) [66] suggests two distinct mass distributions for recycled and slow NSs, and a bimodality of the recycled NS mass distribution. Although this model predicts that we need \(\mathcal{O}(10)\) and \(\mathcal{O}(100)\) BNS events to have a component mass \(\geq 1.6M_{\odot}\) and \(\geq 1.8M_{\odot}\), respectively, recent observations of the BNS event GW190425 [67] and the NSBH event GW200105 [68], having component NS masses \(m_{1}^{90\%}=[1.6,1.87]M_{\odot}\) and \(m_{2}=1.9^{+0.3}_{-0.2}M_{\odot}\) respectively, suggest that there might be other formation channels for higher mass NSs in binary systems.

Figure 3: Estimated phase shift accumulated in the GW signal for equal-mass binaries as a function of GW frequency. The different bands indicate uncertainties due to the choice of different EoSs given in Table 1, corresponding to the different masses of the equal-mass binaries considered, and the solid lines define their boundaries.

The induced phase shift in the GW signal due to tidal heating is \(\sim 0.1\) rad for component masses \(\geq 1.8M_{\odot}\), which is detectable with current and future generation GW detectors, although a detailed post-Newtonian (PN) analysis of the effect of viscous dissipation on the GW phasing is required to accurately determine the bulk viscosity from a future detection. A recent work by Most et al. (2022) [69] performed an order-of-magnitude PN estimate of the direct effect of bulk viscosity on the orbital motion, ignoring the back-reaction of viscosity on the mode oscillations that leads to tidal heating. Considering their estimates, at \(f=100\) Hz, when the bulk viscosity is \(\sim 10^{32}\) gm cm\({}^{-1}\) s\({}^{-1}\) (from Fig. 1), we would require an SNR of 100 to get a 10% measurement of the bulk viscosity, which is achievable using third-generation GW detectors. A future detection of this high bulk viscosity effect originating from non-leptonic weak interaction processes involving hyperons, either in terms of a deviation in the orbital motion or tidal heating, would unequivocally confirm the presence of hyperons in the interior of neutron stars. Even a non-detection can place an upper limit on the bulk viscosity and the hyperon fraction inside the neutron star, which is also crucial for the modelling of dense nuclear matter. There are various directions in which our research can be further developed. First, a detailed investigation of the PN order at which the tidal heating becomes relevant will have to be performed to confirm the conclusions related to the qualitative Newtonian inspiral discussed in this work.
Second, we only considered the contribution of the dominant \(f\)-mode dissipation, but Lai (1994) [33] showed that tidal heating due to the resonant \(g\)-mode oscillations can be just as significant in spite of their tidal coupling coefficients being small (\(\sim 10^{-3}-10^{-4}\) [56]). Since their frequencies are \(\sim 100\) Hz, the bulk viscosity value is also one order of magnitude higher than for \(f\)-modes, so the tidal heating due to resonant \(g\)-modes should also be studied in detail. Third, the dominant channel in unpaired strange quark matter, the \(d+u\longleftrightarrow s+u\) process, can also produce bulk viscosity as high as \(10^{29}-10^{30}\) gm cm\({}^{-1}\) s\({}^{-1}\) in the temperature region of \(10^{8}-10^{9}\) K [70; 71]. Although this estimate is one order of magnitude lower than that of hyperons, it is still worthwhile to estimate the amount of tidal heating produced in a strange star during a binary inspiral. Finally, we need to consider the effect of superfluidity on the hyperon bulk viscosity, since superfluidity is known to reduce the rates of the weak interactions and becomes effective below the critical temperature of \(10^{9}\) K [38]. These topics are currently work in progress and will be reported in forthcoming publications.

###### Acknowledgements.

The authors would like to acknowledge the usage of the IUCAA HPC computing facility, PEGASUS, for the numerical calculations.
2307.15700
MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking
As a video task, Multiple Object Tracking (MOT) is expected to capture temporal information of targets effectively. Unfortunately, most existing methods only explicitly exploit the object features between adjacent frames, while lacking the capacity to model long-term temporal information. In this paper, we propose MeMOTR, a long-term memory-augmented Transformer for multi-object tracking. Our method is able to make the same object's track embedding more stable and distinguishable by leveraging long-term memory injection with a customized memory-attention layer. This significantly improves the target association ability of our model. Experimental results on DanceTrack show that MeMOTR impressively surpasses the state-of-the-art method by 7.9% and 13.0% on HOTA and AssA metrics, respectively. Furthermore, our model also outperforms other Transformer-based methods on association performance on MOT17 and generalizes well on BDD100K. Code is available at https://github.com/MCG-NJU/MeMOTR.
Ruopeng Gao, Limin Wang
2023-07-28T17:50:09Z
http://arxiv.org/abs/2307.15700v3
# MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking

###### Abstract

As a video task, Multi-Object Tracking (MOT) is expected to capture temporal information of targets effectively. Unfortunately, most existing methods only explicitly exploit the object features between adjacent frames, while lacking the capacity to model long-term temporal information. In this paper, we propose MeMOTR, a long-term memory-augmented Transformer for multi-object tracking. Our method is able to make the same object's track embedding more stable and distinguishable by leveraging long-term memory injection with a customized memory-attention layer. This significantly improves the target association ability of our model. Experimental results on DanceTrack show that MeMOTR impressively surpasses the state-of-the-art method by 7.9% and 13.0% on the HOTA and AssA metrics, respectively. Furthermore, our model also outperforms other Transformer-based methods on association performance on MOT17 and generalizes well on BDD100K. Code is available at [https://github.com/MCG-NJU/MeMOTR](https://github.com/MCG-NJU/MeMOTR).

## 1 Introduction

Multi-Object Tracking (MOT) [8, 23, 30] aims to detect multiple objects and maintain their identities in a video stream. MOT can be applied to numerous downstream tasks, such as action recognition [7], behavior analysis [16], and so on. It is also an important technique for real-world applications, _e.g._, autonomous driving and surveillance. According to the definition of MOT, this task can be formally divided into two parts: object detection and association. For a long time, pedestrian tracking datasets (like MOT17 [23]) have dominated the community. However, these datasets pose insufficient challenges in target association because of their almost linear motion patterns. Therefore, tracking-by-detection methods [5, 33, 44] have achieved the state-of-the-art performance in MOT for several years. They first adopt a robust object detector (_e.g._, YOLOX [12]) to independently localize the objects in each frame and associate them with IoU [3, 41] or ReID features [27]. However, associating targets becomes a critical challenge in some complex scenarios, like group dancers [30] and sports players [8, 13]. Their similar appearances and erratic movements may cause existing methods to fail. Recently, Transformer-based tracking methods [22, 43] have introduced a new fully end-to-end MOT paradigm. Through the interaction and progressive decoding of detect and track queries in the Transformer, they simultaneously complete detection and tracking. This paradigm is expected to have greater potential for object association due to the flexibility of the Transformer, especially in the above complex scenes. Although these Transformer-based methods achieve excellent performance, they still struggle with some complicated issues, such as analogous appearances, irregular motion patterns, and long-term occlusions. We hypothesize that more intelligent leverage of temporal information can provide the tracker with a more effective and robust representation of each tracked target, thereby relieving the above issues and boosting the tracking performance. Unfortunately, most previous methods [22, 43] only exploit the image or object features between two adjacent frames, thus lacking the utilization of long-term temporal information.
Based on the analysis above, in this paper, we focus on leveraging temporal information by proposing a long-term **Me**mory-augmented **M**ulti-**O**bject **T**racking method with **TR**ansformer, coined as **MeMOTR**. We exploit detect and track embeddings to localize newborn and tracked objects via a Transformer Decoder, respectively. Our model maintains a long-term memory with the exponential recursion update algorithm [29] for each tracked object. Afterward, we inject this memory into the track embedding, reducing its abrupt changes and thus improving the model's association ability. As multiple tracked targets exist in a video stream, we apply a memory-attention layer to produce a more distinguishable representation. Besides, we present an adaptive aggregation to fuse the object features from two adjacent frames to improve tracking robustness. In addition, we argue that the learnable detection query in DETR [6] has no semantic information about specific objects. However, the track query in Transformer-based MOT methods like MOTR [43] carries information about a tracked object. This difference will cause a semantic information gap and thus degrade the final tracking performance. Therefore, to overcome this issue, we use a light decoder to perform preliminary object detection, which outputs a detect embedding with specific semantics. Then we jointly input detect and track embeddings into the subsequent decoder to make MeMOTR's tracking results more precise. We mainly evaluate our method on the DanceTrack dataset [30] because of its serious association challenge. Experimental results show that our method achieves the state-of-the-art performance on this challenging DanceTrack dataset, especially on association metrics (_e.g._, AssA, IDF1). We also evaluate our model on the traditional pedestrian tracking dataset MOT17 [23] and the multi-category tracking dataset BDD100K [42]. In addition, we perform extensive ablation studies to further demonstrate the effectiveness of our designs.

## 2 Related Work

**Tracking-by-Detection** is a widely used MOT paradigm that has dominated the community in recent years. These methods obtain trajectories by associating a given set of detections in a streaming video. The objects in classic pedestrian tracking scenarios [9, 23] usually have distinct appearances and regular motion patterns. Therefore, appearance matching and linear motion estimation are widely used to match targets in consecutive frames. SORT [3] uses the Intersection-over-Union (IoU) to match predictions of the Kalman filter [34] with detected boxes. DeepSORT [35] applies an additional network to extract target features, then utilizes cosine distances for matching in addition to the motion cues considered in SORT [3]. JDE [33], FairMOT [45], and Unicorn [39] further explore the architecture of appearance embedding and matching. ByteTrack [44] employs a robust detector based on YOLOX [12] and reuses low-confidence detections to enhance the association ability. Furthermore, OC-SORT [5] improves SORT [3] by rehabilitating lost targets. In recent years, as Transformers became a trendy framework in vision tasks, some studies [36, 48] have also applied them to match detection bounding boxes. Moreover, Dendorfer et al. [10] attempt to model pedestrian trajectories by leveraging more complex motion estimation methods (like S-GAN [14]) from the trajectory prediction task. The methods described above have powerful detection capabilities due to their robust detectors.
However, although such methods have achieved outstanding performance on pedestrian tracking datasets, they are mediocre at dealing with more complex scenarios with irregular movements. Such unforeseeable motion patterns cause the trajectory estimation and prediction modules to fail. **Tracking-by-Query** usually does not require additional post-processing to associate detection results. Unlike the tracking-by-detection paradigm mentioned above, tracking-by-query methods apply the track query to decode the location of tracked objects progressively. Inspired by the DETR family [6], most of these methods [22, 43] leverage the learnable object query to perform newborn object detection, while the track query localizes the position of tracked objects. TransTrack [31] builds a siamese network for detection and tracking, then applies an IoU matching to produce newborn targets. TrackFormer [22] utilizes the same Transformer decoder for both detection and tracking, then employs non-maximum suppression (NMS) with a high IoU threshold to remove strongly overlapping duplicate bounding boxes. MOTR [43] builds an elegant and fully end-to-end Transformer for multi-object tracking. This paradigm performs excellently in dealing with irregular movements due to the flexibility of the query-based design. Furthermore, MQT [17] employs different queries to represent one tracked object and cares more about class-agnostic tracking. However, current query-based methods typically exploit only the information of adjacent frames (query [43] or feature [22] fusion). Although the track query can be continuously updated over time, most methods still do not explicitly exploit longer temporal information. Cai et al. [4] explore a large memory bank to benefit from time-related knowledge but suffer enormous storage costs. In order to use long-term information, we propose a long-term memory to stabilize the tracked object feature over time and a memory-attention layer for a more distinguishable representation. Our experiments further confirm that this approach significantly improves association performance in MOT.

## 3 Method

### Overview

We propose **MeMOTR**, a long-term memory-augmented Transformer for multi-object tracking. Different from most existing methods [22, 43] that only explicitly utilize the states of tracked objects between adjacent frames, our core contribution is to build a _long-term memory_ (Section 3.3) that maintains the long-term temporal feature of each tracked target, together with a _temporal interaction module (TIM)_ that effectively injects the temporal information into subsequent tracking processes. Like most DETR-family methods [6], we use a ResNet-50 [15] backbone and a Transformer Encoder to produce the image feature of an input frame \(I^{t}\). As shown in Figure 1, the learnable detect query \(Q_{det}\) is fed into the _Detection Decoder_ \(\mathcal{D}_{det}\) (Section 3.2) to generate the detect embedding \(E^{t}_{det}\) for the current frame. Afterward, by querying the encoded image feature with \([E^{t}_{det},E^{t}_{tck}]\), the Transformer Joint Decoder \(\mathcal{D}_{joint}\) produces the corresponding output \([\hat{O}^{t}_{det},\hat{O}^{t}_{tck}]\). For simplicity, we merge the newborn objects in \(\hat{O}^{t}_{det}\) (yellow box) with the tracked objects' output \(\hat{O}^{t}_{tck}\), denoted by \(O^{t}_{tck}\). Afterward, we predict the classification confidence \(c^{t}_{i}\) and bounding box \(b^{t}_{i}\) corresponding to the \(i^{th}\) target from the output embeddings.
Finally, we feed the outputs from adjacent frames \([O^{t}_{tck},O^{t-1}_{tck}]\) and the long-term memory \(M^{t}_{tck}\) into the Temporal Interaction Module, updating the subsequent track embedding \(E^{t+1}_{tck}\) and long-term memory \(M^{t+1}_{tck}\). The details of our components are elaborated in the following sections.

### Detection Decoder

In previous Transformer-based methods [22, 43], the learnable detect query and the previous track query are jointly input to the Transformer Decoder from scratch. This simple idea extends the end-to-end detection Transformer [6] to multi-object tracking. Nonetheless, we argue that this design may cause misalignment between detect and track queries. As discussed in numerous works [6, 20], the learnable object query in the DETR family plays a role similar to a learnable anchor with little semantic information. On the other hand, track queries have specific semantic knowledge to resolve their category and bounding boxes, since they are generated from the output of previous frames. Therefore, as illustrated in Figure 1, we split the original Transformer Decoder into two parts. The first decoder layer is used for detection, and the remaining five layers are used for joint detection and tracking. These two decoders have the same structure but different inputs. The Detection Decoder \(\mathcal{D}_{det}\) takes the original learnable detect query \(Q_{det}\) as input and generates the corresponding detect embedding \(E^{t}_{det}\), carrying enough semantic information to locate and classify the target roughly. After that, we concatenate the detect and track embeddings together and feed them into the Joint Decoder \(\mathcal{D}_{joint}\).

### Long-Term Memory

Unlike previous methods [17, 43] that only exploit adjacent frames' information, we explicitly introduce a _long-term memory_ \(M^{t}_{tck}\) to maintain longer temporal information for tracked targets. When a newborn object is detected, we initialize its long-term memory with the current output. It should be noted that in a video stream, objects only have minor deformation and movement in consecutive frames. Thus, we suppose the semantic feature of a tracked object changes only slightly in a short time. In the same way, our long-term memory should also update smoothly over time. Inspired by [29], we apply a simple but effective running average with exponentially decaying weights to update the long-term memory \(M^{t}_{tck}\): \[\widetilde{M}^{t+1}_{tck}=(1-\lambda)M^{t}_{tck}+\lambda\cdot O^{t}_{tck}, \tag{1}\] where \(\widetilde{M}^{t+1}_{tck}\) is the new long-term memory for the next frame. The memory update rate \(\lambda\) is experimentally set to \(0.01\), following the assumption that the memory changes smoothly and consistently in consecutive frames. We also tried some other values in Table 7.
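A minimal PyTorch sketch of the update in Eq. (1) is shown below, assuming each tracked target keeps one \(d\)-dimensional memory vector; variable names are illustrative and this is not the released implementation.

```python
import torch

LAMBDA = 0.01  # memory update rate, the value set experimentally in the paper

def update_long_term_memory(memory: torch.Tensor,
                            output_emb: torch.Tensor) -> torch.Tensor:
    """Eq. (1): running average with exponentially decaying weights."""
    return (1.0 - LAMBDA) * memory + LAMBDA * output_emb

# Toy usage: 5 tracked targets with 256-dim embeddings. Newborn objects would
# instead initialize their memory directly with the current output embedding.
num_tracks, dim = 5, 256
memory = torch.randn(num_tracks, dim)    # M_t for the existing tracks
output = torch.randn(num_tracks, dim)    # O_t from the joint decoder
memory = update_long_term_memory(memory, output)  # becomes M_{t+1}
```

Keeping \(\lambda\) small makes the memory a slowly varying reference, which is what allows it to stabilize the track embedding in the Temporal Interaction Module below.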
Figure 1: **Overview of MeMOTR.** Like most DETR-based [6] methods, we exploit a ResNet-50 [15] backbone and a Transformer [32] Encoder to learn a 2D representation of an input image. We use different colors to indicate different tracked targets, and the learnable detect query \(Q_{det}\) is illustrated in gray. Then the Detection Decoder \(\mathcal{D}_{det}\) processes the detect query to generate the detect embedding \(E^{t}_{det}\), which aligns with the track embedding \(E^{t}_{tck}\) from previous frames. Long-term memory is denoted as \(M^{t}_{tck}\). The initialization process in the blue dotted arrow is applied to newborn objects. Our Long-Term Memory and Temporal Interaction Module are discussed in Sections 3.3 and 3.4. More details are illustrated in Figure 2.

### Temporal Interaction Module

**Adaptive Aggregation for Temporal Enhancement.** Issues such as blurring or occlusion are often seen in a video stream. An intuitive idea to address this problem is to use multi-frame features to enhance the single-frame representation. Therefore, we fuse the outputs from two adjacent frames with an adaptive aggregation algorithm in our MeMOTR. Due to occlusions and blurring, the output embedding \(O^{t}_{tck}\) of the current frame may be unreliable. Thus, as illustrated in Figure 2, we generate a _channel-wise weight_ \(W^{t}_{tck}\) for each tracked instance to alleviate this problem: \[W^{t}_{tck}=\mathrm{Sigmoid}(\mathrm{MLP}(O^{t}_{tck})). \tag{2}\] We multiply this weight \(W^{t}_{tck}\) with the current output \(O^{t}_{tck}\) and then concatenate the result with \(O^{t-1}_{tck}\) from the previous frame. Furthermore, we apply a two-layer MLP to produce the fusion outcome \(\widetilde{O}^{t}_{tck}\). This adaptive aggregation enhances the target representation with short-term temporal modeling. However, we do not use the above channel-wise weight for the previous output \(O^{t-1}_{tck}\). As we will discuss in Section 3.5, there is a difference between \(O^{t}_{tck}\) and \(O^{t-1}_{tck}\). During inference, we employ a score threshold \(\tau_{tck}\) to guarantee that \(O^{t-1}_{tck}\) is relatively reliable. Therefore, we input it entirely into the subsequent fusion step without the adaptive weight. **Generate Track Embedding.** As discussed in Section 3.1, we exploit the track embedding \(E^{t}_{tck}\) to produce the location and category of each tracked target. Therefore, generating more reliable and distinguishable track embeddings is the key to improving tracking performance. Our processing is illustrated in Figure 2. Since there are multiple similar objects in the same frame, we believe that learning more discriminative representations is also crucial for the tracker. Thus we employ a Multi-Head Attention [32] structure called the _memory-attention layer_ to achieve this interaction between different trajectories. Due to the reliability of the long-term memory \(M^{t}_{tck}\), we use it as \(K\), and the aggregation \(\widetilde{O}^{t}_{tck}\) and the output embedding \(O^{t}_{tck}\) as \(Q\) and \(V\), respectively. After that, we combine the long-term memory \(M^{t}_{tck}\) and the result of the memory-attention layer by addition, then feed the sum into an FFN to predict the subsequent track embedding \(\widetilde{E}^{t+1}_{tck}\). As shown in Equation (1), the long-term memory changes gradually over time. Therefore, by incorporating information from the long-term memory, the track embedding \(\widetilde{E}^{t+1}_{tck}\) avoids abrupt changes that may cause association mistakes in consecutive frames. This design significantly improves the performance of object association, as corroborated by the ablation experiments shown in Table 6.
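The sketch below assembles the pieces of the Temporal Interaction Module just described: the channel-wise weight of Eq. (2), the two-layer fusion MLP, and the memory-attention layer with \(Q=\widetilde{O}^{t}_{tck}\), \(K=M^{t}_{tck}\), \(V=O^{t}_{tck}\), followed by the residual addition with the memory and an FFN. Layer sizes and module names are illustrative assumptions, not the released MeMOTR code.

```python
import torch
import torch.nn as nn

class TemporalInteraction(nn.Module):
    """Sketch of the TIM: Eq. (2) weighting, fusion MLP, memory attention."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Channel-wise weight of Eq. (2); the MLP depth is an assumption.
        self.weight_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                        nn.Linear(dim, dim))
        # Two-layer MLP fusing [W * O_t, O_{t-1}] into the aggregation O~_t.
        self.fuse_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))
        self.memory_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, out_t, out_prev, memory):
        w = torch.sigmoid(self.weight_mlp(out_t))            # Eq. (2)
        fused = self.fuse_mlp(torch.cat([w * out_t, out_prev], dim=-1))
        # Memory attention: Q = aggregation, K = memory, V = current output.
        attn, _ = self.memory_attn(fused.unsqueeze(0), memory.unsqueeze(0),
                                   out_t.unsqueeze(0))
        # Residual addition with the long-term memory, then an FFN.
        return self.ffn(memory + attn.squeeze(0))            # E~_{t+1}

tim = TemporalInteraction()
n, d = 5, 256
next_emb = tim(torch.randn(n, d), torch.randn(n, d), torch.randn(n, d))
```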
### Inference Details

At time step \(t\), we jointly input the learnable detect query \(Q_{det}\) and track embedding \(E^{t}_{tck}\) (\(E^{0}_{tck}=\emptyset\)) into our model to produce detection and tracking results, respectively. A detection result with a confidence score of more than \(\tau_{det}\) is transformed into a newborn object. Target occlusion is a common issue in the multi-object tracking task. If a tracked object is lost (confidence \(\leq\tau_{tck}\)) in the current frame, we do not directly remove its track embedding but mark it as an _inactive_ trajectory. Afterward, the inactive target is removed entirely after \(\mathcal{T}_{miss}\) frames. It is worth noting that we do not update the track embedding and long-term memory at every time step for each object. Instead, we choose to update only those track embeddings with high confidence. The update threshold \(\tau_{next}\) yields the following update rule: \[[E^{t+1}_{i},M^{t+1}_{i}]=\begin{cases}[\widetilde{E}^{t+1}_{i},\widetilde{M}^{t+1}_{i}],&c^{t}_{i}>\tau_{next}\\ [E^{t}_{i},M^{t}_{i}],&c^{t}_{i}\leq\tau_{next}\end{cases}, \tag{3}\] where \(i\) is the target index, \(t\) is the frame index, and \(c^{t}_{i}\) is the predicted classification confidence of the \(i^{th}\) object at time step \(t\). \(\widetilde{E}^{t+1}_{i}\) and \(\widetilde{M}^{t+1}_{i}\) are the predictions of the track embedding and long-term memory, generated by the Temporal Interaction Module shown in Figure 2. For simplicity, we set \(\tau_{det}=\tau_{tck}=\tau_{next}=0.5\) in our experiments. \(\mathcal{T}_{miss}\) is set to \(30\), \(15\) and \(10\) on DanceTrack, MOT17 and BDD100K, respectively.

## 4 Experiments

### Datasets and Metrics

**Datasets.** We mainly evaluate MeMOTR on the DanceTrack [30] dataset since it poses more severe association challenges than traditional pedestrian tracking datasets. For a comprehensive evaluation, we also conduct experiments on MOT17 [23] and BDD100K [42]. **Metrics.** Because it provides a balanced way to explicitly measure both detection and association performance, we use the Higher Order Tracking Accuracy (HOTA) metric [21] to evaluate our method, especially analyzing our memory mechanism using Association Accuracy (AssA). We also list the MOTA [2] and IDF1 [26] metrics in our experimental results.

### Implementation Details

Following the settings in MOTR [43], we use several data augmentation methods, such as random resize and random crop. The shorter side of the input image is resized to 800, and the maximum size is restricted to 1536. We build MeMOTR upon DAB-Deformable-DETR [20] with a ResNet-50 [15] backbone and initialize our model with the official DAB-Deformable-DETR [20] weights pre-trained on the COCO [19] dataset. We suggest that the anchor-based position-prior from DAB-Deformable-DETR is quite effective due to the smoothness of the tracked box over time, and it can be further exploited in future works. We also provide the results of our model based on Deformable-DETR [49] for fair comparison in Table 1. Our models are trained with PyTorch on 8 NVIDIA Tesla V100 GPUs. By using PyTorch gradient checkpointing, we implement a memory-optimized version that can also be trained on NVIDIA GPUs with less than 10 GB of GPU memory. The batch size is set to 1 per GPU, and each batch contains a video clip with multiple frames. Within each clip, video frames are sampled with random intervals from 1 to 10. We use the AdamW optimizer with an initial learning rate of \(2.0\times 10^{-4}\). During training, we filter out tracked targets below the score threshold \(\tau_{update}=0.5\) and IoU threshold \(\tau_{IoU}=0.5\). On **DanceTrack** [30], we train MeMOTR for \(18\) epochs on the train set and drop the learning rate by a factor of \(10\) at the \(12^{th}\) epoch. Firstly, we use two frames within a clip for training.
We then increase the number of clip frames to 3, 4, and 5 at the \(6^{th}\), \(10^{th}\), and \(14^{th}\) epochs, respectively. On **MOT17** [23], due to the small train set (about 5K frames), overfitting occurs easily. Therefore, we add the CrowdHuman [28] validation set to build a joint train set with the MOT17 training data. The CrowdHuman val set provides about 4K static images, so we apply random shifts from CenterTrack [47] to generate pseudo trajectories. Finally, we train MeMOTR for \(130\) epochs, and the learning rate decays by a factor of \(10\) at the \(120^{th}\) epoch. The initial length of the training video clip is \(2\) and gradually increases to \(3\) and \(4\) at the \(60^{th}\) and \(100^{th}\) epochs, respectively. On **BDD100K** [42], we modify the sampling length at the \(6^{th}\) and \(10^{th}\) epochs and train for 14 epochs in total, reducing the learning rate at the \(12^{th}\) epoch.

### Comparison on the DanceTrack Dataset

Since DanceTrack [30] is a dataset with various motions that cannot be modeled by classic linear motion estimation [3, 5], it provides a better testbed to verify our tracking performance, especially the association performance. We compare MeMOTR with the state-of-the-art methods on the DanceTrack [30] test set in Table 1. Our method achieves \(68.5\) HOTA and gains a vast lead on the AssA metric (\(58.4\) AssA), even surpassing some methods [40] that use additional datasets for training. Due to the limitations of their linear motion estimation modules, some tracking-by-detection methods, such as ByteTrack [44], achieve great detection results (\(71.0\) DetA) but still cannot handle complex object association problems (\(32.1\) AssA). However, their MOTA metrics are still high because MOTA overemphasizes detection performance. Our temporal interaction module, shown in Figure 2, leverages temporal information gracefully and efficiently. Moreover, the separated detection decoder \(\mathcal{D}_{det}\) discussed in Section 3.2 alleviates the conflicts between detection and tracking tasks. Therefore, we achieve an impressive association performance (\(58.4\) AssA and \(71.2\) IDF1) and competitive detection performance (\(80.5\) DetA) compared with the state-of-the-art methods. We further prove our components' effectiveness in Section 4.6.

### Comparison on the MOT17 Dataset

In order to make a comprehensive comparison, we also evaluate our method on the classic pedestrian tracking benchmark. Table 2 compares our method with state-of-the-art methods on the MOT17 [23] test set.
Recent tracking-by-detection methods [41, 44] exploit robust detectors (like YOLOX [12]) to achieve excellent detection performance (up to \(64.8\) DetA).

\begin{table} \begin{tabular}{l|c c c c c} \hline \hline Methods & HOTA & DetA & AssA & MOTA & IDF1 \\ \hline _w/o extra data:_ & & & & & \\ FairMOT [45] & 39.7 & 66.7 & 23.8 & 82.2 & 40.8 \\ CenterTrack [47] & 41.8 & 78.1 & 22.6 & 86.8 & 35.7 \\ TraDeS [37] & 43.3 & 74.5 & 25.4 & 86.2 & 41.2 \\ TransTrack [31] & 45.5 & 75.9 & 27.5 & 88.4 & 45.2 \\ ByteTrack [44] & 47.7 & 71.0 & 32.1 & 89.6 & 53.9 \\ GTR [48] & 48.0 & 72.5 & 31.9 & 84.7 & 50.3 \\ QDTrack [24] & 54.2 & 80.1 & 36.8 & 87.7 & 50.4 \\ MOTR [43] & 54.2 & 73.5 & 40.2 & 79.7 & 51.5 \\ OC-SORT [5] & 55.1 & 80.3 & 38.3 & **92.0** & 54.6 \\ C-BIoU [41] & 60.6 & **81.3** & 45.4 & 91.6 & 61.6 \\ MeMOTR\({}^{*}\) (ours) & 63.4 & 77.0 & 52.3 & 85.4 & 65.5 \\ MeMOTR (ours) & **68.5** & 80.5 & **58.4** & 89.9 & **71.2** \\ \hline _with extra data:_ & & & & & \\ MT\_IoT [40] & 66.7 & 84.1 & 53.0 & 94.0 & 70.6 \\ MOTRv2 [46] & 69.9 & 83.0 & 59.0 & 91.9 & 71.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison with state-of-the-art methods on the DanceTrack [30] test set. Results for existing methods are from DanceTrack [30]. MeMOTR\({}^{*}\) denotes the result based on standard Deformable-DETR.

Since performance on MOT17 overemphasizes detection performance, these methods perform immensely well. In this regard, there is still a massive gap in the detection performance of Transformer-based methods [4, 43], because too many dense and small object predictions are involved. In addition, the joint query in a shared Transformer decoder produces tracking and detection simultaneously, which may cause internal conflicts. The detect query is inhibited by the track query in the self-attention [32] structure, limiting the ability to detect newborn objects, especially those close to tracked targets, and vice versa. Because of this, TransTrack [31] achieves significantly better detection performance (\(61.6\) DetA) due to its siamese network structure. This architecture decouples tracking and detection to resolve the above conflict, but its simple post-processing matching algorithm decreases its association performance. On the other hand, we found that Transformer-based methods [43] suffer from serious overfitting problems on MOT17 [23] because of the tiny train set, which only contains about 5K frames. Although we use an additional CrowdHuman validation set for training, as mentioned in Section 4.2, severe overfitting still happens. Thus, we get \(\sim\)\(90.0\) HOTA and \(\sim\)\(95.0\) MOTA on the train set. However, too much additional training data can lead to an inductive bias toward static people. Therefore, we argue that the train set of MOT17 is too small to train our model completely. Eventually, our method slightly improves the performance on the MOT17 test set to \(58.8\) HOTA. We gain competitive detection accuracy (DetA) compared with other Transformer-based methods. In particular, we improve the performance of object association, as reflected by the AssA and IDF1 metrics. As a method that also uses a memory mechanism, our MeMOTR achieves higher AssA and IDF1, surpassing MeMOT [4] by \(3.2\%\) and \(2.5\%\), respectively. These experimental results further validate the effectiveness of our method. On BDD100K, our method also performs well overall (53.6 TETA), especially in association (56.7 mAssocA), further demonstrating the effectiveness of our proposed method for associating targets.
### Ablation Study

In this section, we study several components of our model, such as the long-term memory, adaptive aggregation, memory-attention layer, and separated detection decoder. Since our main contribution is a better utilization of temporal information, we choose to conduct ablation experiments on DanceTrack [30] due to its more severe object association challenges. In addition, DanceTrack [30] has more extensive training data, which helps avoid severe overfitting (about \(10\times\) compared with MOT17 [23]). We train our model on the train set and evaluate it on the official val set. **Detection Decoder.** For joint tracking and detection, the tracking-by-query paradigm processes detect and track queries in a shared Transformer decoder from scratch. However, the track query has rich semantic content from the previously tracked targets, in contrast to the object query for detection. Moreover, learnable detect anchors often cover a larger range to find potential targets, whereas the anchor of a tracked object pays more attention to the small area where the target appeared in the previous frame. This may cause a large gap between the anchors of tracked and newborn objects, as visualized in Figure 3 (left). In this paper, we apply a separate Transformer decoder layer to perform preliminary target detection. The output \(E^{t}_{det}\) of this Detection Decoder \(\mathcal{D}_{det}\) is better aligned with the track embedding \(E^{t}_{tck}\) generated by the previous frame, improving the tracking performance. We experimentally confirmed the effectiveness of this design, as shown in Table 4. Using only one separate Detection Decoder layer dramatically improves the HOTA and AssA metrics by \(1.8\%\) and \(2.8\%\), respectively. However, continuing to increase the layers of the Detection Decoder reduces the refinement steps of track embeddings, thus slightly weakening the association performance. Furthermore, we visualize the bounding boxes after the Detection Decoder in Figure 3 (right). This indicates that \(\mathcal{D}_{det}\) is able to locate objects roughly. **Adaptive Aggregation.** In Section 3.4, we design an adaptive aggregation, which dynamically fuses object features from adjacent frames. We ablate this structure in Table 5. The first two results only use the current output \(O^{t}_{tck}\) to generate the temporal aggregation \(\widetilde{O}^{t}_{tck}\). In contrast, the next two lines fuse the previous output \(O^{t-1}_{tck}\) into \(\widetilde{O}^{t}_{tck}\). Introducing \(O^{t-1}_{tck}\) provides additional object features from neighboring frames, thus improving tracking performance. We suppose this offers a complementary feature augmentation that can combat video ambiguity and uncertainty. Furthermore, we explore the impact of the dynamic weight \(W^{t}_{tck}\). As shown in Table 5, it only provides a little boost without \(O^{t-1}_{tck}\) from the previous frame. We explain that the dynamic \(W^{t}_{tck}\) leads to missing information without complementary features from the previous \(O^{t-1}_{tck}\). The result of the last row shows that utilizing both the dynamic weight \(W^{t}_{tck}\) and the previous output \(O^{t-1}_{tck}\) produces significantly better performance, with \(+2.6\%\) HOTA and \(+2.3\%\) AssA. **Long-Term Memory.** We propose a long-term memory in Section 3.3 to utilize longer temporal information and further inject it into the subsequent track embedding to augment the object feature.
We explore the impact of long-term memory \(M^{t}_{tck}\) and show the experimental results in Table 6. For a more comprehensive comparison, we also experiment with another track embedding generation structure that removes the memory-attention layer by passing the temporal aggregation \(\hat{O}_{tck}^{t}\) directly to an FFN network. Our experimental results show that utilizing long-term memory produces a better association performance, with \(+0.8\%\) and \(+4.3\%\) AssA for the variants _w/o_ and _w/_ the memory-attention layer, respectively. The injection of long-term memory significantly stabilizes and augments the identity information of each track embedding, as visualized in Figures 4(c) and 4(d).

\begin{table} \begin{tabular}{c c|c c c c} \hline \hline \(M^{t}_{tck}\) & \(attn\) & HOTA & DetA & AssA & IDF1 \\ \hline \multicolumn{2}{c|}{_naive_} & 61.1 & 74.2 & 50.6 & 63.7 \\ \hline & & 61.9 & 73.8 & 52.1 & 64.1 \\ ✓ & & 62.5 & 74.2 & 52.9 & 64.7 \\ \hline & ✓ & 61.1 & 74.0 & 50.7 & 62.4 \\ ✓ & ✓ & **63.9** & **74.6** & **55.0** & **67.1** \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation study of the long-term memory \(M^{t}_{tck}\) and the memory-attention layer \(attn\). _naive_ denotes a naive baseline with a single FFN applied to \(O^{t}_{tck}\) to generate the track embedding \(\tilde{E}^{t+1}_{tck}\).

\begin{table} \begin{tabular}{c|c c c c c} \hline \hline \(\mathcal{L}_{\mathcal{D}_{det}}\) & HOTA & DetA & AssA & MOTA & IDF1 \\ \hline 0 & 62.1 & 74.3 & 52.2 & 83.1 & 65.6 \\ 1 & **63.9** & **74.6** & **55.0** & **83.4** & **67.1** \\ 2 & 63.2 & 73.8 & 54.3 & 81.9 & 65.8 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation experiments on the number of layers of the separate Detection Decoder, denoted as \(\mathcal{L}_{\mathcal{D}_{det}}\).

\begin{table} \begin{tabular}{c c|c c c c} \hline \hline \(O^{t-1}_{tck}\) & \(W^{t}_{tck}\) & HOTA & DetA & AssA & IDF1 \\ \hline & & 62.3 & 73.9 & 52.7 & 64.6 \\ & ✓ & 62.4 & 74.5 & 52.5 & 64.6 \\ \hline ✓ & & 62.7 & 74.4 & 53.1 & 65.3 \\ ✓ & ✓ & **63.9** & **74.6** & **55.0** & **67.1** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablations on different designs of Adaptive Aggregation.

**Memory Attention.** In Table 6, we also ablate the memory-attention layer. It shows that by using the memory-attention layer, our MeMOTR achieves much better performance (\(63.9\) _vs._ \(62.5\) HOTA), especially improving AssA by \(2.1\%\). This attention layer establishes interactions between different trajectories, helping the track embeddings learn discriminative features. However, without the long-term memory \(M_{tck}^{t}\), the memory-attention layer produces worse performance (\(-1.4\%\) AssA and \(-1.7\%\) IDF1). We explain that the track embedding without memory augmentation is unstable; interacting with such unreliable information can therefore be counterproductive, as visualized in Figure 4(b).

**Memory Update Rate.** Here, we explore the impact of the long-term memory update rate \(\lambda\) in Equation 1 on tracking performance. As shown in Table 7, when progressively increasing \(\lambda\) from \(0.005\) to \(0.04\), our model achieves the highest HOTA score at \(\lambda=0.01\), while the DetA score decreases slightly. We suggest that the update rate \(\lambda\) is a hyperparameter that needs to be chosen according to the dataset. For example, scenarios with plenty of non-rigid target deformation may need a higher memory update rate to adapt to the rapidly changing features.
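To make the memory-attention layer ablated above concrete, here is a minimal sketch of one plausible realization; the module layout (a single `nn.MultiheadAttention` with a residual connection and layer norm, queried by memory-augmented embeddings) is our assumption rather than the exact design:

```python
import torch
import torch.nn as nn

class MemoryAttention(nn.Module):
    """Sketch: let track embeddings of different targets interact,
    so that they become more distinguishable (cf. Table 6)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tck: torch.Tensor, mem: torch.Tensor) -> torch.Tensor:
        # tck, mem: (num_targets, dim); memory-augmented embeddings
        # attend to each other across targets.
        q = (tck + mem).unsqueeze(0)            # add batch dimension
        out, _ = self.attn(q, q, q)
        return self.norm(tck + out.squeeze(0))  # residual + norm
```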
### Limitations

Although our MeMOTR brings a significant improvement in association performance, detection performance is still a drawback, especially in crowded scenarios (like MOT17 [23]). During experiments, we observed that newborn objects are sometimes suppressed by tracked targets in the self-attention structure, which leads to reduced detection performance. Therefore, resolving this conflict is a crucial challenge for the joint tracking paradigm; doing so may improve the detection capabilities and thus boost the overall tracking performance of the model, as studied in [46]. In addition, in pedestrian tracking, existing datasets are still limited in size and diversity. We suggest that training with additional simulation datasets (like MOTSynth [11]) may alleviate the overfitting problem of our model and achieve better tracking performance.

## 5 Conclusion

We have proposed MeMOTR, an end-to-end long-term memory-augmented Transformer for multi-object tracking. Our method builds a stable long-term memory for each tracked object and exploits this memory to augment the representation of the track embedding, thus improving its association performance. Furthermore, by leveraging a memory-attention layer, our model makes different targets more distinguishable. As a result, our approach achieves state-of-the-art performance on MOT benchmarks, especially in scenes with irregular motion patterns. Extensive ablation experiments and visualizations demonstrate the effectiveness of our components. We hope that future work will pay more attention to the use of long-term temporal information for object tracking.

**Acknowledgements.** This work is supported by the National Key R&D Program of China (No. 2022ZD0160900), the National Natural Science Foundation of China (No. 62076119, No. 61921006), the Fundamental Research Funds for the Central Universities (No. 020214380091, No. 020214380099), and the Collaborative Innovation Center of Novel Software Technology and Industrialization. Besides, Ruopeng Gao would like to thank Muyan Yang for her social support.

Figure 4: **Visualization of Track Embedding \(E_{tck}^{t}\) (the first \(50\) frames in sequence dancetrack0063) from different structure designs, using t-Distributed Stochastic Neighbor Embedding (t-SNE). Track embeddings for different tracked targets (IDs) are marked in different colors and shapes. Our design 4(d) helps the model learn a more stable and distinguishable representation for the track embedding. Corresponding tracking performance is shown in Table 6.**

## Appendix A Boosting Tracking Performance

For the tracking-by-detection paradigm, the development of the object detection task allowed upgrading the detector used in MOT from Faster R-CNN [25] to YOLOX [12], which brought impressive detection performance. Although Deformable-DETR [49] has competitive detection performance, it still lags behind popular detectors such as YOLOX [12], and this impairs the final tracking performance. Recently, unlike the original Deformable-DETR [49], some methods [46] generate the position embeddings from learnable anchors. On the one hand, this design improves the model's detection performance, as discussed in many object detection studies [20]. On the other hand, the anchor-based position prior is quite effective for tracking due to frame continuity. Therefore, as discussed in Section 4.2, we built our MeMOTR upon DAB-Deformable-DETR [20] instead of Deformable-DETR [49].
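As a rough illustration of anchor-based position embeddings, here is a minimal sketch in the spirit of DAB-DETR [20]; the exact frequency scheme and dimensionality are our assumptions:

```python
import math
import torch

def anchor_to_pos_embed(anchors: torch.Tensor, dim: int = 256) -> torch.Tensor:
    """Sinusoidal position embeddings generated from anchor boxes.

    anchors: (N, 4) tensor of (cx, cy, w, h) in [0, 1];
    returns an (N, dim) embedding, with dim divisible by 8.
    """
    scale = 2 * math.pi
    freqs = 10000 ** (torch.arange(dim // 8, dtype=torch.float32) / (dim // 8))
    pos = anchors.unsqueeze(-1) * scale / freqs        # (N, 4, dim // 8)
    pos = torch.stack((pos.sin(), pos.cos()), dim=-1)  # (N, 4, dim // 8, 2)
    return pos.flatten(1)                              # (N, dim)
```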
We believe that the better detection performance of DAB-Deformable-DETR will lead to better tracking performance, as shown in Table 8 (#2 _vs._ #5). We argue that DAB-Deformable-DETR can be adopted in future works as a technology upgrade (like the move from Faster R-CNN [25] to YOLOX [12] in the tracking-by-detection paradigm). For a fair comparison with previous transformer-based methods [22, 43], we also provide the results of MeMOTR based on the standard Deformable-DETR in Tables 1 and 8 (#2 and #3). This indicates that our method still has impressive performance without DAB-Deformable-DETR. As done in MOTRv2 [46], we further add the anchor-based position generation process to the standard Deformable-DETR in our method, thus slightly improving the tracking performance (Table 8, #3 _vs._ #4). Moreover, we also add the YOLOX [12] proposals to our model following MOTRv2 [46]. As they concluded, this significantly improves the detection and tracking performance simultaneously (Table 8, #8). However, since the proposals are generated from a frozen CNN-based model, this makes the whole model a non-fully-end-to-end method. For this reason, we list MOTRv2 [46] in Table 2 as a new hybrid architecture. In summary, we provide the cumulative improvements over MOTR [43] on the val and test sets of DanceTrack [30], as shown in Table 8. This further verifies the effectiveness of our various components and gives a more intuitive comparison.

\begin{table} \begin{tabular}{l|c c c c c|c c c c c} \hline \hline \# Row & \multicolumn{5}{c|}{val set} & \multicolumn{5}{c}{test set} \\ & HOTA & DetA & AssA & MOTA & IDF1 & HOTA & DetA & AssA & MOTA & IDF1 \\ \hline 1. MOTR (baseline) & 51.7 & 69.4 & 38.7 & 75.6 & 49.7 & 54.2 & 73.5 & 40.2 & 79.7 & 51.5 \\ \hline 2. \#1 + _memory-augment_ & 56.5 & 70.4 & 45.5 & 78.4 & 58.8 & 62.5 & 77.0 & 50.9 & 85.1 & 63.5 \\ 3. \#2 + \(\mathcal{L}_{d}\), \(\mathcal{L}_{j}=1,5\) & 61.0 & 71.2 & 52.5 & 79.2 & 64.1 & 63.4 & 77.0 & 52.3 & 85.4 & 65.5 \\ 4. \#3 + Anchor & 61.1 & 73.0 & 51.3 & 81.3 & 63.8 & 64.6 & 78.4 & 53.4 & 87.6 & 67.3 \\ 5. \#2 + DAB-D-DETR & 62.1 & 74.3 & 52.2 & 83.1 & 65.6 & 65.9 & 78.8 & 55.2 & 87.9 & 68.9 \\ **6. \#5 + \(\mathcal{L}_{d}\), \(\mathcal{L}_{j}=1,5\)** & **63.9** & **74.6** & **55.0** & **83.4** & **67.1** & **68.5** & **80.5** & **58.4** & **89.8** & **71.2** \\ 7. \#5 + \(\mathcal{L}_{d}\), \(\mathcal{L}_{j}=2,4\) & 63.2 & 73.8 & 54.3 & 81.9 & 65.8 & 66.2 & 80.2 & 54.8 & 89.5 & 68.7 \\ 8. \#6 + YOLOX [12] & 66.8 & 78.7 & 57.0 & 88.1 & 70.5 & 70.0 & 81.8 & 60.1 & 90.3 & 72.5 \\ \hline \hline \end{tabular} \end{table} Table 8: Supplemental comparison on DanceTrack [30]. Best viewed in color. Results with the same base color use the same DETR framework (D-DETR [49] or DAB-D-DETR [20]). \(\mathcal{L}_{d}\) and \(\mathcal{L}_{j}\) are the numbers of detection and joint decoder layers in Figure 1, respectively. Note that, except for the baseline (#1), the training augmentations of MOTR [43] (track query erasing and false positive inserting) are removed from all other experiments.

## Appendix B Comparison on Difficult Sequences

In order to further certify the improvement of our method on the target association challenge, we list experimental metrics on some challenging sequences. We selected the eight sequences with the lowest AssA metric of MOTR [43] on the DanceTrack [30] validation set. As shown in Table 9, the association results on these complex sequences are unsatisfactory (\(23.6\) average AssA), although the detection performance is passable (\(65.8\) average DetA). Our method substantially improves the performance of object association (\(35.5\) _vs._ \(23.6\) AssA) while slightly improving detection performance (\(70.9\) _vs._ \(65.8\) DetA). However, compared to the overall association performance (\(58.4\) AssA of our method), there is still a significant deficiency in the results on these challenging sequences. Therefore, we suggest that improving the object association performance of multi-object tracking is still an unsolved problem that should not be ignored.

## Appendix C More visualizations

In this section, we supply additional visualization results. As in Figure 4 of our main paper, we utilize t-Distributed Stochastic Neighbor Embedding (t-SNE) to visualize track embeddings. More visualization results are provided in Figure 5; the upper row (Figures 5(a) to 5(d)) is from the _dancetrack0025_ sequence, and the lower row (Figures 5(e) to 5(h)) is from the _dancetrack0034_ sequence. These results further verify that our _long-term memory_ and _memory-attention layer_ help learn a more stable and distinguishable representation for the tracked target.
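For reference, this kind of 2-D t-SNE projection can be produced with scikit-learn; a minimal sketch (our assumption, not necessarily the exact script used for the figures):

```python
import numpy as np
from sklearn.manifold import TSNE

def project_track_embeddings(embeddings: np.ndarray) -> np.ndarray:
    """Project (num_samples, dim) track embeddings to 2-D for plotting;
    the resulting points are then colored by their target ID."""
    return TSNE(n_components=2, perplexity=30.0, init="pca").fit_transform(embeddings)
```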
2303.16926
Charged Gauss-Bonnet black holes supporting non-minimally coupled scalar clouds: Analytic treatment in the near-critical regime
Recent numerical studies have revealed the physically intriguing fact that charged black holes whose charge-to-mass ratios are larger than the critical value $(Q/M)_{\text{crit}}=\sqrt{2(9+\sqrt{6})}/5$ can support hairy matter configurations which are made of scalar fields with a non-minimal negative coupling to the Gauss-Bonnet invariant of the curved spacetime. Using {\it analytical} techniques, we explore the physical and mathematical properties of the composed charged-black-hole-nonminimally-coupled-linearized-massless-scalar-field configurations in the near-critical $Q/M\gtrsim (Q/M)_{\text{crit}}$ regime. In particular, we derive an analytical resonance formula that describes the charge-dependence of the dimensionless coupling parameter $\bar\eta_{\text{crit}}=\bar\eta_{\text{crit}}(Q/M)$ of the composed Einstein-Maxwell-nonminimally-coupled-scalar-field system along the {\it existence-line} of the theory, a critical border that separates bald Reissner-Nordstr\"om black holes from hairy charged-black-hole-scalar-field configurations. In addition, it is explicitly shown that the large-coupling $-\bar\eta_{\text{crit}}(Q/M)\gg1$ analytical results derived in the present paper for the composed Einstein-Maxwell-scalar theory agree remarkably well with direct numerical computations of the corresponding black-hole-field resonance spectrum.
Shahar Hod
2023-03-29T18:00:01Z
http://arxiv.org/abs/2303.16926v1
# Charged Gauss-Bonnet black holes supporting non-minimally coupled scalar clouds: Analytic treatment in the near-critical regime

###### Abstract

Recent numerical studies have revealed the physically intriguing fact that charged black holes whose charge-to-mass ratios are larger than the critical value \((Q/M)_{\rm crit}=\sqrt{2(9+\sqrt{6})}/5\) can support hairy matter configurations which are made of scalar fields with a non-minimal negative coupling to the Gauss-Bonnet invariant of the curved spacetime. Using _analytical_ techniques, we explore the physical and mathematical properties of the composed charged-black-hole-nonminimally-coupled-linearized-massless-scalar-field configurations in the near-critical \(Q/M\gtrsim(Q/M)_{\rm crit}\) regime. In particular, we derive an analytical resonance formula that describes the charge-dependence of the dimensionless coupling parameter \(\bar{\eta}_{\rm crit}=\bar{\eta}_{\rm crit}(Q/M)\) of the composed Einstein-Maxwell-nonminimally-coupled-scalar-field system along the _existence-line_ of the theory, a critical border that separates bald Reissner-Nordstrom black holes from hairy charged-black-hole-scalar-field configurations. In addition, it is explicitly shown that the large-coupling \(-\bar{\eta}_{\rm crit}(Q/M)\gg 1\) analytical results derived in the present paper for the composed Einstein-Maxwell-scalar theory agree remarkably well with direct numerical computations of the corresponding black-hole-field resonance spectrum.

## I Introduction

Early no-hair theorems [1; 2; 3; 4; 5; 6], which were mainly motivated by Wheeler's no-hair conjecture for black holes [7; 8], have revealed the physically interesting fact that black holes with spatially regular horizons cannot support static matter configurations which are made of scalar fields. However, recent studies [9; 10; 11; 12] (see also [13; 14; 15; 16]) have explicitly demonstrated that the no-hair conjecture can be violated in generalized Einstein-scalar field theories whose actions are characterized by a direct non-trivial coupling \(f(\phi)\mathcal{G}\) between the scalar field \(\phi\) and the Gauss-Bonnet invariant \(\mathcal{G}\) of the curved spacetime. In particular, it has been revealed in the physically important works [9; 10; 11; 12] that spatially regular matter configurations which are made of non-minimally coupled scalar fields can be supported in asymptotically flat black-hole spacetimes [17; 18; 19; 20; 21; 22; 23; 24]. The physically intriguing phenomenon of spontaneous scalarization of black holes in generalized Einstein-Gauss-Bonnet-scalar field theories [9; 10; 11; 12; 13; 14; 15; 16] owes its existence to the presence of an effective spatially-dependent mass term of the linearized form \(-\bar{\eta}\mathcal{G}\), which reflects the direct non-trivial coupling between the scalar field and the Gauss-Bonnet curvature invariant, in the Klein-Gordon wave equation of the supported scalar configurations [see Eq. (10) below]. The dimensionless physical parameter \(\bar{\eta}\) of the composed field theory controls the strength of the direct non-minimal interaction between the scalar field and the spatially-dependent Gauss-Bonnet invariant of the curved spacetime. The spontaneous scalarization phenomenon of _charged_ black holes in composed Einstein-Maxwell-Gauss-Bonnet-scalar-field theories has been explored, using numerical techniques, in the physically interesting works [25; 26] (see [13; 14; 27; 28; 29; 30; 31; 32] for recent studies of the spontaneous scalarization phenomenon of asymptotically flat spinning black holes).
Intriguingly, it has been revealed in [25; 26] that the composed charged-black-hole-nonminimally-coupled-scalar-field system is characterized by a charge-dependent _existence-line_ \(\bar{\eta}=\bar{\eta}(\bar{Q})\) [33] which marks the onset of the spontaneous scalarization phenomenon in the generalized Einstein-Maxwell-scalar field theory. In particular, for a given value of the black-hole electric charge \(\bar{Q}\), the critical existence line marks the boundary between bald Reissner-Nordstrom black holes and composed charged-black-hole-nonminimally-coupled-scalar-field hairy configurations. The charge-dependent critical existence-line \(\bar{\eta}=\bar{\eta}(\bar{Q})\) of the generalized Einstein-Maxwell-scalar theory is composed of charged Reissner-Nordstrom black holes that support scalar 'clouds' [34; 35], spatially regular matter configurations which are made of the non-trivially coupled linearized scalar fields. The characteristic existence-line of the physical system is universal in the sense that different non-trivially coupled Einstein-Maxwell-scalar field theories that share the same weak-field functional behavior \(f(\phi)=1+\bar{\eta}\phi^{2}/2+O(\phi^{4})\) [25; 26] of the coupling function are characterized by the same critical boundary between bald and hairy black-hole spacetimes. Interestingly, the numerical results presented in [25; 26] have revealed that the spontaneous scalarization phenomenon of black holes in composed Einstein-Maxwell-scalar field theories with _negative_ values of the non-minimal coupling parameter \(\bar{\eta}\) may be induced by the electric charge of the supporting black hole. In particular, one finds that, in the \(\bar{\eta}<0\) regime, the onset of the spontaneous scalarization phenomenon is marked by the dimensionless black-hole electric charge [25; 26] \[\bar{Q}_{\rm crit}=\frac{\sqrt{2(9+\sqrt{6})}}{5}. \tag{1}\] Only charged black holes with \(\bar{Q}\geq\bar{Q}_{\rm crit}\) can support non-minimally coupled spatially regular scalar clouds in the negative coupling \(\bar{\eta}<0\) regime. Intriguingly, it has been demonstrated numerically in [25; 26] that, in the near-critical regime \(\bar{Q}/\bar{Q}_{\rm crit}\to 1^{+}\), the hairy charged black holes are characterized by the large-coupling asymptotic relation \[-\bar{\eta}\to\infty\quad\mbox{ for }\quad\bar{Q}/\bar{Q}_{\rm crit}\to 1^{+}. \tag{2}\] The main goal of the present paper is to study, using _analytical_ techniques, the physical and mathematical properties of the composed charged-Reissner-Nordstrom-black-hole-nonminimally-coupled-scalar-field cloudy configurations in the near-critical regime \(\bar{Q}\gtrsim\bar{Q}_{\rm crit}\). In particular, a remarkably compact WKB resonance formula, which describes the charge-dependence \(\bar{\eta}=\bar{\eta}(\bar{Q})\) of the characteristic critical existence-line of the composed Einstein-Maxwell-Gauss-Bonnet-nonminimally-coupled-massless-scalar-field theory in the dimensionless large-coupling \(-\bar{\eta}\gg 1\) regime, will be derived. Interestingly, we shall explicitly show below that the analytically derived near-critical resonance formula of the present paper [see Eq. (40) below] provides a simple _analytical_ explanation for the _numerically_ observed [25; 26] monotonically decreasing functional behavior \(|\bar{\eta}|=|\bar{\eta}(\bar{Q})|\) of the critical existence-line that characterizes, in the \(\bar{\eta}<0\) regime, the composed charged-Reissner-Nordstrom-black-hole-nonminimally-coupled-linearized-scalar-field configurations.
## II Description of the system

We explore the physical and mathematical properties of spatially regular scalar 'clouds' (linearized scalar field configurations) which are supported by charged Reissner-Nordstrom black holes. The supported massless scalar fields are characterized by a direct non-trivial (non-minimal) coupling to the Gauss-Bonnet invariant of the curved charged spacetime. The action of the composed Einstein-Maxwell-Gauss-Bonnet-nonminimally-coupled-massless-scalar-field system is given by the expression [26; 36] \[S=\int d^{4}x\sqrt{-g}\Big{[}\frac{1}{4}R-\frac{1}{4}F_{\alpha\beta}F^{\alpha\beta}-\frac{1}{2}\nabla_{\alpha}\phi\nabla^{\alpha}\phi+f(\phi)\mathcal{G}\Big{]}\, \tag{3}\] where \[\mathcal{G}\equiv R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^{2} \tag{4}\] is the Gauss-Bonnet curvature invariant. Following [11; 12; 18], we assume that the scalar function \(f(\phi)\), which controls the non-trivial direct coupling of the scalar field to the Gauss-Bonnet curvature invariant, is characterized by the leading order universal functional behavior \[f(\phi)=\frac{1}{2}\eta\phi^{2} \tag{5}\] in the weak-field regime. As discussed in [11; 12; 18], the functional behavior (5) of the scalar coupling function in the weak scalar field regime guarantees that the Einstein-matter field equations are satisfied by the familiar scalarless black-hole solutions of general relativity (in our case, the Reissner-Nordstrom black-hole spacetime) in the \(\phi\to 0\) limit. The physical parameter \(\eta\) [37], which may take either positive or negative values [25; 26], determines the strength of the direct (non-minimal) interaction between the spatially regular massless scalar field configurations and the Gauss-Bonnet invariant (4) of the charged curved spacetime. The supporting charged Reissner-Nordstrom black-hole spacetime is characterized by the line element [38; 39; 40] \[ds^{2}=-\frac{\Delta}{r^{2}}dt^{2}+\frac{r^{2}}{\Delta}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}\, \tag{6}\] where the metric function \(\Delta\) is given by the functional expression \[\Delta\equiv r^{2}-2Mr+Q^{2}. \tag{7}\] The physical parameters \(\{M,Q\}\) are respectively the mass and electric charge of the central supporting Reissner-Nordstrom black hole. The horizon radii of the curved black-hole spacetime (6) are determined by the zeros of the metric function \(\Delta\): \[r_{\pm}=M\pm(M^{2}-Q^{2})^{1/2}. \tag{8}\] The radially-dependent Gauss-Bonnet invariant, which characterizes the curved Reissner-Nordstrom black-hole spacetime (6), is given by the expression [26] \[{\cal G}_{\rm RN}(r;M,Q)=\frac{8}{r^{8}}\big{(}6M^{2}r^{2}-12MQ^{2}r+5Q^{4}\big{)}. \tag{9}\] A variation of the action (3) with respect to the scalar field yields the generalized Klein-Gordon equation [26] \[\nabla^{\nu}\nabla_{\nu}\phi=\mu_{\rm eff}^{2}\phi \tag{10}\] for the non-minimally coupled scalar field with the radially-dependent effective mass term \[\mu_{\rm eff}^{2}(r;M,Q)=-\eta{\cal G}. \tag{11}\] This effective mass term reflects the direct (non-minimal) coupling of the scalar field \(\phi\) to the Gauss-Bonnet invariant \({\cal G}\) [see Eq. (9)] of the charged curved spacetime.
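For later reference, it is instructive to note where the Gauss-Bonnet invariant (9) changes sign. Its sign is governed by the quadratic \(6M^{2}r^{2}-12MQ^{2}r+5Q^{4}\), whose roots follow from an elementary computation (the notation \(r^{\cal G}_{\pm}\) is introduced here merely for convenience): \[r^{\cal G}_{\pm}=\frac{12MQ^{2}\pm\sqrt{(12MQ^{2})^{2}-120M^{2}Q^{4}}}{12M^{2}}=\Big{(}1\pm\frac{1}{\sqrt{6}}\Big{)}\cdot\frac{Q^{2}}{M}\,\] so that \({\cal G}_{\rm RN}<0\) precisely in the interval \(r^{\cal G}_{-}<r<r^{\cal G}_{+}\); the upper root is the outer boundary that appears in Eq. (13) below.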
Interestingly, one finds that, in the regime [26; 41] \[Q\geq Q_{\rm crit}=M\cdot\frac{\sqrt{2(9+\sqrt{6})}}{5}\, \tag{12}\] the charge-dependent Gauss-Bonnet curvature invariant becomes negative in the exterior region \[r_{+}\leq r\leq\Big{(}1+\frac{1}{\sqrt{6}}\Big{)}\cdot\frac{Q^{2}}{M} \tag{13}\] of the charged black-hole spacetime. Thus, the spatially-dependent effective mass term (11) with \(\eta<0\) may become negative in the interval (13). As we shall explicitly prove below, this intriguing property of the coupled Einstein-Maxwell-nonminimally-coupled-scalar-field system (3) allows the central charged Reissner-Nordstrom black hole (6) to support bound-state linearized cloudy configurations of the non-minimally coupled scalar field. Using the functional field decomposition [42] \[\phi(r,\theta,\varphi)=\sum_{lm}R_{lm}(r)Y_{lm}(\theta,\varphi)\, \tag{14}\] one obtains from Eq. (10) the differential equation [43; 26; 44] \[\frac{d}{dr}\Big{(}\Delta\frac{dR}{dr}\Big{)}-l(l+1)R+\eta\Big{(}\frac{48M^{2}}{r^{4}}-\frac{96MQ^{2}}{r^{5}}+\frac{40Q^{4}}{r^{6}}\Big{)}R=0 \tag{15}\] for the radial part of the linearized non-minimally coupled scalar field in the supporting curved black-hole spacetime (6). The ordinary differential equation (15) determines the spatial behavior of the static non-minimally coupled linearized massless scalar field configurations in the supporting charged Reissner-Nordstrom black-hole spacetime (6). Following [25; 26], we shall assume that the scalar eigenfunction \(\psi(r)\) of the bound-state field configurations is spatially well behaved with the physically motivated boundary conditions [25; 26] \[\psi(r=r_{\rm H})<\infty\ \ \ \ ;\ \ \ \psi(r\rightarrow\infty)\to 0 \tag{16}\] at the black-hole horizon and at spatial infinity. In the next section we shall study, using analytical techniques, the physical and mathematical properties of the negatively coupled charged-black-hole-nonminimally-coupled-linearized-massless-scalar-field configurations that characterize the composed Einstein-Maxwell-scalar field theory (3). In particular, we shall explicitly prove that the composed Reissner-Nordstrom-black-hole-scalar-field cloudy configurations in the \(Q>Q_{\rm crit}\) regime [see Eq. (12)] with \(\eta<0\) owe their existence to the presence of an effective near-horizon binding potential well [see Eq. (31) below].

## III Composed Reissner-Nordstrom-Black-hole-nonminimally-coupled-scalar-field configurations: a WKB analysis

In the present section we shall determine the charge-dependent functional behavior \(\eta=\eta(Q/M)\) of the critical existence-line, which characterizes the composed charged-black-hole-nonminimally-coupled-scalar-field cloudy configurations, in the dimensionless large-coupling regime \[-\bar{\eta}\equiv-\frac{\eta}{M^{2}}\gg 1. \tag{17}\] Defining the new radial eigenfunction \[\psi\equiv rR \tag{18}\] and using the coordinate \(y(r)\), which is defined by the radial differential relation [45] \[dy=\frac{r^{2}}{\Delta}\cdot dr\, \tag{19}\] one obtains from equation (15) the Schrodinger-like radial equation \[\frac{d^{2}\psi}{dy^{2}}-V\psi=0\, \tag{20}\] where the radially-dependent potential \(V[r(y)]\) of the composed charged-Reissner-Nordstrom-black-hole-nonminimally-coupled-scalar-field system is given by the functional expression \[V(r;M,Q,l,\bar{\eta})=h(r)\Big{[}\frac{l(l+1)}{r^{2}}+\frac{2M}{r^{3}}-\frac{2Q^{2}}{r^{4}}\Big{]}+V_{\rm GB}(r) \tag{21}\] with [see Eq. (7)] \[h(r)\equiv\frac{\Delta}{r^{2}}=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}. \tag{22}\]
The presence of the non-trivial term \[V_{\rm GB}(r;M,Q)=-\bar{\eta}\cdot\frac{h(r)}{r^{2}}\Big{(}\frac{48M^{4}}{r^{4}}-\frac{96M^{3}Q^{2}}{r^{5}}+\frac{40M^{2}Q^{4}}{r^{6}}\Big{)} \tag{23}\] in the effective interaction potential (21) is a direct consequence of the non-trivial (non-minimal) coupling between the Gauss-Bonnet curvature invariant (9) of the charged spacetime (6) and the supported scalar field. We shall now prove that, in the dimensionless large-coupling regime (17), the ordinary differential equation (20), which characterizes the composed Einstein-Maxwell-scalar field theory (3), is amenable to an analytical treatment. In particular, a standard second-order WKB analysis of the Schrodinger-like ordinary differential equation (20) yields the familiar quantization condition [46; 47; 48] \[\int_{y_{t-}}^{y_{t+}}dy\sqrt{-V(y;M,Q,\bar{\eta})}=\big{(}n+\frac{1}{2}\big{)}\cdot\pi\ \ \ \ ;\ \ \ \ n=0,1,2,... \tag{24}\] for the discrete spectrum \(\{\bar{\eta}(M,Q;n)\}_{n=0}^{n=\infty}\) of the dimensionless coupling parameter which characterizes the composed charged-Reissner-Nordstrom-black-hole-nonminimally-coupled-scalar-field cloudy configurations. Here \(n\in\{0,1,2,...\}\) is the discrete resonant parameter of the physical system. The critical existence-line of the field theory is determined by the fundamental \(n=0\) mode. The integration limits \(\{y_{t-},y_{t+}\}\) in the WKB integral relation (24) are the classical turning points of the effective binding potential (21). Using Eq. (19), one finds that the WKB relation (24) can be expressed in the integral form \[\int_{r_{t-}}^{r_{t+}}dr\frac{\sqrt{-V(r;M,Q,\bar{\eta})}}{h(r)}=\big{(}n+\frac{1}{2}\big{)}\cdot\pi\ \ \ \ ;\ \ \ \ n=0,1,2,.... \tag{25}\] Defining the dimensionless physical parameters \[Q\equiv Q_{\rm crit}\cdot(1+\epsilon)\ \ \ \ ;\ \ \ \ \epsilon\geq 0 \tag{26}\] and \[r\equiv r_{+}\cdot(1+x)\ \ \ \ ;\ \ \ x\geq 0\, \tag{27}\] one obtains the near-critical (\(\epsilon\ll 1\)) near-horizon (\(x\ll 1\)) relations [see Eqs. (8), (12), and (22)] \[\frac{r_{+}}{M}=\frac{4+\sqrt{6}}{5}\cdot\big{(}1-\sqrt{6}\cdot\epsilon\big{)}+O(\epsilon^{2})\, \tag{28}\] \[\frac{r}{M}=\frac{r_{+}\cdot(1+x)}{M}=\frac{4+\sqrt{6}}{5}\cdot\big{(}1+x-\sqrt{6}\cdot\epsilon\big{)}+O(x^{2},\epsilon^{2},x\epsilon)\, \tag{29}\] and \[h(r)=(\sqrt{6}-2)\cdot x+O(x^{2},\epsilon^{2},x\epsilon). \tag{30}\] Substituting the relations (28), (29), and (30) into Eqs. (21) and (23), one finds the (rather cumbersome) near-critical near-horizon expression \[M^{2}\frac{V(r)}{h(r)}=\left\{\frac{11-4\sqrt{6}}{2}\cdot l(l+1)+\frac{19\sqrt{6}-46}{2}-\bar{\eta}\Big{[}\big{(}15204\sqrt{6}-37236\big{)}\cdot x-\big{(}16752-6828\sqrt{6}\big{)}\cdot\epsilon\Big{]}\right\}\cdot[1+O(x,\epsilon)] \tag{31}\] for the effective interaction potential of the composed black-hole-massless-scalar-field cloudy configurations. We shall henceforth consider composed charged-Reissner-Nordstrom-black-hole-massless-scalar-field cloudy configurations in the dimensionless large-coupling regime [see Eqs. (17), (26), and Eq. (38) below] \[-\bar{\eta}\epsilon\gg 1\, \tag{32}\] in which case one finds from Eqs. (30) and (31) the remarkably compact leading order functional expression \[M^{2}\frac{V(r)}{[h(r)]^{2}}=\bar{\eta}\cdot 6(1396-569\sqrt{6})\cdot\Big{[}(2+\sqrt{6})\cdot\frac{\epsilon}{x}-1\Big{]} \tag{33}\] for the effective binding potential of the composed charged-black-hole-scalar-field system. Taking cognizance of Eqs.
(25), (27), and (33), one finds the resonance condition \[\frac{r_{+}}{M}\cdot\int_{0}^{(2+\sqrt{6})\cdot\epsilon}dx\sqrt{-\bar{\eta}\cdot 6(1396-569\sqrt{6})\cdot\Big{[}(2+\sqrt{6})\cdot\frac{\epsilon}{x}-1\Big{]}}=\big{(}n+\frac{1}{2}\big{)}\cdot\pi\ \ \ \ ;\ \ \ \ n=0,1,2,... \tag{34}\] for the composed black-hole-field system. The WKB integral relation (34) can be expressed in the compact mathematical form \[\epsilon\sqrt{-\bar{\eta}}\cdot 2\sqrt{6}\sqrt{16+\sqrt{6}}\int_{0}^{1}dz\sqrt{\frac{1}{z}-1}=\big{(}n+\frac{1}{2}\big{)}\cdot\pi\ \ \ \ ;\ \ \ \ n=0,1,2,...\, \tag{35}\] where \[z\equiv\frac{1}{2+\sqrt{6}}\cdot\frac{x}{\epsilon}. \tag{36}\] Using the integral relation \[\int_{0}^{1}dz\sqrt{\frac{1}{z}-1}=\frac{\pi}{2}\, \tag{37}\] [which can be verified with the substitution \(z=\sin^{2}\theta\)], one finds from (35) the remarkably simple discrete resonance formula [49] \[\bar{\eta}(\epsilon;n)=-\frac{1}{\epsilon^{2}}\cdot\frac{1}{96+6\sqrt{6}}\cdot\big{(}n+\frac{1}{2}\big{)}^{2}\ \ \ \ ;\ \ \ \ n=0,1,2,...\, \tag{38}\] which characterizes the composed charged-Reissner-Nordstrom-black-hole-nonminimally-coupled-linearized-massless-scalar-field cloudy configurations in the dimensionless large-coupling \(-\bar{\eta}\gg 1\) regime (or equivalently, in the near-critical \(\epsilon\ll 1\) regime). The discrete resonance spectrum (38) of the composed Einstein-Maxwell-Gauss-Bonnet-nonminimally-coupled-scalar-field system (3) can be expressed in the dimensionless form [see Eqs. (1) and (26)] \[\bar{Q}(\bar{\eta};n)=\frac{\sqrt{2(9+\sqrt{6})}}{5}+\frac{1}{\sqrt{-\bar{\eta}}}\cdot\frac{1}{\sqrt{138-7\sqrt{6}}}\cdot\left(n+\frac{1}{2}\right)\quad;\quad n=0,1,2,...\, \tag{39}\] where \(\bar{Q}\equiv Q/M\).

## IV Numerical Confirmation

In the present section we shall test the accuracy of the analytically derived large-coupling resonance spectrum (38), which characterizes the composed charged-Reissner-Nordstrom-black-hole-nonminimally-coupled-linearized-scalar-field cloudy configurations. The charge-dependent resonance spectrum of the black-hole-field system has recently been computed numerically in [26]. In Table 1 we present, for various values of the dimensionless coupling parameter \(\lambda\equiv 2\sqrt{|\bar{\eta}|}\) [44] used in [26], the charge-dependent ratio \(\mathcal{R}(\lambda)\equiv\epsilon^{\rm analytical}/\epsilon^{\rm numerical}\) between the analytically calculated value of the dimensionless critical parameter \(\epsilon\) [as calculated directly from the analytically derived large-coupling resonance formula (38)] and the corresponding exact (numerically computed [26]) values of the critical parameter. The data presented in Table 1 for the cloudy Reissner-Nordstrom-black-hole-scalar-field configurations reveals the fact that the agreement between the _analytically_ derived resonance formula (38) and the corresponding _numerically_ computed resonance spectrum of [26] is remarkably good in the large-coupling \(\lambda\gg 1\) regime [50] of the Einstein-Maxwell-scalar field theory (3).

## V Summary and Discussion

Asymptotically flat black holes with spatially regular horizons can support bound-state matter configurations which are made of scalar fields with a direct (non-minimal) coupling to the Gauss-Bonnet invariant of the curved spacetime [9; 10; 11; 12; 13; 14; 15; 16]. The spontaneous scalarization phenomenon of charged black holes in composed Einstein-Maxwell-Gauss-Bonnet-scalar field theories has recently been studied numerically in the physically important works [25; 26].
In particular, it has been revealed in [25; 26] that a charge-dependent critical existence-line \(\bar{\eta}_{\rm crit}=\bar{\eta}_{\rm crit}(Q/M)\) separates bald Reissner-Nordstrom black-hole spacetimes from the composed charged-black-hole-nonminimally-coupled-massless-scalar-field hairy configurations of the Einstein-Maxwell-scalar theory (3) [51], where the dimensionless physical parameter \(\bar{\eta}\) quantifies the strength of the direct non-trivial interaction between the supported scalar field and the Gauss-Bonnet curvature invariant. Interestingly, it has been demonstrated [25; 26] that, in the negative coupling \(\bar{\eta}<0\) regime, the composed charged-black-hole-linearized-scalar-field cloudy configurations that sit on the critical existence-line of the Einstein-Maxwell-scalar field theory (3) are restricted to the dimensionless charge regime \(\bar{Q}>\bar{Q}_{\rm crit}=\sqrt{2(9+\sqrt{6})}/5\) [see Eq. (1)]. In particular, the numerical results presented in [25; 26] provide important evidence for an intriguing divergent functional behavior \(-\bar{\eta}(\bar{Q})\to\infty\) of the non-minimal coupling parameter of the theory along the existence-line of the system in the near-critical limit \(\bar{Q}/\bar{Q}_{\rm crit}\to 1^{+}\).

In the present paper we have used _analytical_ techniques in order to explore the physical and mathematical properties of the composed charged-Reissner-Nordstrom-black-hole-nonminimally-coupled-massless-scalar-field hairy configurations of the Einstein-Maxwell-scalar theory (3) in the large-coupling \(-\bar{\eta}(\bar{Q})\gg 1\) regime. In particular, using a WKB procedure we have derived the analytical formula (38) for the discrete resonant spectrum of the dimensionless coupling parameter \(\bar{\eta}(\bar{Q})\) which characterizes the composed charged-black-hole-scalar-field cloudy configurations in the near-critical \(Q\gtrsim Q_{\rm crit}\) regime.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(\lambda\) & 5 & 10 & 15 & 20 & 25 \\ \hline \(\mathcal{R}(\lambda)\equiv\frac{\epsilon^{\rm analytical}}{\epsilon^{\rm numerical}}\) & 1.168 & 1.065 & 1.049 & 1.019 & 1.016 \\ \hline \end{tabular} \end{table} Table 1: Composed charged-Reissner-Nordström-black-hole-nonminimally-coupled-linearized-massless-scalar-field cloudy configurations. We present, for various values of the coupling parameter \(\lambda\equiv 2\sqrt{|\bar{\eta}|}\) [26; 44] of the theory, the dimensionless ratio \(\mathcal{R}(\lambda)\equiv\epsilon^{\rm analytical}/\epsilon^{\rm numerical}\) between the analytically calculated value of the dimensionless critical parameter \(\epsilon\) [as calculated directly from the resonance formula (38) for the fundamental \(n=0\) mode] and the corresponding exact values of the critical parameter as computed numerically in [26]. One finds that, in the large-coupling \(\lambda\gg 1\) regime of the composed Reissner-Nordström-black-hole-scalar-field cloudy configurations, the agreement between the analytically derived resonance formula (38) and the corresponding numerically computed resonance spectrum of [26] is remarkably good [50].
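The analytic entries that enter the ratio \(\mathcal{R}(\lambda)\) of Table 1 follow directly from the resonance formula (38); a minimal Python sketch, assuming only the parameterization \(\lambda\equiv 2\sqrt{|\bar{\eta}|}\) quoted above:

```python
import math

def eps_analytical(lam: float, n: int = 0) -> float:
    """Critical parameter eps from the resonance formula (38),
    using bar-eta = -(lam / 2) ** 2, i.e. lam = 2 * sqrt(|bar-eta|)."""
    abs_eta = (lam / 2.0) ** 2
    return (n + 0.5) / math.sqrt(abs_eta * (96.0 + 6.0 * math.sqrt(6.0)))

# Fundamental n = 0 mode for the couplings listed in Table 1:
for lam in (5, 10, 15, 20, 25):
    print(f"lambda = {lam:2d}:  eps_analytical = {eps_analytical(lam):.5f}")
```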
The analytically derived discrete WKB resonance formula (38) yields the remarkably compact charge-dependent functional relation [52] \[\bar{\eta}(\epsilon)=-\frac{1}{\epsilon^{2}}\cdot\frac{1}{4(96+6\sqrt{6})} \tag{40}\] for the critical existence-line of the composed Einstein-Maxwell-Gauss-Bonnet-nonminimally-coupled-massless-scalar field theory (3), where \(0\leq\epsilon=(\bar{Q}-\bar{Q}_{\rm crit})/\bar{Q}_{\rm crit}\ll 1\) [see Eqs. (1) and (26)] is the dimensionless distance of the system from the exact critical (\(-\bar{\eta}\to\infty\)) configuration. It is worth stressing that the physical significance of the critical existence-line (40) in the Einstein-Maxwell-scalar field theory (3) stems from the fact that it separates, in the large-coupling \(-\bar{\eta}\gg 1\) regime, bald Reissner-Nordstrom black holes from hairy charged-black-hole-nonminimally-coupled-massless-scalar-field configurations. From the analytically derived resonance formula (40) one finds, in accord with the physically interesting numerical results presented in [25; 26], that the dimensionless coupling parameter \(|\bar{\eta}(\bar{Q})|\) of the theory (3) is a monotonically decreasing function of the black-hole electric charge \(\bar{Q}\) in the negative coupling \(\bar{\eta}<0\) regime. Finally, it is interesting to point out that our results provide a simple _analytical_ explanation for the physically intriguing _numerical_ observation originally made in [25; 26], according to which the charge-dependent coupling parameter \(\bar{\eta}(\bar{Q})\) of the Einstein-Maxwell-scalar field theory (3) diverges along the critical existence line of the system in the \(\bar{Q}\to\bar{Q}_{\rm crit}\) limit. In particular, the analytically derived resonance formula (40) reveals the fact that the coupling parameter \(\bar{\eta}(\bar{Q})\), which characterizes the cloudy black-hole-field configurations, diverges quadratically in the near-critical \(\epsilon\to 0\) limit.

###### Acknowledgements.

This research is supported by the Carmel Science Foundation. I would like to thank Yael Oren, Arbel M. Ongo, Ayelet B. Lata, and Alona B. Tea for helpful discussions.
2305.14267
SEEDS: Exponential SDE Solvers for Fast High-Quality Sampling from Diffusion Models
A potent class of generative models known as Diffusion Probabilistic Models (DPMs) has become prominent. A forward diffusion process gradually adds noise to data, while a model learns to gradually denoise. Sampling from pre-trained DPMs is obtained by solving differential equations (DE) defined by the learnt model, a process which has proven to be prohibitively slow. Numerous efforts on speeding up this process have consisted in crafting powerful ODE solvers. Despite being quick, such solvers do not usually reach the optimal quality achieved by available slow SDE solvers. Our goal is to propose SDE solvers that reach optimal quality without requiring several hundreds or thousands of NFEs. We propose Stochastic Explicit Exponential Derivative-free Solvers (SEEDS), improving and generalizing Exponential Integrator approaches to the stochastic case on several frameworks. After carefully analyzing the formulation of exact solutions of diffusion SDEs, we craft SEEDS to analytically compute the linear part of such solutions. Inspired by the Exponential Time-Differencing method, SEEDS use a novel treatment of the stochastic components of solutions, enabling the analytical computation of their variance, and contain high-order terms allowing them to reach optimal quality sampling $\sim3$-$5\times$ faster than previous SDE methods. We validate our approach on several image generation benchmarks, showing that SEEDS outperform or are competitive with previous SDE solvers. Contrary to the latter, SEEDS are derivative and training free, and we fully prove strong convergence guarantees for them.
Martin Gonzalez, Nelson Fernandez, Thuy Tran, Elies Gherbi, Hatem Hajri, Nader Masmoudi
2023-05-23T17:19:54Z
http://arxiv.org/abs/2305.14267v2
# SEEDS: Exponential SDE Solvers for Fast High-Quality Sampling from Diffusion Models

###### Abstract

A potent class of generative models known as Diffusion Probabilistic Models (DPMs) has become prominent. A forward diffusion process gradually adds noise to data, while a model learns to gradually denoise. Sampling from pre-trained DPMs is obtained by solving differential equations (DEs) defined by the learnt model, a process which has proven to be prohibitively slow. Numerous efforts on speeding up this process have consisted in crafting powerful ODE solvers. Despite being quick, such solvers do not usually reach the optimal quality achieved by available slow SDE solvers. Our goal is to propose SDE solvers that reach optimal quality without requiring several hundreds or thousands of NFEs. In this work, we propose Stochastic Exponential Derivative-free Solvers (SEEDS), improving and generalizing Exponential Integrator approaches to the stochastic case on several frameworks. After carefully analyzing the formulation of exact solutions of diffusion SDEs, we craft SEEDS to analytically compute the linear part of such solutions. Inspired by the Exponential Time-Differencing method, SEEDS uses a novel treatment of the stochastic components of solutions, enabling the analytical computation of their variance, and contains high-order terms allowing it to reach optimal quality sampling \(\sim 3\)-\(5\times\) faster than previous SDE methods. We validate our approach on several image generation benchmarks, showing that SEEDS outperforms or is competitive with previous SDE solvers. Contrary to the latter, SEEDS are derivative and training free, and we fully prove strong convergence guarantees for them.

## 1 Introduction

Diffusion Probabilistic Models (DPMs) [22; 8] have emerged as a powerful category of generative models and have quickly become the SOTA for generative tasks such as image, video, and audio generation [5; 17; 9; 3; 4], and more [20; 26]. These models employ a forward diffusion process where noise is gradually added to the data, and the model learns to remove the noise progressively. However, sampling from most pre-trained DPMs, which is done by simulating the trajectories of the associated differential equations (DEs), has been found to be prohibitively slow [24]. Previous attempts to accelerate this process have mainly focused on developing efficient ODE solvers. On one hand, training-based methods speed up sampling by using auxiliary training, such as Progressive Distillation [21] and Fourier Neural Operators [29], learning the noise schedule, scaling, variance, or trajectories. On the other hand, training-free methods [12; 15; 11; 27; 16] are slower but more versatile, as they can be employed on different models, and they achieve higher-quality results than current training-based methods. Although these solvers are fast, they often fall short of achieving the optimal quality attained by slower SDE solvers [12]. The latter usually do not present theoretical convergence guarantees and, while being training-free, they often still require costly parameter optimization to achieve optimal results, with parameters that might be difficult to estimate for large datasets. Our objective is to introduce SDE solvers that can achieve optimal quality without requiring an excessively large number of function evaluations (NFEs).
To accomplish this, we propose Stochastic Exponential Derivative-free Solvers (SEEDS), which are _off-the-shelf_ SDE samplers, meaning that they offer promising high-quality sampling without further training or parameter optimization. SEEDS enhance and generalize existing Exponential Integrator [15; 16; 28] approaches to the stochastic case on several frameworks. By carefully examining the formulation of exact solutions for diffusion SDEs, SEEDS compute the linear part of these solutions analytically. Drawing inspiration from the Exponential Time-Differencing method, SEEDS employ a novel treatment of the stochastic components, enabling the analytical computation of their variance, and incorporate high-order terms for optimal quality sampling while being 3-5 times faster than previous SDE methods. To validate our approach, we conduct experiments on various image generation benchmarks, demonstrating that SEEDS outperform or are competitive with existing SDE solvers.

Main contributions. After introducing some background on DPMs, we present the three main contributions involved in SEEDS: a representation of exact SDE solutions which isolates the linear terms so they can be computed analytically, a general change-of-variables methodology to simplify the integrals involved in the solutions in order to better approximate the deterministic one, and a methodology to analytically compute the variance of the stochastic one. SEEDS is a multi-stage off-the-shelf method with proven convergence guarantees that improves and generalizes both the gDDIM [28] and the DPM-Solver [15; 16] methods for isotropic DPMs. SEEDS can also be combined with Churn-like methods, as used in [12], to achieve outstanding sampling performance on ImageNet64, and establishes SOTA results among available solvers on several benchmarks. Although our solvers successfully apply to non-isotropic DPMs such as Critically-damped Langevin Dynamics (CLD) [7] (see Prop. 4.4 and Rem. 4.5), we will restrict our presentation to the isotropic case, for which many notations become simpler.

## 2 Background on Diffusion Probabilistic Models

General Isotropic DE Formulation. The evolution of a data sample \(\mathbf{x}_{0}\in\mathbb{R}^{d}\) taken from an unknown data distribution \(p_{\mathrm{data}}\) into standard Gaussian noise can be defined as a forward diffusion process \(\{\mathbf{x}_{t}\}_{t\in[0,T]}\), with \(T>0\), which is a solution to a linear SDE: \[\mathrm{d}\mathbf{x}_{t}=f(t)\mathbf{x}_{t}\mathrm{d}t+g(t)\mathrm{d}\mathbf{\omega}_{t},\qquad f(t):=\frac{\mathrm{d}\log\alpha_{t}}{\mathrm{d}t},\quad g(t)=\alpha_{t}\sqrt{\frac{\mathrm{d}[\sigma_{t}^{2}]}{\mathrm{d}t}}, \tag{1}\] where \(f(t),g(t)\in\mathbb{R}^{d\times d}\) are called the drift and diffusion coefficients, respectively, \(\mathbf{\omega}\) is a \(d\)-dimensional standard Wiener process, and \(\alpha_{t},\sigma_{t}\in\mathbb{R}^{>0}\) are differentiable functions with bounded derivatives. In practice, when specifying the SDE (1), \(\sigma_{t}\) acts as a schedule controlling the noise levels of an input at time \(t\), while \(\alpha_{t}\) acts as a time-dependent signal scaling controlling its dynamic range. By denoting \(p_{t}(\mathbf{x}_{t})\) the marginal distribution of \(\mathbf{x}_{t}\) at time \(t\), the functions \(\alpha_{t}\) and \(\sigma_{t}\) are designed so that the end-time distribution of the process is \(p_{T}(\mathbf{x}_{T})\approx\mathcal{N}(\mathbf{x}_{T}|\mathbf{0},\tilde{\sigma}^{2}\mathbf{I}_{d})\) for some \(\tilde{\sigma}>0\).
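As a quick consistency check of these formulas, consider the VP-type schedule \(\alpha_{t}=e^{-\frac{1}{2}\tilde{\alpha}_{t}}\), \(\sigma_{t}=\sqrt{e^{\tilde{\alpha}_{t}}-1}\) that will be introduced in (7) below. Writing \(\beta(t):=\mathrm{d}\tilde{\alpha}_{t}/\mathrm{d}t\), the coefficients of (1) reduce to the familiar VP SDE: \[f(t)=\frac{\mathrm{d}\log\alpha_{t}}{\mathrm{d}t}=-\frac{1}{2}\beta(t),\qquad g^{2}(t)=\alpha_{t}^{2}\,\frac{\mathrm{d}[\sigma_{t}^{2}]}{\mathrm{d}t}=e^{-\tilde{\alpha}_{t}}\cdot\beta(t)\,e^{\tilde{\alpha}_{t}}=\beta(t).\]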
As (1) is linear, the transition probability \(p_{0t}(\mathbf{x}_{t}|\mathbf{x}_{0})\) from \(\mathbf{x}_{0}\) to \(\mathbf{x}_{t}\) is Gaussian, with mean and variance that can be expressed in terms of \(\alpha_{t}\) and \(\sigma_{t}\). For simplicity, we will denote it \[p_{0t}(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\mu_{t}\mathbf{x}_{0},\Sigma_{t}),\qquad\mu_{t},\Sigma_{t}\in\mathbb{R}^{d\times d}.\] The evolution of the reverse time process of \(\{\mathbf{x}_{t}\}_{t\in[0,T]}\) (which we will still denote \(\{\mathbf{x}_{t}\}_{t\in[0,T]}\) for simplicity) is then driven by a backward differential equation \[\mathrm{d}\mathbf{x}_{t}=[f(t)\mathbf{x}_{t}-\frac{1+\ell^{2}}{2}g^{2}(t)\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})]\mathrm{d}t+\ell g(t)\mathrm{d}\tilde{\mathbf{\omega}}_{t}, \tag{2}\] where \(\mathrm{d}t\) are negative infinitesimal time-steps and \(\tilde{\mathbf{\omega}}_{t}\) is now a Wiener process with variance \(-\mathrm{d}t\). In this article, we will concentrate on the cases \(\ell=0,1\), known in the literature as the Probability Flow ODE (PFO) and diffusion reverse SDE (RSDE), respectively.

Training. Denoising score-matching is a technique to train a time-dependent model \(D_{\theta}(\mathbf{x}_{t},t)\) to approach the score function \(\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})\) at each time \(t\). Intuitively, as \(D_{\theta}\) approaches the score, it produces a sample which maximizes the log-likelihood. As such, this model is coined a _data prediction_ model. However, in practice DPMs can be more efficiently trained by reparameterizing \(D_{\theta}\) into a different model \(F_{\theta}(\mathbf{x}_{t},t)\) whose objective is to predict the noise to be removed from a sample at time \(t\). This _noise prediction_ model is trained by means of the loss \[\mathbb{E}_{t\sim\mathcal{U}[0,T],\,\mathbf{x}_{0}\sim p_{\mathrm{data}},\,\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d})}[\|\epsilon-F_{\theta}(\mu_{t}\mathbf{x}_{0}+\mathbf{K}_{t}\epsilon,t)\|^{2}_{\mathbf{K}_{t}^{-1}\gamma_{t}\mathbf{K}_{t}^{-\top}}],\] where \(\gamma_{t}\) is a time-dependent weighting parameter and \(\mathbf{K}_{t}\mathbf{K}_{t}^{\top}=\Sigma_{t}\).

## 3 Accelerating Optimal Quality Solvers for Diffusion SDEs

Once \(F_{\theta}\) or \(D_{\theta}\) have been trained, one can effectively solve (2) after replacing the score function by its corresponding expression involving either one of these models. For instance, taking the noise prediction model and \(\ell=1\), sampling is conducted by simulating trajectories of an SDE of the form \[\mathrm{d}\mathbf{x}_{t}=[A(t)\mathbf{x}_{t}+b(t)F_{\theta}(\mathbf{x}_{t},t)]\mathrm{d}t+g(t)\mathrm{d}\mathbf{\omega}_{t}, \tag{3}\] for some functions \(A(t),b(t)\) which are usually not equal to \(f(t),g^{2}(t)\). In what follows, we consider a time discretization \(\{t_{i}\}_{i=0}^{M}\) going backwards in time, starting from \(t_{0}=T\) to \(t_{M}=0\), and to ease the notation we will always denote \(t<s\) for two consecutive time-steps \(t_{i+1}<t_{i}\). The usual way of representing the analytic solution \(\mathbf{x}_{t}\) at time \(t\) of (3) with respect to an initial condition \(\mathbf{x}_{s}\) is as follows: \[\mathbf{x}_{t}=\mathbf{x}_{s}+\int_{s}^{t}[A(\tau)\mathbf{x}_{\tau}+b(\tau)F_{\theta}(\mathbf{x}_{\tau},\tau)]\mathrm{d}\tau+\int_{s}^{t}g(\tau)\mathrm{d}\mathbf{\omega}_{\tau}. \tag{4}\]
The numerical schemes we propose for approaching the trajectories of (3) based on the representation (4) are grounded in the following three principles:

1. The variation-of-parameters formula: representing analytic solutions with the linear term extracted from the integrand;
2. Exponentially weighted integrals: extracting the time-varying linear coefficient attached to the network from the integrand by means of a specific choice of change of variables, which allows analytic computation of the leading coefficients in the truncated Ito-Taylor expansion associated to \(F_{\theta}(\mathbf{x}_{\tau},\tau)\) up to any arbitrary order;
3. Modified Gaussian increments: after replicating such change of variables onto the stochastic integral, analytically computing its variance.

Novel representation of exact solutions of diffusion SDEs. The first key insight of this work is that, using the _variation-of-parameters_ formula, we can represent the analytic solution \(\mathbf{x}_{t}\) at time \(t\) of (3) with respect to an initial condition \(\mathbf{x}_{s}\) as follows: \[\mathbf{x}_{t}=\Phi_{A}(t,s)\mathbf{x}_{s}+\int_{s}^{t}\Phi_{A}(t,\tau)b(\tau)F_{\theta}(\mathbf{x}_{\tau},\tau)\mathrm{d}\tau+\int_{s}^{t}\Phi_{A}(t,\tau)g(\tau)\mathrm{d}\mathbf{\omega}_{\tau}, \tag{5}\] where \(\Phi_{A}(t,s)=\exp\left(\int_{s}^{t}A(\tau)\mathrm{d}\tau\right)\) is called the transition matrix associated with \(A(t)\). This formulation separates the linear and nonlinear components. It differs from black-box SDE solvers in that the linear portion is computed exactly, thereby removing any approximation error associated with it. However, the integration of the nonlinear portion remains complex due to the interaction between the new coefficient \(\Phi_{A}(t,\tau)b(\tau)\) and the intricate neural network, making it challenging to approximate.

Exponentially weighted integrals. Due to the regularity conditions usually imposed on the drift and diffusion coefficients of (1), one can make several choices of change-of-variables on the integral components in (5) in order to simplify it. Our second key insight is that there is a specific choice of change of variables, allowing the analytic computation of the Ito-Taylor coefficients of \(F_{\theta}(\mathbf{x}_{\tau},\tau)\) with respect to \(\tau\) and based at \(s\), that will be used for crafting the SEEDS solvers. More specifically, this expansion reads \[F_{\theta}(\mathbf{x}_{\tau},\tau)=\sum_{k=0}^{n}\frac{(\tau-s)^{k}}{k!}F_{\theta}^{(k)}(\mathbf{x}_{s},s)+\mathcal{R}_{n},\] where the residual \(\mathcal{R}_{n}\) consists of deterministic iterated integrals of length greater than \(n+1\) and all iterated integrals with at least one stochastic component. As such, we obtain \[\int_{s}^{t}\Phi_{A}(t,\tau)b(\tau)F_{\theta}(\mathbf{x}_{\tau},\tau)\mathrm{d}\tau=\sum_{k=0}^{n}F_{\theta}^{(k)}(\mathbf{x}_{s},s)\int_{s}^{t}\Phi_{A}(t,\tau)b(\tau)\frac{(\tau-s)^{k}}{k!}\mathrm{d}\tau+\tilde{\mathcal{R}}_{n}, \tag{6}\] where \(\tilde{\mathcal{R}}_{n}\) is easily obtained from \(\mathcal{R}_{n}\) and \(\int_{s}^{t}\Phi_{A}(t,\tau)b(\tau)\mathrm{d}\tau\).
The third key contribution of our work is to rewrite, for any \(k\geqslant 0\), the integral \(\int_{s}^{t}\Phi_{A}(t,\tau)b(\tau)\frac{(\tau-s)^{k}}{k!}\mathrm{d}\tau\) as an integral of the form \(\int_{\lambda_{s}}^{\lambda_{t}}e^{\lambda}\frac{(\lambda-\lambda_{s})^{k}}{k!}\mathrm{d}\lambda\), since the latter can be recursively and analytically computed in terms of the so-called \(\varphi\)-functions \[\varphi_{0}(t):=e^{t},\qquad\varphi_{k+1}(t):=\int_{0}^{1}e^{(1-\tau)t}\frac{\tau^{k}}{k!}\mathrm{d}\tau=\frac{\varphi_{k}(t)-\varphi_{k}(0)}{t},\qquad k\geqslant 0.\]

Modified Gaussian increments. For such a change of variables to be consistent across the overall system, one needs to replicate it accordingly in the stochastic integral \(\int_{s}^{t}\Phi_{A}(t,\tau)g(\tau)\mathrm{d}\bar{\omega}_{\tau}\). As such, our last key contribution is to transform it into an exponentially weighted stochastic integral with integration endpoints \(\lambda_{s},\lambda_{t}\) and to apply the Stochastic Exponential Time Differencing (SETD) method [1] to compute its variance analytically, as illustrated in (14) below.

Let us test our methodology on two key examples. As we explained in Section 2, sampling from pretrained DPMs amounts to choosing a schedule \(\sigma_{t}\), a scaling \(\alpha_{t}\), and a parameterized learnt approximation of the score function \(\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})\). In what follows, we denote by \(t_{\lambda}\) the inverse of a chosen change of variables \(\lambda_{t}\) and we denote \(\widehat{\mathbf{x}}_{\lambda}:=\mathbf{x}(t_{\lambda}(\lambda))\), \(\hat{F}_{\theta}(\widehat{\mathbf{x}}_{\lambda},\lambda):=F_{\theta}(\mathbf{x}(t_{\lambda}(\lambda)),t_{\lambda}(\lambda))\).

The VPSDE case. Let \(\tilde{\alpha}_{t}:=\frac{1}{2}\beta_{d}t^{2}+\beta_{m}t\), where \(\beta_{d},\beta_{m}>0\) and \(t\in[0,1]\). Then, by denoting \[\sigma_{t}:=\sqrt{e^{\tilde{\alpha}_{t}}-1},\qquad\alpha_{t}:=e^{-\frac{1}{2}\tilde{\alpha}_{t}},\qquad\bar{\sigma}_{t}:=\alpha_{t}\sigma_{t},\qquad\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})\simeq\bar{\sigma}_{t}^{-1}F_{\theta}(\mathbf{x}_{t},t), \tag{7}\] we obtain the VP SDE framework from [24] and the following result.

**Proposition 3.1**.: _Let \(t<s\). The analytic solution at time \(t\) of the RSDE (2) with coefficients (7) and initial value \(\mathbf{x}_{s}\) is_ \[\mathbf{x}_{t}=\frac{\alpha_{t}}{\alpha_{s}}\mathbf{x}_{s}-2\alpha_{t}\int_{\lambda_{s}}^{\lambda_{t}}e^{-\lambda}\hat{F}_{\theta}(\widehat{\mathbf{x}}_{\lambda},\lambda)\mathrm{d}\lambda-\sqrt{2}\alpha_{t}\int_{\lambda_{s}}^{\lambda_{t}}e^{-\lambda}\mathrm{d}\bar{\omega}_{\lambda},\qquad\lambda_{t}:=-\log(\sigma_{t}). \tag{8}\]

The change of variables in (8) is interesting as it allows one to analytically compute the Ito-Taylor coefficients in (6) by using, for \(h=\lambda_{t}-\lambda_{s}\), the following key result, which will be used in Prop. 4.2: \[\int_{\lambda_{s}}^{\lambda_{t}}e^{-\lambda}\frac{(\lambda-\lambda_{s})^{k}}{k!}\mathrm{d}\lambda=\sigma_{t}h^{k+1}\varphi_{k+1}(h). \tag{9}\] For instance, in the case \(k=0\), it is easy to see that \(\int_{\lambda_{s}}^{\lambda_{t}}e^{-\lambda}\mathrm{d}\lambda=\sigma_{t}(e^{h}-1)\); moreover, \(\int_{\lambda_{s}}^{\lambda_{t}}e^{-\lambda}\mathrm{d}\bar{\omega}_{\lambda}\) obeys a normal distribution with zero mean, and one can analytically compute its variance: \[\int_{\lambda_{s}}^{\lambda_{t}}e^{-2\lambda}\mathrm{d}\lambda=\frac{\sigma_{t}^{2}}{2}(e^{2h}-1). \tag{10}\]
**The EDM case.** Denote by \(\sigma_{d}^{2}\) the variance of the considered initial dataset and set \[\sigma_{t}:=t,\alpha_{t}:=1,\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t}) \simeq\frac{1}{t^{2}}\left[\frac{\sigma_{d}^{2}\mathbf{x}_{t}}{t^{2}+\sigma_{d }^{2}}+\frac{t\sigma_{d}}{\sqrt{t^{2}+\sigma_{d}^{2}}}F_{\theta}\left(\frac{ \mathbf{x}_{t}}{\sqrt{t^{2}+\sigma_{d}^{2}}},\frac{\log(t)}{4}\right)\right]. \tag{11}\] These parameters correspond to the preconditioned EDM framework introduced in [12, Sec. 5, App. B.6]. The following result is the basis for constructing customized SEEDS solvers in this case, for which we report experimental results in Table 1. For simplicity, we will write \(F_{\theta}(\mathbf{x}_{t},t)\) for the preconditioned model in (11) and we refer to Appendix A for details.

**Proposition 3.2**.: _Let \(t<s\). The analytic solution at time \(t\) of (2) with coefficients (11) and initial value \(\mathbf{x}_{s}\) is, for \(\ell=1\),_ \[\mathbf{x}_{t}=\frac{t^{2}+\sigma_{d}^{2}}{s^{2}+\sigma_{d}^{2}}\mathbf{x}_{s}+ 2(t^{2}+\sigma_{d}^{2})\int_{\lambda_{s}}^{\lambda_{t}}e^{-\lambda}\hat{F}_{ \theta}(\widehat{\mathbf{x}}_{\lambda},\lambda)\mathrm{d}\lambda-\sqrt{2}(t^{ 2}+\sigma_{d}^{2})\int_{\lambda_{s}}^{\lambda_{t}}e^{-\lambda}\mathrm{d} \overline{\boldsymbol{\omega}}_{\lambda}, \tag{12}\] _where \(\lambda_{t}:=-\log\left[\frac{t}{\sigma_{d}\sqrt{t^{2}+\sigma_{d}^{2}}}\right]\). In the case when \(\ell=0\), it is given by_ \[\mathbf{x}_{t}=\sqrt{\frac{t^{2}+\sigma_{d}^{2}}{s^{2}+\sigma_{d}^{2}}} \mathbf{x}_{s}+\sqrt{t^{2}+\sigma_{d}^{2}}\int_{\lambda_{s}}^{\lambda_{t}}e^{ -\lambda}\hat{F}_{\theta}(\widehat{\mathbf{x}}_{\lambda},\lambda)\mathrm{d} \lambda,\quad\lambda_{t}:=-\log\left[\arctan\left[\frac{t}{\sigma_{d}}\right] \right]. \tag{13}\]

_Remark 3.3_.: One may wonder about the generality of such a change of variables. Our method is very general in that one can always make such a change of variables under very mild regularity conditions: for \(c:[0,T]\longrightarrow\mathbb{R}^{>0}\) integrable, with primitive \(C(t)>0\), we have \(c(t)=e^{\log(c(t))}\). This means we can write \(c(t)=\dot{C}(t)=e^{\lambda_{t}}\dot{\lambda}_{t}\) with \(\lambda_{t}=\log(C(t))\). In other words, for such \(c\), we have \[\int_{s}^{t}c(\tau)\mathrm{d}\tau=\int_{s}^{t}e^{\lambda_{\tau}}\dot{\lambda}_{ \tau}\mathrm{d}\tau=\int_{\lambda_{s}}^{\lambda_{t}}e^{\lambda}\mathrm{d}\lambda.\]

## 4 Higher Stage SEEDS for DPMs

In this section we present our SEEDS solvers by putting together all the ingredients presented in the previous section. Let \(t<s\). In all that follows, we consider the analytic solution at time \(t\) of the RSDE (2) with coefficients (7), \(h=\lambda_{t}-\lambda_{s}\) and initial value \(\mathbf{x}_{s}\). Plugging (9) with \(k=0\) and (10) into the exact solution (8) allows us to infer the first SEEDS solver, given by iterations of the form \[\widetilde{\mathbf{x}}_{t}=\frac{\alpha_{t}}{\alpha_{s}}\widetilde{\mathbf{x }}_{s}-2\bar{\sigma}_{t}(e^{h}-1)\hat{F}_{\theta}(\widehat{\mathbf{x}}_{ \lambda_{s}},\lambda_{s})-\bar{\sigma}_{t}\sqrt{e^{2h}-1}\epsilon,\qquad \epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d}). \tag{14}\] We shall now prove that this method, which we call SEEDS-1, is convergent with strong order 1 under mild conditions which apply to all our experiments.
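Before turning to the convergence statement, the following minimal sketch shows how iteration (14) can be run with the VP coefficients (7). Here `F_theta` is a placeholder for the pretrained noise-prediction network, and the values \(\beta_{d}=19.9\), \(\beta_{m}=0.1\) are assumed standard VP SDE defaults rather than values quoted in this paper.

```python
# A minimal sketch of the SEEDS-1 iteration (14) with the VP coefficients (7).
import numpy as np

beta_d, beta_m = 19.9, 0.1                       # assumed VP SDE defaults

def alpha(t):  # alpha_t = exp(-alpha_tilde(t)/2), alpha_tilde = beta_d t^2/2 + beta_m t
    return np.exp(-0.25 * beta_d * t**2 - 0.5 * beta_m * t)

def sigma(t):  # sigma_t = sqrt(exp(alpha_tilde(t)) - 1)
    return np.sqrt(np.exp(0.5 * beta_d * t**2 + beta_m * t) - 1.0)

def lam(t):    # change of variables lambda_t = -log(sigma_t)
    return -np.log(sigma(t))

def F_theta(x, t):                               # placeholder for the trained network
    return np.zeros_like(x)

def seeds1(x, ts, rng=np.random.default_rng(0)):
    """Run iteration (14) along a decreasing time grid ts (ts[0] > ts[-1] > 0)."""
    for s, t in zip(ts[:-1], ts[1:]):
        h = lam(t) - lam(s)                      # h > 0 since sigma decreases
        sig_bar = alpha(t) * sigma(t)            # bar{sigma}_t
        eps = rng.standard_normal(x.shape)
        x = (alpha(t) / alpha(s)) * x \
            - 2.0 * sig_bar * np.expm1(h) * F_theta(x, s) \
            - sig_bar * np.sqrt(np.expm1(2.0 * h)) * eps
    return x

x_T = np.random.default_rng(1).standard_normal((4,))   # toy 4-dimensional state
x_0 = seeds1(x_T, np.linspace(1.0, 1e-3, 21))
```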
**Theorem 4.1**.: _Under Assumption B.1, the numerical solution \(\widetilde{\mathbf{x}}_{t}\) produced by the SEEDS-1 method (14) converges to the exact solution \(\mathbf{x}_{t}\) of_ \[\mathrm{d}\mathbf{x}_{t}=[f(t)\mathbf{x}_{t}+g^{2}(t)\bar{\sigma}_{t}^{-1}F_{ \theta}(\mathbf{x}_{t},t)]\mathrm{d}t+g(t)\mathrm{d}\boldsymbol{\omega}_{t}, \qquad(\bar{\sigma}_{t}^{-1}:=1/\bar{\sigma}_{t}) \tag{15}\] _with coefficients (7) in the mean-square sense with strong order 1: there is a constant \(C>0\) such that_ \[\sqrt{\mathbb{E}\left[\sup_{0\leqslant t\leqslant 1}|\widetilde{\mathbf{x}}_{t}- \mathbf{x}_{t}|^{2}\right]}\leqslant Ch,\qquad\text{as }h\longrightarrow 0.\]

Proving this theorem (App. B) is not straightforward and requires a significant amount of effort, as it relies on a continuous-time approximation of the SEEDS method.

**Higher stage SEEDS solvers.** As announced, by fully exploiting the analytic computations enabled by the expansion (9) we now turn to crafting multi-stage SEEDS solvers. SDE solvers are usually constructed from the full Ito-Taylor expansion of the SDE solutions and typically need a large number of evaluations of the network \(\hat{F}_{\theta}\) to achieve higher orders of convergence. As our main concern is to present stochastic solvers with a minimal number of NFEs, we choose to truncate the Ito-Taylor expansion so that the neural network only appears in the deterministic contributions.

**Proposition 4.2**.: _Assume that \(\hat{F}_{\theta}\) is a \(\mathcal{C}^{2n+1}\)-function with respect to \(\lambda\). Then the truncated Ito-Taylor expansion of (8) reads, for \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d})\),_ \[\mathbf{x}_{t}=\frac{\alpha_{t}}{\alpha_{s}}\mathbf{x}_{s}-2\bar{\sigma}_{t} \sum_{k=0}^{n}h^{k+1}\varphi_{k+1}(h)\hat{F}_{\theta}^{(k)}(\widehat{\mathbf{ x}}_{\lambda_{s}},\lambda_{s})-\bar{\sigma}_{t}\sqrt{e^{2h}-1}\epsilon+\mathcal{R}_{n +1}, \tag{16}\] _where \(\hat{F}_{\theta}^{(k)}(\widehat{\mathbf{x}}_{\lambda},\lambda)=L_{\lambda}^{k} \hat{F}_{\theta}(\widehat{\mathbf{x}}_{\lambda},\lambda)\), with \(L_{\lambda}\) an infinitesimal operator defined in Appendix D.2.2, and \(\mathcal{R}_{n+1}\) consists of the usual deterministic residue and all iterated integrals of length greater than or equal to 2 in which there is at least one stochastic component._

Our approach for constructing derivative-free 2-stage and 3-stage SEEDS solvers consists in exploiting the analytic computation of the Ito-Taylor coefficients in Proposition 4.2, replacing the \(\hat{F}_{\theta}^{(k)}(\widehat{\mathbf{x}}_{\lambda},\lambda)\) terms by well-adapted correction terms which _do not need any derivative evaluation_, and dropping the \(\mathcal{R}_{n+1}\) contribution, as in the Runge-Kutta approach. Lastly, we use the Chasles rule to deal with the remaining stochastic term, whose variance was analytically computed. Algorithms 1 to 4 prescribe all SEEDS solvers obtained by this procedure and in what follows we show (see Appendix B for the proof) that all methods are weakly convergent.
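To make the derivative-free idea concrete, here is one plausible two-stage step in the spirit of the construction above, reusing `alpha`, `sigma`, `lam`, `beta_d`, `beta_m` and the placeholder `F_theta` from the previous sketch. It replaces the first-derivative term \(h^{2}\varphi_{2}(h)\hat{F}_{\theta}^{(1)}\) of (16) by a midpoint network evaluation, as in Runge-Kutta exponential integrators; this is an illustration only, and the exact correction terms of the SEEDS-2/3 solvers are those prescribed in Algorithms 1 to 4, not necessarily these.

```python
# An illustrative derivative-free two-stage step (sketch, not Alg. 3 itself).
import numpy as np

def t_from_lambda(l):
    """Assumed closed-form inverse of lam(t) for the VP coefficients (7)."""
    a_bar = np.log1p(np.exp(-2.0 * l))           # alpha_tilde(t) = log(1 + sigma_t^2)
    return (np.sqrt(beta_m**2 + 2.0 * beta_d * a_bar) - beta_m) / beta_d

def two_stage_step(x, s, t, rng):
    h = lam(t) - lam(s)
    s1 = t_from_lambda(lam(s) + 0.5 * h)         # midpoint in lambda
    # stage 1: deterministic SEEDS-1 half step towards the midpoint
    u = (alpha(s1) / alpha(s)) * x \
        - 2.0 * alpha(s1) * sigma(s1) * np.expm1(0.5 * h) * F_theta(x, s)
    # stage 2: full step, with the midpoint evaluation replacing F_theta'
    sig_bar = alpha(t) * sigma(t)
    eps = rng.standard_normal(x.shape)
    return (alpha(t) / alpha(s)) * x \
        - 2.0 * sig_bar * np.expm1(h) * F_theta(u, s1) \
        - sig_bar * np.sqrt(np.expm1(2.0 * h)) * eps
```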
**Corollary 4.3**.: _Under Assumption B.2, the numerical solutions \(\widetilde{\mathbf{x}}_{t}\) produced by the SEEDS methods of Algorithms 3 and 4 converge to the exact solution \(\mathbf{x}_{t}\) of (15) with coefficients (7) in the weak sense with global order 1 in both cases: there is a constant \(C>0\) such that, for any continuous bounded function \(G\):_ \[|\mathbb{E}[G(\widetilde{\mathbf{x}}_{t_{M}})]-\mathbb{E}[G(\mathbf{x}_{t_{M} })]|\leqslant Ch.\]

**Comparison with existing sampling methods.** Let us now examine the connection between SEEDS and existing sampling techniques used for DPMs, emphasizing the contrasts between them. The main distinctive feature of SEEDS is that it is an _off-the-shelf_ solver. This means that not only is it _training-free_, contrary to [6], but it also does not require any kind of optimization procedure to achieve its optimal results. This is in contrast to methods such as gDDIM, which is training-free but not off-the-shelf, as one needs to perform preliminary optimization procedures such as simulating the transition matrix of the method in the CLD case, and the Heun-like method from EDM (for all baseline models and the EDM-optimized models for ImageNet), which needs a preliminary optimization procedure on 4 parameters that actually breaks the convergence criteria. Moreover, neither gDDIM, EDM nor the SSCS method in [7] present full proofs of convergence for their solvers. Also, both DEIS and gDDIM identify their methods with stochastic DDIM theoretically, but the poor results obtained by their stochastic solvers did not lead to further experimentation in those works. In a way, SEEDS can be thought of as an improved and generalized DPM-Solver for SDEs. Nevertheless, such a generalization is not incremental, as the tools for proving convergence in our methods involve concepts which are exclusive to SDEs. We make rigorous statements of the above discussion as follows.

**Proposition 4.4**.: _Consider the SEEDS approximation of (15) with coefficients (7). Then_

1. _If we set_ \(g=0\) _in (_15_), the resulting SEEDS solvers do not yield DPM-Solver._
2. _If we parameterize (_15_) in terms of the data prediction model_ \(D_{\theta}\)_, the resulting SEEDS solvers are not equivalent to their noise prediction counterparts defined in Alg._ 1 _to_ 4_._
3. _The gDDIM solver_ _[_28_, Th. 1]_ _equals SEEDS-1 in the data prediction mode, for_ \(\ell=1\)_._

The first point makes it explicit that SEEDS is not an incremental result based on DPM-Solver. The second point in Prop. 4.4 is analogous to the result in Appendix B of [16], where the authors compare DPM-Solver2 and DPM-Solver++(2S), that is, the noise and data prediction approaches, and find that they do not coincide. The last point exhibits gDDIM as a special case of SEEDS-1 for isotropic DPMs.

_Remark 4.5_.: Building solvers from the representation of the exact solution in (5) requires being able to compute the transition matrix \(\Phi_{A}(t,s)\), and the latter cannot be analytically computed for non-isotropic DPMs such as CLD [7]. Nevertheless, the SEEDS approach can be applied in this scenario in at least two different ways. On the one hand, the SSCS method from [7] relies on splitting \(\Phi_{A}(t,s)\) into two separate terms. The first can be analytically computed. The second describes the evolution of a semi-linear differential equation [7, Eq. 92]. While [7] approximates the latter by the Euler method, crafting exponential integrators for approximating such a DE may yield an acceleration of the SSCS method.
On the other hand, gDDIM [28] proposes an extension of DEIS sampling [27] to CLD by setting a pre-sampling phase [28, App. C.4] in which they compute an approximation of \(\Phi_{A}(t,s)\) in order to apply their method, and the latter was shown in Prop. 4.4 to be a special case of our method. Unfortunately, the authors did not release pre-trained models in [28], and the latter are not the same as those in [7]. Nevertheless, we are confident that our approach may benefit sampling in this scenario too.

## 5 Experiments

We compare SEEDS with several previous methods on discretely and continuously pre-trained DPMs. We report results from many available sources, such as DDPM [8], Analytic DDPM [2], PNDM [14], GGF [11], DDIM [23], gDDIM [28], DEIS [27] and DPM-Solver [15]. Although we do not include training-based schemes here, we still included GENIE [6], which trains a small additional network but still solves the correct generative ODE at higher order. For each experiment, we compute the FID score for 50K sampled images on multiple runs and report the minimum for each solver. We detail and illustrate our experiments with image samples in Appendix E, where model pre-training specifications and references can also be found.

**Practical considerations.** For continuously trained models, SEEDS uses the EDM discretization [12, Eq. 5] with default parameters and does _not_ use the _last-step iteration trick_, meaning that the last iteration of SEEDS is trivial. For discretely trained models, SEEDS uses the linear step schedule in the interval \([\lambda_{t_{0}},\lambda_{t_{M}}]\) following [15, Sec. 3.3, 3.4]. All the reported SEEDS results were obtained using the noise prediction mode. We conducted comparative experiments on SEEDS for both the data and noise prediction modes and found better results with the latter (see Tab. 2 for details). EDM solvers [12, Alg. 2] depend on four parameters controlling the amount of noise to be injected in a specific subinterval of the iterative procedure. We consider three scenarios: stochastic EDM, where stochasticity is injected along the whole iterative procedure; EDM (\(S_{\mathrm{churn}}=0\)), where no stochasticity is injected; and EDM (Optimized), where such parameters were subject to an optimization procedure. To better evaluate sampling quality on the same pre-trained DPM, we recalculate DPM-Solver for sampling from the _non-deep_ VP continuous model on the CIFAR-10 dataset. All implementation details can be found in Appendix C.

**Comparison with previous works.** In Table 1 we compare SEEDS with other sampling methods for pre-trained DPMs, and report the minimum FID obtained and the respective NFE. For each of the reported pre-trained models on CIFAR-10, CelebA-64 and ImageNet-64, SEEDS outperforms all off-the-shelf methods in terms of quality with relatively low NFEs. For the discrete pre-trained DPM on CIFAR-10 (VP Uncond.) it is \(\sim 5\times\) faster than the second best-performing solver. Additionally, SEEDS remains competitive with the optimized EDM samplers. For ImageNet-64, it is nearly as good as the optimized EDM sampler while being almost twice as fast. Figure 1 (a) compares the FID score of SEEDS and DPM-Solver with varying NFEs. While DPM-Solver methods stabilize faster in a very low NFE regime, our methods eventually surpass them.
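For concreteness, a uniform-in-\(\lambda\) ("linear") step schedule for the VP coefficients (7) can be built as in the sketch below, reusing `lam` and `t_from_lambda` from the earlier sketches; the endpoints and number of steps are assumed toy values, not those used in the experiments.

```python
# A sketch of the uniform-in-lambda step schedule for the VP coefficients (7),
# reusing lam() and t_from_lambda() defined in the earlier sketches.
import numpy as np

t_0, t_M, M = 1.0, 1e-3, 20
lambdas = np.linspace(lam(t_0), lam(t_M), M + 1)   # equally spaced in lambda
ts = t_from_lambda(lambdas)                        # time grid handed to the solver
```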
Interestingly, after reaching their minimum, SEEDS methods tend to become worse at higher NFEs, a fact that is also visible in Figure 1 (b), where we notice that such a phenomenon is also present in other SDE methods. We report in Appendix E the results of our SEEDS methods in the low NFE regime and connect their behavior with their proven convergence rate.

**Combining SEEDS with other methods.** While being an off-the-shelf method, SEEDS can be combined with the churn-like method used in EDM, introducing an additional source of stochasticity into the SDE solver. As done in [12], we evaluate the effect of this second kind of stochasticity, measured by a parameter denoted \(S_{\mathrm{churn}}\). Figure 1 (c) shows that SEEDS and EDM show similar behavior, although SEEDS is twice as fast, more sensitive to \(S_{\mathrm{churn}}\), and quickly achieves comparable performance to EDM. This indicates that SEEDS could possibly outperform EDM after a proper parameter optimization, which is left for future work. Nevertheless, we highlight the fact that obtaining such optimal parameters is costly and might scale poorly.

\begin{table}
\begin{tabular}{l c c} \hline Sampling method & FID\(\downarrow\) & NFE \\ \hline CIFAR-10* vp-uncond. & & \\ \hline DDIM [23] & 3.95 & 1000 \\ Analytic-DDPM [2] & 3.84 & 1000 \\ GENIE [6] & 3.64 & 25 \\ analytic-DDIM [2] & 3.60 & 200 \\ F-PNDM (linear) [14] & 3.60 & 250 \\ DPM-Solver\(\dagger\)[15] & 3.48 & 44 \\ F-PNDM (cosine) [14] & 3.26 & 1000 \\ DDPM [8] & 3.16 & 1000 \\ SEEDS-3 (Ours) & **3.08** & **201** \\ \hline CIFAR-10* vp-cond. & & \\ \hline DPM-Solver\(\dagger\)[15] & 3.57 & 195 \\ EDM (\(S_{\mathrm{churn}}=0\)) [12] & 2.48 & 35 \\ SEEDS-3 (Ours) & **2.08** & **129** \\ \hline CIFAR-10* vp-uncond. & & \\ \hline DPM-Solver\(\dagger\)[15] & 2.59 & 51 \\ GGF [11] & 2.59 & 180 \\ gDDIM [28] & 2.56 & 100 \\ DEIS \(\rho\)3Kutta [27] & 2.55 & 50 \\ Euler-Maruyama [24] & 2.54 & 1024 \\ Stochastic EDM [12] & 2.54 & 1534 \\ SEEDS-3 (Ours) & 2.39 & 165 \\ EDM (Optimized) [12] & **2.27** & **383** \\ \hline \end{tabular}
\end{table} Table 1: Sample quality measured by FID\(\downarrow\) on pre-trained DPMs. We report the minimum FID obtained by each model and the NFE at which it was obtained. For CIFAR, CelebA and FFHQ, we use baseline pretrained models [24, 12]. For ImageNet, we use the optimized pretrained model from [12]. *discrete-time model, *continuous-time model, \(\dagger\): FID recomputed to match the pretrained model in question.

Figure 1: (a-b) Comparison of sample quality measured by FID \(\downarrow\) of SEEDS, DPM-Solver and other methods for discretely trained DPMs with varying number of function evaluations. (c) Effect of \(S_{\mathrm{churn}}\) on SEEDS-3 (at NFE = 270) and EDM method (at NFE = 511) on class-conditional ImageNet-64. \(\dagger\)baseline ADM model. *EDM preconditioned model.

**Stiffness reduction with SEEDS.** In Figure 2, we illustrate the impact of different choices of discretization steps, noise schedule and dynamic scaling on SEEDS and stochastic EDM. We see that choosing the EDM discretization over the linear one has the overall effect of flattening the pixel trajectories at the latest stages of the simulation procedure. Also, choosing the parameters (11) over those in (7) has the effect of greatly changing the distribution variances as the trajectories evolve. Notice that all the SEEDS trajectories seem perceptually more stable than those from EDM.
It would be interesting to relate this fact to the _stiffness_ of the semi-linear DE describing these trajectories, and to the magnitude of the parameters involved in the noise injection for the EDM solver, which amplify this phenomenon.

## 6 Conclusions

Our focus is on addressing the challenge of training-free sampling from DPMs without compromising sampling quality. To achieve this, we introduce SEEDS, an off-the-shelf solution for solving diffusion SDEs. SEEDS capitalizes on the semi-linearity of diffusion SDEs by approximating a simplified formulation of their exact solutions. Inspired by numerical methods for stochastic exponential integrators, we propose three SEEDS solvers with proven convergence order. They transform the integrals involved in the exact solution into exponentially weighted integrals, and estimate the deterministic one while analytically computing the variance of the stochastic integral. We extend our approach to handle other isotropic DPMs, and evaluate its performance on various benchmark tests. Our experiments demonstrate that SEEDS can generate images of optimal quality, outperforming existing SDE solvers while being \(3\sim 5\times\) faster.

**Limitations and broader impact.** While SEEDS prioritizes optimal quality sampling, it may require substantial computational resources and energy consumption, making it less suitable for scenarios where speed is the primary concern. In such cases, alternative ODE methods may be more appropriate. Additionally, as with other generative models, DPMs can be employed to create misleading or harmful content, and our proposed solver could inadvertently amplify the negative impact of generative AI for malicious purposes.

Figure 2: Trajectories of 10 pixels (R channel) sampled from SEEDS (1st line) and Stochastic EDM (2nd line) on the optimized pre-trained model [12] on ImageNet64. Schedule=scaling=vp corresponds to the VP coefficients in (7) and schedule=linear, scaling=none to the EDM coefficients (11). We use the time discretizations disc=vp (linear) and disc=edm given in [12, Tab. 1].
2307.05513
UX Heuristics and Checklist for Deep Learning powered Mobile Applications with Image Classification
Advances in mobile applications providing image classification enabled by Deep Learning require innovative User Experience solutions in order to assure their adequate use by users. To aid the design process, usability heuristics are typically customized for a specific kind of application. Therefore, based on a literature review and an analysis of existing mobile applications with image classification, we propose an initial set of AIX heuristics for Deep Learning powered mobile applications with image classification, decomposed into a checklist. In order to facilitate the usage of the checklist, we also developed an online course presenting the concepts and heuristics as well as a web-based tool to support an evaluation using these heuristics. The results of this research can be used to guide the design of the interfaces of such applications as well as to support the conduction of heuristic evaluations, supporting practitioners in developing image classification apps that people can understand, trust, and engage with effectively.
Christiane Gresse von Wangenheim, Gustavo Dirschnabel
2023-07-05T20:23:34Z
http://arxiv.org/abs/2307.05513v1
# UX Heuristics and Checklist for Deep Learning powered Mobile Applications with Image Classification

###### Abstract

Advances in mobile applications providing image classification enabled by Deep Learning require innovative User Experience solutions in order to assure their adequate use by users. To aid the design process, usability heuristics are typically customized for a specific kind of application. Therefore, based on a literature review and an analysis of existing mobile applications with image classification, we propose an initial set of AIX heuristics for Deep Learning powered mobile applications with image classification, decomposed into a checklist. In order to facilitate the usage of the checklist, we also developed an online course presenting the concepts and heuristics as well as a web-based tool to support an evaluation using these heuristics. The results of this research can be used to guide the design of the interfaces of such applications as well as to support the conduction of heuristic evaluations, supporting practitioners in developing image classification apps that people can understand, trust, and engage with effectively.

User Experience, Usability Heuristics, Deep Learning, Image Classification, Mobile Application

## 1 Introduction

Today's AI advances are mainly based on Deep Learning (DL), which is a subfield of machine learning that involves training artificial neural networks to recognize patterns in data. These networks are composed of layers of interconnected nodes that can learn to identify features and patterns in the input data. During training, the weights of the connections between the nodes are adjusted to minimize the error in the network's predictions, thereby maximizing its accuracy. Once the network is trained, it can be used to make predictions on new, unseen data. Deep learning has achieved state-of-the-art performance in many applications, including image classification. Image classification is a process in which a computer algorithm analyzes and categorizes images based on their visual features. It involves training a deep neural network on a large dataset of labeled images, where each image is associated with a specific class, e.g., a cat breed. The neural network learns to recognize patterns and features in the images that are indicative of their respective classes. Once trained, the algorithm can classify new images by comparing their features to the learned patterns, and outputting the predicted class label. The accuracy of the classification model depends on the quality and quantity of the training data, as well as the architecture and hyperparameters of the neural network. The recent evolution of Deep Learning has also increased the deployment of these trained image classification models into mobile applications, in diverse domains, including computer vision. This allows the app to automatically make predictions based on the contents of the image, e.g., classifying dog breeds, plants, or even diseases such as skin cancer. However, unlike traditional software systems, there are various risks associated with the interaction with DL-powered systems. One of the main risks associated with such systems is accuracy. Due to the probabilistic nature of DL algorithms, their results are not always definitive or 100% accurate. While DL-powered systems can perform tasks at a high level of accuracy, they also make errors, particularly when dealing with complex or ambiguous situations.
This can lead to incorrect decisions or recommendations, which can have serious consequences, e.g., when classifying images of skin cancer. DL-powered systems can also be opaque or difficult to understand, making it difficult for users to know how they are working or what data they are using. This lack of transparency can lead to distrust or suspicion among users, particularly when the system is making important decisions or providing recommendations. This situation is commonly referred to as the black box problem in Artificial Intelligence (AI). Without understanding how AI reaches its conclusions, it is an open question to what extent the user can trust these systems. The question of trust becomes more urgent as more and more decision-making is delegated to AI in areas that may harm humans, such as security, healthcare, and safety. Results of DL-powered systems can also be unethical, such as image classifications misclassifying people of a certain race due to algorithmic bias, where the DL model had been trained on a dataset that was not diverse enough to accurately classify images. DL-powered systems can also pose risks to privacy. These systems may collect large amounts of data and/or images from the users, which can be used for purposes that the user may not be aware of or may not have consented to. As a result, DL-infused systems may demonstrate unpredictable behaviors that can be disruptive, confusing, offensive, and even dangerous [1]. These risks can become even more critical when considering the large audience of the AI-illiterate general public using such mobile apps. Since AI/DL technology is still new and not well understood by many people, users may trust the app's decisions without questioning them. Inaccurate or biased results generated by the app can lead to false conclusions or misunderstandings, or even harm the user (e.g., when relying solely on a skin cancer classification app, without consulting a dermatologist). The lack of understanding of how AI works can also make it difficult for users to know when to rely on the app's suggestions and when to question them. Therefore, it is crucial to consider the usability and transparency of AI-powered mobile apps to avoid potential harm and ensure user trust. This lack of transparency can create confusion, frustration and mistrust, and indeed specific socially untoward consequences of algorithmic interactions have been widely identified [1][1][1]. Yet, as the deployment of DL in mobile applications grows, the field of user experience design must shift to understand the possibilities, limitations, and biases of AI. A great image classification app depends on a well-designed DL model as much as it depends on a well-designed user interface (UI) and user experience (UX) that compose the human experience around the AI models [24]. Aiming to enhance or complement human capabilities rather than to replace them through DL, a Human-Centered AI (HCAI) approach needs to be adopted to develop and deploy AI systems that focus on the needs and well-being of the individuals and communities affected by the technology (Riedl, 2019)(Ribera & Lapedriza, 2019)(Xu, 2019). HCAI aims to ensure that the AI technology is developed with the user's needs and expectations in mind, making it more accessible and easy to use. Human-centered design can help to mitigate the risks associated with AI.
By prioritizing the user's experience and understanding of the technology, designers should develop interfaces that provide users with greater control and understanding of the AI's decisions and outputs (Shneiderman, 2020)(Xu, 2019). As such, it is important that these systems provide an adequate user experience (UX) in order to be effective and easy to use. A well-designed UX can help ensure that people are able to use an AI system easily and effectively, that they are able to get the most out of it, and that harm is prevented (Wong, 2018). Yet, the design of user interfaces of DL-powered apps still presents a challenge, and limited attention has been given so far to the user interface design principles for such systems (Yang et al., 2020). AI User Experience (AIX) refers to the user experience design of AI-powered systems, with a focus on creating interfaces and interactions that are intuitive, transparent, ethical, usable, useful and trustworthy for users. The field of AIX is still relatively new; so far, few proposals of guidance have emerged for AIX design (Wright et al., 2020), mostly in white papers by large technology companies such as Google's People + AI Guidebook (Google, 2023) for any kind of AI systems with emphasis on recommendation systems or IBM's AI Design Guidelines (IBM, 2023), while Amazon (2020) and Facebook (2023) propose guidelines for conversational AI. Amershi et al. (2019) at Microsoft synthesized research in interaction with AI into a set of guidelines for human-AI interaction (HAI guidelines), and Wright et al. (2020) presented a comparative analysis of industry human-AI interaction guidelines. These frameworks either provide general principles and guidelines or focus more on a specific type of AI-powered system (such as recommendation systems or conversational agents) that are difficult to apply to other AI tasks, such as image classification. Therefore, this article presents an initial proposal of AIX heuristics and a checklist in order to provide guidance for designing and evaluating mobile apps with image classification powered by Deep Learning. The availability of such heuristics is expected to contribute to the improvement of the user interface design of such apps and, thus, to provide an improved user experience.

## 2 Background

### Image Classification Applications powered by Deep Learning

A prominent AI task today is image classification applied to various domains such as healthcare, biology, arts, etc. The image classification task predicts the class of an object in an image, e.g., a cat breed (Figure 1). In recent years, the state of the art of image classification has been improved by applying Deep Learning (LeCun et al., 2015). Deep learning is a type of machine learning based on artificial neural networks in which multiple complex layers of processing are used to extract progressively higher level features from data. During training, deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer.
Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level, starting with the input, into a representation at a higher, slightly more abstract level, suppressing irrelevant variations, in order to learn complex functions. Currently, convolutional neural networks (CNN) are one of the most prominent approaches in deep learning (LeCun et al., 2015), including AlexNet (Krizhevsky, 2012), ResNet (He et al., 2016), MobileNet (Howard et al., 2017), and EfficientNet (Tan et al., 2019), among others. These models are typically trained with a large dataset using supervised learning. Yet, transfer learning applies the capabilities of a pre-trained CNN, retaining both its initial architecture and all the learned weights, to new data with a smaller population, instead of training a CNN from scratch. When working on classification problems, in general, the performance of the model is measured during the validation and testing of the trained model by metrics such as accuracy, precision, and recall, among others. Accuracy measures the percentage of images that are classified correctly. It is the ratio of the number of correct predictions over total predictions. Here a high value of accuracy represents better performance.

Figure 1: Image classification task powered by Deep Learning

Precision measures the ability of a model not to classify a negative instance as positive, by measuring how many of the instances predicted as positive by the model were actually positive, while recall measures how many of the positive instances were correctly classified by the model. Once trained, the model can be used to predict the classes of new images not seen by the model before. The result of a classification typically provides, for each class in the scope of the image classification model, a confidence score, a decimal number between 0 and 1, which can be interpreted as a percentage of confidence in predicting this class label. From this output of the model, the result can be presented to the user in different ways, such as just showing the class with the highest confidence score, or all classes in decreasing order of their confidence scores, among other ways. Recent breakthroughs in Artificial Intelligence technologies have enabled numerous mobile applications using smaller yet still accurate DL models that enable real-time classification of any image from the smartphone's gallery or camera as input (Martinez-Fernandez et al., 2021). As a result, there already exists a variety of image classification mobile applications in app stores focusing on diverse application domains, such as biology (e.g., plant or animal species classification), healthcare (e.g., skin cancer diagnosis), and entertainment (e.g., classification of celebrities, age identification), among others. Others such as Google Lens offer to classify a wide range of objects, including but not limited to plants, animals, landmarks, and products. Typically, image classification is the main feature of such an app. Very few apps directly use the classification results as part of another functionality. The common interaction process of humans with image classification apps starts with presenting the app's feature when opening the app (Figure 2). It involves capturing an image with the smartphone's camera or uploading an image from the device gallery to the app.
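As a small illustration of how such confidence scores are typically produced and displayed, the following sketch (the class names and logits are hypothetical placeholders, not taken from any specific app) converts raw network outputs into softmax scores and lists all classes in decreasing order of confidence, one of the presentation options mentioned above.

```python
# Turning raw model outputs (logits) into a ranked, percentage-style display.
import numpy as np

classes = ["siamese", "persian", "maine_coon"]          # hypothetical model scope
logits = np.array([2.1, 0.3, -1.0])                     # raw network outputs

scores = np.exp(logits - logits.max())
scores /= scores.sum()                                  # softmax confidence scores in [0, 1]

for label, score in sorted(zip(classes, scores), key=lambda p: -p[1]):
    print(f"{label}: {100 * score:.1f}%")               # classes in decreasing confidence
```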
Some apps offer instructions on how to capture images with sufficient quality (e.g., in focus, good lighting, clean backgrounds) and/or even provide feedback on the quality of a captured image, suggesting a retry if necessary. Several apps also offer the possibility to crop or rotate the image. The user may also be able to provide additional information or context to improve the accuracy of the classification, e.g., regarding localization. The app then uses a deployed DL model to classify the image. Once the image has been classified, the app commonly presents the results of the classification to the user directly and may provide the user with additional information, e.g., detailed information on spider species. The user may have the option to provide feedback on the classification result and/or ask for human assistance from either experts or community members. The user may also have the option to share the image or the results of the classification with others through various means, such as social media, messaging, or email. Some apps provide information on privacy regarding the usage of users' data and images as well as information on how the DL model has been trained and its performance. Apps focusing on the classification of objects that are potentially harmful to humans, such as poisonous animals (spiders or snakes), may also present warnings with regard to taking a picture of the animal, as well as when presenting the results when poisonous species have been identified.

### Heuristic evaluation

The design of the user interface of AI-powered apps can greatly influence the user's perception of the app's functionality, accuracy, trustworthiness, and ease of use. A well-designed interface can help users understand the app's capabilities, how to interact with it, and the results they can expect. Furthermore, it can help to reduce the risk of a harmful use of the results. On the other hand, a poorly designed interface can lead to confusion, frustration, and a lack of trust in the app's ability to perform as intended. Therefore, it is essential to consider the quality of the user interface for AI-powered apps to ensure a positive user experience. In order to improve the quality of user interfaces, various methods can be used, including heuristic evaluation. Heuristic evaluation [19] is a well-known and widely accepted method used in Human-Computer Interaction to evaluate the usability of a user interface by examining it against a set of established heuristics or guidelines. Through this evaluation, designers and developers can identify potential usability issues, and prioritize and make changes to the interface design in order to improve the overall user experience. This method can be performed at any stage of the design process and is relatively quick and cost-effective compared to user testing. The results of heuristic evaluation can help identify potential usability issues early in the design process, leading to more user-friendly products. A heuristic evaluation is based on a set of usability heuristics. Usability heuristics refer to a set of high-level guidelines or best practices, typically based on a combination of established principles from a mix of theory-based knowledge, experience and common sense. Besides the usability heuristics originally developed for GUI interfaces on desktop computers, including Nielsen's ten heuristics [19] and Shneiderman's eight golden rules [18], various specialized heuristic sets are being developed.
Figure 2: Example of screens of an image classification app (Seek iNaturalist)

This includes heuristics for user interfaces on different devices, such as mobile phones (Inostroza et al., 2012)(Salazar et al., 2013), ambient displays (Mankoff et al., 2003) and virtual worlds (Rusu et al., 2011), or for specific groups of users, such as senior citizens (Salman et al., 2018), or application domains such as e-commerce (Tezza et al., 2011) or e-learning (Gu et al., 2011). This shows that usability heuristics must be carefully selected so that they reflect the specific type of interface being designed and may require alternative heuristics or re-interpretation of existing ones in order to make sense (Holzinger, 2005). It also points out the need for adapting those "traditional" usability heuristics to fit the specific characteristics and limitations of DL-powered applications. And although initial research exists on developing usability heuristics for Artificial Intelligence systems, some of it is focused on specific types of tasks, such as recommendation or conversation. However, as AI/DL-powered systems vary widely with respect to their task and the ways an application may use this technology, a lack of specific heuristics to guide the design of user interfaces of mobile applications can still be observed.

## 3 Overview of existing AI heuristics

The need for creating AIX heuristics arises from the fact that AI-powered systems present unique challenges for human-computer interaction that are not addressed by traditional usability heuristics. These challenges include issues related to explainability, transparency, trust, and ethics. Therefore, the development of a set of heuristics specifically tailored to AI systems is important to help create more user-centered and effective interfaces that address these challenges and promote a positive user experience. Recently, first proposals of guidance for AI design have emerged, mostly in white papers by IT companies, such as Google's People + AI Guidebook (Google, 2023), Apple's Machine Learning design guidelines (2023) or Orium's smarter patterns (2023) for any kind of AI systems, while Amazon (2020), Facebook (2023) and IBM (2023) propose guidelines for conversational AI. Little academic research is available, including Dudley & Kristensson (2018) focusing on the design of interactive Machine Learning, and Mohseni et al. (2021) presenting a framework for the design of explainable AI systems. Amershi et al. (2019) at Microsoft synthesized research in interaction with AI into a set of guidelines for human-AI interaction (HAI guidelines), and Wright et al. (2020) presented a comparative analysis of industry human-AI interaction guidelines. These frameworks either provide general principles and guidelines or focus more on a specific type of AI-powered system (such as recommendation systems or conversational agents) that are difficult to apply to other AI tasks, such as image classification. Considering general guidelines for creating a basis for the proposal of heuristics for image classification, Table 1 presents a mapping of the encountered guidelines. Most of these AIX guidelines are presented as a list of principles or heuristics decomposed into a set of items to be used during the design of user interfaces for AI-powered systems. No guidelines in the form of a heuristic evaluation checklist that also support the evaluation of such interfaces were encountered.
Nor does any (automated) tool support exist for such an evaluation of this kind of interface. In this way, the results demonstrate the current lack of a set of AIX guidelines tailored specifically for applications with image classification.

## 4 Research methodology

We developed the AIX guidelines for image classification apps using a systematic methodology proposed by Quiñones et al. (2018) and Rusu et al. (2011):

Step 1. At this exploratory stage we reviewed literature related to AIX principles and guidelines, their characteristics, as well as usability heuristics for this specific kind of application.

\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline
**AIX Guidelines** & (Amershi et al., 2019) & (Apple, 2023) & (Google, 2023) & (Orium, 2023) \\ \hline
Make clear what the system can do & x & x & x & \\
Explain the benefit not the technology & & x & x & \\
Avoid technical jargon & & x & x & x \\
Make clear how well the system can do what it can do & x & x & x & \\
Demonstrate how the users can get satisfactory results & & x & & \\
Time services based on context & x & & & \\
Show contextually relevant information & x & & x & \\
Anchor on familiarity & & x & & \\
Make the presentation of images and information accessible & & & & x \\
Show processing status & & & & x \\
Determine how to show model confidence, if at all & x & x & x & \\
Explain for understanding, not completeness & & x & x & \\
Go beyond in-the-moment explanations & & & x & \\
Match social norms & x & & & x \\
Mitigate social biases & x & & & x \\
Support efficient invocation & x & & x & \\
Support efficient dismissal & x & & & \\
Support efficient correction & x & x & x & \\
Make it clear when the results are inaccurate & & & & x \\
Provide manual controls when the AI fails & & & x & x \\
Offer customer support when the AI fails & & & x & \\
Scope services when in doubt & x & & & \\
Make clear why the system did what it did & x & x & & \\
Remember recent interactions & x & & & \\
Learn from user behavior & x & x & & \\
Update and adapt cautiously & x & x & & \\
Let users give feedback & x & x & x & \\
Convey the consequences of user actions & x & & & x \\
Provide global controls & x & x & & \\
Notify users about changes & x & x & & \\
Be transparent about privacy and data settings & x & x & x & x \\
Make it safe to explore & & & x & \\
Explain how the AI model has been developed and the algorithm & & & x & x \\
Give risk alerts especially in critical applications & & & x & \\ \hline \hline
\end{tabular}
\end{table} Table 1: Mapping of general AIX guidelines

Step 2. At this experimental stage we analyzed the specific characteristics and usability problems based on an analysis of 100 randomly selected mobile applications with image classification available at Google Play.

Step 3. At this descriptive stage we mapped the AIX guidelines we encountered in order to highlight the most important items of the previously collected information.

Step 4. At this correlational stage we identified a set of heuristics and selected items that mobile applications with image classification should adhere to based on the mapping of existing general AIX guidelines and the usability problems observed during the analysis of the apps.

Step 5. At the specification stage we formally specified the set of usability heuristics. We also decomposed the heuristics into a set of checklist items to facilitate their application as part of a heuristic evaluation. In addition we defined a response scale for the checklist.
We described each item by an ID, question and brief explanation. For each of the items we also present an example of the item's compliance and/or violation.

Step 6. In addition we prepared an online course demonstrating and explaining the AIX heuristics and the checklist as well as a web-based tool to support the execution of the heuristic evaluation using the checklist.

## 5 Initial proposal of AIX heuristics and checklist

Based on the literature review and the experimental results testing mobile applications with image classification, we defined a set of heuristics and a checklist as presented in Table 2. These heuristics and items are specifically related to the image classification functionality. For a comprehensive evaluation they should be complemented by general usability heuristics as well as usability heuristics customized for mobile applications.

\begin{table}
\begin{tabular}{l|l|l|l} \hline
**Heuristic** & **Checklist item** & **Item explanation** & **Response (Yes, No, N/A)** \\ \hline
Make expectations and limitations explicit & 1. Does the app make it clear what kind of object it can classify? & The app presents the classes that it is able to distinguish (e.g., plants, dog breeds) before the user can classify an image (e.g., on the home screen). & Yes/No \\ \hline
\end{tabular}
\end{table} Table 2: AIX image classification heuristics and checklist

Further checklist items include:

* 15. Does the app indicate when the user tries to classify objects outside of its scope? When the user photographs an object outside of the scope of the app (e.g., a glass in a dog classification app), the app displays as a result the information that this object is outside of its scope.
* 16. Does the app show a warning when the system is not able to classify the image with a minimum level of confidence (e.g., 70%), instead of showing a low-confidence result? The app alerts the user, informing that it was unable to classify this image.
* 20. Does the app make the purpose and benefits of sending feedback clear, indicating how feedback may affect the app? N/A in case the app does not provide the possibility of feedback.
* 23. Does the app highlight the risks involved with a potential (mis)classification? The app indicates the consequences of a possible misclassification, especially in cases where this could result in physical harm to humans, e.g., "... appears to be safe to eat, but always use your judgment as well to avoid the risk of food poisoning". N/A in case classification errors do not carry many risks.
* 24. Does the app show visual alerts to warn the user when capturing an image of a potentially harmful object?

## 6 Course and web-based tool support for heuristic evaluation

In order to facilitate the usage of the proposed heuristics we developed a course introducing UX principles related to DL-powered mobile applications with image classification, explaining also the evaluation items by examples. The course is available online for free in Brazilian Portuguese at [https://cursos.computacaoaescola.ufsc.br/cursos/aix-classificacao-de-imagens/](https://cursos.computacaoaescola.ufsc.br/cursos/aix-classificacao-de-imagens/) (Figure 3).
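As an illustration of the behavior prescribed by checklist items 15 and 16 above, the sketch below applies a minimum confidence threshold before presenting a result; the 0.7 threshold echoes the 70% example in the checklist, and the function and class names are hypothetical, not part of any particular app.

```python
# Presenting a classification result only when it meets a confidence threshold,
# per checklist items 15 and 16 (hypothetical names; 0.7 is an assumed value).
def present_result(scores, threshold=0.7):
    best_label, best_score = max(scores.items(), key=lambda p: p[1])
    if best_score < threshold:
        # item 16: alert the user rather than show an unreliable classification;
        # item 15: out-of-scope objects typically yield uniformly low scores,
        # so they also land in this warning branch.
        return "We could not classify this image reliably. Try another photo."
    return f"{best_label} ({best_score:.0%} confidence)"

print(present_result({"siamese": 0.55, "persian": 0.30, "maine_coon": 0.15}))
print(present_result({"siamese": 0.92, "persian": 0.05, "maine_coon": 0.03}))
```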
In order to facilitate the execution of evaluations using the proposed heuristics and checklist, we also developed a web tool that guides the evaluation by presenting each item of the checklist by its name, brief explanation, image with example and its corresponding response alternatives. In the end, the tool summarizes the results by presenting a list of the checklist items and visually indicating the items that are not satisfied, as well as the general percentage of the items that are satisfied. The report can also be downloaded in PDF format. The tool has been developed in JavaScript and is available online for free in English and Brazilian Portuguese: [http://apps.computacaoaescola.ufsc.br/aix/](http://apps.computacaoaescola.ufsc.br/aix/)

Figure 3: Example slides of the course

## 7 Conclusion

The article presents a first proposal of a set of AIX heuristics and a checklist to support the design and evaluation of mobile applications with DL-powered image classification. Our next steps involve the validation of this set of heuristics and checklist through a series of experiments performing heuristic evaluations on selected apps of this kind in comparison to user tests.

Figure 4: Tool support for heuristic evaluation

## Acknowledgements

This work was supported by the CNPq (_Conselho Nacional de Desenvolvimento Científico e Tecnológico_), an entity of the Brazilian government focused on scientific and technological development.
2308.15185
Exciton-polaritons in GaAs-based slab waveguide photonic crystals
We report the observation of band gaps for low loss exciton-polaritons propagating outside the light cone in GaAs-based planar waveguides patterned into two-dimensional photonic crystals. By etching square lattice arrays of shallow holes into the uppermost layer of our structure, we open gaps on the order of 10 meV in the photonic mode dispersion, whose size and light-matter composition can be tuned by proximity to the strongly coupled exciton resonance. We demonstrate gaps ranging from almost fully photonic to highly excitonic. Opening a gap in the exciton-dominated part of the polariton spectrum is a promising first step towards the realization of quantum-Hall-like states arising from topologically nontrivial hybridization of excitons and photons.
C. E. Whittaker, T. Isoniemi, S. Lovett, P. M. Walker, S. Kolodny, V. Kozin, I. V. Iorsh, I. Farrer, D. A. Ritchie, M. S. Skolnick, D. N. Krizhanovskii
2023-08-29T10:05:33Z
http://arxiv.org/abs/2308.15185v1
# Exciton-polaritons in GaAs-based slab waveguide photonic crystals ###### Abstract We report the observation of band gaps for low loss exciton-polaritons propagating outside the light cone in GaAs-based planar waveguides patterned into two-dimensional photonic crystals. By etching square lattice arrays of shallow holes into the uppermost layer of our structure, we open gaps on the order of 10 meV in the photonic mode dispersion, whose size and light-matter composition can be tuned by proximity to the strongly coupled exciton resonance. We demonstrate gaps ranging from almost fully photonic to highly excitonic. Opening a gap in the exciton-dominated part of the polariton spectrum is a promising first step towards the realization of quantum-Hall-like states arising from topologically nontrivial hybridization of excitons and photons. The hybridization of photons and quantum well excitons leads to the formation of exciton-polaritons, quasiparticles combining high-speed propagation with large nonlinearity and susceptibility to magnetic fields. These favourable properties arising from their mixed light-matter nature have made polaritons highly attractive candidates for novel semiconductor-based optical devices incorporating nonlinearity[1] and robustness against disorder induced by topology[2]. In order to direct and manipulate the flow of polaritons accordingly, appropriate tailoring of the potential landscape is required[3]. In Bragg microcavities, the most mature platform for polariton research to date, lithographic patterning has been used to strongly modify the photonic mode dispersion resulting in gapped polariton band structures enabling coherent devices[4; 5; 6; 7] and topological lasers[8; 9]. An alternative geometry to conventional microcavities for the study of polaritons is the slab waveguide (WG), in which a guided electromagnetic mode confined by total internal reflection strongly couples to quantum well excitons[10]. This configuration not only has relative ease of fabrication, but also far greater suitability for integration into on-chip circuits owing to the large in-plane propagation velocities. Coherent[11; 12] and continuum[13; 14] light sources have already been demonstrated using this "horizontal" geometry, and the thin layer structure facilitates enhancement of nonlinearities using dipolar polaritons[15; 16]. Similar to microcavities, the potential landscape can be engineered to emulate novel physical systems, but WGs can achieve this without the need to etch several micrometers of material[17]. A full quantum theory of strong light-matter coupling in photonic crystal (PhC) slabs with embedded quantum wells was given by Gerace and Andreani over a decade ago[18]. Experimentally, PhCs and bound states in the continuum have been demonstrated using monolayer semiconductors (MS) placed on top of periodically patterned dielectric WGs[19; 20; 21]. PhCs were also made using pillars of hybrid organic-inorganic perovskites embedded in a homogeneous dielectric[22]. In the area of topologically non-trivial polaritons, propagating edge states protected by breaking of a pseudo-time-reversal symmetry were demonstrated using MS on a PhC WG[23]. Of particular interest is the potential to realize topological polariton states protected by true time-reversal symmetry breaking[24] by combining polarization splitting, photonic crystal bandgap and exciton Zeeman splitting due to external magnetic field.
So far, similar states have only been demonstrated in microcavities in the lasing regime, owing to a very small bandgap of 0.1 meV[9]. While modulated bandstructure is one key ingredient for future applications, it is equally important to achieve long polariton lifetimes and propagation distances. In MS and organics the lifetimes are short, which may occur due to inhomogeneous broadening of the exciton linewidth up to \(\sim\)10 meV. The longest reported polariton lifetimes are achieved in GaAs-based structures[25] where, owing to the high quality quantum wells, polariton linewidths can be 10-100 times smaller. Furthermore, a crucial feature of previous works is that the studied states are near zero in-plane momentum where coupling to freely propagating waves in the surroundings inevitably leads to high photonic losses. For example, in Dang et al.[22] even the simulated linewidth for passive materials is of order 16 meV, suggesting only a 40 fs photon lifetime. To achieve long polariton lifetimes it is critical that states are produced at high wavenumbers where total internal reflection (TIR) prevents radiative loss. For GaAs-based WGs lifetimes (propagation lengths) on the order of 10 ps (500 \(\upmu\)m) can be expected[10; 13]. Until now, however, strong bandstructure modulation was not demonstrated in GaAs-based polariton WGs, and polariton PhC states protected by TIR have not so far been demonstrated in any material system. Achieving the former is challenging since it relies on etching through a significant fraction of the core. However, etching through or too near the quantum wells leads to high losses associated with surface recombination effects. While previous works[17] have demonstrated the persistence of strong coupling in patterned GaAs-based WGs, the modulated region was spatially separated from the WG core leading to only a weak perturbation and no opening of a gap. In this work, we implement two-dimensional square lattice PhCs in GaAs-based WGs in the strong coupling regime. We achieve PhC band gaps near the Brillouin zone (BZ) edge where the states are protected by TIR, and show low loss propagation over the \(\sim 50\,\upmu\)m wide PhC for states outside the gap. The band gap, with width \(\sim\)10 meV (roughly an order of magnitude larger than in microcavity lattices [2; 3]), is the signature of the polariton bandstructure being strongly modulated by the periodic lattice. Finally, we demonstrate one of the useful features of polariton-based platforms by exploiting the excitonic content of the states to tune the gap width using an external magnetic field. The sample, illustrated in Fig. 1(a), is a planar WG into which holes are etched to form a square lattice PhC. The GaAs WG core layer has total thickness 295 nm and contains three 10 nm wide In\({}_{0.06}\)Ga\({}_{0.94}\)As quantum wells (QWs) separated by 10 nm GaAs barriers and placed at the peak of the transverse-electric (TE) polarised field in order to maximize the Rabi coupling. The upper quantum well is 137 nm below the surface, which allows etch depths up to 87 nm (\(\sim\)30% of the core thickness) while still leaving a 50 nm GaAs cap to protect the wells. The core is separated from the GaAs substrate by a 750 nm thick AlGaAs cladding layer with 90% average Al composition. An extended discussion of the design of the layer structure is given in the supplementary material.
The samples were grown by MBE and then patterned with a soft mask using electron beam lithography to form PhCs with a square lattice geometry, as shown in Figs. 1(a) and (b). The patterns were etched down into the planar structure using inductively coupled plasma etching with a chlorine/argon chemistry. Several samples were fabricated, with lattice constants \(a\) between 120-130 nm, hole etch depths \(h\) between 55-64 nm and hole diameters \(d\) between 47-66 nm. The ranges for fabrication were selected by calculating the PhC bandstructure using Lumerical® FDTD Solutions finite-difference time-domain software (see Supplementary Material for further details). Fig. 1(c) shows an example of the simulated bandstructure for photons (without coupling to the exciton) across the whole Brillouin zone (BZ). The polarisation of states in slab photonic crystals may be classified as TE-like or TM-like, with electric field mostly in or out of the WG plane respectively [26]. Only the TE-like modes couple strongly to the excitons in our QWs. The bands for both polarisations are shown in Fig. 1(c). No narrow linewidth states could be identified above the light cone (in the shaded region) or above the GaAs band gap (1.519 eV) since such states are either not guided or strongly absorbed respectively. In Fig. 1(c) the modes get broader with increasing energy above the band gap and we choose to plot the dispersion of the modes which have linewidth \(<\) 100 meV. There is a \(\sim\) 10 meV gap in the TE-like states around the X point, which we will examine in more detail momentarily. From X towards M the bands increase in frequency so that the band gap does not span all momentum states but exists only for modes propagating near X. A larger gap could be engineered by using a different lattice geometry such as honeycomb [24], but even this small gap serves to illustrate that polariton bandstructure can be modulated by many times the linewidth, and could support edge states where the bulk 2D lattice provides other desirable properties such as topological protection [24]. The bandstructure was calculated for a range of \(a\), \(d\) and \(h\) and close-to-optimal values were selected for fabrication such that the gap in the TE-like modes at the X point is near the exciton energy and so that the gap width was maximized without etching too close to the QWs. In experiment we will observe polariton rather than pure photon bands. Fig. 1(d) shows an example of the calculated polariton dispersion resulting from strong coupling between the TE-like mode and QW excitons, focussing on the region around the X point. The polariton band gap can be seen clearly. To enable us to experimentally study gap formation within the PhCs, grating couplers were fabricated on either side of the PhC in the \(z\) direction (Fig. 1(e)). These fold the waveguide modes into the radiative region, allowing light to be coupled in and out.

Figure 1: (a) Schematic of photonic crystal (PhC) structure in a slab waveguide. (b) Scanning electron microscope (SEM) image of holes etched into the surface of the waveguide showing the centre-to-centre distance \(a\). (c) Bandstructure of the photon modes of a PhC with \(a=126\) nm, \(d=47\) nm, \(h=64\) nm. The horizontal axis follows the triangular path \(\Gamma\)-X-M-\(\Gamma\) indicated on the inset schematic of the first BZ (top left). Vertical lines indicate the X and M points. Horizontal line indicates the GaAs bandgap. Central inset is a zoom of the region around the X point.
(d) Calculated polariton dispersion relation in the vicinity of the X point (\(k_{z}/(2\pi/a)=-0.5\)) for the PhC in (c). The thick lines correspond to the states which are visible in Fig. 2. (e) Optical microscope image of a PhC with grating couplers allowing light to be coupled in and out along the \(z\) direction and with etched trenches. The inset shows an angled SEM image of a cleaved etched grating. gion allowing light to be coupled in and out. On the other two sides of the PhC, trenches were etched (through the active region) for prospective studies of edge states, although we note that this is an entirely optional design feature. The side length of the PhCs corresponds to \(\sim\)400 periods. For optical measurements, the sample was held at low temperatures (\(\textlessapprox\) K) and excited by a cw laser at 637 nm focused to a \(\sim\)3 \(\upmu\)m spot on the PhC surface. This nonresonant excitation incoherently populates the polariton modes of the PhC WGs (which lie below the light cone) by multiple relaxation processes. The polaritons may then propagate out of the PhC to the grating where they are scattered out and recorded using a spectrometer. Spatial filtering optics were used to collect the emission from selected regions on the sample. In Fig. 2(a) we show typical examples of the emission collected from gratings adjacent to the PhCs, measured for a single set of devices and corresponding to a cut through the X point, along the \(\Gamma\)-X-\(\Gamma\) direction in momentum space. The heavy attenuation of PL intensity within particular energy windows directly results from the presence of band gaps in the PhC slabs. Polaritons at energies in the gaps cannot propagate out of the PhC to be detected at the grating. We stress that these gaps arise from the periodic potential created by the PhCs, and are qualitatively very different to the resonances induced by back-reflection of guided modes between pairs of gratings in ref. [12]. We see that as the PhC period \(a\) is made successively smaller, the gap (shown by the double-sided vertical arrow) moves upwards in energy, becoming smaller as it approaches the exciton resonance (dashed white line). The reduction in the gap size arises from reduction of the photonic fraction as the gap approaches the exciton resonance, confirming that the strong coupling is retained in the PhC region. This can also be seen from the anticrossing behaviour of the band gap edges in Fig. 2(b), which also includes results from devices with different hole profiles. This is in contrast to our simulations of the purely photonic structure (without strong coupling to an exciton) in which the gap size varies little across this range of periods. Since the normal mode (Rabi) splitting between the photonic and excitonic resonances is known, we can calculate the exciton fraction of each gap using \[|X|^{2}=\frac{1}{2}\left[1+\frac{E_{G}^{ph}-E_{X}}{\sqrt{(E_{G}^{ph}-E_{X})^{2 }+\hbar\Omega^{2}}}\right], \tag{1}\] where \(E_{G}^{ph}\) is the central gap energy of the bare photonic gap, calculated using \(E_{G}^{ph}=\Omega^{2}/4(E_{X}-E_{G}^{pol})+E_{G}^{pol}\) where \(E_{G}^{pol}\) is the central gap energy of the lower polariton branch. In the data shown in Fig. 2, the gap passes from almost fully photonic (\(|X|^{2}\) = 1%) to predominantly excitonic (\(|X|^{2}\) = 61%), as can be seen from Fig. 2(c). In other devices we have measured gaps with exciton fractions as high as 73%. 
We note that emission from states within \(\sim\)1.5 meV below the exciton line is heavily attenuated due to absorption which, along with the broadened polariton linewidths, places an upper bound on the exciton fraction of gaps which can be observed in our current samples. This may be improved somewhat by reducing the exciton inhomogeneous linewidth and/or increasing the Rabi splitting to increase \(|X|^{2}\) for polariton states further from the highly absorbing part of the exciton tail. We also note that within the gaps some weak emission is visible, implying that the reflectivity is below unity, which arises from the finite extent of the PhCs. As we show in Fig. 3, the extinction within the gap depends strongly on the length of photonic crystal (i.e. number of lattice periods) between the excitation spot and the edge of the PhC. Increasing the number of periods strongly increases the attenuation for energies in the gap. This is clear evidence that the observed gaps are due to reflection of propagating polaritons from the periodic PhC structure. We further note that outside the band gap there is little observable depen Figure 2: (a) Angle-resolved photoluminescence spectra corresponding to PhCs with different periodicities. The devices have holes with diameter d = 47 nm and etch depth h = 64 nm. (b) Measured and calculated positions of gaps (upper and lower band edges) against PhC period for devices with different hole profiles (different diameters d and etch depths h). The calculated positions are given by taking the gaps from photonic simulations and assuming a Rabi splitting of 9 meV with the exciton resonance. The error bars on the measured data points represent the standard deviation from measurements of different devices with the same hole profiles. (c) Exciton fraction at the center of the gap vs. periodicity for hole profiles corresponding to the solid curves in panel (b). dence on excitation spot distance. This demonstrates that the decay length of freely propagating polaritons in the photonic crystal (but outside the gap) is much larger than the 35 \(\upmu\)m length over which the spot was moved. This is consistent with the \(\sim\)500 \(\upmu\)m lengths in unpatterned polariton waveguides.[13; 10] In contrast to our system, the primary gaps studied in periodic potentials in Bragg microcavities are typically formed in the photonic part of the spectrum and thus have far smaller exciton fractions, which severely limits the ability to tune the gap position using external fields. If the exciton content is large, however, one may employ diverse methods to tune the energy of the gap including temperature, optical excitation and electric and magnetic fields[27]. In order to demonstrate the feasibility of tuning gaps in our devices, and also to show unambiguously that they are polaritonic in nature, we placed our sample in a magnetic field in Faraday geometry. The results are summarized in Fig. 4. The predominant effect is the diamagnetic (blue)shift of the exciton resonance, which reduces the exciton fraction of the gaps and hence increases their size. For the PhC with \(a\) = 125 nm, the gap size can be increased from 4.0 meV at B = 0 T to 5.4 meV at B = +9 T (Figs.4(a)-(c)). In the case of the PhC with \(a\) = 124 nm, the full gap is not visible at B = 0 T, since the upper gap energy lies in the heavily attenuated region below the exciton resonance. At higher magnetic fields however the gap becomes visible, reaching a size of 2.4 meV at B = +9 T (Figs.4(d)-(f)). 
We thus demonstrate that the polariton band structure can be tuned by varying the exciton content using magnetic field. It should be noted that to obtain the larger gap sizes requires reducing the exciton content (see Fig. 2(c)), which will also reduce the polariton nonlinear interaction strength. The tuning can be used as a lever to strike an optimal balance between these properties. For finite magnetic fields there is also a Zeeman splitting of the exciton, which reaches values exceeding 0.5 meV at the highest fields. Thus, the basic ingredients we have presented, namely, gaps in the excitonic part of the spectrum and an exciton Zeeman splitting are those described in ref. [24] as the criteria for quantum Hall type (chiral) edge states at the boundaries of the system. However, for true topological protection one needs the Zeeman splitting to exceed the size of the gap; this will require an enhancement of the exciton \(g\) factor which in our system could be achieved by varying the In composition or width of the quantum wells[28]. Alternatively one may think about employing semimagnetic quantum wells in a Te-based system[29] or using transition metal dichalcogenides where spin-dependent strong coupling to a photonic mode can create a giant effective Zeeman splitting exceeding 10 meV[30]. We note that even without a global band gap the system may support topological edge states with well defined momentum close to the X point. Losses due to scattering into bulk modes can be minimised by maximising the gap size using other lattice geometries such as hexagonal[24]. In summary, we have presented a platform to study strongly modulated exciton-polariton band structures using patterned slab waveguides in the strong coupling regime. We observe low loss propagating states outside the light cone and band gaps of order 10 meV. We have demonstrated that the gaps can be controlled both through the photonic component (varying the period of the crystal) or the excitonic component (external magnetic field). For future studies we envisage the patterning of photonic crystals with different lattice geometries featuring exotic dispersion relations[31; 32], as well as interfacing waveguides with other excitonic materials such as atomically thin semiconductors[21] and organic polymers[33]. Our system could thus offer a flexible and promising alternative to microcavity Figure 3: Measured angle-resolved PL spectra when the excitation spot (Exc.) is at the near (a) and far (b) edges of the PhC (\(a\)=126 nm) with respect to the grating from which light is collected (Col.). (c) PL spectra measured for different excitation positions (i.e. number of PhC periods between the excitation and detection spots). Figure 4: (a)-(c) Magnetic field dependence of spectra for PhC with \(a\) = 125 nm. The angle-resolved spectra measured at 0 T and 9 T are shown in (a) and (c) respectively, and contour plots of the angle-integrated spectrum are shown in (b). (d)-(f) Same as (a)-(c) for PhC with \(a\) = 124 nm. The colour scale is the same as that of Fig. 3. -based polariton lattices for the study of topological states and implementation of optoelectronic devices. ## Supplementary material Further details of the PhC structure and simulations used in the design process are given in the associated supplementary material file. ## Acknowledgments The work was supported by UK EPSRC Grants EP/N031776/1 and EP/R04385X/1 and by the Russian Science Foundation (Project No. 19-72-20120). 
## The Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2307.09329
Towards a performance analysis on pre-trained Visual Question Answering models for autonomous driving
This short paper presents a preliminary analysis of three popular Visual Question Answering (VQA) models, namely ViLBERT, ViLT, and LXMERT, in the context of answering questions relating to driving scenarios. The performance of these models is evaluated by comparing the similarity of responses to reference answers provided by computer vision experts. Model selection is predicated on the analysis of transformer utilization in multimodal architectures. The results indicate that models incorporating cross-modal attention and late fusion techniques exhibit promising potential for generating improved answers within a driving perspective. This initial analysis serves as a launchpad for a forthcoming comprehensive comparative study involving nine VQA models and sets the scene for further investigations into the effectiveness of VQA model queries in self-driving scenarios. Supplementary material is available at https://github.com/KaavyaRekanar/Towards-a-performance-analysis-on-pre-trained-VQA-models-for-autonomous-driving.
Kaavya Rekanar, Ciarán Eising, Ganesh Sistu, Martin Hayes
2023-07-18T15:11:40Z
http://arxiv.org/abs/2307.09329v2
Towards a performance analysis on pre-trained Visual Question Answering models for autonomous driving ###### Abstract This short paper presents a preliminary analysis of three popular Visual Question Answering (VQA) models, namely ViLBERT, ViLT, and LXMERT, in the context of answering questions relating to driving scenarios. The performance of these models is evaluated by comparing the similarity of responses to reference answers provided by computer vision experts. Model selection is predicated on the analysis of transformer utilization in multimodal architectures. The results indicate that models incorporating cross-modal attention and late fusion techniques exhibit promising potential for generating improved answers within a driving perspective. This initial analysis serves as a launchpad for a forthcoming comprehensive comparative study involving nine VQA models and sets the scene for further investigations into the effectiveness of VQA model queries in self-driving scenarios. Supplementary material is available on the Github page. **Keywords:** Visual Question Answering, Transformers, Performance Analysis, Multi-modal Models ## 1 Introduction Visual Question Answering (VQA) is the process of generating natural language responses to open-ended questions by leveraging visual information derived from an image. This task encompasses the generation of textual answers to queries expressed in natural language. Visual question answering (VQA) holds significant importance for self-driving cars due to the requirement for enhanced perception and decision-making in autonomous vehicles. By incorporating VQA systems into the framework of self-driving cars, key benefits like Contextual Understanding, Enhanced Human-Machine Interaction, Adaptive Decision-Making, and Safety and Error Handling can be realized [20]. By integrating VQA capabilities, autonomous vehicles can enhance their perception, communication, and decision-making processes, ultimately leading to safer and more efficient driving experiences. This paper provides an introductory overview of the analysis conducted on three select models, focusing specifically on their performance in the domain of Visual Question Answering (VQA) with a strict focus on driving scenarios. It is part of a research study aimed at identifying the most effective VQA model for answering questions related to driving. Although there are review papers available on VQA models [23], there is a notable research gap, as none of these studies has conducted a model evaluation in the context of common driving scenarios. A survey has been conducted to observe how pretrained available models respond to questions and how similar or different the answers are when compared to humans. The comparative analysis done has led us to the result that the available models are not as suitable for questions in a driving context as they are in a general scenario and this is a research gap that could be exploited. Additionally, the authors observed that there has not been a thorough performance analysis conducted on this topic. ## 2 Background Study and Related Work VQA models incorporate multimodal architectures that utilize transformers to handle the fusion of visual and textual modalities. Transformers enable contextual understanding and information exchange between the visual and textual components of the input, facilitating more accurate and comprehensive question answering [26]. Therefore, multimodal models employ transformers to process and fuse information from different modalities. 
Within the domain of VQA, multimodal models leverage transformers to handle the integration of visual and textual information, allowing for enhanced understanding and improved performance in answering questions based on visual inputs. Transformers in multimodal models with vision and NLP refer to the application of transformer-based architectures in tasks that involve both visual and textual information. Transformers have demonstrated great success in natural language processing (NLP) tasks, thanks to their ability to capture long-range dependencies and model sequential data effectively. However, the integration of visual information poses unique challenges, and incorporating it into transformers allows for more powerful multimodal models. Traditionally, multimodal models combined visual and textual information using separate pathways, such as using convolutional neural networks (CNNs) for image processing and recurrent neural networks (RNNs) for language processing. Transformers offer an alternative approach that enables joint processing of both modalities within a unified architecture. Transformers are utilized in multimodal models for early fusion, late fusion, and cross-modal attention according to [21]. Early fusion involves the simultaneous processing of modalities to learn joint representations. Late fusion includes separate processing of modalities, followed by fusion to capture interdependencies. Cross-modal attention enables information exchange and alignment between modalities, enhancing multimodal understanding and integration. More details on how transformers are utilized in each of these types of models can be read in [1] and [21]. ## 3 Methodology In this study, we collected a comprehensive corpus of 78 research papers on Visual Question Answering (VQA)1 From this collection, we carefully selected nine models based on specific criteria for our analysis. These models were evaluated for user interface quality, code replication ease, and compatibility with our pretrained models. The initial experiment aimed to enhance the models' performance using the German Traffic Sign Recognition Benchmark (GTSRB) dataset, explicitly focusing on signboard interpretation [16]. However, the results revealed limited comprehension of driving-related matters by the models. This led us to conduct an additional experiment with computer vision experts, presenting them with contextually minimal images from our dataset, mirroring the approach used with the pre-trained models. Footnote 1: Full list of papers available at our Github page. For our experiment comparing human responses to multimodal models, we selected three models solely based on their utilization of transformers in their architectures from the previously mentioned nine models. A brief introduction about the three models chosen for the analysis: * Vision and Language BERT (ViBERT)- Early Fusion: extends BERT with a co-attention mechanism, integrating vision-attended language features into visual representations. It enables joint reasoning about text and images for visual grounding [14]. * Vision-and-Language Transformer (ViLT)- Cross-Modal Attention: aligns visual and textual features and generates joint representations through a visual encoder and a language encoder [15]. * Learning Cross-Modality Encoder Representations from Transformers (LXMERT)- Late Fusion: incorporates multi-level interactions between vision and language by employing cross-attention mechanisms. 
It captures the interplay between different modalities and generates accurate answers to the posed questions [13]. The authors conducted a survey consisting of two specific questions, namely "What are the contents of the image?" and "What should the driver do?", targeting a carefully chosen set of images all pertaining to driving scenarios. These images were selected from the MS COCO dataset. The survey was distributed among a cohort of ten Computer Vision Experts who provided responses to the questions based on the available options and the accompanying images. The answer that received the most votes was selected as the ground truth. The comprehensive outcomes of this survey are presented in Figure 1. The rationale behind asking both subjective and objective questions, namely "What are the contents of the image?" and "What should the driver do?", is to assess the model's ability to comprehend and respond to different types of questions in the context of visual information. Subjective questions, like "What are the contents of the image?", require the model to understand and interpret the visual content and provide a descriptive answer. These questions evaluate the model's capability to recognize objects, scenes, and other relevant visual elements depicted in the image. Objective questions, like "What should the driver do?", require the model to provide a specific action or response based on the given visual information. These questions assess the model's understanding of driving scenarios and ability to reason about the appropriate course of action. By including both subjective and objective questions, the experiment aims to evaluate different aspects of the model's performance. Subjective questions focus on the model's visual comprehension and scene understanding abilities, while objective questions assess its ability to provide contextually appropriate and practical responses in a driving context. This comprehensive evaluation helps to gauge the model's overall proficiency in visual question answering and its potential utility in real-world applications such as self-driving cars. The ground truth for the respective questions was evaluated against the answers generated by the pre-trained models. Figure 1 provides a visual representation of the model's performance in addressing the posed questions, allowing for an assessment of their effectiveness based on the ground truth. The rationale behind comparing the answers of three Visual Question Answering (VQA) models with human answers and using colour coding (green for correct, orange for wrong, yellow for partially correct) is to visually highlight the performance and discrepancies between the models and human responses as done in [10]. This visual representation allows for a quick and intuitive understanding of the accuracy and effectiveness of the models in comparison to human performance. ## 4 Results and Analysis Figure 1 concisely summarises the results from the experiment conducted on the selected models. ### Analysis In summary, the analysis of the evaluated models to date yields the following observations: ViLBERT demonstrates a lack of comprehension regarding the question "What are the contents of the image?", as it consistently provided the answer "nothing". However, when posed with the question "What are the objects of the image?", ViLBERT manages to produce answers, albeit quite often incorrect or of limited utility for the application at hand. 
Consequently, ViLBERT is not an optimal choice for fine-tuning within the context of self-driving scenarios. ViLT exhibits a certain level of capability in generating answers based on the provided images. Notably, when addressing the question "What should the driver do?", ViLT frequently responds with "Stop," as depicted in Figure 1. However, upon further investigation, it becomes apparent that the model does perform well in terms of question comprehension, and its object identification performance surpasses that of ViLBERT. This finding suggests that ViLT holds promise for fine-tuning with the GTSRB dataset, enabling it to learn how to effectively answer questions within a driving context. The LXMERT model demonstrates better performance in answering questions within a driving context. Although the model exhibits excellent object identification capabilities, the accuracy of its answers requires refinement. The authors noted that LXMERT's object identification algorithm effectively recognizes objects in various scenes, and can offer accurate scene descriptions with the important exception of accident-related images. This observation implies there exists potential to enhancing LXMERT's performance in driving scenarios through fine-tuning with the GTSRB dataset, thereby improving its performance in driving-specific use cases. Figure 1: Comparison of Responses: Computer Vision Experts vs. Selected Models Conclusions and Future Work This paper has reviewed the performance of three VQA models, namely ViLBERT, ViLT, and LXMERT, from a driver assistance perspective, focusing on model efficacy in terms of similarity to expert responses for posed questions. Based on the analysis presented in this paper, it is inferred that both ViLT and LXMERT exhibit promising performance in this application space. However, despite the advancements observed in these models, further research and development are required to address the specific challenges associated with driver assistance. The ability to accurately comprehend and respond to user queries in real-time scenarios remains a crucial aspect of enhancing the interaction between drivers and vehicles. Achieving a VQA model that can effectively interpret diverse driver inquiries, provide accurate answers, and adapt to dynamic driving conditions is essential for optimizing user-car interaction. Moving forward, the work will expand its scope by conducting a more comprehensive performance analysis that considers six additional selected models, including basic and fine-tuned pretrained models using the GTSRB dataset. The ultimate objective is to identify a preferred model that can be extensively trained with an expanded dataset that encompasses driving scenarios with subjective reference responses. Future research will focus on providing better codified contextual information to both experts and models, including camera location, velocity, acceleration, handbrake, and steering inputs, to enable more informed assessment of performance and to enable decisions on the next highest priority action to be taken with greater confidence. ## Acknowledgments This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6049. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2307.11566
Discovery and characterisation of two Neptune-mass planets orbiting HD 212729 with TESS
We report the discovery of two exoplanets orbiting around HD 212729 (TOI\,1052, TIC 317060587), a $T_{\rm eff}=6146$K star with V=9.51 observed by TESS in Sectors 1 and 13. One exoplanet, TOI-1052b, is Neptune-mass and transits the star, and an additional planet TOI-1052c is observed in radial velocities but not seen to transit. We confirm the planetary nature of TOI-1052b using precise radial velocity observations from HARPS and determined its parameters in a joint RV and photometry analysis. TOI-1052b has a radius of $2.87^{+0.29}_{-0.24}$ R$_{\oplus}$, a mass of $16.9\pm 1.7$ M$_{\oplus}$, and an orbital period of 9.14 days. TOI-1052c does not show any transits in the TESS data, and has a minimum mass of $34.3^{+4.1}_{-3.7}$ M$_{\oplus}$ and an orbital period of 35.8 days, placing it just interior to the 4:1 mean motion resonance. Both planets are best fit by relatively high but only marginally significant eccentricities of $0.18^{+0.09}_{-0.07}$ for planet b and $0.24^{+0.09}_{-0.08}$ for planet c. We perform a dynamical analysis and internal structure model of the planets as well as deriving stellar parameters and chemical abundances. The mean density of TOI-1052b is $3.9^{+1.7}_{-1.3}$ g cm$^{-3}$ consistent with an internal structure similar to Neptune. A nearby star is observed in Gaia DR3 with the same distance and proper motion as TOI-1052, at a sky projected separation of ~1500AU, making this a potential wide binary star system.
David J. Armstrong, Ares Osborn, Vardan Adibekyan, Elisa Delgado-Mena, Saeed Hojjatpanah, Steve B. Howell, Sergio Hoyer, Henrik Knierim, Sérgio G. Sousa, Keivan G. Stassun, Dimitri Veras, David R. Anderson, Daniel Bayliss, François Bouchy, Christopher J. Burke, Jessie L. Christiansen, Xavier Dumusque, Marcelo Aron Fetzner Keniger, Andreas Hadjigeorghiou, Faith Hawthorn, Ravit Helled, Jon M. Jenkins, David W. Latham, Jorge Lillo-Box, Louise D. Nielsen, Hugh P. Osborn, José Rodrigues, David Rodriguez, Nuno C. Santos, Sara Seager, Paul A. Strøm, Guillermo Torres, Joseph D. Twicken, Stephane Udry, Peter J. Wheatley, Joshua N. Winn
2023-07-21T13:17:53Z
http://arxiv.org/abs/2307.11566v1
# Discovery and characterisation of two Neptune-mass planets orbiting HD 212729 with TESS ###### Abstract We report the discovery of two exoplanets orbiting around HD 212729 (TOI 1052, TIC 317060587), a \(T_{\rm eff}=6146\)K star with V=9.51 observed by _TESS_ in Sectors 1 and 13. One exoplanet, TOI-1052b, is Neptune-mass and transits the star, and an additional planet TOI-1052c is observed in radial velocities but not seen to transit. We confirm the planetary nature of TOI-1052b using precise radial velocity observations from HARPS and determined its parameters in a joint RV and photometry analysis. TOI-1052b has a radius of \(2.87^{+0.29}_{-0.24}\) R\({}_{\rm b}\), a mass of \(16.9\pm 1.7\) M\({}_{\rm b}\), and an orbital period of 9.14 days. TOI-1052c does not show any transits in the _TESS_ data, and has a minimum mass of \(34.3^{+4.1}_{-3.7}\) M\({}_{\rm b}\) and an orbital period of 35.8 days, placing it just interior to the 4:1 mean motion resonance. Both planets are best fit by relatively high but only marginally significant eccentricities of \(0.18^{+0.09}_{-0.07}\) for planet b and \(0.24^{+0.09}_{-0.08}\) for planet c. We perform a dynamical analysis and internal structure model of the planets as well as deriving stellar parameters and chemical abundances. The mean density of TOI-1052b is \(3.9^{+1.7}_{-1.3}\) g cm\({}^{-3}\) consistent with an internal structure similar to Neptune. A nearby star is observed in Gaia DR3 with the same distance and proper motion as TOI-1052, at a sky projected separation of -1500AU, making this a potential wide binary star system. keywords: exoplanets - planets and satellites: detection - planets and satellites: individual: (TOI-1052, TIC 317060587) ## 1 Introduction The Transiting Exoplanet Survey Satellite (_TESS_), launched in April 2018, has discovered more than 6000 exoplanet candidates and more than 300 confirmed exoplanets to date (Ricker et al., 2015), (see NASA Exoplanet Archive, Akeson et al., 2013). _TESS_ has observed nearly the entire sky, in a series of 27-day sectors, with many stars observed for two or more sectors. This large number of candidates is made up of both real exoplanets and false positives. High resolution and stable spectroscopy for radial velocity measurements is essential not only to confirm the candidate exoplanets but also to measure the mass of the planet precisely. Knowing both mass and radius of a planet provides us an estimation of the bulk density, internal structure and planetary composition. High resolution spectroscopy allows us to fully derive stellar parameters and chemical abundances allowing studies of planetary origin, formation and evolution (Alibert et al., 2010, 2015). The discovery of thousands of exoplanets over recent years has revealed a diverse population of exoplanets ranging from Earth-mass to Jupiter-mass, which also can be categorised in various sub-types including hot-Jupiters and super-Earths. A less well known group is the Neptunian exoplanets, raising questions about their overall occurrence, and an apparent lack of short period Neptune-mass exoplanets - the so-called Neptunian desert (Szabob & Kiss, 2011; Mazeh et al., 2016) which could be due to tidal disruption and photoevaporation (Beauge & Nesvorny, 2013; Mazeh et al., 2016), combined with potential migration of evaporating planets (Boue et al., 2012). 
Different programs including our NCORES and NOMADS HARPS large programs aim at a more detailed investigation of Neptunian planets, and have successfully discovered several such exoplanets (e.g. Nielsen et al., 2020; Otegi et al., 2021; Hoyer et al., 2021). The number of discoveries in this parameter space remains low and more exoplanets are needed to better understand the population. In this paper we report the discovery and confirmation of TOI-1052 b, a Neptune-like planet orbiting HD 212729, a G0 high proper motion southern star with a visual magnitude of 9.51 (_TESS_ mag of 9.02) and a non-transiting additional planet. We use high precision radial velocity (RV) measurements from the High Accuracy Radial velocity Planet Searcher spectrograph (HARPS, Pepe et al., 2002) mounted at the ESO La Silla 3.6 m telescope, in the framework of the NCORES program (e.g. Nielsen et al., 2020; Armstrong et al., 2020). Simultaneous analysis of the HARPS RV measurements using high resolution spectroscopy and _TESS_ photometry enable us to confirm the nature of the planets as well as determine the stellar parameters of the host star. The paper is organised as follows: the observations and data of the system are described in Sect. 2. The stellar parameters and signal analysis are described in Sect. 3. In Sect. 4 we describe the joint model and the discussion is presented in Sect. 5. ## 2 Observations ### _Tess_ photometry _TESS_ observed TOI 1052 (TIC 317060587) in Sectors 1 and 13, obtaining data from 25 July to 22 August 2018 and from 19 June to 17 July 2019. The data were reduced in the TESS Science Processing Operations Center (SPOC, Jenkins et al., 2016) pipeline. This pipeline is adapted from Kepler mission pipeline at NASA Ames Research Center (Jenkins et al., 2010). Transit events with 9.1 d orbital period were detected in the SPOC search of the two-minute cadence light curve for Sector 1 on 28 Aug 2018 and for Sector 13 on 27 July 2019 with an adaptive, noise-compensating matched filter (Jenkins, 2002; Jenkins et al., 2010, 2020). A limb-darkened transit model was fitted (Li et al., 2019) and a suite of diagnostic tests were conducted to assess the planetary nature of the signal (Twicken et al., 2018). The TESS Science Office (TSO) reviewed the SPOC data validation reports and issued an alert for TOI 1052.01 following the Sector 13 transit search on 16 August 2019 (Guerrero et al., 2021). In a search of the combined data from both sectors, the SPOC re \begin{table} \begin{tabular}{l c c} \hline \hline Parameter & Value & Source \\ \hline \hline Identifying Information & & \\ Identifier & HD 212729 & \\ TOI & TOI-1052 & _TESS_ \\ TIC ID & 317060587 & _TESS_ \\ 2MASS ID & 22300272-7538476 & 2MASS \\ Gaia ID & 6357524189130820992 & Gaia DR3 \\ \multicolumn{4}{l}{Astrometric Parameters} \\ R.A. (J2000, deg) & 337.51023851341 & Gaia DR3 \\ Dec (J2000, deg) & -75.64656089247 & Gaia DR3 \\ Parallax (mas) & 7.74 \(\pm\)0.01 & Gaia DR3 \\ Distance (pc) & 128.7\(\pm\)0.2 & Gaia DR3 \\ \multicolumn{4}{l}{Photometric Parameters} \\ \hline B & 10.09 \(\pm\) 0.04 & Tycho \\ V & 9.51 \(\pm\) 0.02 & Tycho \\ T & 9.02 \(\pm\) 0.01 & _TESS_ \\ G & 9.40 \(\pm\) 0.01 & Gaia DR3 \\ J & 8.56 \(\pm\) 0.02 & 2MASS \\ H & 8.25 \(\pm\) 0.04 & 2MASS \\ K & 8.25\(\pm\) 0.03 & 2MASS \\ W1 & 8.160 \(\pm\) 0.023 & WISE \\ W2 & 8.255 \(\pm\) 0.021 & WISE \\ W3 & 8.190 \(\pm\) 0.020 & WISE \\ W4 & 8.119 \(\pm\) 0.174 & WISE \\ \multicolumn{4}{l}{Abundances} \\ \hline \([\)Fe/H\(]\) (dex) & 0.140 \(\pm\) 0.013 & Sec. 
3 \\ \([\)MgI/H\(]\) (dex) & 0.08 \(\pm\) 0.04 & Sec. 3 \\ \([\)AlI/H\(]\) (dex) & 0.10 \(\pm\) 0.02 & Sec. 3 \\ \([\)SiI/H\(]\) (dex) & 0.11 \(\pm\) 0.04 & Sec. 3 \\ \([\)TiI/H\(]\) (dex) & 0.12 \(\pm\) 0.03 & Sec. 3 \\ \([\)NiI/H\(]\) (dex) & 0.13 \(\pm\) 0.03 & Sec. 3 \\ \multicolumn{4}{l}{Bulk Parameters} \\ \hline Mass (\(M_{\odot}\)) & 1.204 \(\pm\) 0.025 & Sec. 3 (PARAM) \\ Radius (\(R_{\odot}\)) & 1.264 \(\pm\) 0.033 & Sec. 3 (PARAM) \\ \(T_{\rm eff}\) (K) & 6146 \(\pm\) 62 & Sec. 3 \\ log \(g\) (cm s\({}^{-2}\)) & 4.30 \(\pm\) 0.02 & Sec. 3 (Gaia) \\ log \(g\) (cm s\({}^{-2}\)) & 4.39 \(\pm\) 0.11 & Sec. 3 (spec) \\ \(v_{\rm mic}\) (km s\({}^{-1}\)) & 1.28 \(\pm\) 0.02 & Sec. 3 \\ \(v\sin i\) (km s\({}^{-1}\)) & 5.0 \(\pm\) 0.9 & Sec. 3 \\ P\({}_{rot}/\sin i\) (d) & 12.8 \(\pm\) 2.3 & Sec. 3 \\ Age (Gyrs) & 2.3 \(\pm\) 1.0 & Sec. 3 (PARAM) \\ mean \(log(R_{HK}^{*})\) & -5.32 \(\pm\) 0.02 & HARPS \\ \hline \end{tabular} Sources: _TESS_ (Stassun et al., 2019);2MASS (Skrutskie et al., 2006);Tycho (Høg et al., 2000);WISE (Wright et al., 2010); and Gaia Collaboration et al. (2016, 2022) \end{table} Table 1: Stellar parameters. ported a signal-to-noise ratio (SNR) of 10.62 and transit depth value of 358.4 + 33.7 parts-per-million (ppm) for the 9.14 d period transit event. The transit signal passed all diagnostic tests, and the source was localized within 2.12 +/- 4.55" of the target star. We used the publicly available Presearch Data Conditioning Simple Aperture Photometry (Twicken et al., 2010; Stumpe et al., 2012, 2014; Smith et al., 2012, PDC-SAP) light curves provided by the SPOC for the transit modeling. Fig. 1 shows the 2 min cadence _TESS_ light curve, and Fig. 2 shows the phase-folded curve for TOI 1052b. We also identified the 9.14 day transit event independently using the Transit Least Squares (TLS) algorithm Hippke & Heller (2019) with a Signal detection efficiency (SDE) of 19.67. No further significant periodic signal was detected in the lightcurve. Following Aller et al. (2020) we searched for sources of flux contamination by over-plotting the GAIA DR3 catalogue to the TESS Target Pixel Files (TPF), shown in Fig. 3. According to GAIA DR3 (Gaia Collaboration et al., 2022), one star exists inside the TESS aperture in addition to TOI-1052 (Gaia DR3 6357524189130821376), with a magnitude contrast in the Gaia pass-band of 5.38, leading to negligible dilution of the transit. The nearby star is separated from TOI-1052 by 11.51", is at a distance of 128.9 pc consistent with the distance to TOI-1052, and has similar proper motion to TOI-1052 as shown by the proper motion vectors in Fig. 3. As such, the stars potentially form a bound system with a projected sky separation of -1500 AU. The secondary star has a temperature of 3600 K derived from the Gaia passbands (Fouesneau et al., 2022). ### High-resolution imaging TOI-1052 was observed on 2021 July 07 UT using the Zorro speckle instrument on the Gemini South 8-m telescope (Scott et al., 2021; Howell & Furlan, 2022). Zorro provides simultaneous speckle imaging in two bands (562 nm and 832 nm) with output data products including a reconstructed image with robust contrast limits on companion detections. Five sets of \(1000\times 0.06\) s images were obtained at 832 nm only and processed in the standard reduction pipeline (see Howell et al., 2011). TOI-1052 was found to have no close companions within the angular and \(5\sigma\) contrast limits (5-7 magnitudes below the target star) achieved by the observations (see Fig. 4). 
The angular limits from the 8-m Gemini telescope range from the diffraction limit (20 mas) out to 1.2". At the distance of TOI-1052 (d=129.8 pc) these angular limits correspond to spatial limits of 2.6 to 155.8 AU. ### HARPS follow-up We collected 53 HARPS high-resolution spectra of TOI-1052 in three observation programs, between 2021-05-24 and 2021-09-22. The spectrograph is mounted at the ESO 3.6m telescope at La Silla Observatory, Chile (Mayor et al., 2003) and optimised to measure high precision RVs. The observations were carried out as part of the NCORES large program (46 obs, ID 1102.C-0249, PI: Armstrong), with supplementary observations from the NGTS-HARPS Program (5 obs, ID 0105.C-0773(A), PI: Wheatley) and the Small planets inside and out program (\(2\) obs, ID: 1106.C-0597(A), PI: Gandolfi). We used the HARPS Data Reduction Software (DRS) to reduce the data, using a G0 template in order to measure RVs using a cross-correlation function (CCF) (Pepe et al., 2002; Baranne et al., 1996). The spectrum signal-to-noise ratio (SNR) is approx. 40 per pixel leading to a mean photon-noise uncertainty of 2.06 m s\({}^{-1}\). The DRS was used to measure the full width half maximum (FWHM), the line bisector, and the contrast of the CCF, as well as several activity indicators. The mean \(log(R^{\prime}_{HK})\) of the star is \(-5.32\pm 0.02\) implying a relatively low magnetic activity level. The full RV timeseries is shown in Fig. 5. ## 3 Host star fundamental parameters ### Spectroscopic Analysis The stellar spectroscopic parameters (\(T_{\rm eff}\), log,\(g\), microturbulence, [Fe/H]) were estimated using the ARES+MOOG methodology. The methodology is described in detail in Sousa et al. (2021); Sousa (2014); Santos et al. (2013). To consistently measure the equivalent widths (EW) we used the latest version of ARES 1(Sousa et al., 2007, 2015). The list of iron lines is the same as the one presented in Sousa et al. (2008). For this we used a co-added HARPS spectrum of TOI-1052. In this analysis we use a minimization process to find the ionization and excitation equilibrium to converge on the best set of spectroscopic parameters. This process makes use of a grid of Kurucz model atmospheres (Kurucz, 1993) and the radiative transfer code MOOG (Sneden, 1973). We also derived a more accurate trigonometric surface gravity using recent GAIA data following the same procedure as described in Sousa et al. (2021) which provided a consistent value when compared with the spectroscopic surface gravity. The resulting spectroscopic parameters are given in Table 1. The derived temperature of \(6146\pm 62\)K is indicative of a late F star as opposed to the G0 type specified in the literature (Houk & Cowley, 1975), but not different at high enough confidence for us to reclassify the spectral type. Footnote 1: The latest version, ARES v2, can be downloaded at [https://github.com/soussaga/ARES](https://github.com/soussaga/ARES) The abundances of the following elements were also derived using the same tools and models as for the stellar spectroscopic parameters: Mg, Al, Ti, Si, and Ni (detailed in e.g. Adibekyan et al., 2012, 2015), neutron capture elements (used later to obtain ages) as explained in Delgado Mena et al. (2017), and C and O (following Bertran de Lis et al., 2015; Delgado Mena et al., 2021). Although the equivalent widths (EWs) of the spectral lines were automatically measured with ARES, we performed careful visual inspection of the EWs measurements. 
### Stellar Mass, Radius and Age The stellar mass, radius and age were estimated from the spectroscopically derived parameters using the PARAM 1.3 web-interface2(da Silva et al., 2006), leading to \(R_{\star}=1.264\pm 0.033R_{\odot}\), \(M_{\star}=1.204\pm 0.025M_{\odot}\), and Age \(=2.3\pm 1.0\) Gyr. As an alternative we also estimated the stellar mass from the PARAM 1.3 values using the calibration presented in Torres et al. (2010a) which provided a consistent result (\(M_{\star,\rm Torres}=1.19\pm 0.03M_{\odot}\)). Footnote 2: [http://stev.oapd.inaf.it/cgi-bin/param_1.3](http://stev.oapd.inaf.it/cgi-bin/param_1.3) As an independent determination of the basic stellar parameters, we also performed an analysis of the broadband spectral energy distribution (SED) of the star together with the _Gaia_ EDR3 parallax (with no systematic offset applied; see, e.g., Stassun & Torres, 2021), in order to determine an empirical measurement of the stellar radius, following the procedures described in Stassun & Torres (2016); Stassun et al. (2017, 2018). We pulled the \(B_{T}v_{T}\) magnitudes from _Tycho-2_, the \(JHK_{S}\) magnitudes from _2MASS_, the W1-W4 magnitudes from _WISE_, and the \(GG_{\rm BP}G_{\rm BP}\) magnitudes from _Gaia_. Together, the available photometry spans the stellar SED over the wavelength range 0.4-22 \(\mu\)m. We performed a fit using Kurucz stellar atmosphere models, with the main parameters being the effective temperature (\(T_{\rm eff}\)), surface gravity (\(\log g\)), and metallicity ([Fe/H]), for which we adopted the spectroscopically determined values. The remaining free parameter is the extinction \(A_{V}\), which we limited to the maximum line-of-sight value from the Galactic dust maps of Schlegel et al. (1998). The resulting fit has a reduced \(\chi^{2}\) of 1.0 and best fit \(A_{V}=0.04\pm 0.04\). Integrating the model SED gives the bolometric flux at Earth, \(F_{\rm bol}=4.123\pm 0.096\times 10^{-9}\) erg s\({}^{-1}\) cm\({}^{-2}\). Taking the \(F_{\rm bol}\) and \(T_{\rm eff}\) together with the _Gaia_ parallax, gives the stellar radius, \(R_{\star}=1.293\pm 0.030\) R\({}_{\odot}\). In addition, we can again estimate the stellar mass from the empirical relations of Torres et al. (2010), giving \(M_{\star}=1.20\pm 0.07\) M\({}_{\odot}\). All of our methods of stellar parameter estimation produce consistent results. We adopt the PARAM 1.3 values going forwards, and these are listed in Table 1. ### Rotational period and age Our PARAM fit led to an estimated isochrone age of \(=2.3\pm 1.0\) Gyr. We are also able to estimate the stellar age via the chemical clocks method (see Delgado Mena et al., 2019). The ages estimated from different chemical clocks (together with the Teff and [Fe/H]) are listed in Table 2, giving a weighted average of \(1.9\pm 0.3\) Gyr consistent with the PARAM age. This small error bar just reflects the good agreement between the ages obtained from different chemical clocks. We conservatively adopt the PARAM age with its larger error bar, given the uncertainties associated with stellar age estimation. We do not find any evidence of periodic variability indicative of rotation in the _TESS_ lightcurves. Through the FWHM of the HARPS spectra CCF we are able to estimate the projected rotation velocity \(v\sin i\) for the star. The mean FWHM across the spectra is 9.12 kms\({}^{-1}\). Using a calibration similar to the one presented in Santos et al. (2002); Maldonado et al. (2017); Hojjatpanah et al. 
(2019, and references therein) this FWHM implies a \(v\sin i\) of \(5.63\pm 0.5\) kms\({}^{-1}\). We also re-derived the \(v\sin i\) by performing spectral synthesis with MOOG on 36 isolated iron lines and by fixing all the stellar parameters, macroturbulent velocity, and limb-darkening coefficient (Costa Silva et al., 2020), leading to a consistent value of \(\sin i=5.0\pm 0.9\) km/s, which we adopt. The linear limb-darkening coefficient (0.7) was determined using the ExoCTK package (Bourque et al., 2021) using the determined stellar parameters. The macroturbulent velocity (4.4 km/s) was determined using the temperature and gravity dependent empirical formula from Doyle et al. (2014). We estimated the (projected) rotation period directly via the spectroscopic \(v\sin i\) and the \(R_{\star}\) determined above, which gives \(P_{\rm rot}/\sin i=12.8\pm 2.3\) d. Assuming the stellar orbital inclination is \(i\approx 90^{\circ}\), then this would represent approximately the true rotation period. ### Signal identification We computed the \(l_{1}\) periodogram (e.g., Hara et al., 2017, 2020) to find periodicities in the RV data. The \(l_{1}\) periodogram uses the theory of compressed sensing adapted for handling correlated noise to analyze the radial velocity without the estimation of the frequency iteratively, see Hara et al. (2017, 2020) for more information. A fundamental difference of the \(l_{1}\) periodogram over the typically used Figure 1: Full _TESS_ PDCSAP lightcurve of TOI-1052 at 2-minute cadence with the best fit model overplotted in red. A typical error bar is shown in the top left. Bimmed datapoints of width 0.7d are shown. Figure 2: Fig. 1 data phase-folded on the best fitting period for TOI-1052 b with the best fit model overplotted. Bimmed datapoints of width 0.002 in phase are shown. Lomb-Scargle (Lomb, 1976; Scargle, 1982; VanderPlas, 2018) is that all possible frequencies are tested simultaneously. This method reduces aliases in the periodogram. Fig. 6 shows two significant signals, considering the model noise with a 1.5 m/s jitter noise, consistent with the eventual jitter found by our best fit model in Table 4. Both the 9.14 day transiting planet period and an additional 36.6 day period were found to be significant with a False Alarm Probability (FAP) \(<\) 1.0 % for the 9.14 day and 1.2% for the 36.6 day, as opposed to the other shown peaks which have FAP \(>\) 99%. Lomb-Scargle periodograms of the RVs with and without the planet signals, FWHM, bisector span (BIS), log RHK and CCF contrast, calculated with astropy(Astropy Collaboration et al., 2022), are shown in Fig. 7, to investigate the planet peaks further and consider a potential activity source for the significant signals. No significant power is found at the 9 day or 36 day periods in any of the indicators. Fig. 7 also demonstrates that two periodic components are required to model the RVs, with no further periodic signals found once the two planets are removed. Note the the initially most significant peak seen in the RVs in the Lomb-Scargle periodogram is at 22d, which is seen with low significance in the \(l_{1}\) periodogram. The 22d peak is an artefact arising from both the 9.14 day and 36 day planet peaks and vanishes when both planets are removed. Similarly, the 6d signal seen in both periodograms is an artefact of the planet b peak. Given the robust detection of the transiting planet in the radial velocities, we are able to confirm the known planetary candidate as TOI-1052 b. 
Given that there is no sign of stellar activity in any indicator at the 36.6 day additional period, and this period does not match the estimated stellar rotation period or its harmonics, we claim this as an additional planet in the system, TOI-1052 c. The joint fit of photometry and spectroscopy in Section 4 finds a period for planet c of 35.81\({}^{+0.45}_{-0.34}\)d. TOI-1052 c is just within the 4:1 resonance of planet b. The system dynamics are discussed in Section 5.1. ## 4 Joint modelling The photometry from _TESS_ and spectroscopy from HARPS were combined in a joint fit using the exoplanet(Foreman-Mackey et al., 2021) code framework. This package also makes use of starry(Luger et al., 2019) and PYMC3(Salvatier et al., 2016). The photo \begin{table} \begin{tabular}{l c} \hline \hline Clock & Value (Gyr) \\ \hline [Y/Zn] & 2.1 \(\pm\) 0.5 \\ [Y/Ti] & 1.6 \(\pm\) 0.7 \\ [Y/Mg] & 1.4 \(\pm\) 0.6 \\ [Sr/Ti] & 2.3 \(\pm\) 1.3 \\ [Sr/Mg] & 2.0 \(\pm\) 1.1 \\ [Y/Si] & 1.6 \(\pm\) 0.7 \\ [Sr/Si] & 1.8 \(\pm\) 1.2 \\ [Y/Al] & 2.7 \(\pm\) 0.3 \\ Weighted Mean & 1.9 \(\pm\) 0.3 \\ \hline \end{tabular} \end{table} Table 2: Chemical clock age estimates (see Delgado Mena et al., 2019, Table 10). Figure 4: 5\(\sigma\) contrast curve for high resolution imaging observations with Zorro/Gemini. The 832 nm reconstructed image is shown in the upper right. Figure 3: _TESS_ pixel data with GAIA DR3 data sources overplotted in sector 1 (top) and 13 (bottom). TOI-1052 is marked with a white cross and the magnitude contrast is shown as red circles. Arrows show the proper motion of each star. Aperture pixels are highlighted in red. Star 2 is a potential bound companion to TOI-1052 with consistent distance and proper motion. metric model is adjusted to account for the _TESS_ exposure time of 2 minutes. The model constructs two Keplerian orbits, one for each planet, with orbital period \(P\), epoch \(t_{0}\), impact parameter \(b\), eccentricity \(e\) and angle of periastron \(\omega\) as free parameters determining the orbit. The orbital period and epoch are drawn from Gaussian prior distributions with a mean drawn from initial fits and a standard deviation of 0.001 and 4 days for planets b and c respectively, approximately 10 times larger than the eventual errors on those parameters. The impact parameter is drawn uniformly between 0 and 1 + \(R_{p}/R_{*}\), where \(R_{p}\) is the planetary radius. \(e\) and \(\omega\) are drawn via scaling \(e\sin\omega\) and \(e\cos\omega\) from a unit disk distribution then deriving \(e\) and \(\omega\). Additionally the stellar mass and radius are allowed to vary in a Gaussian distribution according to their values from Section 3. Once the orbit is defined, the planet to star radius ratio \(R_{p}/R_{*}\) and radial velocity semi-amplitude \(K\) are drawn from wide uniform dis Figure 5: The full radial velocity HARPS timeseries showing the combined best fit model from planets b and c in red. Residuals after subtracting the model are shown below. Figure 6: \(l_{1}\) Periodogram as discussed in Section 3.4 showing significant peaks at 9.14 and 36.6d. Peak values and false alarm probabilities (FAPs) are shown above the periodogram. Figure 7: Lomb-Scargle periodogram of HARPS RVs and activity indicators. The orbital period of each planet is shown as vertical dashed lines. Horizontal dashed lines show the FAP 1% level. A 22d artefact is seen in the raw RVs at the top, but vanishes when both planet models are removed. tributions. 
Limb darkening parameters are drawn from the quadratic limb darkening parameterisation of Kipping (2013). We introduce a systematic radial velocity offset, a _TESS_ photometry offset, and instrumental jitter parameters for both instruments as extra parameters. Jitter is drawn from a broad Gaussian distribution in log-space to allow for a wide range of orders of magnitude, and is then added to the measured instrumental noise in quadrature. We do not include a model for the stellar noise, apart from the jitter term in the RVs, as no significant periodic signal was found in either the RVs, stellar activity indicators or photometry aside from the two planetary signals. We use a No U-Turn Sampler (NUTS) variant of the Hamiltonian Monte Carlo (HMC) algorithm to draw samples from the posterior chain, for which we use 12 chains each with 5,000 steps for a total of 60000 iterations. We treat the first 1500 samples drawn from each chain as burn-in and subsequently discard them. The resulting Gelman-rubin statistics (Brooks and Gelman, 1998) for each variable are \(<<\) 1.05, demonstrating the chains have converged. Our initial fits revealed a marginally significant eccentricity for both planets (at 2.5\(\sigma\) for planet b and 2.9\(\sigma\) for planet c). We present fit posterior values with eccentricity, and with both planets fixed at zero eccentricity, in Table 4. The resulting planet parameters are consistent in both models. To compare the models we calculate the WAIC (widely applicable information criterion), which estimates the expected log pointwise predictive density (elpd) of the models (for details on the criterion see Vehari et al., 2017; Watanabe, 2010). The eccentric model is slightly favoured with a higher elpd, with a difference of 4.0, although this difference is not large enough to be considered significant. We adopt the free eccentricity results going forwards. Through the results of this analysis, we determine that TOI-1052 b is a mini-Neptune of radius \(2.87^{+0.29}_{-0.24}\) R\({}_{\rm o}\) and mass \(16.9\pm 1.7\) M\({}_{\rm o}\). From these values we infer a bulk density of \(3.9^{+1.7}_{-1.3}\) g Figure 8: Radial velocities phase-folded at the best fitting period of TOI-1052 b, with best fit model overplotted in red. 
\begin{table} \begin{tabular}{l l l} \hline **Parameter** & **(unit)** & **Prior Distribution** \\ \hline **Planet b** & & \\ Period \(P_{b}\) & (days) & \(\mathcal{N}(9.13966,0.001)\) \\ Ephemeris \(t_{0,b}\) & (BJD- & \(\mathcal{N}(1332.9448,0.02)\) \\ & & 2457000) \\ Radius log (\(R_{b}\)) & (\(\log\) R\({}_{\rm o}\)) & \(\mathcal{N}(-3.733^{\circ},1.0)\) \\ Impact Parameter \(b_{b}\) & & \(\mathcal{U}(0,1+R_{b}/R_{*})\) \\ \(c_{b}\sin\omega_{b}\) & & \(\mathcal{U}(\rm Unit\ disk)\) \\ \(\epsilon_{b}\cos\omega_{b}\) & & \(\mathcal{U}(\rm Unit\ disk)\) \\ \(K_{b}\) & (m s\({}^{-1}\)) & \(\mathcal{U}(0,0.50.0)\) \\ \hline **Planet c** & & \\ Period \(P_{c}\) & (days) & \(\mathcal{N}(35.97306,4.0)\) \\ Ephemeris \(t_{0,c}\) & (BJD- & \(\mathcal{N}(2423.3168,20.0)\) \\ & & 2457000) \\ \(e_{c}\sin\omega_{c}\) & & \(\mathcal{U}(\rm Unit\ disk)\) \\ \(e_{c}\cos\omega_{c}\) & & \(\mathcal{U}(\rm Unit\ disk)\) \\ \(K_{c}\) & (m s\({}^{-1}\)) & \(\mathcal{U}(0,0.50.0)\) \\ \hline **Star** & & \\ Mass \(M_{*}\) & (M\({}_{\rm o}\)) & \(\mathcal{N}_{\rm g}(1.204,0.025,0.0,3.0)\) \\ Radius \(R_{*}\) & (R\({}_{\rm o}\)) & \(\mathcal{N}_{\rm g}(1.264,0.033,0,0,3.0)\) \\ \hline **Photometry** & & \\ TESS mean & & \(\mathcal{N}(0.0,1.0)\) \\ log (Jitter) & (m s\({}^{-1}\)) & \(\mathcal{N}(-7.40^{\circ},10)\) \\ \hline **HARPS RVs** & & \\ Offset & (m s\({}^{-1}\)) & \(\mathcal{N}(54945.0,10.0)\) \\ log (Jitter) & (m s\({}^{-1}\)) & \(\mathcal{N}(0.37^{\circ},5.0)\) \\ \hline **Distributions:** & & \\ \(\mathcal{N}(\mu,\sigma)\): a normal distribution with a mean \(\mu\) and a standard deviation \(\sigma\), & \\ \(\mathcal{N}_{\rm g}(\mu,\sigma,a,b)\): a bounded normal distribution with a mean \(\mu\), a standard deviation \(\sigma\), a lower bound \(a\), and an upper bound \(b\) (bounds optional): & \\ \(\mathcal{U}(a,b)\): a uniform distribution with a lower bound \(a\), and an upper bound \(b\). & \\ **Prior values:** & & \\ \({}^{*}\) equivalent to \(0.5(\log(D))+\log(R_{*})\) where \(D\) is the transit depth (ppm multiplied by 10\({}^{-6}\)) and \(R_{*}\) is the mean of the prior on the stellar radius (R\({}_{\rm o}\)); & \\ \({}^{\dagger}\) equivalent to the log of the minimum error on the HARPS data (m s\({}^{-1}\)), or the mean error on the _TESS_ data. We fit a log value to enforce an broad, non-zero prior covering several orders of magnitude. 
\begin{table} \begin{tabular}{l l l} \hline **Parameter** & **(unit)** & **Prior Distribution** \\ \hline **Planet b** & & \\ Period \(P_{b}\) & (days) & \(\mathcal{N}(9.13966,0.001)\) \\ Ephemeris \(t_{0,b}\) & (BJD- & \(\mathcal{N}(1332.9448,0.02)\) \\ & & 2457000) \\ Radius log (\(R_{b}\)) & (\(\log\) R\({}_{\rm o}\)) & \(\mathcal{N}(-3.733^{\circ},1.0)\) \\ Impact Parameter \(b_{b}\) & & \(\mathcal{U}(0,1+R_{b}/R_{*})\) \\ \(c_{b}\sin\omega_{b}\) & & \(\mathcal{U}(\rm Unit\ disk)\) \\ \(\epsilon_{b}\cos\omega_{b}\) & & \(\mathcal{U}(\rm Unit\ disk)\) \\ \(K_{b}\) & (m s\({}^{-1}\)) & \(\mathcal{U}(0,0.50.0)\) \\ \hline **Planet c** & & \\ Period \(P_{c}\) & (days) & \(\mathcal{N}(35.97306,4.0)\) \\ Ephemeris \(t_{0,c}\) & (BJD- & \(\mathcal{N}(2423.3168,20.0)\) \\ & & 2457000) \\ \(e_{c}\sin\omega_{c}\) & & \(\mathcal{U}(\rm Unit\ disk)\) \\ \(e_{c}\cos\omega_{c}\) & & \(\mathcal{U}(\rm Unit\ disk)\) \\ \(K_{c}\) & (m s\({}^{-1}\)) & \(\mathcal{U}(0,0.50.0)\) \\ \hline **Star** & & \\ Mass \(M_{*}\) & (M\({}_{\rm o}\)) & \(\mathcal{N}_{\rm g}(1.204,0.025,0.0,3.0)\) \\ Radius \(R_{*}\) & (R\({}_{\rm o}\)) & \(\mathcal{N}_{\rm g}(1.264,0.033,0,0,3.0)\) \\ \hline **Photometry** & & \\ TESS mean & & \(\mathcal{N}(0.0,1.0)\) \\ log (Jitter) & (m s\({}^{-1}\)) & \(\mathcal{N}(-7.40^{\circ},10)\) \\ \hline **HARPS RVs** & & \\ Offset & (m s\({}^{-1}\)) & \(\mathcal{N}(54945.0,10.0)\) \\ log (Jitter) & (m s\({}^{-1}\)) & \(\mathcal{N}(0.37^{\circ},5.0)\) \\ \hline **Distributions:** & & \\ \(\mathcal{N}(\mu,\sigma)\): a normal distribution with a mean \(\mu\) and a standard deviation \(\sigma\), & \\ \(\mathcal{N}_{\rm g}(\mu,\sigma,a,b)\): a bounded normal distribution with a mean \(\mu\), a standard deviation \(\sigma\), a lower bound \(a\), and an upper bound \(b\) (bounds optional): & \\ \(\mathcal{U}(a,b)\): a uniform distribution with a lower bound \(a\), and an upper bound \(b\). & \\ **Prior values:** & & \\ \({}^{*}\) equivalent to \(0.5(\log(D))+\log(R_{*})\) where \(D\) is the transit depth (ppm multiplied by 10\({}^{-6}\)) and \(R_{*}\) is the mean of the prior on the stellar radius (R\({}_{\rm o}\)); & \\ \({}^{\dagger}\) equivalent to the log of the minimum error on the HARPS data (m s\({}^{-1}\)), or the mean error on the _TESS_ data. We fit a log value to enforce an broad, non-zero prior covering several orders of magnitude. \end{table} Table 3: Prior distributions used in our joint fit model, fully described in Section 4. The priors are created using distributions in PyNC3 with the relevant inputs to each distribution described in the table footer. Fit results and derived parameters can be found in Table 4 Figure 9: Radial velocities phase-folded at the best fitting period of TOI-1052 c transiting planet TOI-1052 c is found to have \(M\) sin \(i_{c}=34.3^{+4.1}_{-3.7}\) M\({}_{\oplus}\). No evidence of transits is seen for planet c. Fig. 10 shows the two planets in the context of the exoplanet population. The near 4:1 ratio of the orbital periods, and potential eccentricity, invite questions as to whether there is a third planet in between planets b and c, forming a 1:2:4 ratio. Two planets may mimic a single planet with an eccentric orbit in radial velocity observations, although TOI-1052 c is below the 3\(\sigma\) significant eccentricity criterion for this issue found in Wittenmyer et al. (2019). We could not find any evidence of such a hidden planet, which might be expected to show in the radial velocity residuals if the model is forced to be circular. 
Absent that evidence, we proceed with the two-planet model but note this possibility in case future observations can probe the system further. ## 5 Discussion ### Dynamical analysis The wide range of allowed eccentricities for both planets raise questions as to their stability and dynamical interactions. Further, the approximate 4:1 ratio of the planets' orbital periods invites a more detailed analysis of a potential resonant interaction and how it affects the stability of the system. First, because this system is a two-planet system, we can determine if it is Hill stable analytically. By using the procedure out \begin{table} \begin{tabular}{l l l l} \hline Model & \multicolumn{2}{c}{With eccentricity (adopted)} & \multicolumn{2}{c}{Fixed eccentricity} \\ \multicolumn{4}{l}{System Parameters:} \\ \(u_{1,TESS}\).. & & \(0.80^{+0.61}_{-0.55}\) & \(0.85^{+0.99}_{-0.87}\) \\ \(u_{2,TESS}\).. & & \(-0.07^{+0.32}_{-0.49}\) & \(-0.12^{+0.43}_{-0.48}\) \\ \(TES_{\mathrm{Softw}}\). & ppm & \(-7.9^{+0.33}_{-0.33}\) & \(-7.9^{+0.2}_{-0.32}\) \\ \(\sigma_{TESS}\).. & ppm & \(61.0^{+2.5}_{-2.3}\) & \(61.0^{+2.7}_{-2.1}\) \\ \(\sigma_{AMPS}\).. & m/s & \(0.79^{+0.65}_{-0.65}\) & \(1.62^{+0.42}_{-0.42}\) \\ Systemic RV & m/s & \(54946.0^{+0.38}_{-0.39}\) & \(54945.75^{+0.38}_{-0.39}\) \\ \multicolumn{4}{l}{Planetary Parameters:} \\ \(P\)......... & c \\ \(T_{0}\).. lined in Veras et al. (2013), which is based on the equations in Domison (2006, 2011), we find that the TOI-1052 is Hill stable for all planetary eccentricities in the ranges \(e_{\rm b}=0.0-0.3\) and \(e_{\rm c}=0.0-0.3\). In fact, the system is comfortably Hill stable: even in the scenario where \(e_{\rm b}=e_{\rm c}=0.3\), the system would be Hill stable for \(a_{\rm c}/a_{\rm b}\geq 2.12\), whereas actually \(a_{\rm c}/a_{\rm b}\approx 2.49\). Hence, residence in a strong mean-motion resonance is not necessarily required to stabilise the system. Nevertheless, the system's proximity to a strong mean-motion resonance is of interest, particularly in context of the entire exoplanet population. Fig. 5 of Weiss et al. (2022) illustrates a statistically significant asymmetry in the population of two-planet pairs which reside just interior versus just exterior to the strongest (first-order) mean-motion resonances, first noted in Fabrycky et al. (2014). The observed population of 4:1 planetary pairs might not yet be high enough for a 4:1 asymmetry to be detectable. In this respect, the TOI-1052 system might provide a valuable data point, although it cannot be excluded that another planet lies between the two detected planets. In order to explore the system's proximity to resonance, we employ the semianalytic libration width prescription of Gallardo et al. (2021). This prescription effectively computes bounds within which mean-motion resonant behaviour is possible through a numerical procedure mixed in with analytical theory. We plot the libration width curves for four cases (\(e_{\rm b}=0.0,0.1,0.2,0.3\)) in Fig. 11 by using an eccentricity resolution of 0.025 in the numerical integration. Superimposed are the uncertainties for the current location of TOI-1052 c. Comparing these uncertainties with the libration width locations indicates that the TOI-1052 system is definitely not in resonance (at least a third-, second- or first-order resonance), and resides just interior to the 4:1 resonance. 
The system's close proximity to resonance is characteristic of many exoplanetary systems, although the proximity to a relatively high-order resonance is noteworthy. Proximity to resonance can be a marker for the differential migration rates of planets in their nascent protoplanetary disc, although to date this has primarily been investigated in depth for first-order resonances (Huang & Ormel, 2023). ### Internal structure TOI-1052 b is similar to Uranus or Neptune in mass, but has a considerably smaller radius and therefore a denser interior. Fig. 12, which shows the Mass-Radius relation, demonstrates that TOI-1052 b is located between the water line and Earth-like compositional line, suggesting a significant fraction of refractory materials. In comparison, both Uranus and Neptune are located above the pure-water line. For TOI-1052b, two limiting cases come to mind: a planet with a rocky interior and a substantial hydrosphere and a refractory-rich planet with a primordial H-He atmosphere. We investigate these two scenarios with a layered interior model, consisting of up to four layers: a H-He atmosphere, a water layer, a silicate mantle, and an iron core (see Dorn et al., 2017). Using the inferred age and elemental abundances of TOI-1052, we solve the standard structure equations for two models: 1.) a model where we assume that TOI-1052 b contains no water (_no-water model_) and 2.) a model where we conversely assume that TOI-1052 b contains no H-He atmosphere (_no-atmosphere model_). We put no constraints on the compositions, i.e., the elemental ratios, of the other layers. As a result, the iron-to-rock ratio can take any value. For both models, we apply a nested sampling algorithm (Buchner et al., 2014) to explore the permitted parameter ranges that reproduce the measured masses and radii of TOI-1052 b. We find that the no-water model favors a core-to-mantle mass fraction of nearly unity: \(0.96\pm 0.17\) with a H-He envelope of \(2^{+1.4}_{-0.8}\) %. In the case of the no-atmosphere model, while the core-to-mantle mass fraction is poorly constrained (\(0.6\pm 0.5\)), this model predicts a water mass fraction of \(0.43\pm 0.12\). The larger uncertainties are caused by the wide range of possible core, mantle, and water layer masses that can reproduce the observed radius and mass compared to the no-water model. Assuming a fixed iron-to-rock ratio, e.g., similar to the host star's elemental ratios, decreases the model's uncertainty significantly. We note that at high planetary masses, layers might not be as distinct as assumed here (e.g., Helled & Stevenson, 2017; Bodenheimer et al., 2018). Moreover, the atmospheric mass fraction may be underestimated due to pollution of the H-He envelope by heavier elements, leading to further contraction of the atmosphere (Lozovsky et al., 2018). The interior model also neglects any water that is dissolved deep in the interior, which could increase the overall water mass fraction (Dorn & Lichtenberg, 2021). Nevertheless, while these details could change the exact values inferred here, it is clear that TOI-1052 b is enriched with refractory materials and any H-He atmosphere is likely to be minimal. Additionally, the planet's elemental abundances could differ from its host star, which can change the mantle and the temperature structure. We therefore also considered structure models with varying elemental abundances to investigate this effect. We find that the inferred possible compositions and their error for TOI-1052 b do not change significantly. 
## 6 Conclusions We report the discovery and characterisation of two new planets just outside the 4:1 mean motion resonance in the bright, V=9.5 TOI-1052 system, using _TESS_ mission data and HARPS RV measurements. We used high-resolution imaging from the Zorro speckle imaging instrument in order to investigate the presence of any nearby Figure 11: Proximity of the two planets in the TOI-1052 to the 4:1 mean motion resonance. The four pairs of curves are libration widths for this resonance. These curves, moving outwards, correspond to \(e_{\rm b}=0.0,0.1,0.2,0.3\). The planet TOI-1052 c is nearly outside all of these curves, adding to the asymmetry seen around mean-motion commensurabilities in the exoplanet population. This system is also Hill stable, with the critical limit off the scale of the plot. companions and find none within the detector limits. We estimated the projected stellar rotation period to be around 12.8 days from measuring line broadening in the spectra, and derived stellar parameters, chemical abundances and an age estimate to reveal the system in more detail. TOT-1052b is a Neptune-mass planet with a sub-Neptune radius, with a potentially eccentric 9.13 d orbit. The planet's density of \(3.93^{+1.7}_{-1.3}\) g/cm\({}^{3}\) implies a composition denser with more heavy elements than Neptune. Limiting case layered interior models show a degeneracy between a rocky planet with a 2% H-He atmosphere and a water-rich planet with a water mass fraction of 0.43. The companion planet TOI-1052c shows an \(M_{P}\sin{i}\) of \(34.3^{+1.4}_{-3.7}M_{\oplus}\), approx. double the mass of planet b, and orbits on a 35.8d period. Given its presence near the 4:1 mean motion resonance, and the potential eccentricity of both planets, the system provides an interesting case study for dynamical interactions. ## Data Availability _TESS_ data is accessible via the MAST (Mikulski Archive for Space Telescopes) portal at [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html). Imaging data from Zororre are accessible via the ExoFOP-TESS archive at [https://exofop.ipac.caltech.edu/tess/target.php?id=317060587](https://exofop.ipac.caltech.edu/tess/target.php?id=317060587). The exoplanet modelling code and associated python scripts for parameter analysis and plotting are available upon reasonable request to the author. Radial velocity data is presented in Table 1. ## Acknowledgements Based on observations collected at the La Silla Observatory, ESO(Chile), with the HARPS spectrograph at the 3.6-m telescope for programs \(1102.C-0249(F)\), \(106.21TJ.001\) and \(105.20G9.001\). DJA is supported by UKRI through the STFC (ST/R00384X/1) and EPSRC (EP/X027556/1). AO and FH are funded by an STFC studentship. Co-funded by the European Union (ERC, FIERCE, 101052347). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This work was supported by FCT - Fundacao para a Ciencia e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalizacao by these grants: UIDB/04434/2020; UIDP/04434/2020. VA acknowledges the support from FCT through the following grant: 2022.06962.PTDC. SGS acknowledges the support from FCT through Investigador FCT contract nr. CEECIND/00826/2018 and POPH/FSE (EC). 
EDM. acknowledges the support from FCT through Investigador FCT contract nr. 2021.01294.CEECIND. HK and RH carried out this work within the framework of the NCCR Planets supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. CD acknowledges support from the Swiss National Science Foundation under grant PZ00P2_174028. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement SCORE No 851555). J.L-B. is partly funded by grants LCF/BQ/PI20/11760023, Ramon y Cajal fellowship with code RYC2021-031640-I, and the Spanish MCIN/AEI/10.13039/501100011033 grant PID2019-107061GB-C61. SH acknowledges CNES funding through the grant 837319. This work made use of tpfplotter by J. Lillo-Box (publicly available in www.github.com/jllib/tpfplotter), which also made use of the python packages astropy, lightkurve, matplotlib and numpy. Funding for the TESS mission is provided by NASA's Science Mission Directorate. We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. This research has made use of the Exoplanet Follow-up Observation Program website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Some of the observations in this paper made use of the High-Resolution Imaging instrument Zorro and were obtained under Gemini LLP Proposal Number: GN/S-2021A-LP-105. Zorro was funded by the NASA Exoplanet Exploration Program and built at the NASA Ames Research Center by Steve B. Howell, Nie Scott, Elliott P. Horch, and Emmett Quigley. Zorro was mounted on the Gemini South telescope of the international Gemini Observatory, a program of NSF's OIR Lab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. on behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. Figure 12: Mass–radius diagram showing various internal structure lines from our model labelled in the legend. TOI-1052b can be explained by an Earth-like composition with either 50% water or a H-He atmosphere as described in the text.
2302.12869
On critical thresholds for hyperbolic balance law systems
We review the theoretical development in the study of critical thresholds for hyperbolic balance laws. The emphasis is on two classes of systems: Euler-Poisson-alignment (EPA) systems and hyperbolic relaxation systems. We start with an introduction to the `Critical Threshold Phenomena' and study some nonlocal PDE systems, which are important from modeling point of view.
Manas Bhatnagar, Hailiang Liu
2023-02-24T20:06:22Z
http://arxiv.org/abs/2302.12869v1
# On critical thresholds for hyperbolic balance law systems ###### Abstract. We review the theoretical development in the study of critical thresholds for hyperbolic balance laws. The emphasis is on two classes of systems: Euler-Poisson-alignment (EPA) systems and hyperbolic relaxation systems. We start with an introduction to the 'Critical Threshold Phenomena' and study some nonlocal PDE systems, which are important from modeling point of view. Key words and phrases:Critical thresholds, global regularity, shock formation, Euler-Poisson system 2020 Mathematics Subject Classification: 35A01; 35B30; 35B44; 35L45 ## 1. Introduction For first order hyperbolic conservation laws, it is generic that the solutions lose smoothness even if the initial data is smooth, [8, 14]. However, addition of source terms can balance this 'breaking' and result in global-in-time smooth solutions for a large class of initial data. For the question of global behavior of strong solutions, the choice of the initial data and/or damping forces is decisive. The classical stability analysis can fail either for large perturbations of some wave patterns or when the steady state solution may be only conditionally stable due to the weak dissipation in the system, see for example [11, 21]. On the other hand, the notion of critical threshold (CT) has been shown to be powerful in describing the conditional stability for underlying physical problems, and the associated phenomena does reflect the delicate balance among various forcing mechanisms, [1, 2, 7, 9]. An example to illustrate this is the pressureless Euler-Poisson (EP) system that consists of the continuity equation, Burgers' equation with an electric source through a potential, and the Poisson equation for the potential, \[\begin{split}&\rho_{t}+(\rho u)_{x}=0,\\ & u_{t}+uu_{x}=-\phi_{x},\\ &-\phi_{xx}=\rho,\end{split} \tag{1.1}\] with smooth initial data \((\rho_{0}\geq 0,u_{0})\in C^{1}(\mathbb{R})\times C^{1}(\mathbb{R})\). Taking the spatial derivative of the second equation and setting \(g(t,X):=u_{x}(t,X)\) for \(\frac{dX}{dt}=u(t,X)\), we can obtain an ODE system along the characteristics, \[\begin{split}\frac{d(\rho(t,X))}{dt}&=-\rho g,\\ \frac{d(g(t,X))}{dt}&=-g^{2}+\rho.\end{split}\] We have omitted the parameter for the ODE, that is a consequence of the method of characteristics, to avoid excess notation. For global well-posedness of (1.1), the issue now Introduction The study of the Euler-Poisson-alignment (EPA) system models is a very important topic in the study of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. 
The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system. The Euler-Poisson-alignment (EPA) system is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is 
a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which 
is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a 
generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson-alignment (EPA) system, which is a generalization of the Euler-Poisson- results for \(\psi\in L^{1}\). In [6], we relaxed the hypothesis and used a different, more elementary technique to arrive at the result. The gist of the result is mentioned in Theorem 2.3. EPA systems with background (\(c>0\)) have to be studied on a periodic spatial domain and not on \(\mathbb{R}\). This is owing to assumptions required for local well-posedness. Therefore, we let the spatial variable space be \(\mathbb{T}=[-1/2,1/2)\), the periodic torus. Local existence requires the assumption: \(\int_{-1/2}^{1/2}\rho(t,y)-c\,dy=0\). This equality holds for all time if it holds initially. This is because mass is conserved by (2.1a). In view of this, we set, \[c=\int_{-\frac{1}{2}}^{\frac{1}{2}}\rho_{0}(y)\,dy.\] Our first set of results is for \(\psi\in L^{\infty}(\mathbb{T})\) having, \[0\leq\psi_{min}\leq\psi\leq\psi_{max}. \tag{2.2}\] With the above assumption, (2.1) can be reformulated into a simpler system. Setting \(G:=u_{x}+\psi*\rho\) with \(G_{0}(x):=u_{0x}(x)+(\psi*\rho_{0})(x)\), we obtain the following. \[G_{t}+(Gu)_{x}=k(\rho-c), \tag{2.3b}\] \[\rho_{t}+(\rho u)_{x}=0, \tag{2.3a}\] with initial data \((G_{0},\rho_{0})\in H^{s}(\mathbb{T})\times(H^{s}(\mathbb{T})\cap L^{1}_{+}( \mathbb{T}))\), for \(s>1/2\). The local existence for such system is known, [7]. In particular, if initial data is smooth, then a smooth solution exists for some finite time. **Theorem 2.1** (Bounded alignment force).: _Consider (2.3) with repulsive electric force \(k>0\) and bounded alignment influence \(\psi\) satisfying (2.2). Set \(\lambda:=2\sqrt{\frac{k}{c}}\). Suppose the initial data \((G_{0},\rho_{0})\) is smooth and lies in the space mentioned above. Then there exist sets \(\Sigma_{1},\Sigma_{2},\Sigma_{3}\) such that,_ 1. 
Weak alignment _(_\(\psi_{max}<\lambda\)_): under the admissible condition_ (2.4) \[\psi_{max}-\psi_{min}<\frac{e^{\frac{\tan^{-1}\delta}{2}}\left(1-e^{-\frac{ \pi}{2}-\frac{\pi}{2}}\right)}{2\left(1+e^{-\frac{\pi}{2}}\right)}\lambda,\] _if the initial data lie in the subcritical region_ \(\Sigma_{1}\)_, namely_ \[\left(G_{0}(x),\rho_{0}(x)\right)\in\Sigma_{1},\quad\forall\,x\in\mathbb{T},\] _then (_2.3_) admits global-in-time classical solutions._ 2. Strong alignment _(_\(\psi_{min}\geq\lambda\)_): if the initial data lie in the subcritical region_ \(\Sigma_{2}\)_, namely_ \[\left(G_{0}(x),\rho_{0}(x)\right)\in\Sigma_{2},\quad\forall\,x\in\mathbb{T},\] _then (_2.3_) admits global-in-time classical solutions._ 3. Medium alignment _(_\(\psi_{min}<\lambda\leq\psi_{max}\)_): under the admissible condition_ (2.5) \[\psi_{max}-\psi_{min}<\frac{e^{\frac{\tan^{-1}\delta}{2}}}{2\left(1+e^{-\frac{ \pi}{2}}\right)}\lambda,\] _if the initial data lie in the subcritical region_ \(\Sigma_{3}\)_, namely_ \[\left(G_{0}(x),\rho_{0}(x)\right)\in\Sigma_{3},\quad\forall\,x\in\mathbb{T},\] _then (_2.3_) admits global-in-time classical solutions._ _Here, the parameters \(\hat{z}\) and \(\tilde{z}\) are defined as_ \[\hat{z}:=\sqrt{\left(\frac{\lambda}{\psi_{max}}\right)^{2}-1}\quad\text{and} \quad\tilde{z}:=\sqrt{\left(\frac{\lambda}{\psi_{min}}\right)^{2}-1}. \tag{2.6}\] _Note that \(\hat{z}\), \(\tilde{z}\) could be real, purely imaginary, as well as infinity._ The weak and medium alignment situations require an additional structural inequality (2.4) and (2.5) respectively, so that a subcritical region can be obtained through our techniques. These inequalities only depend on the parameters of the EPA system, \(k,c,\psi\). These conditions arise due to the presence of oscillatory solutions and as a consequence of our method in handling these to arrive at the thresholds. In the strong alignment case, the solutions decay exponentially to the equilibrium solution without any oscillations, obviating the requirement of any additional condition. We also prove the corresponding finite-time-breakdown result but do not include it here. We obtain regions \(\Delta_{1},\Delta_{2},\Delta_{3}\) which are the supercritical regions for the weak, strong and medium alignment cases respectively. Similar to the example in the introduction, the fundamental step in analyzing EPA systems for thresholds is to derive an ODE system along the common characteristic path, \(\left\{(t,x):\frac{dx}{dt}=u(t,x(t)),\,x(0)=\alpha\right\}\). \(\alpha\in\mathbb{T}\) is the parameter which is fixed for a single characteristic path. The resulting ODE system is analyzed for each path. The global well-posedness of unknowns in (2.3) is obtained by combining the all-time-existence of the unknowns in (2.7) for all \(\alpha\in\mathbb{T}\). For (2.3), the resulting ODE system is, \[G^{\prime}=-G(G-\psi*\rho)+k(\rho-c), \tag{2.7b}\] \[\rho^{\prime}=-\rho(G-\psi*\rho). \tag{2.7a}\] Next, we move on to another important situation of that of the weakly singular kernel, that is, \(\psi\in L^{1}(\mathbb{T})\). Here, we do not have the (2.2) type of bounds which are essential in the threshold analysis. Therefore, we have to modify our technique. Here, we need to improve the bounds on \(\psi*\rho\) to obtain valid thresholds. Following is the global existence result. **Theorem 2.2** (Weakly singular alignment force).: _Let \(\psi\in L^{1}(\mathbb{T})\). Set \(\lambda:=2\sqrt{\frac{k}{c}}\). 
Suppose the initial data to (2.3) satisfies the hypothesis of Theorem 2.1. Then there exists sets \(\Sigma^{1}_{L},\Sigma^{2}_{L},\Sigma^{3}_{L}\) such that,_ 1. Weak alignment _(_\(\|\psi\|_{L^{1}}-\gamma<\frac{\lambda}{2}\)_): under the admissible condition_ (2.8) \[4(\|\psi\|_{L^{1}}-2\gamma)<\frac{e^{\frac{\tan^{-1}\hat{z}}{z}}\left(1-e^{- \frac{\pi}{2}-\frac{\pi}{2}}\right)}{2\left(1+e^{-\frac{\pi}{2}}\right)}\lambda,\] _if the initial data lie in the subcritical region_ \(\Sigma^{1}_{L}\)_, namely_ \[\left(G_{0}(x),\rho_{0}(x)\right)\in\Sigma^{1}_{L},\quad\forall\,x\in \mathbb{T},\] _then_ \((G,\rho)\) _remain bounded in all time._ 2. Strong alignment _(_\(\gamma\geq\frac{\lambda}{2}\)_): if the initial data lie in the subcritical region_ \(\Sigma^{2}_{L}\)_, namely_ \[\left(G_{0}(x),\rho_{0}(x)\right)\in\Sigma^{2}_{L},\quad\forall\,x\in \mathbb{T},\] _then_ \((G,\rho)\) _remain bounded in all time._ 3. Medium alignment _(_\(\gamma<\frac{\lambda}{2}\leq||\psi||_{1}-\gamma\)_): under the admissible condition_ (2.9) \[4(||\psi||_{L^{1}}-2\gamma)<\frac{e^{\frac{\tan^{-1}\hat{z}}{z}}}{2\left(1+e^ {-\frac{\pi}{2}}\right)}\lambda,\] _if the initial data lie in the subcritical region_ \(\Sigma^{3}_{L}\)_, namely_ \[\left(G_{0}(x),\rho_{0}(x)\right)\in\Sigma^{3}_{L},\quad\forall\,x\in \mathbb{T},\] _then_ \((G,\rho)\) _remain bounded in all time._ _Consequently, (2.3) has a global smooth solution. Here, \(\gamma=\int_{1/2}^{1}\psi^{*}(x)\,dx\), where \(\psi^{*}:(0,1]\to\mathbb{R}\) is the decreasing rearrangement of \(\psi\) on \(\mathbb{T}\). The parameters \(\hat{z}\) and \(\tilde{z}\) are defined as_ \[\hat{z}:=\sqrt{\left(\frac{\lambda}{2(\|\psi\|_{L^{1}}-\gamma)}\right)^{2}-1} \quad\text{and}\quad\tilde{z}:=\sqrt{\left(\frac{\lambda}{2\gamma}\right)^{2} -1}. \tag{2.10}\] Figure 2 illustrates the shape of \(\Sigma^{1}_{L}\) and \(\Sigma^{2}_{L}\). The steady-state solution \((G,\rho)=(c\|\psi\|_{L^{1}},c)\in\Sigma^{i}_{L}\). Hence, the region \(\Sigma^{i}_{L}\) contains initial data around the steady state and we obtain a nontrivial subcritical region. The admissible conditions (2.8) and (2.9) are similar to (2.4) and (2.5) respectively, in the sense that both imply that nonlocality of \(\psi\) is not too strong. When \(\psi\) is bounded, \(\psi_{max}-\psi_{min}\) is its oscillation. It is zero if and only if \(\psi\) is constant. Correspondingly, when \(\psi\) is unbounded but integrable, \(\psi_{max}-\psi_{min}\) is replaced by \(4(\|\psi\|_{L^{1}}-2\gamma)\). Note that \(\|\psi\|_{L^{1}}-2\gamma\geq 0\), and the equality holds if and only if \(\psi\) is a constant. Therefore, all of (2.8), (2.9), (2.4), (2.5) impose an upper bound on how much \(\psi\) is offset from a constant. The following result is for the EA system with weakly singular influence function. **Theorem 2.3**.: _Let \(\psi\in L^{1}(X)\) (\(X=\mathbb{R}\) or \(\mathbb{T}\)) be non-negative. Consider (2.3) with \(k=0\). If_ \[\inf_{x}G(0,x)>0,\] _then there exists global-in-time \(C^{1}\) solution to the system (2.3). Moreover, \(\rho,G\) have uniform bounds in terms of \(\psi,\rho_{0},G_{0}\)._ ## 3. Nonlocal Euler system with relaxation Several traffic-flow and fluid-flow models are modeled through a density that follows the continuity equation with a nonlocal flux, see for example [10, 15]. A general equation is, \[\rho_{t}+(\rho\mathrm{v})_{x}=0,\] for some nonlocal \(\mathrm{v}\). We augment this model with a velocity obeying Burgers' equation with some source terms. 
The source terms are such that the particles modeled move towards an equilibrium state, imparting some order and regularity to the system. We consider the following pressureless Euler-like model, \[\rho_{t}+(\rho\mathrm{v})_{x}=0,\;x\in\mathbb{R},\;t>0, \tag{3.1b}\] \[u_{t}+uu_{x}=\rho(\mathrm{v}-u), \tag{3.1a}\] with initial data \((\rho_{0}\geq 0,u_{0})\). The quantity \(\mathrm{v}\) determines the steady state velocity. Motivated by physical assumptions, we set \(\mathrm{v}=Q*u\), with, \[Q\in W^{1,1}(\mathbb{R}),\quad Q(x)=Q(-x)\ (\text{symmetric}),\] \[\int_{\mathbb{R}}\!\!Q(x)\,dx=1,\quad\text{and $Q$ is decreasing away from origin.} \tag{3.2}\] Evidently, \(Q*u\) is a weighted average of the velocity, with maximum weight at the point itself and decreasing symetrically as one moves away. To our knowledge, the well-posedness of (3.1) was not known. Due to the nonlocal flux, it cannot be concluded from existing literature for hyperbolic balance laws. Inspired by Kato and Majda, see [13, 16], we use energy methods to prove local existence/uniqueness for a general multidimensional system, the one dimensional form of which is (3.1), in a relatively more general space. In particular, we allow for solutions that need not decay at infinity, though they are bounded. We do not state the result here but instead, focus on the one dimensional threshold result. **Theorem 3.1**.: _Consider (3.1) with \(v=Q*u\) and \(Q\) satisfying (3.2). Suppose the initial density is nonnegative and the initial data \(\rho_{0},u_{0}\in L^{\infty}(\mathbb{R})\), and \(\rho_{0x},u_{0x}\in H^{s}(\mathbb{R})\), \(s\geq 1\)._ * _[Subcritical region] A unique solution_ \(\rho,u\in C([0,\infty);L^{\infty}(\mathbb{R}))\) _and_ \[\rho_{x},u_{x}\in C([0,\infty);H^{s}(\mathbb{R})),\quad s\geq 1\] _exists if_ \(u_{0x}(x)+\rho_{0}(x)\geq 0\) _for all_ \(x\in\mathbb{R}\)_._ * _[Supercritical region] If_ \(\exists x_{*}\in\mathbb{R}\) _for which_ \(u_{0x}(x_{*})<-\rho_{0}(x_{*})\)_, then_ \(u_{x}\to-\infty\) _in finite time._ Another interesting case is when v is local in (3.1), that is, v = \(f(\rho,u)\). Let \(\Theta=\{(\rho,u):u=f(\rho,u)\}\). Then (3.1) is strictly hyperbolic as long as \((\rho,u)\in\Theta^{c}\). However, this cannot be guaranteed a priori and (3.1) could degenerate from strict hyperbolic to weak hyperbolic at certain time. It turns out that for \(f_{\rho}=0\) (\(f\) only depending on the velocity), strict hyperbolicity can be guaranteed if the initial data lies in a certain set. The threshold analysis and results depend heavily on whether (3.1) is strictly or weakly hyperbolic. **Theorem 3.2**.: _Consider the system (3.1) with \(v=f(u)\) and initial conditions \((\rho_{0}\geq 0,u_{0})\in C^{1}_{b}(\mathbb{R})\times C^{1}_{b}(\mathbb{R})\) with \(\inf|f(u_{0})-u_{0}|>0\). If \(f_{u}\leq 0\) for solution \(u\) under consideration, then,_ 1. _Bounds on_ \(u\) _and_ \(\rho\)_:_ \(u(t,\cdot)\) _is uniformly bounded and satisfies_ \(|f(u(t,\cdot))-u(t,\cdot)|>0\) _for_ \(t>0\)_. And_ \[\rho(t,x)\leq\frac{\sup\rho_{0}|f(u_{0})-u_{0}|}{e^{\int_{u_{0}(x)}^{u(t,x)} \frac{d\xi}{f(t)-\xi}}|f(u(t,x))-u(t,x)|}.\] 2. _Global solution:_ _If_ \[u_{0x}(x)+\rho_{0}(x)\geq 0,\quad\forall x\in\mathbb{R},\] _then there exists a global classical solution_ \(\rho,u\in C^{1}((0,\infty)\times\mathbb{R})\)_. Moreover,_ \(\rho,u_{x}\) _are uniformly bounded with_ \[0\leq\rho(t,x)\leq M,\quad 0\leq u_{x}(t,x)+\rho(t,x)\leq M,\qquad\forall t>0,x \in\mathbb{R},\] _where_ \(M=\max\{\sup\rho_{0},\sup(u_{0x}+\rho_{0})\}\)_._ 3. 
_Finite time breakdown:_ _If_ \(\exists x_{0}\in\mathbb{R}\) _such that_ \[u_{0x}(x_{0})+\rho_{0}(x_{0})<0,\] _then_ \(\lim_{t\to t_{c}}|\rho_{x}|=\infty\) _or_ \(\lim_{t\to t_{c}}|u_{x}|=\infty\) _for some_ \(t_{c}>0\)_._ The condition \(\inf|f(u_{0})-u_{0}|>0\) ensures strict hyperbolicity by ensuring \(\inf|f(u(t,\cdot))-u(t,\cdot)|>0\) for all \(t>0\). If this does not hold, then we have Theorem 3.3. Also, the upper bound on \(\rho\) can be infinitely large as the system gets 'closer' to weakly hyperbolic. This indeed exhibits the borderline behaviour of density between strictly and weakly hyperbolic systems. In strictly hyperbolic systems (with non-erratic source terms) density is bounded for all times, even when shock forms, whereas in weakly hyperbolic systems, density becomes unbounded when shock forms, as is the case in (2.1). **Theorem 3.3**.: _Let \(f\) be a smooth function depending on \(u\) only, i.e., \(f_{\rho}=0\). Consider the system (3.1) subject to initial conditions, \((\rho_{0}\geq 0,u_{0})\in C^{1}_{b}(\mathbb{R})\times C^{2}_{b}(\mathbb{R})\). If \(f_{u}\leq 0\) for the solution \(u\) of consideration, then_ 1. _Global Solution:_ _If_ \[u_{0x}(x)+\rho_{0}(x)\geq 0,\quad\forall x\in\mathbb{R},\] _then there exists a global solution_ \[\rho,u\in C^{1}((0,\infty)\times\mathbb{R}).\] _Moreover,_ \(u,\rho,u_{x}\) _are uniformly bounded. Also, we have the following,_ \[||\rho_{x}(t,\cdot)||_{\infty}\leq C_{1}e^{C_{2}t},\quad t>0,\] _where_ \(C_{1}=C_{1}(||\rho_{0}||_{C^{1}},||u||_{C^{2}})\) _and_ \(C_{2}=C_{2}(||u||_{C^{2}},||\rho_{0}||_{\infty})\)_._ We would like to point out some key differences in Theorems 3.2 and 3.3. Firstly, \(\rho,u\) are bounded for all times in the former which is not true for the latter wherein there might be density concentration, that is, \(\rho\to\infty\) in finite time. Secondly, the space in which the solutions lie is different due to an extra degree of smoothness needed for velocity which arises in proving the local existence in the case of pressureless Eulerian systems. The following result is for a general \(f\), that is, it is a function of both density and velocity. Here, we cannot guarantee a priori strict hyperbolicity and it is imperative that we consider (3.1) as weakly hyperbolic. As a result, we need more conditions for global existence. **Theorem 3.4**.: _Let \(f=f(\rho,u)\) be a smooth function of its variables. Consider the system (3.1) with initial conditions \((\rho_{0}\geq 0,u_{0})\in C^{1}_{b}(\mathbb{R})\times C^{2}_{b}(\mathbb{R})\). If \(f_{u}\leq 0\) for the solutions under consideration, then \(u,\rho,u_{x}\) are uniformly bounded. If in addition to \(u_{0x}+\rho_{0}\geq 0\),_ * \((\rho f)_{\rho\rho}\geq 0\)_,_ \(f_{uu}\leq 0\) _along with_ \[\rho_{0x}(x)\geq 0,\quad u_{0xx}(x)+\rho_{0x}(x)\geq 0,\ \forall x\in\mathbb{R},\] _OR_ \[\rho_{0x}(x)\geq 0,\ \,u_{0xx}(x)+\rho_{0x}(x)\leq 0,\ \forall x\in\mathbb{R},\] _then there exists a global solution_ \(\rho,u\in C^{1}((0,\infty)\times\mathbb{R})\)_. In addition,_ \[u_{x}+\rho\in C^{1}((0,\infty)\times\mathbb{R}).\] The breakdown result is same for Theorems 3.3 and 3.4. **Theorem 3.5** (Finite time breakdown).: _Consider (3.1) with \(v=f(\rho,u)\). If there exists an \(x_{*}\) such that \(u_{0x}(x_{*})+\rho_{0}(x_{*})<0\), then \(\lim_{t\to t_{c}^{-}}|u_{x}(t,x_{c})|=-\infty\) for some \(x_{c}\) and \(t_{c}>0\)._ A key step is to identify a quantity \(e:=u_{x}+\rho\), that simplifies (3.1). \(e\) is quite analogous to \(G\) in (2.3). 
This transformation results in, \[\rho_{t}+f\rho_{x} =f_{u}\rho(\rho-e), \tag{3.3b}\] \[e_{t}+ue_{x} =-e(e-\rho). \tag{3.3a}\] If \(f_{u}\leq 0\), we can bound \(e,\rho\) in tandem, that is, both quantities are all-time-bounded or break down together. Theorems 3.3 and 3.4 can be proved thereafter. Under the assumptions of Theorem 3.2, where the system is strictly hyperbolic, we find the two Riemann invariants and analyze them. From (3.1), one of the Riemann invariants is simply \(u\). The other invariant \(R\) can be evaluated using conventional techniques. Bounds on \(R_{x}\) ensure bounds on \(\rho_{x}\). The asymptotic bound on \(\rho\) in Theorem 3.2 is obtained by bounding \(R\). ## Acknowledgments This work was supported in part by the National Science Foundation under Grant DMS1812666.
2304.07654
Wilson-Fisher fixed points in presence of Dirac fermions
Wilson-Fisher expansion near upper critical dimension has proven to be an invaluable conceptual and computational tool in our understanding of the universal critical behavior in the $\phi ^4$ field theories that describe low-energy physics of the canonical models such as Ising, XY, and Heisenberg. Here I review its application to a class of the Gross-Neveu-Yukawa (GNY) field theories, which emerge as possible universal description of a number of quantum phase transitions in electronic two-dimensional systems such as graphene and d-wave superconductors. GNY field theories may be viewed as minimal modifications of the $\phi^4$ field theories in which the order parameter is coupled to relativistic Dirac fermions through Yukawa term, and which still exhibit critical fixed points in the suitably formulated Wilson-Fisher $\epsilon$-expansion. I discuss the unified GNY field theory for a set of different symmetry-breaking patterns, with focus on the semimetal-N\'eel-ordered-Mott insulator quantum phase transition in the half-filled Hubbard model on the honeycomb lattice, for which a comparison between the state-of-the-art $\epsilon$-expansion, quantum Monte Carlo, large-N, and functional renormalization group calculations can be made.
Igor F. Herbut
2023-04-15T23:20:42Z
http://arxiv.org/abs/2304.07654v1
# Wilson-Fisher fixed points in presence of Dirac fermions ###### Abstract Wilson-Fisher expansion near upper critical dimension has proven to be an invaluable conceptual and computational tool in our understanding of the universal critical behavior in the \(\phi^{4}\) field theories that describe low-energy physics of the canonical models such as Ising, XY, and Heisenberg. Here I review its application to a class of the Gross-Neveu-Yukawa (GNY) field theories, which emerge as possible universal description of a number of quantum phase transitions in electronic two-dimensional systems such as graphene and d-wave superconductors. GNY field theories may be viewed as minimal modifications of the \(\phi^{4}\) field theories in which the order parameter is coupled to relativistic Dirac fermions through Yukawa term, and which still exhibit critical fixed points in the suitably formulated Wilson-Fisher \(\epsilon\)-expansion. I discuss the unified GNY field theory for a set of different symmetry-breaking patterns, with focus on the semimetal-Neel-ordered-Mott insulator quantum phase transition in the half-filled Hubbard model on the honeycomb lattice, for which a comparison between the state-of-the-art \(\epsilon\)-expansion, quantum Monte Carlo, large-N, and functional renormalization group calculations can be made. ## I Introduction Critical behavior of the \(O(N)\) models for \(N=1,2,3\) in the physical three (\(d=3\)) dimensions is an inherently strong-coupling problem for which no obvious small paramater exists. Anybody who tried some real-space decimation procedure for two-dimensional Ising model, for example, has certainly experienced a feeling of frustration in having to terminate the generated series of ever-further-neighbors-couplings in a more or less ad hoc manner.[1] The Wilson-Fisher (WF) expansion[2] in powers of the parameter \(\epsilon=4-d\) was therefore a conceptual, and as it turns out, even a computational breakthrough map excellence. I can still recall my own sense of wonder when I first learned of this approach as an uninitiated undergraduate in the course on phase transitions. It seemed to me to be a truly imaginative idea to take an integer quantity such as dimension of space and consider it in a mathematically consistent and apparently useful way as a real number. At the same time, however, I could not see how a procedure that relied on the parameter such as \(\epsilon\) being small could be expected to be sensible even for its value as large as unity! Little did I know that I would spend a fair part of my professional life wrestling with precisely these two issues. Today the significance of the WF \(\epsilon\)-expansion around the upper critical dimension for general studies of critical phenomena cannot be overstated. At the cost of entertaining non-integer values of system's dimensionality \(d\) it allows one to directly follow the emergence of the non-trivial critical WF fixed point as the upper critical dimension is crossed from above, and to monitor for the relevance of all couplings at the critical point with the increase of \(\epsilon\). As long as the evolution of the WF fixed point is smooth, one can hope to rely on perturbation theory to extract the desired critical exponents in powers of \(\epsilon\). The unpleasant fact is that the series is certainly not convergent, but it is thought to be asymptotic. 
The first few terms often already provide a decent estimate of the universal quantities in \(d=3\), and even in \(d=2\), for the Ising model for example, where the exact Onsager's solution can be used for comparison. Elaborate procedures[3] for resummation of the series exist nowadays that yield the most accurate values of the critical exponents for various values of the parameter \(N\), when the expansion is pushed to higher order, the sixth order being the highest at the time of writing.[4] Even for the Ising model where a more accurate conformal bootstrap[5] amounts to an essentially exact solution in \(d=3\), such a resummed \(\epsilon\)-expansion is still competitive in accuracy. The success of the \(\epsilon\)-expansion for the \(O(N)\)\(\phi^{4}\) field theories is rooted in the fact that there is a single self-interaction coupling constant that becomes relevant at the non-interacting Gaussian fixed point as the upper critical dimension is crossed from above. One therefore needs only to track the evolution of the WF fixed point along a well defined line in the coupling space, and provided there are no other non-perturbative fixed points along the same line[6] the WF critical fixed point continues to exist at all values of \(\epsilon\). Such a smooth evolution is no longer guaranteed when there is more than one coupling in the theory. First, it is in principle possible that different couplings have their canonical dimensions vanish in different physical dimensions, in which case there would be no well-defined upper critical dimension around which to expand. Second, even if there is an upper critical dimension in the standard sense, the number of fixed points below it may depend on some fixed parameter of the system, such as the number of the field's components \(N\), for instance. An important early example of this was provided by the original Ginzburg-Landau theory for the complex scalar field, i. e. the superconducting order parameter, coupled to the fluctuating electromagnetic gauge field, also known as the scalar electrodynamics.[7; 8] In this canonical field theory[9] both the self-interaction coupling and the electromagnetic charge become relevant at the Gaussian fixed point below the upper critical dimension of \(d=4\), and for a number of complex fields \(n\) larger than the critical value \(n_{c}\) there is indeed a critical fixed point of the renormalization group (RG) flow, and the concomitant universal critical behavior. As \(n\to n_{c}+\), however, this critical point is approached in the coupling space by another, bicritical fixed point, until they coincide at \(n=n_{c}\). Both fixed points become complex for \(n<n_{c}\), when there is no longer a real-valued critical fixed point left, and only a runaway flow remains. Interestingly, the critical number of complex components \(n_{c}\) can itself be computed in the \(\epsilon\)-expansion,[10] and one finds \(n_{c}=182.95(1-1.752\epsilon+0.798\epsilon^{2}+0.362\epsilon^{3})+O(\epsilon ^{4})\), in the four-loop computation.[11] The series is obviously badly behaved, but using additional information about the behavior of the scalar electrodynamics near \(d=2\) the value of \(n_{c}\) can be estimated to be around twelve, with significant uncertainty in this number. 
The main point is that the fixed-point structure of the RG flow in this case depends crucially on the value of the parameter \(n\), with its critical value \(n_{c}\) itself being rather strongly dependent on \(\epsilon\), and consequently poorly known for \(\epsilon=1\). For different values of \(n\) the RG flow at small and large values of \(\epsilon\) therefore may or may not be smoothly connected. In this contribution I discuss a set of field theories where the \(O(N)\) order parameter is also coupled to soft modes such as the gauge field as in the above example of scalar electrodynamics, except that the modes are being fermionic instead bosonic. These field theories are believed to describe low-dimensional condensed matter systems which at low energies feature Dirac fermions, such as graphene or d-wave superconductors. The electronic Fermi surface collapses to a set of (Dirac) points, and the energy spectrum of fermionic quasiparticles becomes effectively relativistic, which facilitates a controlled field-theoretic treatment of quantum phase transitions that ensue with an increase of electron-electron interactions. A paradigmatic example is provided by the standard Hubbard model on the honeycomb lattice at filling one-half and at zero temperature. For weak on-site repulsive interaction \(U\) the electronic system is a paramagnetic semimetal, and presumably in the same ground state as in the real graphene layer. At high \(U\), on the other hand, the ground state is an antiferromagnetic Mott insulator, with a finite value of the three-component Neel order parameter. There is strong accumulated evidence that there exists a single critical value of the interaction \(U_{c}\), so that for \(U>U_{c}\) Dirac fermions acquire a relativistic mass-gap, and that the Neel order simultaneously develops. The resulting semimetal-insulator quantum critical point should be described by a model closely related to the Gross-Neveu model in 2+1 space-time dimensions.[12] Variants with Ising (\(N=1\)) and XY (\(N=2\)) order parameters also have realizations on the honeycomb lattice with fermion-fermion interactions suitably modified to include nearest- and the next-nearest-neighbor terms. Gross-Neveu-like models in 2+1 dimensions can be treated in \(1/N_{\psi}\) expansion, with \(N_{\psi}\) as the number of Dirac fermions. Alternatively, one can explicitly include the bosonic order parameter Yukawa-coupled to the Dirac fermions so that the theory features two coupling constants: the order-parameter self-interaction, and the Yukawa coupling, both marginally irrelevant in 3+1 space-time dimensions in the infrared.[13] A possible advantage of this formulation is that one can attempt the standard Wilson-Fisher expansion around the upper critical spatial dimension of three. The algebraic structure of the RG \(\beta\)-functions is similar to those in the scalar electrodynamics, with one important difference: fermionic statistics of Dirac fermions reverses the signs of the analogous terms in the scalar electrodynamics, so that the problematic collisions of the fixed points described above are avoided. This allows one to push the \(\epsilon\)-expansion to higher orders and to try to extract some quantitative information about the quantum critical points in the Hubbard, and the Hubbard-like models on honeycomb lattice. Some of these models can also be independently studied by sign-problem-free quantum Monte Carlo calculations, which can then be compared with the analytic results. 
The rest of the paper is organized as follows. In sec. 2, I describe the construction of the GNY theory for various order parameters on honeycomb lattice. In sec. 3 the WF \(\epsilon\)-expansion for general GNY theory is discussed, and one-loop results for the critical exponents are given. Sec. 4 gives a review of the higher-order results for the chiral-Heisenberg universality class, relevant for the Hubbard model on the honeycomb lattice, and compares them with the results of other analytical and numerical techniques. Further discussion and extensions of the GNY theory to other patterns of symmetry breaking are provided in sec. 5. Summary is given in the final sec. 6. ## II GNY field theories for graphene The motion of non-interacting electrons on the graphene's honeycomb lattice can be described by the simple tight-binding Hamiltonian \[H_{0}=-t\sum_{\vec{R},i,\sigma}\left[u_{\sigma}^{\dagger}(\vec{R})v_{\sigma}( \vec{R}+\vec{\delta}_{i})+{\rm h.c.}\right], \tag{1}\] with nearest-neighbor hopping amplitude \(t\).[14] Here, \(u\) and \(v\) are the electron annihilation operators at the two triangular sublattices of the honeycomb lattice, and the sum runs over the sites \(\vec{R}\) of the first triangular sublattice with position vectors \(\vec{R}_{1}=a\left(\sqrt{3}/2,-1/2\right)\) and \(\vec{R}_{2}=a\left(0,1\right)\). The lattice spacing \(a\) is set to \(a=1\) and the three nearest-neighbor vectors \(\vec{\delta}_{i}\) read \(\vec{\delta}_{1}=\left(1/(2\sqrt{3}),1/2\right)\), \(\vec{\delta}_{2}=\left(1/(2\sqrt{3}),-1/2\right)\) and \(\vec{\delta}_{3}=\left(-1/\sqrt{3},0\right)\). \(\sigma=\pm\) labels the third projection of the electron spin. The diagonalization of the Hamiltonian \(H_{0}\) yields the spectrum with two degenerate energy bands with the dispersion \(\epsilon_{\vec{k}}=\pm t|\sum_{i=1}^{3}\exp(i\vec{k}\cdot\vec{\delta}_{i})|\). At the corners of the Brillouin zone, given by the two points \(\vec{K}=\pm(2\pi/\sqrt{3},2\pi/3)\), the two energy bands touch linearly and isotropically, and give rise to two inequivalent Dirac points. Retaining only the Fourier modes near the Dirac points, the continuum low-energy effective theory for \(H_{0}\) can be written down in terms of the free Dirac Lagrangian \[L_{\psi}=\psi^{\dagger}(x)(1_{2}\otimes 1_{2}\otimes(\partial_{\tau}-i\sigma_{1} \partial_{1}-i\sigma_{2}\partial_{2})+O(\partial^{2}))\psi(x), \tag{2}\] where \(\sigma_{i}\) are the conventional Pauli matrices, \(1_{2}\) is a two-dimensional unit matrix, and the eight-component Dirac field \(\Psi^{T}=(\Psi_{+}^{T},\Psi^{T})\), with \(\psi_{\sigma}(x)=\int d^{D}eq^{iqx}\psi_{\sigma}(q)\) given by \(\psi_{\varphi}^{\dagger}(q)=\left[u_{\sigma}^{\dagger}(K+q),v_{\sigma}^{ \dagger}(K+q),iv_{\sigma}^{\dagger}(-K+q),-iu_{\sigma}^{\dagger}(-K+q)\right]\). The \(D=2+1\)-energy-momentum vector \(q=(\omega,\vec{q})\) collects together the Matsubara frequency \(\omega\) and the wavevector \(\vec{q}\), \(K=(0,\vec{K})\), and \(\tau\) represents the imaginary time. The reference frame is chosen so that \(q_{x}=\vec{q}\cdot\vec{K}/|\vec{K}|\) and \(q_{y}=(\vec{K}\times\vec{q})\times\vec{K}/|\vec{K}|^{2}\).[12] We have also set the Fermi velocity \(v_{F}=t\sqrt{3}/2\) to unity. With the above definition of the four-component Dirac fermions it is evident that the leading term in the low-energy Lagrangian is invariant under a global unitary transformation \[\psi(x)\rightarrow(U\otimes 1_{2})\psi(x), \tag{3}\] with the unitary matrix \(U\in SU(4)\). 
The Lagrangian is also symmetric under an arbitrary global change of the phase of the Dirac field, which of course implies the familiar particle number conservation. Hereafter I will for the reasons of economy of presentation assume that the particle-number \(U(1)\) symmetry is always preserved, and will not consider possible superconducting states.[15; 16] The general relativistic "mass-term" which if simply added by hand to \(L_{\psi}\) would gap out the Dirac fermions is then \[L_{\phi\psi}=\phi_{i}\psi^{\dagger}(x)(H_{i}\otimes\sigma_{3})\psi(x), \tag{4}\] where \(\phi_{i}\) are real constants ("masses") and \(H_{i}\) are either the fifteen Hermitian generators of \(SU(4)\), when \(i=1,2,...15\), or the unit matrix when \(i=0\). Summation over the repeated index is assumed. When \(\phi_{i}\neq 0\) for some \(i\neq 0\), \(L_{\psi}+L_{\phi\psi}\) has the symmetry reduced from \(SU(4)\) to \(U(1)\times SO(4)\). The masses \(\phi_{i}\)\(i=1,2,...15\) transform as the adjoint representation under \(SU(4)\), whereas \(\phi_{0}\) transforms as a scalar. The preserved particle-number \(U(1)\) symmetry implies that \(L_{\psi}+L_{\phi\psi}\) would describe the low-energy spectrum of an insulator. We may further discern the following broken symmetry states: 1) \(H=1_{2}\otimes 1_{2}\) corresponds to the quantum anomalous Hall state,[17] which violates only the time reversal symmetry, and otherwise preserves the entire \(SU(4)\), 2) \(H=1_{2}\otimes\sigma_{i}\), \(i=1,2,3\), correspond to the charge-density-wave[14] and the two Kekule bond-density-waves,[18] which break the valley-rotation (sometimes also called "chiral") \(SO(3)\) symmetry, but preserve the spin-rotation \(SO(3)\) and the time reversal symmetry, 3) \(H=\sigma_{i}\otimes 1_{2}\), \(i=1,2,3\), correspond to the anomalous spin-Hall state,[19] which breaks the spin-rotation \(SO(3)\) while preserving the valley-rotation \(SO(3)\) and the time reversal, and finally 4) \(H=\sigma_{i}\otimes\sigma_{j}\), \(i,j=1,2,3\), correspond to the spin-density-wave[12] and the triplet versions of the two Kekule bond-density-waves, which break both the valley-rotation \(SO(3)\) and the spin-rotation \(SO(3)\) symmetry, as well as the time reversal. The antiunitary time reversal operator in the above representation is given by \(T=(\sigma_{2}\otimes\sigma_{2}\otimes\sigma_{2})C\), where \(C\) stands formally for the complex conjugation. In terms of the \(SO(4)\simeq SO(3)\times SO(3)\) subgroup of the \(SU(4)\) symmetry group, with the two \(SO(3)\) groups as the spin-rotation and the valley-rotation symmetries, the above matrices transform as \((0,0)\), \((0,1)\), \((1,0)\), and \((1,1)\) irreducible representations, respectively. The large \(SU(4)\) symmetry of the low-energy Dirac Hamiltonian for electrons in graphene is an artifact of the linearization of the energy dispersion, and the \(O(\partial^{2})\) term in Eq. (2) already reduces it. Explicitly, it reads \[O(\partial^{2})=1_{2}\otimes\sigma_{3}\otimes(\sigma_{1}(\partial_{1}^{2}- \partial_{2}^{2})-\sigma_{2}\partial_{1}\partial_{2}), \tag{5}\] so the \(SU(4)\) symmetry is reduced to \(SO(3)\otimes SO(2)\), which are the spin-rotations, and the translation symmetry in disguise,[20] the latter generated by \(1_{2}\otimes\sigma_{3}\). The electron-electron interaction terms which derive from the Coulomb repulsion can also be expected to respect only the same reduced symmetry. 
One is therefore led to consider the expectation values of the following fermion bilinears to be possibly dynamically generated at strong interactions: 1) \(\phi_{cdw}=\langle\psi^{\dagger}(x)(1\otimes\sigma_{3}\otimes\sigma_{3})\psi(x)\rangle\), which would preserve the group \(SO(3)\times SO(2)\), but break the discrete (Ising) sublattice symmetry \(\psi\rightarrow(1_{2}\otimes 1_{2}\otimes\sigma_{1})\psi\), \(\partial_{2}\rightarrow-\partial_{2}\), which exchanges the two triangular sublattices of the honeycomb lattice. Generation of such a finite bilinear average is favored, for example, by a sufficiently strong nearest-neighbor repulsion.[12] 2) \(\phi_{kek,1}=\langle\psi^{\dagger}(x)(1\otimes\sigma_{1}\otimes\sigma_{3}) \psi(x)\rangle\) and \(\phi_{kek,2}=\langle\psi(x)^{\dagger}(1\otimes\sigma_{2}\otimes\sigma_{3})\psi(x)\rangle\), which preserve spin-rotation \(SO(3)\), but break the translation \(SO(2)\) subgroup of the valley-rotation symmetry.[18] This order parameter is dynamically induced by sufficiently strong nearest-neighbor and next-nearest-neighbor repulsions, when they are of comparable strength.[21] 3) \(\phi_{sdw,i}=\langle\psi^{\dagger}(x)(\sigma_{i}\otimes\sigma_{3}\otimes\sigma_{3 })\psi(x)\rangle\), which breaks the spin-rotation \(SO(3)\), preserves translation \(SO(2)\), and breaks sublattice symmetry. A finite vector Neel order parameter \(\vec{\phi}_{sdw}\) is induced by sufficiently strong on-site Hubbard repulsion.[12] Note that the factor in the charge-density-wave mass-matrix \(1\otimes\sigma_{3}\) can be transformed as \(U_{1}(1\otimes\sigma_{3})U_{1}^{\dagger}=\sigma_{3}\otimes\sigma_{3}\), with some unitary \(U_{1}\in SU(4)\). Similarly, the factors in the two Kekule bond-density-wave mass-matrices, \(1\otimes\sigma_{i}\), \(i=1,2\), can be transformed as \(U_{2}(1\otimes\sigma_{i})U_{2}^{\dagger}=\sigma_{i}\otimes\sigma_{3}\), with a different unitary transformation \(U_{2}\in SU(4)\). Both transformations belong to the \(SU(4)\), the group of symmetry of the Dirac Lagrangian \(L_{\psi}\). One can therefore study all three above symmetry-breaking quantum phase transitions which would be induced by increasing different components of electron-electron interactions by considering the single GNY field theory in the following form: \[L=L_{\psi}+L_{\phi\psi}+\mathrm{L}_{\phi}, \tag{6}\] with \[L_{\phi\psi}=g\phi_{i}(x)\psi^{\dagger}(x)(\sigma_{i}\otimes\sigma_{3}\otimes \sigma_{3})\psi(x), \tag{7}\] and \[L_{\phi}=\frac{1}{2}((\partial_{\mu}\phi_{i}(x))^{2}+m^{2}\phi_{i}(x)\phi_{i} (x))+\lambda(\phi_{i}(x)\phi_{i}(x))^{2}, \tag{8}\] by restricting the index \(i\) to take the values \(i=1\) (charge-density-wave), \(i=1,2\) (Kekule bond-density-wave), and \(i=1,2,3\) (Neel). Index \(\mu=0,1,2\) goes over imaginary time and space dimensions. The tuning parameter for the transition is \(m^{2}\sim(V_{c}-V)\), where \(V\) is the strength of the interaction relevant to the particular phase transition, and \(V_{c}\) is its (non-universal) critical value. Coupling \(\lambda\) is the order parameter's self-interaction. The form of the Yukawa coupling of the bosonic order parameter \(\phi_{i}(x)\) to Dirac fermions implies that for a uniform order parameter \[\langle\phi_{i}(x)\rangle=-\frac{g}{m^{2}}\langle\psi^{\dagger}(x)(\sigma_{i} \otimes\sigma_{3}\otimes\sigma_{3})\psi(x)\rangle+O(\lambda\langle\phi_{j} \phi_{j}\phi_{i}\rangle), \tag{9}\] so that the system becomes a broken-symmetry Mott insulator when \(V>V_{c}\), i. e. when \(m^{2}<0\). 
Anticipating some of the results that follow we have also set the velocity of the bosonic order parameter to be the same as the velocity of Dirac fermions, that is to unity. ## III \(\epsilon\) - expansion for GNY By comparing the Dirac and the Yukawa terms in the GNY field theory one finds that in terms of their canonical dimensions \[g\phi\sim L^{-1}, \tag{10}\] where \(L\) is a length. Comparing the derivative and the self-interaction terms, on the other hand, \[\lambda\phi^{2}\sim L^{-2}. \tag{11}\] Eliminating the order parameter field yields therefore that in terms of their canonical dimensions \[\lambda\sim g^{2}. \tag{12}\] The canonical dimensions of the self-interaction \(\lambda\) and of the square of the Yukawa coupling \(g\) are the same, and \(\lambda\sim g^{2}\sim L^{d-3}\), where \(d\) is the number of spatial dimensions. In the physical case, \(d=2\), and they are both infrared-relevant couplings at the Gaussian fixed point. Extending \(d\) to real values and in particular following Wilson and Fisher [2] and assuming it to be near and below \(d=3\) would thus bring both canonical dimensions to be small and positive. This opens up the possibility for the \(\epsilon\)-expansion for the GNY theory of the order parameter coupled to the Dirac fermions. [13] Standard one-loop computation then leads to the RG flow, [22] \[\beta_{\lambda}=\frac{d\lambda}{d\ln b}=\epsilon\lambda-4N_{\psi}y\lambda-4(N +8)\lambda^{2}+N_{\psi}y^{2}, \tag{13}\] \[\beta_{y}=\frac{dy}{d\ln b}=\epsilon y-(2N_{\psi}+4-N)y^{2}, \tag{14}\] with the elimination of both the order parameter's and Dirac field's modes in the momentum shell \(\Lambda/b<q<\Lambda\), with \(\Lambda\ll|\vec{K}|\) as the high-energy cutoff in the theory. We left \(N_{\psi}\) as the general number of Dirac fermions, with \(N_{\psi}=2\) in graphene, and \(N\) as the number of order parameter components: \(N=1,2,3\), for Ising (charge-density-wave), XY (Kekule), and Heisenberg (Neel) order parameters, respectively. We have also redefined the coupling constants as \(\lambda/(8\pi^{2}\Lambda^{\epsilon})\rightarrow\lambda\), and \(y=g^{2}/(8\pi^{2}\Lambda^{\epsilon})\). Since the one-loop function \(\beta_{y}\) is independent of the self-interaction \(\lambda\), the Yukawa coupling \(y\) is equally relevant at the standard \(O(N)\) WF fixed point at \(y=0\) and \(\lambda=\epsilon/4(N+8)\) as it is at the Gaussian fixed point \(y=\lambda=0\). Starting anywhere at \(y>0\) and \(\lambda>0\) the RG flow in the critical plane \(m^{2}=0\) is attracted to the new critical fixed point where both \(y=y^{*}=O(\epsilon)\) and \(\lambda=\lambda^{*}=O(\epsilon)\). At this fixed point and for \(N_{\psi}=2\) the order parameter's anomalous dimension is \[\eta_{\phi}=\frac{4\epsilon}{8-N}, \tag{15}\] and thus of the order \(O(\epsilon)\), in contrast to its \(O(\epsilon^{2})\) value at the standard WF fixed point. [9] One may therefore expect \(\eta_{\phi}\) at semimetal-insulator quantum phase transitions in graphene not to be particularly small, in contrast to the usual \(O(N)\) universality classes. The correlation-length critical exponent may also be evaluated, and to the leading order it equals \[\nu=\frac{1}{2}+\frac{3(4+N)}{(8-N)(8+N)}\epsilon. \tag{16}\] The Lorentz invariance of the GNY theory implies that the dynamical critical exponent is exactly \[z=1. \tag{17}\] The hyperscaling is expected to hold, and therefore the remaining critical exponents are given by the usual scaling laws. 
[9] The fermion propagator also acquires an anomalous dimension; at the critical point it behaves as \(G_{f}^{-1}\sim(\omega^{2}+k^{2})^{(1-\eta_{\psi})/2}\), with \(\eta_{\psi}\) as the fermion's anomalous dimension.[12] To the leading order in \(\epsilon\) one finds it \[\eta_{\psi}=\frac{3\epsilon}{2(8-N)}, \tag{18}\] and thus to be comparable to the order parameter's anomalous dimension. The scaling implies that the residue of the quasiparticle pole on the semimetallic side vanishes as a power-law[12] \[Z\sim(m^{2})^{\nu\eta_{\psi}}. \tag{19}\] Similarly, the velocity of the Dirac fermions scales as \[v_{F}\sim(m^{2})^{\nu(z-1)}. \tag{20}\] The Lorentz invariance of the GNY theory thus implies that the Dirac velocity remains finite at the transition, while the residue of the Dirac quasiparticle's pole vanishes continuously as the critical point is approached from the semimetal side. The systematic expansion in \(\epsilon\) has been pursued to higher order,[23; 24; 25] the highest at the moment of writing being the fourth order in \(\lambda\) and \(y\). For the "chiral-Ising" (\(N=1\)) and the "chiral-Heisenberg" (\(N=3\)) GNY theories, for example, the four-loop computation entails summing up 31671 Feynman diagrams. Such a computationally intensive calculation is possible only because of the recent breakthroughs in automatization of high-order perturbative calculations designed for the standard model of particles physics.[25] ## IV Chiral-Heisenberg University Class We focus next on the chiral-Heisenberg universality class, i. e. the GNY theory with \(N=3\), which is supposed to describe the quantum phase transition between the Dirac semimetal and the Neel-ordered Mott insulator in the canonical Hubbard model on honeycomb lattice, at half-filling. The fourth-order \(\epsilon\)-expansion yields the critical exponents[25] \[\nu^{-1}=2-1.527\epsilon+0.4076\epsilon^{2}-0.8144\epsilon^{3}+2.001\epsilon ^{4}, \tag{21}\] \[\eta_{\phi}=0.8\epsilon+0.1593\epsilon^{2}+0.02381\epsilon^{3}+0.2103\epsilon ^{4}, \tag{22}\] \[\eta_{\psi}=0.3\epsilon-0.0576\epsilon^{2}-0.1184\epsilon^{3}+0.04388\epsilon ^{4} \tag{23}\] \[\omega=\epsilon-0.4830\epsilon^{2}+0.9863\epsilon^{3}-2.627\epsilon^{4}, \tag{24}\] where we included the leading-correction-to-scaling-exponent \(\omega\) as well. One may immediately observe the usual poor convergence properties of the series: for the physical value of \(\epsilon=1\) the \(\epsilon^{3}\) terms become larger than the preceding \(\epsilon^{2}\) terms in three out of the four displayed series. Possibly useful estimates may be obtained therefore by simply terminating the series at the order \(O(\epsilon^{2})\). This leads to \(\nu=1.13\), \(\eta_{\phi}=0.96\), \(\eta_{\psi}=0.24\), and \(\omega=0.52\). (Expanding \(\nu\) and terminating again at the second order in \(\epsilon\) would, for example, lead to a similar value of \(\nu=1.07\).) The crudeness of the approximation notwithstanding, the results are in the same ballpark as the results of the more elaborate summation using Pade approximants; although the series are probably too short to give stable results, [3/1] Pade approximant for example yields \(\nu=1.2352\), \(\eta_{\phi}=0.9563\), and \(\eta_{\psi}=0.1560\).[25] The Hubbard model on the honeycomb lattice at the filling one-half can also be studied directly by the auxiliary-field quantum Monte Carlo method, as the calculation does not suffer from the sign problem. 
Large-scale calculations[26; 27; 28] support the overall picture provided by the GNY theory: 1) there is a direct continuous quantum phase transition between the semimetallic and the insulating antiferromagnetic phases, 2) the Neel order parameter scales the same way with the size of the system and the deviation from the critical point as the fermion single-particle gap, 3) the values of the critical exponents are distinctly unconventional, with both the correlation length exponent \(\nu\) and the order-parameter's anomalous dimension \(\eta_{\phi}\) close to unity, 4) the residue of the Dirac quasiparticle pole is reduced continuously as the critical point is approached from the semimetallic side, while the Fermi velocity remains finite. While in broad agreement, different Monte Carlo calculations still mutually disagree on the precise values of the critical exponents, which also differ somewhat from the field-theoretic estimates based on the GNY theory. For example, ref.[28] finds \(\nu=1.02(1)\), \(\eta_{\psi}=0.20(2)\), whereas ref.[27] finds \(\nu=0.84(4),\eta_{\phi}=0.70(15)\), and ref.[29] gives \(\nu=1.185(43)\) and \(\eta_{\phi}=0.71(5)\). It is encouraging, on the other hand, that the results seem to be independent of the details of the microscopic model, and to depend only on the broken symmetry and the number of Dirac fermions, just as the GNY field theory would imply. This way the Hubbard model on the honeycomb and the staggered-flux square lattice, which both feature two Dirac fermions and the Neel-ordered phase, but have very different critical values of the interaction, for example, show numerically identical finite-size scaling functions and the critical exponents.[27] Even starting from an entirely different single-particle Hamiltonian, such as d-wave Cooper-paired electrons at half-filled square lattice, which lacks particle-number \(U(1)\) symmetry but does have the same number of Dirac fermions, seems to lead to the quantum phase transition in the same chiral-Heisenberg universality class with an increase of Hubbard on-site repulsion \(U\): the values of the critical exponents are \(\nu=1.05(5)\), \(\eta_{\phi}=0.75(4)\), \(\eta_{\psi}=0.23(4)\).[30] Discussion While we have postulated Lorentz invariance of the GNY field theory from the outset, and both the velocities of the order parameter and the Dirac fermions have been set to unity, one may also assume the two velocities to be different. Within the \(\epsilon\)-expansion they are then found to flow to the same value in the infrared, both if their difference is initially small [22], or even large [31]. In fact, the relativistic invariance in the GNY-like theories becomes restored in the infrared under very general conditions, with and without couplings of the order parameter and the Dirac fermions to the fluctuating gauge-field, with the gauge field having yet another different bare velocity, in 3+1 dimensions and below it. [31; 32] It thus seems safe to assume the breaking of relativistic invariance to be an irrelevant perturbation at the critical point in the physical 2+1 dimensions. Similarly, the long-range \(\sim 1/r\) tail of the Coulomb interaction between electrons also represents an irrelevant perturbation [31; 33], although the detailed interplay between the Hubbard on-site interaction \(U\) and the long-range part may be quite intricate. [33] The effect of Coulomb interaction's long-range tail on the GNY criticality is similar as at the standard \(O(N)\) WF quantum critical points without Dirac fermions. 
[34; 35; 9] The Gross-Neveu model and in particular the chiral-Heisenberg universality class has been studied also in the large-\(N_{\psi}\) limit. [36; 37; 38] The correlation length critical exponent and the order parameter anomalous dimensions have been computed [38] to the order \(O(1/N_{\psi}^{2})\); in \(2+1\) dimensions and for \(N_{\psi}=2\) one finds \(\nu=1.182\) and \(\eta_{\phi}=1.184\). The fermion's anomalous dimension is found to the order \(O(1/N_{\psi}^{3})\), and for the same parameters \(\eta_{\psi}=0.105\). Finally, the functional renormalization group has also been brought to bear [39]: the most elaborate computation to date yields \(\nu=1.26\), \(\eta_{\phi}=1.032\), and \(\eta_{\psi}=0.071\). [40] Within the last decade conformal bootstrap has led to the most accurate values of the critical exponents for the Ising model, [5] and has become competitive with the high-order \(\epsilon\)-expansion for the XY and Heisenberg. It therefore seems natural to attempt to extend it to the GNY field theories. While this has not been done at the time of writing for the chiral-Heisenberg model, it has been done for a close cousin of the chiral-Ising field theory, which in the context of graphene, for example, describes the quantum phase transition into the quantum anomalous Hall state. [41] This GNY theory would correspond to the Ising (\(N=1\)) order parameter in Eq. (8) coupled to the fermion bilinear as in \[L_{\phi\psi}=g\phi(x)\psi^{\dagger}(x)(1_{2}\otimes 1_{2}\otimes\sigma_{3}) \psi(x). \tag{25}\] Both this and the chiral-Ising theory describe the spontaneous symmetry breaking of the Ising sublattice symmetry, but with the order parameter coupled to different fermion bilinears; a finite \(\langle\psi^{\dagger}(1_{2}\otimes 1_{2}\otimes\sigma_{3})\psi\rangle\) would violate the time reversal symmetry as well. It has been argued [41] that the two GNY Ising theories differ at higher order in the \(1/N_{\psi}\) expansion, and therefore should not be expected to have the identical critical behavior; on the other hand, the actual difference in the exponents could be expected to be small. Indeed, the exponents extracted from the four-loop \(\epsilon\)-expansion for the chiral-Ising model [42] agree within their error bars with the bootstrap values for the anomalous Hall transition, and even the difference with the quantum Monte Carlo calculations [43] is of the order of few percent. Other quantum phase transitions have also been addressed within the framework of the GNY field theory. The quantum phase transition from the Dirac semimetal into the quantum spin Hall state on the honeycomb lattice also exhibits breaking of spin-rotational symmetry, but with the vector order parameter coupled to a different Dirac bilinear \(\sim\psi^{\dagger}(\sigma_{i}\otimes\sigma_{3}\otimes\sigma_{3})\psi\). [44] The transition into the nematic state that breaks rotational symmetry but remains gapless has also been studied, both numerically and analytically. [45; 46; 47] Both phase transitions appear to be continuous and to be described by an \(O(\epsilon)\) fixed point of the RG flow in the corresponding GNY theories. Lattice models that circumvent the Nielsen-Ninomiya [48] fermion-doubling theorem and display the transitions involving a single two-component Dirac fermion, with or without spin, have also been put forward [49; 50], and studied by Monte Carlo methods. They corroborate and extend further the physical picture implied by the GNY field theory and discussed above. 
The GNY phase transitions in presence of quenched disorder [51] or cubic terms that could render the transition discontinuous have also been addressed. [52; 53; 54; 55] Multicritical behavior in presence of Dirac fermions has been studied as well. [56; 57] Surprisingly, emergence of larger symmetries at the criticality induced by Dirac fermions has been found within \(\epsilon\)-expansion. [58] Although not discussed here, one can also formulate a GNY-type field theory for the transition into the s-wave superconducting state [59; 60]. Finally, GNY-like field theories with fermions with quadratic instead of linear Dirac energy dispersion have also been considered, and their \(O(\epsilon)\) fixed points identified. [61; 62] ## VI Summary In conclusion, we reviewed the construction and the applications of the Gross-Neveu-Yukawa field theories for bosonic order parameters coupled to Dirac fermions, primarily as they arise in the system of interacting electrons on honeycomb lattice at the filling one half. These field theories generically exhibit critical fixed points points that are not of the standard \(O(N)\) variety, but which nevertheless can be identified and systematically studied using the time-honored expansion around the upper critical dimension proposed by Kenneth Wilson and Michael Fisher more than fifty year ago. This extends the relevance of the method of the \(\epsilon\)-expansion to the domain of quantum many-body systems and to the fundamental electronic models such as the Hubbard model, where the hope is that it could prove just as fertile as it has been in the classical statistical physics. ## VII Acknowledgement The author is grateful to Shaffique Adam, Fakher Assaad, Igor Boettcher, Laura Classen, John Gracey, Martin Hohenadler, Lukas Janssen, Vladimir Juricic, Bitan Roy, Michael Scherer, Francesco Parisen Toldin, and Oskar Vafek for many useful discussions and collaborations on the subject of this review, and especially to Michael Scherer for also reading the manuscript. This work has been supported by the NSERC of Canada.
2303.02953
A possible solution to the Hubble tension from quantum gravity
We investigate the relevance of quantum gravity during inflation to address the Hubble tension that arises from Planck 2018 and SH0ES data sets. We show that the effect of quantum gravity during inflation can increase the rate of change of $H_0$, thereby accounting for a wide range of observed $H_0$. Further, we show that due to the quantum gravity effect on inflation, the temperature at the onset of reheating can be lower than the standard case, causing delays in the reheating process. The role of quantum gravity is inevitable in settling the Hubble tension. The results of the present study may find use in resolving the Hubble tension, in validating inflationary model and quantum gravity.
Anupama B, P K Suresh
2023-03-06T07:46:58Z
http://arxiv.org/abs/2303.02953v2
# A possible solution to the Hubble tension from quantum gravity ###### Abstract We investigate the relevance of quantum gravity during inflation to address the Hubble tension that arises from Planck 2018 and SH0ES data sets. We show that the effect of quantum gravity during inflation can increase the rate of change of \(H_{0}\), thereby accounting for a wide range of observed \(H_{0}\). Further, we show that due to the quantum gravity effect on inflation, the temperature at the onset of reheating can be lower than the standard case, causing delays in the reheating process. The role of quantum gravity is inevitable in settling the Hubble tension. The results of the present study may find use in resolving the Hubble tension, in validating inflationary model and quantum gravity. Keywords:Hubble tension, inflation, quantum gravity, effective field theory ## 1 Introduction The twentieth century discovery of expansion of the universe by Edwin Hubble has tremendously improved the understanding of the universe in a great way and has helped in strengthening the foundations of modern observational cosmology. This historical discovery is not complete without considering the contributions of Henrietta Leavitt, Vesto Slipher and George Lemaitre whose works were detrimental in developing the distance ladder method for measuring intergalactic distances [1], finding the recessional velocities of distant galaxies [2] and associating it with expansion of the universe [3]. All these efforts along with the observations made by Hubble led to the Hubble-Lemaitre law [4], a linear relation between the radial velocity of galaxies and their distances using the present value of the Hubble parameter (\(H_{0}\)) [5]. Hubble's initial estimate for \(H_{0}\) was very high (\(\approx\) 500 Km s\({}^{-1}\) MPc\({}^{-1}\)) due to errors in the calibration. \(H_{0}\) can be estimated by employing many modern techniques, which includes the method of CMB and calibration of distance ladder using standard rulers [6]. The value of \(H_{0}\) obtained from CMB calibrated observations of Planck 2018 in the light of \(\Lambda\)CDM model is 67.36 \(\pm\) 0.54 Km s\({}^{-1}\) MPc\({}^{-1}\)[7] and the SH0ES (Supernova \(H_{0}\) for the Equation of State) team has estimated the late universe value of \(H_{0}\) as 74.03 \(\pm\) 1.04 Km s\({}^{-1}\) MPc\({}^{-1}\)[8], which disagree to a level of 5\(\sigma\). This incompatibility between the indirect measurements and the direct measurements of \(H_{0}\) is known as the Hubble tension. Inspite of the improvements in the detection mechanisms, the discrepancy still exists, pointing towards a growing tension. At present the disagreement is statistically significant creating a real tension among the cosmologists. Resolving the Hubble tension has paramount importance in cosmology as the value of \(H_{0}\) plays a crucial role in bridging the theory and observations related to the determination of size, age and expansion rate of the universe. With the developments in precision cosmology and the modern methods of astronomical observation and techniques, we can assume that both the large scale and local measurements of \(H_{0}\) are reliable. 
In view of this, many attempts have been made to resolve the Hubble tension such as taking into account the effect of local inhomogenity [9], early dark energy models [10], modified gravity theory [11], phantom cosmology (w \(<-\) 1) and the increase in the number of relativistic degrees of freedom due to effective operators [12] and vaccum energy interaction with matter and radiation [13]. While one set of researchers argue that the problem lies in the assumptions made in the standard \(\Lambda\)CDM model of cosmology others are searching for new Physics. New measurements of \(H_{0}\) from the shadows of supermassive black holes [14], grey sirens [15] and ellipsoidal geometry of the universe are also in progress [16]. We assume that at very early stage, the universe underwent an accelerated expansion in a very short span of time known as inflation. Usually, the inflation is described by a solo scalar field. The variations in this field can have energy higher than the Planck scale (\(m_{pl}\)), the energy scale where the quantum gravity effect is supposed to be dominant as implied by the Lyth bound [18] \[\Delta\varphi\gtrsim m_{pl}\sqrt{\frac{r}{4\pi}}. \tag{1}\] The recent CMB anisotropic measurements rule out most of the single scalar field based inflation and multi fields inflationary model can be adopted as an alternate remedy to it. Among the multi fields inflationary models, hybrid inflation receives much attention in explaining the gap between inflation and particle production [17]. The recent study on hybrid inflation shows that the Hubble parameter during different epochs of evolution of the universe can be related [19]. Therefore in the present work we attempt to address the Hubble tension using hybrid inflationary model by incorporating quantum gravity effect in the framework of effective field theory (EFT). We investigate the Hubble parameter during the inflation (\(H_{I}\)) and during the phase transition (\(H_{T}\)) in terms of quantum gravity effect in view of Planck 2018 and SH0ES data. We show that quantum gravity influence \(H_{I}\) whereas it does not affect \(H_{T}\). Therefore we explore the possibility of accounting a wide range of observed \(H_{0}\) values from various estimates. Since quantum gravity is sensitive to inflation in the EFT framework its reflection is expected on the immediate stage like reheating. This impact on \(H_{I}\) can be tested by analysing its consequence on reheating by examining the reheating e-folding number and reheating temperature. We show that the temperature during the onset of reheating can be lowered due to quantum gravity resulting in a delayed reheating. Throughout the paper we follow c = G = \(\hbar\) =1. ## 2 The Hybrid Inflationary Model The concept of inflation was introduced to resolve the major problems of standard cosmology like flatness problem, horizon problem, fine tuning problem, etc [20]. More than hundred models have been proposed so far but single field models alone are not sufficient to address all the problems of standard model of cosmology successfully along with various observational results [21]. Multi fields models are introduced to overcome this issue. Among these models, the hybrid inflationary model consisting of two scalar fields (\(\varphi\), \(\sigma\)) is considered reasonable to study the inflation, where the slow roll of the field (\(\varphi\)) is responsible for inflation and the waterfall field (\(\sigma\)) leads to symmetry breaking and phase transition [17]. 
The potential of hybrid inflationary model is given by [17] (see figure (1)) \[V(\varphi,\sigma)=\frac{1}{4\lambda}\bigg{(}M^{2}-\lambda\sigma^{2}\bigg{)}^{ 2}+\frac{1}{2}g^{2}\varphi^{2}\sigma^{2}+\frac{1}{2}m^{2}\varphi^{2}, \tag{2}\] \(g\) is the coupling constant for the interaction between \(\varphi\) and \(\sigma\). The critical value of the field \(\varphi\) is found to be \(\varphi_{c}=\frac{M}{g}\). In the hybrid model, the mechanism of inflation begins when \(\varphi>\varphi_{c}\) which is followed by the slow roll of \(\varphi\) towards \(\varphi_{c}\). At \(\varphi=\varphi_{c}\), the inflation comes to an end. When \(\varphi<\varphi_{c}\) (waterfall regime) [22], symmetry breaking occurs and both the fields roll down to their respective true vacuum \(\varphi\)\(\rightarrow\)\(0\) and \(\sigma\)\(\rightarrow\)\(\frac{M}{\sqrt{\lambda}}\) rapidly. During the initial stage of inflation (\(\varphi>\varphi_{c}\) ) the potential is nearly flat in the direction of the field \(\varphi\) and is steeper in the direction of \(\sigma\). As a result, the field \(\sigma\) immediately settles in the false vacuum (\(\sigma=0\)) and \(\varphi\) rolls slowly subjected to quantum fluctuations. Therefore \(\varphi\) can be written as \[\varphi(x,t)=\varphi(t)+\delta\varphi(x,t). \tag{2}\] The magnitude of these fluctuations in the Fourier space is given by [23] \[|\delta\varphi_{k}|^{2}\ \simeq\ \frac{H_{I}^{2}}{2k^{3}}\bigg{(}\frac{k}{aH_{I} }\bigg{)}^{\frac{2m^{2}}{3H_{I}^{2}}}, \tag{3}\] and the variance of the scalar field fluctuations is given by \[<|\delta\varphi_{k}|^{2}>\ \simeq\ \frac{3H_{I}^{4}}{8\pi^{2}m^{2}}\bigg{[}1-e^{ -\frac{2m^{2}N}{3H_{I}^{2}}}\bigg{]}, \tag{4}\] where \(H_{I}\) is the Hubble parameter during inflation, \(k\) is the wavenumber for the mode that exits the horizon and \(N\) is the e-folding number that amounts the duration of inflation. Inflation occurs for a long period (\(N\) is very high) therefore equation (4) becomes \[<|\delta\varphi_{k}|^{2}>\ \simeq\ \frac{3H_{I}^{4}}{8\pi^{2}m^{2}}. \tag{5}\] Figure 1: Schematic representation of hybrid inflationary potential. ince the inflation is governed by the field \(\varphi\), the relevant potential can be written as \[V(\varphi)=\frac{M^{4}}{4\lambda}+\frac{1}{2}m^{2}\varphi^{2}. \tag{6}\] The quantum fluctuations can dominate over homogenous part of the field. Therefore using equation (5) in (6) the potential can be rewritten as \[V(\varphi)\ \simeq\ \frac{1}{2}m^{2}<|\delta\varphi_{k}|^{2}>\ \simeq\ \frac{3H_{I}^{4}}{16\pi^{2}}. \tag{7}\] We compute the Hubble parameter during inflation using the equation (7) in the Friedmann's equation for a flat FLRW universe and obtain [19] \[H_{I}=(\Omega_{\Lambda})^{\frac{1}{4}}\sqrt{4\pi m_{pl}\ H_{0}}, \tag{8}\] where \(\Omega_{\Lambda}\) is the density parameter for dark energy, which is the current dominant energy driving the acceleration of universe. We study the obtained Hubble parameter during inflation for a range of values of the present Hubble parameter estimated from Planck 2018 and SH0ES data. The results are presented in figure (2). #### Phase transition The initial conditions of inflation is chaotic but as soon as \(\varphi<\varphi_{c}\), ordering becomes more and symmetry becomes less. In hybrid inflation, this happens instantaneously through waterfall mechanism known as phase transition and the symmetry is said to be broken. The system settles down in the lowest energy state (true vacuum) of the fields. 
The hybrid inflation results in spontaneous symmetry breaking and first order phase transition, driven by the vacuum energy density [17]. Therefore the potential during the phase transition is \[V_{T}=V(0,0)=\frac{M^{4}}{4\lambda}\simeq\frac{H_{T}^{2}}{4}, \tag{9}\] Figure 2: Variation of the Hubble parameter during inflation (\(H_{I}\)) for a range of present Hubble parameter (\(H_{0}\)) with Planck 2018 and SH0ES data. here \(H_{T}\) is the Hubble parameter during phase transition. \(H_{T}\) can be obtained using equation (9) in the Friedmann's equation for a flat FLRW universe as \[H_{T}=\sqrt{12m_{pl}^{2}\Omega_{\Lambda}}\ H_{0}. \tag{10}\] We study the Hubble parameter during phase transition for a range of values of the present Hubble parameter obtained from Planck 2018 and SH0ES data. The results are presented in figure (3). ## 3 Reheating At the end of inflation the universe was devoid of any matter and further the temperature dropped down to a level which was not sufficient to trigger the big bang nucleosynthesis. To initiate the thermonuclear reactions, temperature of the universe must be greater than 10 MeV [24]. In order to achieve this threshold temperature a mechanism known as reheating is required. Reheating is the phase between the end of inflation and the beginning of radiation dominated epoch where the energy density of the inflaton field is transferred to the matter and thermalization occurs. The dynamics of the energy density of a cosmological fluid is governed by the time dependent effective equation of state (w). It can be used to define the two important parameters of reheating, the reheating e-folding number (\(N_{re}\)) and the reheating temperature (\(T_{re}\)). The initial stage of reheating where the scalar particles are produced is called preheating and can be described using two scenarios. One is the perturbative decay of the inflaton as it oscillates near the minimum of its potential [25] and the other scenario involves large amount of particle production via non perturbative processes like tachyonic instability [26], parametric resonance [27; 28] and instant preheating [29]. Effective preheating is necessary to achieve the required reheating temperature. The duration of preheating is expressed in terms of the preheating e-folding number (\(N_{pre}\)) [30] and the e-folding number for thermalization (\(N_{th}\)). Preheating and thermalization together constitutes the reheating Figure 3: Variation of the Hubble parameter during phase transition (\(H_{T}\)) for a range of present Hubble parameter (\(H_{0}\)) with Planck 2018 and SH0ES data. and can be written as \[N_{re}=N_{pre}+N_{th}, \tag{1}\] where \[N_{pre}=\bigg{[}61.6-\frac{1}{4}\ln\bigg{(}\frac{V_{end}}{H_{k}^{4}}\bigg{)}-N \bigg{]}-\bigg{(}\frac{1-3\mathrm{w}}{4}\bigg{)}N_{th}. \tag{2}\] In the present work we assume instantaneous preheating (\(N_{pre}\to 0\) ), which gives the reheating e-folding number as \[N_{re}=N_{th}=\frac{4}{1-3\mathrm{w}}\bigg{[}61.6-\frac{1}{4}\ln\bigg{(}\frac{V _{end}}{H_{k}^{4}}\bigg{)}-N\bigg{]}, \tag{3}\] where \(V_{end},H_{k}\) and \(N\) can be written in terms of the scalar spectral index (\(n_{s}\)) and amplitude of the scalar power spectrum (\(A_{s}\)) [24, 31, 32] as \[V_{end} = 6\pi^{2}m_{pl}^{2}A_{s}\bigg{(}\frac{1-n_{s}}{2}\bigg{)}^{2} \tag{4}\] \[H_{k} = 2\pi\sqrt{A_{s}\ m_{pl}^{4}\bigg{(}\frac{V^{\prime}}{V}\bigg{)}^{2}} \tag{5}\] \[H_{k} = \frac{\pi^{2}\ m\ m_{pl}^{2}}{H_{I}^{2}}\sqrt{\frac{128\ A_{s}}{3}} \tag{6}\] \[N = \frac{2}{1-n_{s}}. 
\tag{7}\] By substituting equations (4), (6) and (7) in equation (3) we get \[N_{re}=\frac{4}{1-3\mathrm{w}}\bigg{[}61.6-\frac{1}{4}\ln\bigg{(}\frac{6912\ \Omega_{\Lambda}^{2}\ (1-n_{s})^{2}}{32768\ \pi^{2}\ A_{s}}\bigg{)}-\frac{2}{1-n_{s}}\bigg{]}. \tag{8}\] Soon after the particle production at the end of inflation, thermalization occurs through various mechanisms like back reaction and rescattering. The universe reaches a thermal equilibrium by attaining a temperature (\(T_{re}\)) and the reheating comes to an end. This reheating temperature can be expressed as [24, 31, 32] \[T_{re}=\bigg{[}\bigg{(}\frac{43}{11}\bigg{)}^{\frac{1}{4}}\frac{a_{0}\ T_{0}\ H_{k}\ e^{-N}}{k}\bigg{(}\frac{45\ V_{end}}{\pi^{2}g}\bigg{)}^{\frac{-1}{3(1+\mathrm{w} )}}\bigg{]}^{\frac{3(1+\mathrm{w})}{3\mathrm{w}-1}}. \tag{9}\] In order to study \(T_{re}\) with Planck 2018 data it is convenient to express \(T_{re}\) in terms of \(n_{s}\) and \(A_{s}\). Therefore we rewrite equation (9) using equations (4), (6) and (7) as follows \[T_{re} = \bigg{(}\frac{43}{11}\bigg{)}^{\frac{1+3\mathrm{w}}{3\mathrm{w}-1 }}\ \bigg{(}\frac{135}{2}\bigg{)}^{\frac{-1}{3\mathrm{w}-1}}\ g^{\frac{-\mathrm{w}}{3 \mathrm{w}-1}}\ \bigg{(}\frac{\sqrt{2}\ \pi\ a_{0}\ T_{0}}{k}\bigg{)}^{\frac{3(1+\mathrm{w})}{3 \mathrm{w}-1}}\ m_{pl}^{\frac{3\mathrm{w}+1}{3\mathrm{w}-1}}\] \[\times\ A_{s}^{\frac{1+3\mathrm{w}}{2(3\mathrm{w}-1)}}\ (1-n_{s})^{\frac{1}{2}}\ e^{ \frac{-6(1+\mathrm{w})}{(1-n_{s})(3\mathrm{w}-1)}}\.\] Now we are in a position to study the reheating e-folding number and reheating temperature for a range of scalar spectral index obtained from Planck 2018 data for different equation of states. The results are presented in figure (4) and (5) respectively. [FIGURE ## 4 Effective Field Theory and Quantum Gravity The Lyth bound [18] suggests that the inflationary energy scale can be much more than the Planckian energy scale where quantum gravity is relevant. Therefore quantum gravity effect cannot be ignored while explaining the Physics of early universe and hence the inflationary scenario. But this effect is accessible only at high energy scales and its direct realization is difficult. Even though many attempts have been made to formulate quantum gravity, so far no satisfactory theory exists. The effective field theory (EFT) approach is a promising one to probe the quantum gravity effect through inflation [33; 34]. In the context of EFT [35], the Einstein-Hilbert action for the hybrid inflationary model Figure 4: Behaviour of the reheating e-folding number (\(N_{re}\)) for a range of scalar spectral index (\(n_{s}\)) for various equation of states (w) with Planck 2018 data. Figure 5: Behaviour of the reheating temperature (\(T_{re}\)) for a range of scalar spectral index (\(n_{s}\)) for various equation of states (w) with Planck 2018 data. 
can be written as \[S = \int d^{D}x\ \sqrt{-g}\bigg{(}\frac{m_{pl}^{2}R}{2}+f(\varphi, \sigma)F(R,R_{\mu\nu})+g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi+ \frac{1}{4\lambda}\bigg{(}M^{2}-\lambda\sigma^{2}\bigg{)}^{2}\] \[+\ \frac{1}{2}g^{2}\varphi^{2}\sigma^{2}+\frac{1}{2}m^{2}\varphi^ {2}+\sum_{n=5}^{\infty}c_{n}\frac{\varphi^{n}}{m_{pl}^{n-4}}\ \bigg{)},\] where \(f(\varphi,\sigma)F(R,R_{\mu\nu})\) represents the non minimal coupling of the fields to gravity and \(c_{n}\frac{\varphi^{n}}{m_{pl}^{n-4}}\) are called as the higher dimensional operators (HDO) where \(c_{n}\) are the Wilson coefficients of HDO whose values must be of the order of \(10^{-3}\) for the potential to be nearly flat and the slow roll to occur. This is a general expression for the action and in the present study we assume minimal coupling and hence ignore the second term, also, for simplicity we neglect the term containing partial derivatives. Since the inflation is governed only by \(\varphi\), the effective potential of inflation becomes, \[V_{eff}(\varphi)=V_{ren}(\varphi)+\sum_{n=5}^{\infty}c_{n}\frac{\varphi^{n}}{m _{pl}^{n-4}}. \tag{10}\] Here, it is assumed that \(V_{ren}(\varphi)\) contains renormalizable terms of the potential upto relevant dimension. ### Quantum gravity effect on Hubble parameter In view of EFT, the effective potential corresponding to the inflationary field of the hybrid model with quantum gravity correction can be written as \[V_{eff}(\bar{\varphi})=m_{pl}^{4}(\bar{m}^{2}\bar{\varphi}^{2}+c_{n}\bar{ \varphi}^{n}), \tag{11}\] where \(\bar{m}=\frac{m}{m_{pl}}\) and \(\bar{\varphi}=\frac{\varphi}{m_{pl}}\) are the rescaled mass and field. For any model of inflation, all the associated Wilson coefficients need not be considered. But in the present study we restrict ourselves to one dominant coefficient, say \(c_{6}\) and hence the other coefficients are considered subdominant. Since the principal term of the potential is \(\bar{m}^{2}\), the quantum gravity correction term can be taken as \[c_{6}=\alpha_{m}\bar{m}^{2}. \tag{12}\] Using equation (12) the potential can be rewritten as \[V_{eff}(\bar{\varphi})=m_{pl}^{4}\bar{m}^{2}\bar{\varphi}^{2}(1+\alpha_{m} \bar{\varphi}^{4}), \tag{13}\] such that \[|\alpha_{m}|\bar{\varphi}^{4}<1. \tag{14}\] Using the condition to end the inflation, the value of the rescaled inflationary field at the beginning of inflation in terms of the e-folding number with quantum gravity correction is [35] \[\begin{split}\bar{\varphi}_{N}^{2}&=\bar{\varphi }_{N,cl}^{2}+\bar{\varphi}_{N,qg}^{2}\\ &=\frac{N}{2\pi}+\frac{\alpha_{m}N^{3}}{12\pi^{3}}.\end{split} \tag{15}\] Hereafter, the subscripts \(cl\) and \(qg\) respectively represents the standard classical part and quantum gravity corrected part of the associated quantities. With the help of the effective potential (in equation (4.5)) the slow roll parameters can be computed [35]. Following equation (4.7) the quantum gravity corrected first slow roll parameter (\(\epsilon\)) and second slow roll parameter (\(\eta\)) in terms of \(N\) and \(\alpha_{m}\) are [35; 36] \[\epsilon =\epsilon_{cl}+\epsilon_{qg} \tag{4.8}\] \[=\frac{1}{2N}+\frac{5\alpha_{m}N}{12\pi^{2}},\] \[\eta =\eta_{cl}+\eta_{qg}\] \[=\frac{1}{2N}+\frac{5\alpha_{m}N}{3\pi^{2}}\.\] The scalar power spectrum arising from the scalar field fluctuations can be characterised in terms of the scalar spectral index and amplitude. 
Using equation (4.8) in the standard definition of \(n_{s}\) and \(A_{s}\), the corresponding quantum gravity corrected results can be respectively written as \[n_{s} =n_{s,cl}+n_{s,qg} \tag{4.9}\] \[=\left(1-\frac{2}{N}\right)+\frac{5\alpha_{m}N}{6\pi^{2}}\,\] \[A_{s} =A_{s,cl}+A_{s,qg} \tag{4.10}\] \[=\frac{4m^{2}N^{2}}{3\pi}+\frac{4m^{2}\alpha_{m}N^{4}}{9\pi^{3}}\.\] Next, we focus on the Hubble parameter during inflation in the light of quantum gravity. \(H_{I}\) can be expressed in terms of slow roll parameter from the equations (3.5) and (3.6) as \[H_{I}=\left(\frac{16}{3\epsilon}\right)^{\frac{1}{4}}\sqrt{\pi\ m\ m_{pl}}. \tag{4.11}\] Substituting equation (4.8) in equation (4.11) the quantum gravity corrected \(H_{I}\) is obtained as \[H_{I}=(\Omega_{\Lambda})^{\frac{1}{4}}\sqrt{4\pi m_{pl}\ H_{0}}\ \bigg{(}1-\frac{5\alpha_{m}N^{2}}{24\pi^{2}}\bigg{)}. \tag{4.12}\] The quantum gravity corrected Hubble's parameter during inflation is studied for various measured values of \(H_{0}\). The corresponding results are presented in figure (6). At a glance, the figure shows that the quantum gravity corrected \(H_{I}\) appears to be parallel to the classical counterpart. But equation (4.12) demands a deeper analysis to understand the results further. Therefore to get more insight, we analyse the results in terms of rate of change of \(H_{I}\) with respect to the rate of change of \(H_{0}\) for which we calculate the slope (\(s\)) as \[s=\frac{dH_{I}}{dH_{0}}=(\Omega_{\Lambda})^{\frac{1}{4}}\sqrt{\frac{\pi m_{pl }}{H_{0}}}\ \bigg{(}1-\frac{5\alpha_{m}N^{2}}{24\pi^{2}}\bigg{)}. \tag{4.13}\] We scrutinise the slope for various \(H_{0}\) from Planck 2018 data for various HDO and the results are presented in figure (7). From the analysis of the results, the slope associated with quantum gravity corrected \(H_{I}\) and \(H_{0}\) is found decreasing. Since \(s\) is decreasing in nature we can infer that the rate of change of Hubble parameter during inflation (\(dH_{I}\)) is decreasing due to the quantum gravity effect which in turn implies that the rate of change of present Hubble parameter (\(dH_{0}\)) is increasing. Therefore we may conclude that the role of quantum gravity is important in understanding Planck 2018 measured \(H_{0}\) while addressing the Hubble tension. Figure 6: Effect of quantum gravity on the variation of the Hubble parameter during inflation (\(H_{I}\)) for a range of present Hubble parameter (\(H_{0}\)) with Planck 2018 data. Figure 7: Quantum gravity effect on the slope \(\frac{dH_{I}}{dH_{0}}\) for a range of present Hubble parameter (\(H_{0}\)) with Planck 2018 data. According to the hybrid inflationary model, soon after the inflation, universe undegoes a phase transition driven by the vacuum energy density. The Hubble parameter during phase transition is given by equation (10) and we can observe that \(H_{T}\) is independent of HDO. This implies that \[H_{T}=H_{T,cl}. \tag{24}\] We study \(H_{T}\) with \(H_{0}\) obtained from SH0ES data and the results are presented in figure (8). Upon analysing the results, we find that \(H_{T}\) is not influenced by quantum gravity effect. Therefore we can conclude that quantum gravity does not play any role in phase transition. This is in agreement with the equation (9). We observed that quantum gravity can have an effect on \(H_{I}\) whose signature can be traced on the reheating stage. 
Hence we investigate the quantum gravity effect on the duration of the reheating and reheating temperature with Planck 2018 data to substantiate the impact of quantum gravity on \(H_{I}\). ### Quantum gravity effect on reheating We have seen that the number of e-folding of reheating and the reheating temperature are depended on the scalar spectral index and the amplitude of the scalar power spectrum (see equation (8) and (10)). But, these quantities get modified in the presence of quantum gravity (see equation (9) and (10)) and have an effect on \(N_{re}\) and \(T_{re}\). Therefore using equation (9) and (10) in equation (8) and (10), the quantum gravity corrected reheating Figure 8: Variation of the Hubble parameter during phase transition (\(H_{T}\)) for a range of present Hubble parameter (\(H_{0}\)) with SH0ES data. e-folding number is obtained as \[N_{re}=N_{re,cl}+N_{re,qg}= \Bigg{\{}\frac{4}{1-3\text{w}}\bigg{[}61.6-\frac{1}{4}\ln\left( \frac{6912\ \Omega_{\Lambda}^{2}\ (1-n_{s,cl})^{2}}{32768\ \pi^{2}\ A_{s,cl}}\right)-\frac{2}{1-n_{s,cl}} \bigg{]}\Bigg{\}}\] \[+\Bigg{\{}\frac{4}{1-3\text{w}}\bigg{[}\frac{2}{1-n_{s,cl}}- \frac{1}{4}\ln\left(\frac{1-\frac{n_{s,qg}}{1-n_{s,cl}}}{1+\frac{A_{s,qg}}{A_{s,cl}}}\right)-\frac{2}{1-n_{s,cl}-n_{s,qg}}\bigg{]}\Bigg{\}}. \tag{4.15}\] And similarly the quantum gravity corrected reheating temperature can be written as Figure 9: Quantum gravity effect on the reheating e-folding number (\(N_{re}\)) for a range of scalar spectral index (\(n_{s}\)) for different values of equation of states (w) and higher dimensional operator \(\alpha_{m}\) with Planck 2018 data. \[\begin{split} T_{re}=T_{re,cl}\times T_{re,qg}=&\Bigg{\{} \left(\frac{43}{11}\right)^{\frac{1+\mathrm{w}}{3\mathrm{w-1}}}\,\left(\frac{13 5}{2}\right)^{\frac{-1}{3\mathrm{w-1}}}\,g^{\frac{-\mathrm{w}}{3\mathrm{w-1}} }\,\left(\frac{\sqrt{2}\,\,\pi\,\,a_{0}\,\,T_{0}}{k}\right)^{\frac{3(1+ \mathrm{w})}{3\mathrm{w-1}}}\\ &\times m_{pl}^{\frac{3\mathrm{w+1}}{3\mathrm{w-1}}}A_{s,cl}^{ \frac{1+3\mathrm{w}}{2(3\mathrm{w-1})}}\,\left(1-n_{s,cl}\right)^{\frac{1}{2} }\,e^{\frac{-6(1+\mathrm{w})}{(1-n_{s,cl})(3\mathrm{w-1})}}\Bigg{\}}\times \Bigg{\{}\left(1+\frac{A_{s,qg}}{A_{s,cl}}\right)^{\frac{1+3\mathrm{w}}{2(3 \mathrm{w-1})}}\\ &\times\sqrt{1+\frac{n_{s,qg}}{(1-n_{s,cl})}}e^{-\left[\frac{1}{ 1-\left(\frac{n_{s,qg}}{1-n_{s,cl}}\right)}\right]}.\end{split} \tag{4.16}\] We study the reheating e-folding number for different values of equation of states and \(\alpha_{m}\) with a range of scalar spectral index obtained from Planck 2018 data. The results are presented in figure (9). For \(\alpha_{m}\leq-\) 0.0012 the reheating e-folding numbers representing the various Figure 10: Role of quantum gravity on the reheating temperature (\(T_{re}\)) for a range of scalar spectral index (\(n_{s}\)) for different values of equation of states (w) and higher dimensional operator (\(\alpha_{m}\)) with Planck 2018 data. equation of states do not converge to a single \(n_{s}\) value indicating that quantum gravity effect is equation of states dependent. But this is unphysical and unlikely to happen as the particles are not yet produced before reheating and therefore the corresponding \(\alpha_{m}\) values are ruled out (see figure 9 (a)). Moreover, the obtained \(N_{re}\) values are blue tilted with respect to the Planck 2018 bounds. 
For some values of \(\alpha_{m}\), the \(N_{re}\) values converge at higher values of \(n_{s}\), showing that the different w agree with each other, but the corresponding \(\alpha_{m}\) values are ruled out as they converge outside the Planck 2018 bounds (see figure 9 (b), (c) and (d)). For higher values of \(\alpha_{m}\) the \(N_{re}\) values converge within the Planck 2018 bounds, and their corresponding \(\alpha_{m}\) values are favourable (see figure 9 (e) and (f)). But in all these cases \(N_{re}\) after the quantum gravity correction is found to be higher than \(N_{re,cl}\), indicating that the quantum gravity effect can elongate the reheating period. The prolonged reheating suggests that it has taken much more time to attain the reheating temperature, or in other words, reheating must have started with a lower temperature. To check this, we examine the variation of the reheating temperature for a range of values of the scalar spectral index and for different values of the equation of state and the HDO. The results are presented in figure (10). It is observed that, due to the quantum gravity effect, the reheating begins with a lower temperature, which implies that it takes more time to achieve the reheating temperature (reheating is prolonged), consistent with the previous results obtained for \(N_{re}\). We can see that for some \(\alpha_{m}\) values \(T_{re}\) converges to a single value at a particular \(n_{s}\) which is well within the Planck 2018 bounds, suggesting that \(T_{re}\) is independent of the equation of state (see figure 10 (d), (e) and (f)). From the present study it is evident that quantum gravity can decrease the temperature during the onset of reheating, thereby increasing the duration of reheating. Therefore the role of quantum gravity cannot be ignored while addressing the Hubble tension.

## 5 Discussions and conclusions

The discrepancy in the present value of the Hubble parameter obtained from Planck 2018 and SH0ES data creates a tension in understanding the universe, and therefore resolving it is vital in the field of cosmology. In spite of many attempts to resolve the Hubble tension, the lack of a satisfactory solution motivates us to address the Hubble tension starting from the inflationary stage of the universe with a novel effect such as quantum gravity. In the standard framework, the single scalar field responsible for inflation is supposed to be insensitive to quantum gravity effects. However, combining CMB results with the Lyth bound suggests that the inflationary field can have energy higher than the Planckian energy scale. Further, CMB results favour multi-field over single-field inflationary scenarios. Among the multi-field inflationary models, the hybrid model with two scalar fields has received much attention, where one field drives the inflation and the other is responsible for the phase transition. This gives rise to the necessity of considering the Hubble parameter during inflation and during the phase transition separately. Therefore we incorporate quantum gravity into the hybrid inflationary model using an effective field theory approach to investigate its effect on \(H_{0}\) through \(H_{I}\) and \(H_{T}\). We observe that quantum gravity can have an impact on \(H_{I}\), whereas it does not reflect on \(H_{T}\). We find that the effect of quantum gravity during inflation can increase \(H_{0}\), thus accounting for the different observed \(H_{0}\) values and thereby suggesting that quantum gravity may be a viable solution to the Hubble tension.
Since quantum gravity plays a prominent role during inflation, the chances of its manifestation in the subsequent stage of reheating cannot be ruled out. The footprints of quantum gravity on \(H_{I}\) can be substantiated by studying its aftereffect on reheating through the reheating e-folding number and the reheating temperature, with the scalar spectral index coming from Planck 2018 data. Finally, we show that the quantum gravity effect during inflation can influence the reheating by lowering the initial temperature of reheating, thereby delaying the reheating process. We may conclude that the quantum gravity effect during inflation is inevitable in addressing the Hubble tension. Further, the results of the present study may be useful in validating inflationary models. The present work mainly aims to resolve the Hubble tension with the quantum gravity effect on the hybrid inflationary model. The results and observations of this study can be re-examined with other inflationary models, which can help not only in resolving the Hubble tension but also in validating inflationary models, because resolving the Hubble tension relies on the underlying inflationary model. Therefore the Hubble tension cannot be studied in isolation but has to be studied in conjunction with the validation of the inflationary model. At a glance, validating an inflationary model may not have any direct connection to the Hubble tension, because such validation relies chiefly on the tensor-to-scalar ratio (\(r\)). But in the light of the Hubble tension, an appropriate inflationary model has to be considered to account for the observed values of \(H_{0}\). Therefore validation of an inflationary model can result in resolving the Hubble tension and vice versa. In other words, validation of inflationary models and resolution of the Hubble tension can complement each other. The results of the present work are useful in resolving the Hubble tension as well as in validating inflationary models along with quantum gravity. ###### Acknowledgements. AB acknowledges the financial support of the Prime Minister's Research Fellowship (PMRF ID : 3702550).
2306.16120
Data-driven approach for diagnostic analysis of dynamic bottlenecks in serial manufacturing systems
A variety of established approaches exist for the detection of dynamic bottlenecks. Furthermore, the prediction of bottlenecks is experiencing a growing scientific interest, quantifiable by the increasing number of publications in recent years. Neglected, on the other hand, is the diagnosis of occurring bottlenecks. Detection methods may determine the current location of a bottleneck, while predictive approaches may indicate the location of an upcoming bottleneck. However, mere knowledge of current and future bottlenecks does not enable concrete actions to be taken to avoid the bottlenecks, nor does it open up any immediate advantage for manufacturing companies. Since small and medium-sized companies in particular have limited resources, they cannot implement improvement measures for every bottleneck that occurs. Due to the shifts of dynamic bottlenecks, the selection of the most suitable stations in the value stream becomes more difficult. This paper therefore contributes to the neglected field of bottleneck diagnosis. First, we propose two data-driven metrics, relative bottleneck frequency and relative bottleneck severity, which allow a quantitative assessment of the respective bottleneck situations. For validation purposes, we apply these metrics in nine selected scenarios generated using discrete event simulation in a value stream with a serial manufacturing line. Finally, we evaluate and discuss the results.
Nikolai West, Joern Schwenken, Jochen Deuse
2023-06-28T11:44:07Z
http://arxiv.org/abs/2306.16120v1
# Data-driven approach for diagnostic analysis of dynamic bottlenecks in serial manufacturing systems ###### Abstract A variety of established approaches exist for the detection of dynamic bottlenecks. Furthermore, the prediction of bottlenecks is experiencing a growing scientific interest, quantifiable by the increasing number of publications in recent years. Neglected, on the other hand, is the diagnosis of occurring bottlenecks. Detection methods may determine the current location of a bottleneck, while predictive approaches may indicate the location of an upcoming bottleneck. However, mere knowledge of current and future bottlenecks does not enable concrete actions to be taken to avoid the bottlenecks, nor does it open up any immediate advantage for manufacturing companies. Since small and medium-sized companies in particular have limited resources, they cannot implement improvement measures for every bottleneck that occurs. Due to the shifts of dynamic bottlenecks, the selection of the most suitable stations in the value stream becomes more difficult. This paper therefore contributes to the neglected field of bottleneck diagnosis. First, we propose two data-driven metrics, relative bottleneck frequency and relative bottleneck severity, which allow a quantitative assessment of the respective bottleneck situations. For validation purposes, we apply these metrics in nine selected scenarios generated using discrete event simulation in a value stream with a serial manufacturing line. Finally, we evaluate and discuss the results. Bottleneck analysis · Dynamic bottlenecks · Shifting bottlenecks · Bottleneck detection · Bottleneck diagnosis · Throughput · Theory of constraints · Discrete event simulation ## 1 Introduction The diagnosis of throughput-limiting bottlenecks is essential for manufacturing companies that want to maintain a high degree of production efficiency. According to the **Theory of Constraints (TOC)**, every system is inevitably limited by a bottleneck, which must be identified and optimized to improve the system's overall output [1]. Since the TOC is considered universally applicable, its rules apply to manufacturing systems as well. TOC has been the subject of ongoing research efforts for several decades. In all but the simplest manufacturing systems, bottlenecks are not static but change dynamically [2]. A particular challenge arises due to this shifting behavior of manufacturing bottlenecks. Due to the increasing demand for flexibility and the rising complexity of interconnected value streams, the variability in modern value streams increases as well. Shifting behavior is considered an important underlying principle for dealing with manufacturing bottlenecks. Thus, most scientific work has primarily dealt with two questions: * **Detection:** Where is the bottleneck at this moment? * **Prediction:** Where is the bottleneck going to be next? Several methods exist to detect the current location of a bottleneck [3]. These methods are either based on a momentary snapshot of the current conditions of the manufacturing system or on an averaged evaluation of a given period of past system behavior [4]. Due to its good applicability for dynamic bottlenecks [5], we are going to apply the **Active Period Method (APM)** in this paper [2]. APM considers the station with the longest active operating time as the current bottleneck. We further elaborate on the usage of the method in **Section 2.2**.
Due to the emerging capabilities in analyzing large volumes of data using machine learning and artificial intelligence, research in recent years mainly aims to predict the future location of a bottleneck. The evolving possibilities of intelligent and data-driven analyses are being used for this purpose. Even though the research field is still rather young, there are several promising approaches for making data-driven predictions of future bottleneck events [6; 7; 8]. Similar to detection, such bottleneck prediction requires data-driven diagnostic tools to evaluate different scenarios. As such, we emphasize the need for bottleneck diagnosis, which has been neglected so far. * **Diagnosis:** What are the effects of occurring bottlenecks? The remainder of this paper is organized as follows. First, we introduce the fundamentals of Bottleneck Analysis (**Section 2**): we discuss a holistic model, bottleneck detection with APM, and the universal need for bottleneck diagnosis. Then, we propose two statistical metrics for bottleneck diagnosis and explain the calculations for bottleneck frequency and severity (**Section 3**). To validate these metrics, we apply them in a number of simulations. For this purpose, we outline the nine selected scenarios in the next chapter (**Section 4**). Finally, we present the results of the application (**Section 5**) and discuss them with a brief outlook on follow-up work (**Section 6**). ## 2 Fundamentals ### Four steps of a holistic Bottleneck Analysis Bottleneck Analysis is a complex process that requires a well-structured methodology to ensure that all necessary tasks are completed in a successive and goal-oriented manner. Within this paper, we follow a holistic model for Bottleneck Analysis to distinguish four major steps: bottleneck detection, diagnosis, prediction, and prescription [4]. **Figure 1** depicts the four steps in the context of actionable and anticipative results. The first step, **Bottleneck Detection**, is akin to descriptive analytics and fulfills a declarative task during Bottleneck Analysis. It involves the concise determination of the current bottleneck in the manufacturing system, based on information about the system's status. To identify bottlenecks, different methods may be applied, based, for example, on machine states, buffer levels, or process times [3; 9]. We propose using a Momentary Value Method to identify bottlenecks in real time, as these methods can identify a bottleneck at any point in time, whereas Average Value Methods rely on arbitrary periods of system observation [5]. The second step, **Bottleneck Diagnosis**, originates from diagnostic analytics and primarily addresses an assessment task during Bottleneck Analysis. The goal of this step is to evaluate and assess the causes and effects of observed bottlenecks [4]. The diagnosis of bottlenecks is the focus of this paper and will be explained in more detail later on. Our main goal is to standardize future work by developing two simple metrics to quantify bottleneck impacts. In the past, diagnosis was usually either neglected or performed based on qualitative estimates that included consultation with experts of the manufacturing system. The third step, **Bottleneck Prediction**, relates to predictive analytics and fulfills an anticipatory task during Bottleneck Analysis. Its goal is to determine the future performance within complex manufacturing systems.
The prediction step requires the prior implementation of a real-time bottleneck detection system and can be achieved by predicting future machine states or buffer levels, or by numerically forecasting the development of interdeparture time variances. However, further research is necessary to develop prediction methods for Bottleneck Analysis. Lastly, the fourth step, **Bottleneck Prescription**, is similar to prescriptive analytics and fulfills a preemptive task during Bottleneck Analysis. It involves using intelligent system control to mitigate bottlenecks, such as bottleneck-centered production control systems. Although most approaches consider systems that are geared to past system states, we suggest using Bottleneck Diagnosis and Prediction to ensure that the system is up-to-date and responsive to changing conditions. In conclusion, the four steps help to manage and utilize the potential of Bottleneck Analysis in manufacturing. The holistic model may still be novel, but it has been included in a number of works in the past years [10]. For a more extensive review of the bottleneck literature, we refer to the excellent literature review in [11]. For this paper, we conclude that Bottleneck Analysis is a continuous process that requires constant attention to ensure the efficient operation of a manufacturing system. ### Bottleneck detection using the longest active period The method presented below is one possible approach to detect bottlenecks. As mentioned in the introduction, there are several different methods to detect bottlenecks. For a detailed comparison of the methods, we therefore refer to [5] and [12]. The Active Period Method (APM) is a method used for bottleneck detection in manufacturing systems. According to the APM, the bottleneck is the station in the value stream that has been working the longest without interruption. This duration is then called the active period of the station. A station is considered active if it is processing products as defined by the production program. On the other hand, a station is considered inactive if it is waiting due to buffer-related starvation or blockage [2, 13]. A station is blocked if the downstream buffer is filled to the maximum. Likewise, a station is starved if the upstream buffer is empty and cannot supply the station with another part or product. APM also incorporates shifting states of a bottleneck to determine whether a station is the sole bottleneck or a shifting bottleneck. Shifting behavior occurs at the overlap of the current and the subsequent bottleneck. Figure 2 shows an illustrative example of the active periods for two stations \(M1\) and \(M2\): at \(t_{0}\) the station \(M2\) is the sole bottleneck, while at \(t_{1}\) both \(M1\) and \(M2\) have become shifting bottlenecks [2].

Figure 1: Depiction of the four steps for Bottleneck Analysis in the context of actionable and anticipative results [4]

Figure 2: Exemplary visualization of the active periods for two stations \(M1\) and \(M2\) with a shifting bottleneck [2]

Summarizing, the main advantage of APM is that it is easy to apply and requires neither extensive analysis nor complex mathematical modeling. In addition, as a non-invasive method, APM does not require shutting down the manufacturing system to collect data. Overall, APM is a simple and useful method to detect bottlenecks in manufacturing systems [5].
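To illustrate how APM can be applied to discretised shop-floor data, the following sketch (our own, hypothetical implementation; the per-time-step activity matrix and the tie-breaking rule are assumptions, not part of the original method description) returns the station with the longest current active period at every time step:

```python
from typing import List, Sequence

def apm_bottlenecks(active: Sequence[Sequence[bool]]) -> List[int]:
    """Active Period Method on discretised data.

    active[s][t] is True if station s is processing at time step t
    (i.e. neither starved nor blocked). Returns, for every time step,
    the index of the station with the longest uninterrupted active
    period -- the momentary bottleneck. Ties go to the lowest index.
    """
    n_stations = len(active)
    n_steps = len(active[0])
    run = [0] * n_stations  # length of the current active period per station
    bottlenecks = []
    for t in range(n_steps):
        for s in range(n_stations):
            run[s] = run[s] + 1 if active[s][t] else 0
        bottlenecks.append(max(range(n_stations), key=run.__getitem__))
    return bottlenecks
```

The resulting per-time-step bottleneck labels are exactly the input required by the two metrics proposed in the next section.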
### On the need for metrics for bottleneck diagnosis While both bottleneck detection and prediction receive considerable attention in the scientific discourse, bottleneck diagnosis has been widely neglected. Nevertheless, some approaches to performing a diagnosis do exist. In an early work, [14] propose a simple visual evaluation for Bottleneck Analysis to determine the effects of the bottleneck stations. Using an average-based performance metric, such as the utilization, it was then possible to quantify the bottleneck effect of individual stations. In a promising paper, [15] use a clustering-based approach to prioritize maintenance activities. Through the unsupervised approach, maintenance practitioners are provided with maintenance-related diagnostic insights into bottlenecks. [16] include a diagnostic examination in their digital-twin-based framework for throughput bottlenecks. This is intended to prepare manufacturing companies for the digital transformation and to enable them to 'utilize the wealth of enterprise information'. Common to all previous approaches is the lack of a metric to easily compare bottleneck effects. Two such metrics are developed in this paper and further elaborated in **Section 3**. As shown in **Figure 1**, both bottleneck detection and prediction have a limited degree of actionability. In contrast, a targeted diagnosis of existing bottlenecks opens up the possibility of prioritizing measures to reduce the overall effects of bottlenecks. This is especially relevant in scenarios characterized by dynamic bottlenecks and a finite amount of resources: * **Dynamic behavior:** Since any station can become the bottleneck in systems with dynamic shifting, the focus of improvement activities must be changed on a regular basis. * **Finite resources:** Since the resources that are available for improvement are limited, the activities must be concentrated on the stations most heavily affected by bottleneck behavior. Summarizing, a successful bottleneck diagnosis is a critical step in improving the performance of a system or process, and it is essential for organizations that want to remain competitive and efficient in today's fast-paced business environment. ## 3 Metric proposal As previously mentioned, the focus of this paper is on two simple metrics for evaluating bottleneck behavior. The metrics are intended to enable users to evaluate detected bottlenecks in order to select improvement measures in a targeted manner and maximize the impact of these measures. Both metrics are based on deterministic calculations that we explain next. ### Relative bottleneck frequency We refer to the first metric used to diagnose bottlenecks as the **relative bottleneck frequency** or \(rbf\). This metric uses the intuitive approach of determining how often an individual station or machine occurs as the bottleneck during the period under consideration. This simple idea builds on the fundamental assumption of TOC that there can only be one bottleneck at any given time. Let \(S\) be a station in a manufacturing system. Then \(rbf_{S}\) represents the relative bottleneck frequency for \(S\). The analysis takes place for a defined period of time \(t\) of a fixed length \(n\). At each point in time \(t_{i}\), the current bottleneck of the entire system needs to be determined. Conceptually, there are no restrictions in the choice of methods as long as they clearly identify a single station as the bottleneck at each time point.
Then, we can determine how often \(S\) appears as the bottleneck during \(t\). We call this value the (total) bottleneck frequency \(bf_{S}\). It is calculated as the quotient of the number of times at which \(S\) is the bottleneck during \(t\) and \(n\), the total length of \(t\). **Equation 1** and **Equation 2** show the mathematical calculation of the relative bottleneck frequency. \[bf_{S}=\sum_{0\leq i<n}\begin{cases}1&:\text{ if }S\text{ is bottleneck at }t_{i},\\ 0&:\text{ else.}\end{cases} \tag{1}\] \[rbf_{S}=\frac{bf_{S}}{n} \tag{2}\] The value range of \(rbf_{S}\) is \([0,1]\). During bottleneck diagnosis, \(rbf\) has to be determined for each \(S\). The higher \(rbf_{S}\), the more frequently \(S\) occurs as the bottleneck. An \(rbf_{S}\) of 1 implies a static bottleneck on \(S\), while an \(rbf_{S}\) of 0 corresponds to \(S\) never becoming a bottleneck. In addition, the sum of the \(rbf_{S}\) of all stations in the value stream under consideration must always add up to 1. ### Relative bottleneck severity While the relative bottleneck frequency is well suited to evaluate an observation period, it fails when examining individual points in time. This dilemma is already known from the detection of bottlenecks. Here, momentary value methods in particular prove useful because, in contrast to average value methods, they do not require any information about past system states. For this reason, we propose a second metric, the **relative bottleneck severity** or \(rbs\), that is based only on the current system state. Like \(rbf\), \(rbs\) is determined for each station \(S\) in the value stream. The determination of the severity is based on the respective characteristic of the selected bottleneck detection method. For the purpose of illustration, we will explain the calculation of \(rbs\) using the APM as an example. To apply the APM, the duration of the active operating period of all stations is known at each point in time. We refer to this duration as \(bs_{S}\) for each \(S\). Since the bottleneck station is characterized by the longest active operating period, we refer to this value as \(bs_{\rm BN}\). \[rbs_{S}=\frac{bs_{S}}{bs_{BN}} \tag{3}\] \(rbs\) has the same value range as \(rbf\), namely \([0,1]\), but the sum of \(rbs_{S}\) for all stations \(S\) can be greater than 1 (**Equation 3**). The station occurring as the bottleneck at the time \(t_{i}\) of the analysis has an \(rbs\) value of 1, because \(bs_{S}\)=\(bs_{\rm BN}\) must always be valid for this station. The \(rbs\) value of the other stations then indicates the severity of their current impact on the bottleneck. The closer the \(rbs\) value is to 1, the more severe is the impact on the system. While we have explained the calculation of \(rbs\) for APM only, in principle other detection methods can also be used to determine \(rbs\). For example, the interdeparture time variance that relies on the stations' current processing times [3] or an adaptation of the bottleneck walk that considers the current buffer levels before and after each station could be utilized as well. ## 4 Simulation scenarios To demonstrate the usability of the two metrics \(rbf\) and \(rbs\), we apply them in nine exemplary scenarios in this section. We generated the manufacturing data ourselves using a discrete event simulation. In order to obtain comprehensible results from the metrics, we apply them in simple flow lines with seven fully interlinked stations.
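Before specifying the scenarios, the two metrics of **Section 3** can be made concrete with a short sketch. The input data structures (a per-time-step bottleneck label for \(rbf\) and a dictionary of current active-period durations for \(rbs\)) are our assumptions for the illustration:

```python
from collections import Counter
from typing import Dict, Sequence

def relative_bottleneck_frequency(bottleneck_at: Sequence[str]) -> Dict[str, float]:
    """Equations 1 and 2: share of time steps each station is the bottleneck.
    bottleneck_at[i] is the station detected as bottleneck at time t_i."""
    n = len(bottleneck_at)
    return {s: count / n for s, count in Counter(bottleneck_at).items()}

def relative_bottleneck_severity(bs: Dict[str, float]) -> Dict[str, float]:
    """Equation 3: each station's current active period bs_S divided by the
    longest one bs_BN; the momentary bottleneck gets rbs = 1."""
    bs_bn = max(bs.values())
    return {s: v / bs_bn for s, v in bs.items()}

# Usage with toy values:
print(relative_bottleneck_frequency(["S1", "S1", "S3", "S1"]))   # {'S1': 0.75, 'S3': 0.25}
print(relative_bottleneck_severity({"S1": 12.0, "S2": 3.0, "S3": 9.0}))
```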
We separate nine scenarios into three main categories (\(S1\), \(S2\), \(S3\)), each consisting of three scenarios. * **S1**: No station receives an increased process time \(pt\); instead, the variability \(var\) of the process times is changed in each scenario by 25% (\(S1\)-\(1\), \(S1\)-\(2\) and \(S1\)-\(3\)). * **S2**: One station receives an additional increase of 12.5% to its average process time, and we change the location of this station in each scenario (\(S2\)-\(1\), \(S2\)-\(2\) and \(S2\)-\(3\)). * **S3**: Two stations receive an additional increase of 12.5% to their average process times, and we change the location of these stations in each scenario (\(S3\)-\(1\), \(S3\)-\(2\) and \(S3\)-\(3\)). To be able to represent the scenarios compactly, we use a simplified notation: a '\(\Box\)' corresponds to a simple station, while a '\(\blacksquare\)' corresponds to a station with additional process time. Buffers between stations are represented by a '-'. Table 1 shows the nine scenarios in our simplified notation. Furthermore, the simulation required several assumptions. We set the process time \(pt\) of every station to 2.00. Every modified station \(\blacksquare\) in all scenarios of \(S2\) and \(S3\) then has a \(pt\) of \(2.25\). Furthermore, the maximum capacity of all buffers is set to 5 units, and the system's boundaries are set to infinite, providing an unlimited supply of parts and demand for products. Each simulation scenario is run 10 times. The simulation receives a settling time of 2,000 time steps, which is then removed from the analysis. Each scenario corresponds to a one-week observation period with 10,080 individual observations. The simulation scenarios are implemented using simpy (v4.0.1), a library for discrete event simulation in Python [17]. The raw data of the 90 simulation runs (from nine scenarios with ten simulation runs each) on which the analysis is based, as well as the code used to produce the following results, can be found in the publicly available project repository for this paper: [https://github.com/nikolaiwest/2023_bottleneck_diagnosis_arxiv](https://github.com/nikolaiwest/2023_bottleneck_diagnosis_arxiv) To achieve realistic system behavior, variability must be applied to the stations' process times. For this purpose, we use an exponential distribution, since it most closely corresponds to stations affected by failure or delay. **Figure 3** and **Figure 4** serve to illustrate the exponential distribution of the process times. An increase in variability leads to a more frequent occurrence of longer process times. This corresponds to effective downtime due to unforeseen influences. Naturally, the 12.5% increase in process times shifts the distribution towards longer times. The expected times are always higher than they would be without the process time addition. While we do not anticipate a dominant bottleneck in \(S1\), we expect that in \(S2\) and \(S3\), the \(\blacksquare\) stations will increasingly stand out as bottlenecks. ## 5 Results First, we consider the evaluation of the relative bottleneck frequencies. For this purpose, **Figure 5**, **Figure 6** and **Figure 7** show the three scenarios of the main categories, averaged over ten simulation runs. As expected, in no scenario of group \(S1\) does a station have a higher tendency to show bottleneck behavior. Despite process times influenced by variance, the \(rbf\) for the three scenarios is about \(0.15\) for all seven stations, which reflects a random spread.
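Before turning to the remaining scenarios, we note for reproducibility a compact sketch of the simulation core from **Section 4**. It follows the stated parameters (seven stations, buffer capacity 5, exponential process times with means 2.00 and 2.25), while the simpy wiring, the boundary modelling and the indexing of the slowed station are our own assumptions; the authors' full code is available in the linked repository.

```python
import random
import simpy

random.seed(42)
PT_BASE, PT_MOD, CAP = 2.00, 2.25, 5
SLOWED = {1}  # assumption: 0-indexed station S1 slowed, as in scenario S2-1

def station(env, idx, buf_in, buf_out):
    mean_pt = PT_MOD if idx in SLOWED else PT_BASE
    while True:
        part = yield buf_in.get()                             # starved while upstream is empty
        yield env.timeout(random.expovariate(1.0 / mean_pt))  # exponential process time
        yield buf_out.put(part)                               # blocked while downstream is full

env = simpy.Environment()
source, sink = simpy.Store(env), simpy.Store(env)  # unlimited system boundaries
source.items.extend(range(100_000))                # pre-filled, effectively infinite supply
buffers = [source] + [simpy.Store(env, capacity=CAP) for _ in range(6)] + [sink]
for i in range(7):
    env.process(station(env, i, buffers[i], buffers[i + 1]))
env.run(until=2_000 + 10_080)  # settling time plus one-week observation period
print(f"parts produced (incl. settling): {len(sink.items)}")
```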
\begin{table} \begin{tabular}{l c c} \hline \hline & Scenario & \\ \cline{2-3} Name & Setup of the manufacturing line & Variability \\ \hline S1-1 & \(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\) & Low \\ S1-2 & \(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\) & Medium \\ S1-3 & \(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\) & High \\ \hline S2-1 & \(\square\)-\(\blacksquare\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\) & Medium \\ S2-2 & \(\square\)-\(\square\)-\(\square\)-\(\blacksquare\)-\(\square\)-\(\square\)-\(\square\) & Medium \\ S2-3 & \(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\blacksquare\)-\(\square\) & Medium \\ \hline S3-1 & \(\square\)-\(\square\)-\(\blacksquare\)-\(\square\)-\(\blacksquare\)-\(\square\)-\(\square\) & Medium \\ S3-2 & \(\square\)-\(\blacksquare\)-\(\square\)-\(\square\)-\(\square\)-\(\blacksquare\)-\(\square\) & Medium \\ S3-3 & \(\blacksquare\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\square\)-\(\blacksquare\) & Medium \\ \hline \hline \end{tabular} \end{table} Table 1: Tabular representation of the nine scenarios in the simplified value stream notation (stations are indexed \(S0\) to \(S6\))

Figure 3: Visualization of the effect of the process time distribution and the applied variability

Figure 4: Visualization of the effect of the additional process time for targeted bottleneck stations

Figure 5: Relative bottleneck frequency for \(S1\)-\(1\), \(S1\)-\(2\) and \(S1\)-\(3\)

Figure 6: Relative bottleneck frequency for \(S2\)-1, \(S2\)-2 and \(S2\)-3

Figure 7: Relative bottleneck frequency for \(S3\)-1, \(S3\)-2 and \(S3\)-3

The case is different for the six scenarios of the groups \(S2\) and \(S3\). We used the same method as in the previous section: again, we show the average results from ten simulation runs, each with 10,000 steps. Here, the stations with additional process time, marked by \(\blacksquare\), can frequently be identified as bottlenecks on the basis of \(rbf\). Consequently, \(S1\) is highlighted in \(S2\)-\(1\), \(S3\) in \(S2\)-\(2\), and \(S5\) in \(S2\)-\(3\). With an \(rbf\) value of about \(0.65\), the three scenarios \(S2\)-\(1\), \(S2\)-\(2\) and \(S2\)-\(3\) show the corresponding station as the bottleneck. Similarly, both modified stations in \(S3\)-\(1\), \(S3\)-\(2\), and \(S3\)-\(3\) are characterized by equally frequent bottlenecks. Since there are two alternating bottlenecks, the \(rbf\) value of 0.4 is, as expected, below the value of systems with only one main bottleneck. Overall, we summarize that the expected hypotheses about the system behavior have been met and that the \(rbf\) metric contributes to a clear detection and quantitative assessment of the bottleneck stations. We refrain from visualizing all nine scenarios for the corresponding \(rbs\) values in this paper for reasons of space. The metric values behave quite similarly and can be viewed in the published project repository. Instead, we show a single exemplary comparison for scenario \(S3\)-\(1\) in **Figure 8**. The \(rbs\) is continuously above the \(rbf\), but likewise marks the two stations \(S2\) and \(S4\), modified in \(S3\)-\(1\), as bottlenecks. In this averaged representation over the entire period, \(rbs\) is very similar to \(rbf\). The benefit of the \(rbs\) becomes apparent when viewed in individual time steps. **Figure 9** shows a period in which a bottleneck shift takes place. For the sake of clarity, we limit the visualization to stations \(S3\) to \(S6\). At a time of about 997, the current bottleneck shifts from \(S4\) to \(S6\).
Thus, the \(rbs\) of \(S4\) drops to 0 and the \(rbs\) of \(S6\) changes to 1. The other stations change relative to the \(rbs\) of \(S6\). Consequently, in momentary terms, \(S4\) is the best target for immediate improvement measures up to the time of 997, while \(S6\) is already emerging as the successor bottleneck, marked by a comparatively high \(rbs\) value. ## 6 Conclusion With \(rbf\) and \(rbs\), two novel metrics for the diagnosis of dynamic bottlenecks were proposed in this paper. The metrics extend the field of bottleneck research, which has mainly been characterized by a focus on bottleneck detection and prediction. The metrics provide a simple and practical way to quantify the bottleneck behavior of a manufacturing system. While the relative bottleneck frequency can be used to evaluate long-term periods of observation, the relative bottleneck severity allows for an examination of current points in time. The code underlying the simulation and the subsequent evaluations for the calculation of the active periods, relative bottleneck frequency and relative bottleneck severity was made publicly available.

Figure 8: Exemplary comparison of \(rbf\) and \(rbs\) for \(S3\)-\(1\)

As a promising direction for the further development of bottleneck metrics, we note that a monetary evaluation of occurring bottlenecks is not yet available. Only by determining availability losses in near-real-time monitoring is it possible to evaluate the monetary costs of bottleneck behavior. This provides an important argument when constructive or organizational improvement measures have to be selected and approved to deal with the bottlenecks. Thus, in a continuation of metrics for evaluating bottleneck states, a monetary quantification of throughput losses due to bottleneck events should be considered. This allows weighing the potential costs of the remedial measures against the costs of the throughput losses that occur. Finally, we emphasize the untapped potential of bottleneck prescription. To implement an intelligent system that reacts autonomously to occurring bottlenecks with appropriate measures, a data-driven way to evaluate bottleneck conditions is required. Since such a system must also operate under resource constraints, the question of prioritizing emerging bottlenecks also arises here. In times of increasing demand for sustainable and resource-efficient manufacturing systems, we foresee that the field of Bottleneck Analysis will continue to gain importance in the future. ## Acknowledgments This paper is part of the project 'Prediction of dynamic bottlenecks in directed material flow systems using machine learning methods' (PrEPFlow, 21595), which is funded by the German Federal Ministry of Economics and Technology (BMWi), through the Working Group of Industrial Research Associations (AIF). It is carried out on behalf of the German Logistics Association e.V. (BVL) and it is part of the program for promotion of joint industrial research and development (IGF) based on a resolution of the German Bundestag.
2305.00212
Production locality and spatial diffusion of heavy flavour at high energy densities
Heavy-ion collisions are a unique tool for testing the behaviour of matter under extreme conditions. The momentum correlations of charm and bottom hadrons have been considered for testing heavy quarks' thermalization in the hot, dense medium produced by the collisions. In this respect, two effects have been considered: the decrease of the initial back-to-back correlations and the increase of correlations due to heavy-quark interactions with collectively flowing medium. Here, we show that, in the case of a single charm and anti-charm hadron pair production, the collective flow allows for testing heavy-quark production locality and spatial diffusion. Using an example of central Pb+Pb collisions at the CERN SPS energies, we demonstrate that the azimuthal correlations of charm and anti-charm hadrons are particularly sensitive to their spatial correlations. We argue that the existing experimental technology and beam intensities at the CERN SPS should allow for the corresponding measurements soon. The correlation measurements in collisions with a single heavy-quark pair produced will provide a unique input constraining the diffusion of charm quarks and verifying assumptions concerning production locality of a charm and anti-charm quark pair.
M. Gazdzicki, D. Kikola, I. Pidhurskyi, L. Tinti
2023-04-29T09:46:23Z
http://arxiv.org/abs/2305.00212v2
# Production locality and spatial diffusion of heavy flavour ###### Abstract Heavy-ion collisions are a unique tool to test the behaviour of matter in extreme conditions. The momentum correlations of charm and bottom hadrons have been considered for testing heavy quarks' thermalisation in the hot, dense medium produced by the collisions. In addition to these back-to-back correlations due to the quark-anti-quark pair creation dynamics, there are other important sources of momentum correlations, which allow us to explore the rich physics of heavy-ion collisions further. Here we show that significant momentum correlations remain even after thermalisation of charm quarks in the expanding medium, and this effect can be measured in collisions at sufficiently low energies. The momentum correlations depend on the correlations in configuration space, more specifically, on the spatial separation between the charm and anti-charm hadron emission points. The latter is set by the pair production locality and the spatial diffusion of the charm quarks. Using an example of central Pb+Pb collisions at the CERN SPS energies, we show that future measurements of the azimuthal correlations of charm and anti-charm hadrons should allow us to distinguish between different assumptions on their spatial correlations. This provides a unique window into a poorly understood sector of particle production at high energy densities. The measurements can help to constrain the diffusion of charm quarks and verify assumptions concerning the production locality of a charm and anti-charm quark pair. ## I Introduction Collisions of heavy ions at relativistic energies provide insights into fascinating features of nuclear matter at high energy densities. This includes the creation of the Quark-Gluon Plasma (QGP) - a state of matter with quark and gluon degrees of freedom expected to exist in the Universe's first moments. Impressive progress has been made in the last three decades in experimental and theoretical studies of the QGP. Still, there are areas where an adequate understanding of the underlying processes is yet to be achieved, for example, in measuring and modelling particle correlations and fluctuations. The most popular approaches to predict the spectra of final-state particles in heavy-ion collisions are focused on the lowest-order distributions [1]. For instance, the relativistic-kinetic theory deals with the one-particle distribution function \(f(x,{\bf p})\) but neglects the many-particle distributions, that is, the higher orders in the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy [2]. More generally, the expectation values of operators are considered, for instance, the energy density in hydrodynamics. Still, their fluctuations (e.g. variance and higher orders) and related correlations are more difficult to deal with. Experimentally, measurements of correlations and fluctuations are also significantly more challenging than measurements of one-particle spectra. Recently the field has been mostly motivated by the search for the critical point of strongly interacting matter; for example, see Ref. [3] and references therein. Measurements of correlations between the charm meson and its anti-particle have been proposed to test the equilibration of charm [4; 5] in momentum space. In a semi-classical picture, the initial back-to-back momentum correlations between the \(c\) and \(\bar{c}\) quarks are reduced by the interactions with the medium and hadronisation of the quarks (see, for instance, Ref. [6] and references therein).
This paper is motivated by the fact that, even for a locally thermalised and expanding medium, the momenta of charm and anti-charm hadrons originating from the same \(c\) and \(\bar{c}\) pair are correlated. Depending on the creation points and spatial diffusion properties of the charm in the medium, the charm hadron and the anti-hadron emission points can be either close or distant. In a locally thermalised and expanding system, the charm hadrons have an average momentum dependent on the fluid cell's drift speed (flow). If the emission points of the hadrons are close, they will have a similar drift, and thus their momenta will be correlated. Thus the idea presented here utilises the collective flow of charm hadrons measured in heavy-ion collisions at high energies [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. We assume that the final-state momenta of charm hadrons are given by the superposition of the flow and a random (thermal) contribution due to statistical hadronisation. Hadronic rescattering and final-state interactions are neglected, which is supported by the recent measurements of interaction parameters of \(D\)-mesons with hadrons [18; 19]. The flow contribution depends only on the emission point in the freeze-out hypersurface, whereas the thermal contribution is a random effect, uncorrelated for different hadrons. Here we stress that the arguments are generally valid, but to directly measure the wanted correlations, one should have no more than one \(c\bar{c}\) pair produced per collision. Otherwise, the measured two-particle correlation function includes pairs of \(c\)- and \(\bar{c}\)-hadrons coming from different and likely independent charm production processes. The magnitude of this unwanted contribution to the momentum-correlation results strongly depends on the multiplicity distribution of heavy-flavour pairs. This effect is especially important in heavy-ion collisions at RHIC and the LHC. On average, one expects \(\simeq 3\)\(c\bar{c}\) pairs in the 10% most central Au+Au collisions at \(\sqrt{\rm{s_{NN}}}=200\) GeV at RHIC [20; 21], and a few tens at the LHC (for example, \(\simeq 30\)\(c\bar{c}\) pairs in the 10% most central Pb+Pb reactions at \(\sqrt{\rm{s_{NN}}}=5.02\) TeV [22; 23]). The multiplicity distribution of heavy flavour is nowadays difficult to access experimentally, and thus the wanted correlations at very high energies cannot be extracted in a model-independent way. Thus, to minimize the bias due to unwanted correlations, the measurements should be performed at sufficiently low collision energies, where the mean multiplicity of \(c\bar{c}\) pairs is below one. For this reason, we consider the example of central Pb+Pb collisions at the CERN SPS energies. This example can be straightforwardly extended to bottom hadron production at the LHC. The heavy-flavour production and azimuthal correlations in heavy-ion collisions at very high energies were addressed theoretically in the past; for a review, see Ref. [24]. In particular, they were considered as a tool for uncovering the mechanism behind jet suppression [25; 26] and for studying the charm energy-loss mechanism [27; 28; 29; 30]. The heavy-quark spatial diffusion in QCD matter was discussed recently in Refs. [31; 32; 33]; see also references therein. The ATLAS experiment measured the azimuthal-angle correlations of muon pairs originating from heavy-flavour decays in 5.02 TeV Pb+Pb collisions [34]. One notes that the measured muon pairs come from jet-like correlations of high transverse-momentum heavy-flavour hadrons.
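The dilution by multiple pairs can be made quantitative with a short estimate. Assuming, as in Sec. III below, that the \(c\bar{c}\) multiplicity per event is Poisson distributed with mean \(\langle c\bar{c}\rangle\), an event with \(N\) pairs contributes \(N\) true pairs but \(N^{2}\) \(c\)-\(\bar{c}\) combinations, so the fraction of true pairs among all combinations is \(\mathrm{E}[N]/\mathrm{E}[N^{2}]=1/(1+\langle c\bar{c}\rangle)\). A minimal sketch of this estimate (our own illustration) reads:

```python
# Fraction of c/c-bar combinations that are true pairs, for Poisson-distributed
# pair multiplicity: E[N] / E[N^2] = lam / (lam + lam**2) = 1 / (1 + lam).
for lam in (0.1, 0.2, 0.5, 1.0, 3.0, 30.0):
    print(f"<cc> = {lam:5.1f}  ->  true-pair fraction = {1 / (1 + lam):5.1%}")
# ~91% at <cc> = 0.1 (SPS-like), but only ~25% at 3 (RHIC) and ~3% at 30 (LHC)
```

The output reproduces the last row of Table I below and shows at a glance why the RHIC and LHC multiplicities quoted above spoil a model-independent extraction.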
The paper is organised as follows. The above intuitive expectations are quantified using the simple modelling presented in Sec. II. The physics meaning of different assumptions on charm-hadron correlation in the emission volume is also discussed in this section. The feasibility of the corresponding measurements is estimated in Sec. III, and the results are summarised in Sec. IV. ## II Quantitative predictions and discussion The following assumptions are made to quantify the intuitive expectations for the considered momentum correlation: 1. The production of charm hadrons in head-on Pb+Pb collisions is considered. The collision energy is assumed to be adjusted to have a mean charm multiplicity below one, allowing us to neglect the production of more than one \(c\)- and \(\bar{c}\)-hadron pair in a single collision. This likely corresponds to the top CERN SPS energy (\(\sqrt{s_{NN}}\approx 17\) GeV) [35]. 2. The charm hadrons are emitted from the freeze-out hyper-surface of a spherical fireball undergoing a Hubble-like expansion. That is, the three-velocity reads \(\vec{v}=\vec{r}/t\), with \(\vec{r}=x,y,z\) being the distance from the centre of the fireball, and the four-velocity \(u^{\mu}=x^{\mu}/\tau=x^{\mu}/\sqrt{t^{2}-r^{2}}\). It was recently demonstrated that Hubble-like expansion is an appropriate approximation of velocity fields in heavy-ion collisions in the energy range of our interest [36]. 3. The freeze-out hyper-surface is set by the freeze-out time \(\tau=\tau_{fo}\) and the maximal radius \(r\leq R_{max}\). They are set, respectively, to \(\tau_{fo}=9\) fm/\(c\) and \(R_{max}=6\) fm. 4. The emission probability of charm hadrons is independent of the fluid cell on the freeze-out hyper-surface, consistent with the method used to predict the spectra within the relativistic hydrodynamics approach. Note that the considered correlations are given by the conditional probability of the charm hadron to be emitted from the same cell or another one with respect to the anti-charm one. 5. In the rest frame of the flow, the charm hadron momentum \(p\) distribution at the freeze-out hyper-surface is assumed to be the statistical one: \[\frac{d^{3}N}{dp\ d^{2}\Omega}\ \propto\ p^{2}\exp\left(\frac{-\sqrt{m^{2}+p^{2}}}{T_{\rm fo}}\right)\,, \tag{1}\] where \(m=1.869\) GeV/\(c^{2}\) is the charm hadron mass, assumed to be equal to the \(D^{0}\) meson mass, and the temperature parameter is \(T_{\rm fo}=150\) MeV. The statistical momenta of charm hadrons are drawn independently. 6. To calculate the hadron momentum in the collision rest frame, the obtained statistical four-momentum is boosted with the flow velocity, \(\vec{v}=\vec{u}\ /\ \sqrt{1+u^{2}}\). Note that for simplicity, we do not consider correlations between the momenta of \(c\)- and \(\bar{c}\)-hadrons resulting from energy-momentum conservation and the dynamics of the pair creation process. The change of these correlations during the system evolution was discussed in Refs. [4; 5] for heavy-ion collisions at the top RHIC energy (\(\sqrt{s_{NN}}=200\) GeV) and at the LHC. Given that we focus on the production of charmed mesons with low \(p_{\rm T}\) in low-energy collisions, we expect the back-to-back correlation will not play a significant role in the measurement we consider in this work. Then the results are calculated for three different space correlations of the \(c\)- and \(\bar{c}\)-hadrons. These are * The \(c\)- and \(\bar{c}\)-hadrons are emitted from the same fluid cell. Thus the average of their momenta is set by the drift velocity of the cell.
Their actual momenta are different because of the independence of their momenta in the fluid rest frame. This ansatz is labelled the _local_ emission. * The emission points of charm hadrons are independent of each other. Hence, they do not have a common drift velocity. This ansatz is labelled the _independent_ emission. * The intermediate case is modelled by assuming the correlation function of the emission points to be a 3D Gaussian with \(\sigma=\sigma_{x}=\sigma_{y}=\sigma_{z}=2\) fm. Note that the points are required to be within the fireball volume. The flow components of the \(c\)- and \(\bar{c}\)-hadrons are different but correlated, leading to the correlation of their hadron momenta. Clearly, in the limits of \(\sigma\to 0\) and \(\sigma\to\infty\) one recovers the local and independent emissions, respectively. This ansatz is labelled the _correlated_ emission. Figure 1 shows the distribution of \(c\)-\(\bar{c}\) hadron pairs in the difference of azimuthal angles \(\Delta\phi\) (_left_) and transverse momenta \(\Delta p_{T}\) (_right_) for local, independent and correlated emission. The results are obtained using the Monte Carlo technique with \(10^{7}\) events generated. The distributions of the pairs in \(\Delta\phi\) differ significantly for local, independent and correlated emissions. The differences are smaller in the case of the transverse momentum difference. The flat distribution in \(\Delta\phi\) for the independent emission does not depend on the modelling of the flow and random momentum contributions. The distributions in \(\Delta\phi\) for the local and correlated emission decrease monotonically from \(\Delta\phi=0\) to \(\Delta\phi=\pi\), but the quantitative properties of this qualitative behaviour depend on model details. Nonetheless, the effect of correlation at \(\Delta\phi\approx 0\) is remarkably different compared to the back-to-back correlations expected for charm pair production in hard parton scatterings [4; 5]. Thus, we expect experimental data will allow discrimination between these two different kinds of correlation. It is clear that in the case of the \(\Delta\phi\) distribution, rather limited data statistics (see the next section) should allow us to distinguish between predictions obtained assuming different space correlations between the emitted charm hadrons and different production mechanisms of charm quarks. Encouraged by this conclusion, we turn to the standard approach to heavy-ion collisions [1] and, within it, discuss the implications of different possible outcomes of the experimental measurements. The approach pictures heavy-ion collisions at high energies as a time sequence of the following stages: 1. _Initial stage_ - a high-density quark-gluon plasma is created. QCD is assumed to be a valid theory. Charm-anti-charm quark pairs are produced locally and in a limited number because of the high energy threshold. 2. _Expansion stage_ - the plasma expands [37], reaching the hadronisation temperature \(T_{H}\approx 150\) MeV. The charm and anti-charm quarks thermalise with the medium and flow. 3. _Hadronisation stage_ - the plasma, including the \(c\) and \(\bar{c}\) quarks, is converted to hadrons and resonances following the statistical rules [38; 39] applied in the rest frame of a plasma fluid element. Thus, the momenta of charm hadrons are given by the flow and hadronisation (local statistical process) contributions. 4. _Free-streaming stage_ - resonances decay, and non-interacting hadrons freely stream in the vacuum to a detector.
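Returning to the toy model specified by assumptions 2-6, a minimal Monte Carlo sketch is given below. It uses the stated parameters, while the momentum cut-off, the sample size and the seed are our own illustrative choices; the correlated (Gaussian, \(\sigma=2\) fm) case is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 1.869, 0.150        # D0 mass and freeze-out temperature [GeV]
TAU_FO, R_MAX = 9.0, 6.0   # freeze-out proper time [fm/c] and fireball radius [fm]
P_MAX = 5.0                # sampling cut-off [GeV/c] (assumption)

_p = np.linspace(1e-4, P_MAX, 2000)
_FMAX = (_p**2 * np.exp(-np.sqrt(M**2 + _p**2) / T)).max()  # envelope for rejection

def thermal_momentum():
    """Rejection-sample |p| from eq. (1); isotropic direction in the fluid rest frame."""
    while True:
        p = rng.uniform(0.0, P_MAX)
        if rng.uniform() * _FMAX < p**2 * np.exp(-np.sqrt(M**2 + p**2) / T):
            break
    ct, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
    st = np.sqrt(1 - ct**2)
    return p * np.array([st * np.cos(phi), st * np.sin(phi), ct])

def emission_point():
    """Uniform inside the fireball of radius R_MAX."""
    v = rng.normal(size=3)
    return R_MAX * rng.uniform() ** (1 / 3) * v / np.linalg.norm(v)

def lab_momentum(x):
    """Boost a thermal momentum with the Hubble flow u = x / tau_fo."""
    u = x / TAU_FO
    u0 = np.sqrt(1.0 + u @ u)
    p = thermal_momentum()
    e = np.sqrt(M**2 + p @ p)
    return p + u * (e + (u @ p) / (1.0 + u0))

def delta_phi(local):
    xc = emission_point()
    xcbar = xc if local else emission_point()  # local vs independent emission
    pc, pcbar = lab_momentum(xc), lab_momentum(xcbar)
    d = abs(np.arctan2(pc[1], pc[0]) - np.arctan2(pcbar[1], pcbar[0]))
    return min(d, 2 * np.pi - d)

for local in (True, False):
    dphis = np.array([delta_phi(local) for _ in range(20_000)])
    print("local      " if local else "independent",
          f"fraction with dphi < pi/2: {(dphis < np.pi / 2).mean():.2f}")
```

For the independent emission the fraction is 0.5 by construction, whereas the local emission yields an excess at small \(\Delta\phi\), in line with Figure 1.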
Many additional details, conceptual and quantitative, can be added [1], about the hydrodynamic evolution or the rescattering after hadronisation. The stages listed above are the most relevant to this paper, which aims for a qualitative discussion of the correlations, the feasibility of their measurements and their phenomenological implications.

Figure 1: The distribution of \(c\)-\(\bar{c}\) pairs in the difference of azimuthal angles \(\Delta\phi\) (_left_) and transverse momenta \(\Delta p_{T}\) (_right_) for local, independent and correlated (\(\sigma=2\) fm) emission.

Within the standard heavy-ion approach, 1. The experimental data consistent with the local emission would imply a small space separation of \(c\)- and \(\bar{c}\)-quarks during the expansion stage. This should be confronted with the charm-quark spatial diffusion calculated using QCD-based approaches; for recent examples, see Refs. [31; 32; 33]. 2. The experimental data consistent with the independent emission would imply a large spatial diffusion of the charm quarks in the plasma. Ultimately, for more accurate models, because of the limited expansion time, the experimental results may even be inconsistent with a local production coupled to a semi-classical transport (hence not faster than the speed of light). This might imply non-local effects in the expansion. For the historical record, this paper was motivated by the non-local, indeterministic toy model [40] requiring the _teleportation_ transitions in its most symmetric version. 3. The data consistent with the correlated emission would give a sensitive input for restricting the charm-quark spatial diffusion in the plasma. 4. It is always wise to leave a door open to the unexpected. The approximations used to compute the spectra might fail to describe the correlations, and the experimental results could qualitatively disagree with the expectations. ## III Required Statistics of Pb+Pb Collisions for Testing the Model Predictions In this section, we discuss the feasibility of performing the required measurements of correlations between charm and anti-charm hadrons produced in head-on heavy-ion collisions. The important physics condition is a mean multiplicity of charm small enough to neglect the production of two or more pairs of charm hadrons. This requirement implies measurements at relatively small collision energies, probably close to the top SPS energy of \(\sqrt{s_{NN}}\approx 20\) GeV. It also suggests collecting data in the fixed-target mode, which, due to the Lorentz boost of the centre-of-mass, allows for high detection acceptance and efficiency. As shown below, if the charm yield is significantly lower than one or the reconstruction of open-charm hadrons is too inefficient, the event statistics needed to perform the measurement may be well beyond the capabilities of present-day experiments. For now, we only consider an analysis of the most abundant open-charm hadrons, namely, \(D^{0}\) and \(\bar{D^{0}}\) mesons. As shown later, this should be sufficient to measure the correlations with adequate precision. The required event statistics can be derived from the average number of reconstructed \(D^{0}\bar{D^{0}}\)-pairs per event, \(\langle D^{0}\bar{D^{0}}\rangle_{\rm rec}\). We perform our feasibility study assuming a detector setup and performance similar to the NA61/SHINE experiment at CERN, collecting Pb+Pb collisions at \(\sqrt{s_{NN}}=17.3\) GeV.
Assuming that the processes that impact the reconstruction of the \(D^{0}\) and \(\bar{D^{0}}\) mesons within an event are approximately uncorrelated, we estimate the average number of reconstructed pairs as \[\langle D^{0}\bar{D^{0}}\rangle_{rec}\approx\langle c\bar{c}\rangle\cdot\left(P(c\to D^{0})\cdot{\rm BR}(D^{0}\to K\pi)\cdot P({\rm acc})\cdot P({\rm sel})\cdot P({\rm rec})\right)^{2}, \tag{2}\] where \(\langle c\bar{c}\rangle\) is the average number of \(c\bar{c}\)-pairs per event. Here, \(P(c\to D^{0})\) = 0.31 is the probability for a \(c\)-quark to hadronize into the \(D^{0}\) meson, evaluated within the PHSD model [41], \({\rm BR}(D^{0}\to K^{+}\pi^{-})\) = 3.98% is the branching ratio of the decay channel used in the measurements [42], \(P({\rm acc})\) = 0.5 is the probability for the \(D^{0}\) to be within the acceptance region of the detector, \(P({\rm sel})\) = 0.2 is the probability for the \(D^{0}\) to pass the background-suppressing selection of charm meson candidates, and \(P({\rm rec})\) = 0.9 is the probability of reconstructing the meson. The value of \(P({\rm acc})\) was evaluated using a Geant4 simulation with the detector setup for November 2022, \(P({\rm sel})\) is taken from the pilot analysis of \(D^{0}\) and \(\bar{D^{0}}\) production [43], and \(P({\rm rec})\) was obtained from a Geant4 simulation with the setup for November 2022 and the reconstruction software used for the previous open-charm analyses of the 2017 and 2018 data [43; 44]. Finally, given \(\langle D^{0}\bar{D^{0}}\rangle_{rec}\), an estimate of the required event statistics can be obtained via \[\{\mbox{number of head-on events to collect}\}\approx\frac{\{\mbox{number of $D^{0}\bar{D^{0}}$ pairs to reconstruct}\}}{\langle D^{0}\bar{D^{0}}\rangle_{rec}}. \tag{3}\] The value of \(\langle c\bar{c}\rangle\) is neither reliably predicted by models nor measured by experiments. However, considering the available estimates [35], we expect that the value of \(\langle c\bar{c}\rangle\) for head-on Pb+Pb collisions at \(\sqrt{s_{NN}}\approx 17\) GeV should range from 0.1 up to 1. Putting it all together, estimates of the run time needed to collect 1000 \(D^{0}\bar{D^{0}}\)-pairs for different event rates of the updated NA61/SHINE experiment and for different values of \(\langle c\bar{c}\rangle\) are given in Table I. Figure 2 demonstrates the statistical precision of a signal from 1000 \(D^{0}\bar{D^{0}}\)-pairs, assuming that the statistical fluctuations of background pairs can be neglected. A typical ion beam period at CERN is about four weeks. Entries in Table I with a data-taking time of 100 days or more correspond to scenarios where the measurement may take longer \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(\langle c\bar{c}\rangle=0.1\) & \(\langle c\bar{c}\rangle=0.2\) & \(\langle c\bar{c}\rangle=0.5\) & \(\langle c\bar{c}\rangle=1\) \\ \hline 1 kHz & 1000 days & 500 days & 200 days & 100 days \\ \hline 10 kHz & 100 days & 50 days & 20 days & 10 days \\ \hline 100 kHz & 10 days & 5 days & 2 days & 1 day \\ \hline \hline \(N_{pair}/N_{comb}\) & 91\% & 83\% & 66\% & 50\% \\ \hline \end{tabular} \end{table} Table I: Estimate of the duration of a data-taking period needed to collect 1000 \(D^{0}\bar{D^{0}}\)-pairs (first three rows). In the calculations, a duty cycle of 30% was assumed. The last row shows the ratio of the produced pairs of \(c\bar{c}\) quarks to all combinations of them.
Figure 2: The projection for statistical precision of measurement of the azimuthal correlation \(\Delta\phi\) assuming the experiment registered \(N=1000\)\(D^{0}\bar{D^{0}}\) pairs. The local, independent, and correlated emission is assumed. than a period between the CERN accelerators' long shutdowns. Moreover, at the moment, the event rate of 100 kHz would require a significant update of the NA61/SHINE detector and its beamline. However, a setup corresponding to 10 kHz should be achievable within the nearest years. Thus we find that for \(\langle c\bar{c}\rangle>0.2\), it should be possible to perform the measurements of \(c\bar{c}\)-correlations by NA61/SHINE in the CERN Run 4 period (2028-2032). The additional possibility for the experimental study would be constructing a new experiment optimized for charm measurements. The corresponding letter of intent was recently submitted to the CERN SPSC [45]. The discussed measurement of correlations between \(c\) and \(\bar{c}\) is only meaningful for the quarks produced as a pair. However, if multiple pairs of \(c\bar{c}\) quarks were produced within the same event, we will observe an unavoidable background due to combining \(c\)- and \(\bar{c}\)-hadrons originating from different pairs. Quantifying this background suffices taking a ratio between a multiplicity of produced \(c\bar{c}\) pairs in the event to a number of all possible combinations of \(c\)- and \(\bar{c}\)-hadrons that could be observed in the event. To compute this ratio for different values of \(\langle c\bar{c}\rangle\), it was assumed that \(c\bar{c}\)-multiplicity follows a Poisson distribution, parameterized by the given \(\langle c\bar{c}\rangle\). This yields a probability of having more than one \(c\bar{c}\) pair within the same event, in which case one has unwanted combinations of \(c\)- and \(\bar{c}\)-hadrons. Obtained values of the ratio pairs to combinations are given in Table 1. Values around 50% indicate that about half of the combinations of \(c\)- and \(\bar{c}\)-hadrons are unwanted. For the real-world analysis, background due to the misidentification of open-charm hadrons would likely be significant. Moreover, lower values of \(\langle c\bar{c}\rangle\) also imply higher requirements for the event statistics. It suggests that \(\langle c\bar{c}\rangle\) should be cautiously picked for reasonable analysis. Realistic estimates of \(\langle c\bar{c}\rangle\) for head-on Pb+Pb collisions at the top CERN SPS energies range between 0.1 and 1. This further supports the conclusion that the measurements at the CERN SPS should considered. Summary In this work, we propose to study momentum correlations between pairs of \(c\)- and \(\bar{c}\)-hadrons produced in heavy-ion collisions at low collision energies. We argue that the correlations are sensitive to the four-velocity of the fluid cells from which charm hadrons are emitted. This relates the momentum correlations of charm hadrons measured in an experiment to the spatial correlations of the charm hadron emission points. The latter depends on production locality and spatial diffusion of charm at high energy densities. The obtained predictions for azimuthal angle correlations for local, independent, and correlated emission of charm hadrons differ significantly. 
Since the emission of multiple, uncorrelated, pairs of \(c\)- and \(\bar{c}\)-hadrons from a single collision would spoil the wanted correlations, it is mandatory to perform the measurements at sufficiently-low collision energies granting a low production probability of multiple-charm pairs. The proposed method can also be used for bottom hadrons. As a quantitative example, we consider charm hadron measurements in head-on Pb+Pb collisions at the CERN SPS. Assuming typical values of data-taking parameters for the NA61/SHINE experiment at SPS, we show that the required measurements would need a data-taking rate of 10k Hz or more. These rates are easily allowed by the current detector technologies. Thus the corresponding measurements should be possible by the upgraded NA61/ SHINE and the new NA60++ experiments after the CERN LS3 upgrade period. ###### Acknowledgements. _This work is partially supported by the Polish National Science Centre grant 2018/30/A/ST2/00226, the National Science Centre, Poland, grant no. 2018/30/E/ST2/00089, the German Research Foundation grant GA1480/8-1, by the Polish National Science Centre grant 2020/39/D/ST2/02054._
2305.16470
Measuring the Effect of Influential Messages on Varying Personas
Predicting how a user responds to news events enables important applications such as allowing intelligent agents or content producers to estimate the effect on different communities and revise unreleased messages to prevent unexpected bad outcomes such as social conflict and moral injury. We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona (characterizing an individual or a group) might have upon seeing a news message. Compared to the previous efforts which only predict generic comments to news, the proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response. This enables more accurate and comprehensive inference on the mental state of the persona. Meanwhile, the generated sentiment dimensions make the evaluation and application more reliable. We create the first benchmark dataset, which consists of 13,357 responses to 3,847 news headlines from Twitter. We further evaluate the SOTA neural language models with our dataset. The empirical results suggest that the included persona attributes are helpful for the performance of all response dimensions. Our analysis shows that the best-performing models are capable of predicting responses that are consistent with the personas, and as a byproduct, the task formulation also enables many interesting applications in the analysis of social network groups and their opinions, such as the discovery of extreme opinion groups.
Chenkai Sun, Jinning Li, Hou Pong Chan, ChengXiang Zhai, Heng Ji
2023-05-25T21:01:00Z
http://arxiv.org/abs/2305.16470v1
# Measuring the Effect of Influential Messages on Varying Personas ###### Abstract Predicting how a user responds to news events enables important applications such as allowing intelligent agents or content producers to estimate the effect on different communities and revise unreleased messages to prevent unexpected bad outcomes such as social conflict and moral injury. We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona (characterizing an individual or a group) might have upon seeing a news message. Compared to the previous efforts which only predict generic comments to news, the proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response. This enables more accurate and comprehensive inference on the mental state of the persona. Meanwhile, the generated sentiment dimensions make the evaluation and application more reliable. We create the first benchmark dataset, which consists of 13,357 responses to 3,847 news headlines from Twitter. We further evaluate the SOTA neural language models with our dataset. The empirical results suggest that the included persona attributes are helpful for the performance of all response dimensions. Our analysis shows that the best-performing models are capable of predicting responses that are consistent with the personas, and as a byproduct, the task formulation also enables many interesting applications in the analysis of social network groups and their opinions, such as the discovery of extreme opinion groups. + Footnote †: Code Repository: [https://github.com/chenkaisun/response_forecasting](https://github.com/chenkaisun/response_forecasting) ## 1 Introduction To prevent the flooding of misinformation and hate speech on the internet, a great amount of progress has been made toward identifying and filtering such content on social media using machine learning models Fung et al. (2021); Su et al. (2022); ElSherief et al. (2021); Sap et al. (2019). While directly creating message-level labels is a natural way to address the issue, it is equally important to measure the influence of the message on different viewers as a way to decide how to manage the publication of the messages. Existing efforts Lin and Chen (2008); Giachanou et al. (2018); Yang et al. (2019); Artzi et al. (2012) have made steps toward predicting population-level news response (e.g., predicting the most likely response to a news message), but neglected the importance of personas in measuring influence. According to Individual Differences Theory Riley (1959), which proposes that individuals respond differently to the mass media according to their psychological needs, the same message can impact different population groups/personas in different ways. For example, a message claiming the honor of sacrificing others' lives for a religious goal might agitate people who are prone to agreeing with such messages. It is therefore essential to consider personalization when inferring viewers' responses. On the other hand, the previous approaches that Figure 1: An example illustrating the task. The input consists of persona attributes (e.g., historical activities and profile) and a news message. The model is asked to predict response in multiple dimensions. 
predict text-level responses (Yang et al., 2019; Wu et al., 2021; Lu et al., 2022) have only used generation metrics for automatic evaluation, yet the same sentiment can be expressed in a multitude of ways, and text alignment metrics like BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) do not credit cases where the sentiments match but semantics do not align well. As a result, it is crucial to evaluate the sentiment dimensions of user responses. We propose Response Forecasting on Personas for News Media, a task for measuring the influence of news media messages on viewers by predicting viewers' responses. In particular, the input consists of the news message and persona information (e.g., user profile and history in our dataset), and we define response in terms of sentiment polarity, sentiment intensity, and textual response. While we include three categories in this work, many other interesting aspects can also be defined (e.g., change of attitude toward real-world entities) and we leave them to future work. Studying the problem of forecasting individual viewers' responses allows the creation of tools to assist analysts and online content producers to estimate the potential impact of messages on different communities, and sheds light on new applications such as automatically re-writing a message/email to achieve a communication goal (e.g., to obtain a positive response from the receiver). Furthermore, this new task also helps to understand associations between user attributes and emotional responses. To construct a test bed for this task, we collect a dataset from Twitter consisting of 13,357 labeled responses to 3,847 news headlines from Twitter. Using the corpus, we examine how state-of-the-art neural models work in our task. We find that the models can predict responses with reasonable accuracy yet still have a large room for improvement. We also find that the best-performing models are capable of predicting responses that are consistent with the personas, indicating that the models may be used for many exciting applications such as the discovery of groups with different opinions. ## 2 Dataset Collection In this section, we describe how we construct data from Twitter. Specifically, we used Twitter API1 to crawl news headlines and comments below each headline from CNN Breaking News2, which is one of the most popular news accounts on Twitter. Footnote 1: developer.twitter.com/en/docs/twitter-api Footnote 2: twitter.com/cnbrk **Preprocess**. We collected news headlines and corresponding comments from CNN Breaking News between January 2017 and January 2019 and removed the comments that are over 50 tokens to avoid spamming. We stripped away HTML syntax tokens and normalized user reference with special tokens "@user". ### Persona Data We categorize the users who post comments as responders. To describe responders, we gathered various persona attributes from Twitter, including (1) User Profile, which is a short paragraph describing the user, and (2) User History, which are tweets written directly by the user. We consider persona as a representation of an individual or a community that characterizes interests and beliefs. User profiles and history serve as effective indicators of persona, as they reveal such information well. Since users' behavior is generally influenced by their personas, we can potentially infer personas by analyzing data that reflects their behavior. Additionally, studying historical tweets helps us understand users' communication styles. 
To ensure that future posting activities are not included when predicting the comment, we collect the historical posts prior to the earliest data sample in our dataset for each individual user. ### Annotation We obtained 14k headline and comment pairs from preprocessing. In the annotation stage, we collect labels for sentiment intensity and polarity of comments based on the context of the headline. For the 10k training instances, we produce automatic labels using deep-learning models trained on existing message-level datasets. More specifically, we train a Deberta-based model (He et al., 2020) using data from SemEval-2018 Task 13 Mohammad et al. (2018), reaching over 85% Pearson correlation. We then proceed to use crowd-sourcing to annotate the remaining 2k samples as our evaluation set. \begin{table} \begin{tabular}{l|c c c} \hline \hline Split & Train & Dev. & Test \\ \hline \# Samples & 10,977 & 1,341 & 1,039 \\ \# Headlines & 3,561 & 1,065 & 843 \\ \# Users & 7,243 & 1,206 & 961 \\ Avg \# Profile Tokens & 10.75 & 11.02 & 10.50 \\ Avg \# Response Tokens & 12.33 & 12.2 & 11.87 \\ Avg \# Headline Tokens & 19.79 & 19.82 & 19.72 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary statistics for the dataset. **Task Setup**. The annotation for the evaluation set is performed using the Amazon Mechanical Turk (MTurk) crowd-sourcing platform. The workers were each asked to annotate a headline and comment pair with three workers assigned to each data sample. During the annotation, the annotator is asked to select the sentiment polarity label and the intensity of the sentiment based on their understanding of the input. The workers select positive, negative, or neutral for the sentiment polarity label and select on the integer scale of 0 to 3 for intensity. 415 workers participated in this task in total and all annotators are paid a fair wage above the federal minimum. **Quality Control**. To ensure the quality of annotation, we allowed only the workers who have at least 95% approval rate and have had at least 5,000 hits approved to access our tasks. We further removed workers who have a <70% accuracy in the first 30 annotations and discarded the assignments that have completion time deviated from the expected average largely. We used majority voting to determine the final labels: if at least two annotators agreed on a label, we chose it as the final label. The resulting annotated samples achieve an inter-annotator agreement accuracy of 81.3%. We show the statistics of the dataset in Table 1. ## 3 Response Forecasting on Personas for News Media ### Task Formulation In this task, we aim to predict sentiment polarity, sentiment intensity, and textual response from an individual when the individual sees a message on news media. Formally, given persona \(\mathcal{P}\) (represented by profile, or historical posts), and a source message \(\mathcal{M}\), the task is to predict the persona's sentiment polarity \(\phi_{p}\) (i.e., _Positive_, _Negative_, _Neutral_) and sentiment intensity \(\phi_{int}\) (i.e., in the scale of 0 to 3), and textual expression \(t\). Our goal is to encode \(\mathcal{P}\) and produce \(\phi_{p}\), \(\phi_{int}\), and \(t\) at decoding time. We formulate the task as a conditional generation problem and use the following maximum-likelihood objective to train a generative model: \[\sum_{i}^{N}\log p(O_{i}|O_{<i-1},\mathcal{P})\] where \(O\) is the output string concatenating \(\phi_{p}\), \(\phi_{int}\), and \(t\) with special separator tokens. 
### Experimental Setup For deep learning-based text generators, we fine-tune decoder-only text generator GPT2 (Radford et al., 2019) as well as two Encoder-Decoder models T5 (Raffel et al., 2019) and BART (Lewis et al., 2019). Greedy decoding is used for all the models during training. We further perform ablation on the best-performing model by removing different user attributes. We further include two naive baselines, _Random_ and _Majority_, for sentiment dimensions, where each prediction follows either the majority label or a random label. Our neural models are implemented using Pytorch (Paszke et al., 2019) and Huggingface Transformers (Wolf et al., 2020). The reproducibility and hyperparameter details can be found in Appendix Table 4. #### 3.2.1 Evaluation Metrics **Automatic**. We use BARTScore (Yuan et al., 2021), BLEU (Papineni et al., 2002), METEOR (Baner \begin{table} \begin{tabular}{l c c c c c c|c c|c c} \hline \hline & \multicolumn{4}{c}{**Textual Response**} & \multicolumn{4}{c}{\(\phi_{int}\)} & \multicolumn{2}{c}{\(\phi_{p}\)} \\ Name & BLEU & BScore & Meteor & R-1 & R-L & Avg. Len & \(\tau_{s}\) & \(\tau\) & MiF1 & MaF1 \\ \hline Majority & - & - & - & - & - & - & - & 43.41 & 20.18 \\ Random & - & - & - & - & - & - & 0.62 & 0.41 & 35.51 & 30.55 \\ GPT2 & 1.59 & -5.78 & 3.36 & 6.50 & 1.90 & 9.64 & 50.34 & 49.78 & 60.25 & 56.85 \\ T5 & 6.95 & -5.71 & 5.98 & **10.40** & **2.70** & 18.87 & 50.06 & 49.26 & 63.72 & 57.85 \\ BART & **8.17** & **-5.67** & **6.09** & 9.90 & 2.50 & **21.05** & **62.03** & **61.82** & **67.85** & **63.23** \\ BART w/o Profile & 7.30 & -5.70 & 5.91 & 10.00 & 2.50 & 19.47 & 57.95 & 58.20 & 67.28 & 62.26 \\ BART w/o History & 5.24 & -5.88 & 4.41 & 7.70 & 1.50 & 18.62 & 48.80 & 48.63 & 59.00 & 53.29 \\ BART w/o Both & 3.90 & -5.92 & 4.00 & 7.90 & 1.80 & 15.73 & 45.28 & 44.75 & 61.41 & 46.01 \\ \hline \hline \end{tabular} \end{table} Table 2: Response forecasting results above show that the state-of-the-art models can predict responses with reasonable performance. The best overall performance is bolded. \begin{table} \begin{tabular}{l|c c c} \hline \hline Model & Persona & Label & Context \\ \hline GPT2 & 3.18 & 3.84 & 2.84 \\ T5 & 3.68 & 4.23 & 3.57 \\ BART & **4.35** & **4.42** & **3.99** \\ \hline \hline \end{tabular} \end{table} Table 3: The table shows human evaluation results based on three consistency measures, supporting the automatic evaluation findings. jee and Lavie, 2005), and ROUGE Lin (2004) to evaluate textual response generation performance. Note that BARTScore computes the log-likelihood of producing the reference text given the generated text using a BART model pretrained on ParaBank24. Furthermore, we use Pearson and Spearman correlation to evaluate sentiment intensity, and F1 to evaluate sentiment polarity. Footnote 4: [https://github.com/neulab/BARTScore](https://github.com/neulab/BARTScore) **Manual**. We conduct human evaluation to measure the consistency of the generated outputs from those models. We define three types of consistency metrics: (1) _persona consistency_: whether the output reflects the persona's characteristics, (2) _label consistency:_ whether the response text and sentiment are consistent with each other, (3) and _context consistency:_ whether the output is responding to the input news headline. 
We randomly select 10 personas with distinct characteristics (i.e., the writing style/interest/profession do not clearly overlap) and 10 news headlines from distinct topics, and consequently generate 100 responses using each model. The samples are distributed to 5 raters who score each output based on our metrics. The raters are master students who passed a small quiz of 20 samples with at least 80% accuracy. We additionally make sure that each rater is familiar with the persona information (e.g., profile and history) before starting to work on the task. ### Results **Automatic Evaluation**. Across the metrics in Table 2, we can see that BART provides us with the highest quality response predictions on both sentiment and text levels. As expected, the performance of simple baselines is relatively low compared to other models, showing that the dataset does not have a class imbalance issue. While the automatic generation scores are generally low (i.e., words do not align well), the sentiment prediction scores are much higher in scale, demonstrating the importance of sentiment scoring to make a fair judgment of the result; the model needs to be credited for correctly predicting the latent sentiment even if it does not utter the exact sentence. Finally, we ablate user attribute features one by one. As shown in the table, not only both features included are effective for the task, but they are also complementary of each other. **Human Evaluation**. The results from human judgments (Table 3) in general support the automatic evaluation findings. Among all three models, our approach with BART reaches the highest on all metrics, showing it can generate responses of better quality than others. The difference between models on Label Consistency is noticeably lower than other metrics, and the number suggests that pretrained language models are capable of producing sentiment labels consistent with the textual expression. On the other hand, we find that BART can produce responses more consistent with the controllable variables than GPT2, which might be attributed to its denoising pretraining (e.g., it adapts better to different modeling formats). In fact, the outputs show that GPT2 hallucinates more often than other models. ### Application We hypothesize that the formulation of the task enables the application of discovering groups with different opinions on issues. We verify the hypothesis by collecting personas with contrasting stances on an issue and generating responses based on this issue. We find that the output from the model stays consistent with the persona (examples are shown in the Appendix Table 5). The result demonstrates the potential for application on social network analysis. Since the model is able to generalize to different personas or news, an analyst can therefore replace the news headline with others to segment the population based on different issues, or manually construct a persona to visualize how a person from a particular community would respond to certain issues. ## 4 Conclusions and Future Work We propose Response Forecasting on Personas for News Media, a new task that tests the model's capability of estimating the responses from different personas. The task enables important applications such as estimating the effect of unreleased messages on different communities as an additional layer of defense against unsafe information (e.g., information that might cause conflict or moral injury). 
We also create the first dataset for evaluating this new task and present an evaluation of the state-of-the-art neural models. The empirical results show that the best-performing models are able to predict responses with reasonable accuracy and produce outputs that are consistent with the personas. The analysis shows that the models are also able to generate contrasting opinions when conditioned on contrasting personas, demonstrating the feasibility of applying the models to discovering social groups with different opinions on issues for future work. In addition to this, an intriguing avenue for further research lies in utilizing response forecasting techniques to predict the popularity of discussion threads, as explored in previous studies [16, 17]. ### Limitations While the training method makes use of user profile description and history, one additional factor that is important is the structure between users and news articles. Knowing a user's social circles can often give hints about the user's interests and beliefs, which can potentially help the model to infer how a particular persona would respond to an issue. A possible direction is to design a method that explores the social context features (e.g., social network) via graph-based algorithms. ### Ethics During annotation, each worker was paid $15 per hour (converted to per assignment cost on MTurk). If workers emailed us with any concerns, we responded to them within 1 hour. The research study has also been approved by the Institutional Review Board (IRB) and Ethics Review Board at the researchers' institution. Regarding privacy concerns our dataset may bring about, we follow the Twitter API's Terms of Use5 and only redistribute content for non-commercial academic research only. We will release pointers to the tweets and user profiles in the dataset. Footnote 5: [https://developer.twitter.com/en/developer-terms/agreement-and-policy](https://developer.twitter.com/en/developer-terms/agreement-and-policy) ## Acknowledgement This research is based upon work supported in part by U.S. DARPA INCAS Program No. HR001121C0165. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Hou Pong Chan was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/060/2022/AFJ, FDCT/0070/2022/AMJ) and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST).
2305.08896
The connection between nonzero density and spontaneous symmetry breaking for interacting scalars
We consider ${\rm U}(1)$-symmetric scalar quantum field theories at zero temperature. At nonzero charge densities, the ground state of these systems is usually assumed to be a superfluid phase, in which the global symmetry is spontaneously broken along with Lorentz boosts and time translations. We show that, in $d>2$ spacetime dimensions, this expectation is always realized at one loop for arbitrary non-derivative interactions, confirming that the physically distinct phenomena of nonzero charge density and spontaneous symmetry breaking occur simultaneously in these systems. We quantify this result by deriving universal scaling relations for the symmetry breaking scale as a function of the charge density, at low and high density. Moreover, we show that the critical value of $\mu$ above which a nonzero density develops coincides with the pole mass in the unbroken, Poincar\'e invariant vacuum of the theory. The same conclusions hold non-perturbatively for an ${\rm O}(N)$ theory with quartic interactions in $d=3$ and $4$, at leading order in the $1/N$ expansion. We derive these results by computing analytically the zero-temperature, finite-$\mu$ one-loop effective potential. We check our results against the one-loop low-energy effective action for the superfluid phonons in $\lambda \phi^4$ theory in $d=4$ previously derived by Joyce and ourselves, which we further generalize to arbitrary potential interactions and arbitrary dimensions. As a byproduct, we find analytically the one-loop scaling dimension of the lightest charge-$n$ operator for the $\lambda \phi^6$ conformal superfluid in $d=3$, at leading order in $1/n$, reproducing a numerical result of Badel et al. For a $\lambda \phi^4$ superfluid in $d=4$, we also reproduce the Lee--Huang--Yang relation and compute relativistic corrections to it. Finally, we discuss possible extensions of our results beyond perturbation theory.
Alberto Nicolis, Alessandro Podo, Luca Santoni
2023-05-15T18:00:00Z
http://arxiv.org/abs/2305.08896v3
# The connection between nonzero density and spontaneous symmetry breaking for interacting scalars ###### Abstract We consider U(1)-symmetric scalar quantum field theories at zero temperature. At nonzero charge densities, the ground state of these systems is usually assumed to be a superfluid phase, in which the global symmetry is spontaneously broken along with Lorentz boosts and time translations. We show that, in \(d>2\) spacetime dimensions, this expectation is always realized at one loop for arbitrary non-derivative interactions, confirming that the physically distinct phenomena of nonzero charge density and spontaneous symmetry breaking occur simultaneously in these systems. We quantify this result by deriving universal scaling relations for the symmetry breaking scale as a function of the charge density, at low and high density. Moreover, we show that the critical value of \(\mu\) above which a nonzero density develops coincides with the pole mass in the unbroken, Poincare invariant vacuum of the theory. The same conclusions hold non-perturbatively for an O(\(N\)) theory with quartic interactions in \(d=3\) and \(4\), at leading order in the \(1/N\) expansion. We derive these results by computing analytically the zero-temperature, finite-\(\mu\) one-loop effective potential, paying special attention to subtle points related to the \(i\varepsilon\) terms. We check our results against the one-loop low-energy effective action for the superfluid phonons in \(\lambda\phi^{4}\) theory in \(d=4\) previously derived by Joyce and ourselves, which we further generalize to arbitrary potential interactions and arbitrary dimensions. As a byproduct, we find analytically the one-loop scaling dimension of the lightest charge-\(n\) operator for the \(\lambda\phi^{6}\) conformal superfluid in \(d=3\), at leading order in \(1/n\), reproducing a numerical result of Badel et al. For a \(\lambda\phi^{4}\) superfluid in \(d=4\), we also reproduce the Lee-Huang-Yang relation and compute relativistic corrections to it. Finally, we discuss possible extensions of our results beyond perturbation theory. 
## 1 Introduction * 2 Complex scalar field at finite chemical potential * 2.1 Lagrangian and Hamiltonian formulations * 2.2 Path integral formulation and \(i\varepsilon\) terms * 2.3 The effective potential and the ground state * 2.4 The fundamental question * 3 Quantum Mechanics: the point particle in a central potential * 4 QFT in \(d>2\) dimensions * 4.1 Free scalar for \(\mu<m\) * 4.2 Effective potential at finite \(\mu\) * 4.3 Expanding in powers of \(g\) * 4.4 Finite density, symmetry breaking, and the critical value of \(\mu\) * 4.5 Expanding in powers of \(\mu\) and the superfluid EFT * 5 Quantifying spontaneous symmetry breaking * 6 The \(\mathrm{O}(N)\) model at large \(N\) * 7 Applications and relation to previous works * 7.1 Conformal superfluid in \(d=3\): massless \(\lambda\phi^{6}\) * 7.2 Quartic potential in \(d=4\) * 7.3 The Lee-Huang-Yang relation and its relativistic extension * 8 Outlook: towards a non-perturbative understanding * A The spinning rigid rotor and its ground state energy * B Alternative derivation of the path integral for \(\mu<m\) * C One-loop path integral at finite \(\mu\) for arbitrary potential * D Some useful identities in dimensional regularization * E Divergencies and counterterms at finite \(\mu\) * F More details on the derivation of the scaling relations of section 5 * G Ground state and stability for the \(\mathrm{O}(N)\) model with quartic interactions * ## 1 Introduction When it comes to conserved charges and the associated symmetries in Quantum Field Theory (QFT), there is a somewhat implicit expectation that having a zero-temperature state with nonzero density for a given charge goes hand in hand with the spontaneous breaking of the associated symmetry. However, these two properties are conceptually different [1], and in fact there exist physical systems for each possible combination. For example: 1. Zero charge density and no spontaneous symmetry breaking (SSB): the Poincare-invariant vacuum of any relativistic QFT with an unbroken U(1) symmetry; 2. Zero charge density, but SSB: the Higgs phase of the Standard Model; 3. Nonzero charge density, but no SSB: a Fermi liquid; 4. Nonzero charge density and SSB: a superfluid. Nevertheless, it is believed that case 3 is realized only for the free Fermi gas: all interacting Fermi liquids end up forming Cooper pairs in the deep infrared and eventually transition to a superfluid or possibly inhomogeneous phase [2].1 For instance, experimentally, helium-3 behaves as a degenerate Fermi liquid at temperatures between \(\sim\)K and \(\sim\)mK, but at even lower temperatures it turns into a superfluid [3]. As for systems with bosons only, with some caveats2, there is no known example of a state with nonzero charge density that does _not_ break the corresponding symmetry. So, it appears that, at zero temperature, under very general conditions, a nonzero charge density implies spontaneous symmetry breaking. Footnote 1: Another possibility, in the presence of gapless bosons, is the onset of non-Fermi liquid behavior. We leave aside this currently poorly understood scenario from our discussion. Footnote 2: There are in fact explicit examples of interacting bosonic theories in \(d=3\) that at finite \(\mu\) display an emergent fermionic behavior, with bosons satisfying an effective exclusion principle. This is the case of bosonic Chern–Simons theories at large \(N\)[4] (see [5] for earlier work on fermionic Chern–Simons theories displaying bosonic behavior). 
Even though the coupling of the scalar field to Chern–Simons gauge fields proves crucial in renormalizing the particle spin, we are not aware of a general proof that similar (or other exotic) phenomena cannot occur in theories of interacting scalar fields. See for instance Ref. [6] for an example of exotic phase at finite \(\mu\) and \(T=0\) in \(d=2\) QFT (see also [7] for a review of earlier works on the same model). In lattice systems, an additional possible phase is provided by the (bosonic) Mott insulator (we thank Sean Hartnoll for remarking this to us). This expectation is so ingrained in the way we think about finite density systems, that it is a more or less implicit assumption in much of the recent "large-charge" CFT literature, starting with the seminal paper [8]. To appreciate why it is a nontrivial assumption, apart from considering the free Fermi gas case, where it is manifestly violated, one can consider a self-interacting massive complex scalar \(\Phi\) with a U(1) symmetry. There, a homogeneous state \(|\Psi\rangle\) has nonzero density \(J^{0}=i\,\Phi^{*}\overset{\leftrightarrow}{\partial}_{t}\Phi\) for the U(1) charge if and only if \[\langle\Psi|\Phi^{*}\overset{\leftrightarrow}{\partial}_{t}\Phi|\Psi\rangle \neq 0\;. \tag{1}\] On the other hand, \(|\Psi\rangle\) breaks the U(1) symmetry if and only if there exists a charged local operator, for instance \(\Phi(x)\) itself, with a nonzero expectation value on \(|\Psi\rangle\): \[\langle\Psi|\Phi|\Psi\rangle\neq 0\;. \tag{2}\] These two conditions look quite independent, and neither seems to be implying the other. Certainly, there are systems obeying (2) that do _not_ obey (1) (see case 2 above.) Why is it then that all systems obeying (1) happen to obey (2) as well? At the classical field theory level, there is no mystery: since the density is bilinear in \(\Phi\) and \(\Phi^{*}\), to have nonzero density one needs a nonzero \(\Phi\), which breaks the symmetry. At the free QFT level also there is no mystery: at nonzero charge density there is the phenomenon of Bose-Einstein condensation, which implies that the symmetry is spontaneously broken (although it takes some work to prove this last implication [9]). So, the real question is at the level of the interacting, quantum theory. It is important to notice that, with the exception of the somewhat degenerate case of a free theory, there is a control parameter--the chemical potential \(\mu\)--that can be used to modulate the density. Classically, one immediately finds that, if at vanishing \(\mu\) the symmetric phase with \(\Phi=0\) is stable and the field there has mass \(m\), then for \(\mu<m\) the system will stay in that phase, with no charge density and no symmetry breaking, while for \(\mu>m\) it will simultaneously develop a nontrivial \(\Phi\) and a nontrivial density \(J_{0}\). However, at the quantum level the two operators are two distinct operators, with different quantum numbers, and it is thus a sensible question to ask whether the scales at which they develop a non-zero expectation value, say \(\mu=\Lambda_{\Phi}\) and \(\mu=\Lambda_{J}\), are the same or not. In this notation, at tree level one has two, in principle independent, equalities: \[\Lambda_{\Phi} =\Lambda_{J}\equiv\mu_{\rm crit} \tag{3}\] \[\mu_{\rm crit} =m\;. \tag{4}\] We may thus ask: Does the first equality survive at the quantum level? If it does, how is the second corrected? 
One of our main results will be that, for scalar field theories with generic non-derivative self-interactions, _both_ equalities survive at one-loop order, with \(m\) now being replaced by the physical pole mass of the scalar quanta in the unbroken phase, \(\mu_{\rm crit}=m_{\rm pole}\). Moreover, we will find that the same result holds non-perturbatively in the O(\(N\)) vector model with quartic interactions in \(d=3,4\), at leading order in the large \(N\) expansion. We shall also quantify the amount of symmetry breaking as a function of the charge density, by deriving universal scaling relations at low and high density and a strict lower bound. As a byproduct, we shall derive the one-loop phonon effective actions for the associated superfluid phases, which are directly related to their equations of state. These results will reproduce and generalize the independent computation of [10]. In the following we adopt a path integral approach and pay particular attention to the \(i\varepsilon\) terms needed to project onto the ground state of the interacting system at finite \(\mu\). We shall see that the \(i\varepsilon\) term projects on a time-independent field configuration (both at zero and at finite density) only in a specific basis of field variables, in which the generalized Lagrangian \({\cal L}_{\mu}\) is explicitly \(\mu\)-dependent and has quadratic terms which are of first order in time derivatives. We shall perform our computations in this basis, so that the properties of the ground state of the system can be extracted from the finite \(\mu\) quantum effective potential \(V_{\rm eff}\). Moreover, the structure of the poles of the propagators including the finite \(\mu\)\(i\varepsilon\) term is always such that observables can be computed in Euclidean space, since the Wick rotation is an analytic continuation that does not cross any singularity. When computing the one-loop effective potential we shall therefore compute the one-loop integrals in Euclidean space. In a separate work [11] we shall consider systems of fermions at finite chemical potential and show how, crucially, an accurate treatment of the \(i\varepsilon\) term allows to compute finite \(\mu\) quantities such as the free-energy of a Fermi gas using path integral methods. The \(\mu\) (in)dependence of the fermionic path integral for small \(\mu\) in QCD has been analyzed in [12]; see also [13] for a recent study of the large charge sector of fermionic CFTs in \(d=3\) and their infrared phases. Related computations for bosonic systems have been performed previously in Refs. [14; 15; 16; 17; 18]. Kapusta [14] performed a finite-temperature field theory analysis employing the so-called quasi-classical quasi-particle technique that does not fully capture the one-loop corrections in a systematic way, as already noted in the same paper. This computation was later improved by Bernstein, Dodelson and Benson [15; 16]. The authors considered scalar fields in \(d=4\) and formulated the effective potential computation in a way similar to ours, then specializing to the \(\lambda\phi^{4}\) model. The finite temperature computation was also recently reconsidered in Ref. [17]. The effective potential, however, is expressed only as an implicit integral over loop momenta. Our results are in agreement with those of [15; 16] whenever they overlap. 
Brauner [18] formulates the calculation of the effective potential for the theory of a complex scalar doublet with \(\phi^{4}\) interactions and internal symmetry \({\rm SU}(2)\times{\rm U}(1)\) in \(d=4\), and resorts to numerical calculations for the study of its minimum and other properties of interest. Related work has also appeared in the context of pion condensation [19], see for instance [20; 21; 22]. In particular, Adhikari, Andersen and Kneschke have shown that in chiral perturbation theory the pion condensation transition occurs for a critical chemical potential equal to the pion pole mass at next-to-leading order. On a more formal side, some properties of the relativistic \(\lambda\phi^{4}\) model at finite \(\mu\) have been analyzed in the context of axiomatic QFT in Ref. [23]. To the best of our knowledge, the integral representations for the finite \(\mu\) effective potential and the closed form expressions for its minimum and the related superfluid effective Lagrangian that we obtain, as well as the general statements on symmetry breaking at finite density and the scaling relations for the symmetry breaking scale, are new results that have not appeared before. Our discussion on spontaneous symmetry breaking will be limited to theories in which the number of spacetime dimensions is strictly larger than two. The reason for this is the well-known fact that spontaneous symmetry breaking in quantum mechanics and in two dimensional field theories is a subtle concept and requires more care. For instance, the Coleman-Mermin-Wagner theorem [24; 25] implies that in two-dimensional theories with a Lorentz invariant ground state there is no spontaneous symmetry breaking, at least in the ordinary sense of local order parameters for internal global symmetries. The computation of the effective potential is formally valid also in low dimensions, and some physical quantities can be meaningfully extracted from it, as we shall see in the example of a spinning rigid rotor (Appendix A). However, in the low-dimensional case the analysis of the effective potential is not enough to make definite statements on spontaneous symmetry breaking, and in our approach this shows up as a breakdown of perturbation theory. (We leave a detailed discussion of these aspects to a future work [26].) Nevertheless, a formal analysis of the quantum mechanical case \(d=1\) allows to extract useful physical information and is carried out as a warm-up problem. Since our paper is rather long and technical, we provide here a roadmap of its structure: * We review in section 2 the derivation of the finite \(\mu\) Lagrangian for a complex scalar field \(\Phi\) with arbitrary U(1)-symmetric potential, discussing in detail the role of the \(i\varepsilon\) term and the structure of poles in the propagators. In section 2.3, we apply functional methods to our system, introducing the formal ingredients that we will use in the rest of the work. In particular, we briefly review the quantum effective action, highlighting the differences due to the presence of a chemical potential \(\mu\) and the subtlelties associated with the use of the generating functional \(W[j]\) and the effective potential. We then formulate our general question in this language. * In section 3, as a warm up example, we compute the one-loop effective potential of a quantum mechanical point particle in a central potential. In the related appendix A we rederive the ground state energy quantization of the spinning rigid rotor in three dimensional space, see eqs. 
(A.8) and (A.9). * In section 4, we derive the one-loop effective potential of a complex scalar field with arbitrary U(1)-invariant, non-derivative self-interactions in generic \(d>2\) spacetime dimension. We show explicitly that the expectation that finite density is always accompanied by the spontaneous breaking of the U(1) internal symmetry remains valid at the quantum level at one loop (section 4.4). We also prove that the critical value of \(\mu\) above which the system can support a finite density state is given by the pole mass of the scalar field in the \(\mu=0\) theory. Moreover, we derive a universal analytic expression for the one-loop effective action that describes the superfluid phase of these theories and determine the order of the finite density phase transition (section 4.5). We then quantify the amount of symmetry breaking (section 5) by explicitly deriving universal scaling relations between the charge density and symmetry breaking scale, in the low and high-density limits. These results, together with those of the next section, are the main results of our work. * In section 6, we study an O(\(N\))-symmetric theory of \(N\) real scalars with quartic interactions in \(d=3\) and show that the same conclusions hold non-perturbatively at the leading order of the large-\(N\) limit. This theory provides also an explicit example of a model whose finite density dynamics at large density is described by a superfluid effective field theory (EFT) that is _not_ that of a conformal superfluid. This property is related to the super-renormalizability of the model. In section 7, we consider some special cases and compare with previous results in the literature. In particular, we compute (to leading order in \(1/n\)) the one-loop scaling dimension of the lightest charge \(n\) operator in the U(1) theory of a massless complex scalar field with \(\phi^{6}\) interactions in \(d=3\), and show that it is in agreement with the numerical result of Ref. [27]. In addition, we specialize our results of section 4 to the case of a \(\phi^{4}\) potential in \(d=4\) and further comment on the connection with Ref. [10]. In the low density limit we also reproduce the Lee-Huang-Yang relation for the superfluid energy density and compute relativistic corrections to it. More technical aspects and details are collected in the Appendix. #### Notation and conventions. We work in flat spacetime and adopt the mostly minus signature \(\eta_{\mu\nu}=\text{diag}(+1,-1,\ldots,-1)\) for the metric. We denote by \(d\) the number of space-time dimensions. Throughout the paper \(\Phi=(\varphi_{1}+i\,\varphi_{2})/\sqrt{2}\) will denote a complex scalar field, with real components \(\varphi_{i}\). We shall use the shorthand notation \(\phi=|\Phi|=\sqrt{(\varphi_{1}^{2}+\varphi_{2}^{2})/2}\) for its norm. To simplify the notation, we shall assume without loss of generality that \(\mu>0\). By a slight abuse of notation, we will use the symbol \(Q\) to denote both the expectation value of the conserved charge and its volume density, suppressing factors of volume. The correct meaning of the symbol should be clear from the context and volume factors can be reintroduced as desired by dimensional analysis. In this article we assume that the scalar has positive \(m^{2}\). ## 2 Complex scalar field at finite chemical potential We start by reviewing the formulation of a zero-temperature scalar QFT at finite chemical potential, paying special attention to the \(i\varepsilon\) terms. 
We stress that this is crucial to correctly derive the effective potential \(V_{\text{eff}}\) and compute the observables on the ground state at finite \(\mu\). For the sake of the presentation, we shall mostly focus here on the free theory of a complex scalar field. We shall mention at the end of the section how the discussion generalizes in the presence of an interaction potential. ### Lagrangian and Hamiltonian formulations Let \(\Phi(x)\) be a free complex scalar of mass \(m\). In Minkowski space with mostly-minus signature for the metric, its Lagrangian density is \[\mathcal{L}=(\partial_{\nu}\Phi)^{\dagger}(\partial^{\nu}\Phi)-m^{2}\Phi^{ \dagger}\Phi, \tag{1}\] and the U(1) symmetry \(\Phi\to e^{-i\alpha}\Phi\) is manifest. The theory can be rewritten in terms of two real scalars \(\varphi_{i}\), \(\Phi=(\varphi_{1}+i\,\varphi_{2})/\sqrt{2}\), with Lagrangian density \[\mathcal{L}=\frac{1}{2}\,\partial_{\nu}\varphi_{i}\,\partial^{\nu}\varphi_{i} -\frac{m^{2}}{2}\varphi_{i}\varphi_{i}\,. \tag{2}\] Under an infinitesimal U(1) transformation, \(\delta\Phi=-i\alpha\Phi\), the real scalars transform by an SO(2) rotation, \[\delta\varphi_{i}=\alpha\,\epsilon_{ij}\,\varphi_{j}\;, \tag{3}\] so that the Noether current associated with the symmetry is \[J^{\nu}=\frac{\partial\mathcal{L}}{\partial(\partial_{\nu}\varphi_{i})}\frac{ \delta\varphi_{i}}{\delta\alpha}=\epsilon_{ij}\,\partial^{\nu}\varphi_{i}\, \varphi_{j}\;. \tag{4}\] We are interested in studying this system at zero temperature but in the presence of a finite chemical potential for the U(1) charge. To define the system at finite \(\mu\) we switch to the canonical formalism and work with a generalized Hamiltonian that includes a chemical potential term, \(H_{\mu}\equiv H-\mu Q\). The conjugate momenta associated with the real scalar fields are \[\pi_{i}=\frac{\partial\mathcal{L}}{\partial\dot{\varphi}_{i}}=\dot{\varphi}_{ i}. \tag{5}\] The canonical Hamiltonian density is readily obtained: \[\mathcal{H}=\pi_{i}\dot{\varphi}_{i}-\mathcal{L}=\frac{1}{2}\pi_{i}\pi_{i}+ \frac{1}{2}\vec{\nabla}\varphi_{i}\cdot\vec{\nabla}\varphi_{i}+\frac{m^{2}}{2 }\varphi_{i}\varphi_{i}\;, \tag{6}\] and the generalized Hamiltonian density at finite \(\mu\) is \[\mathcal{H}_{\mu}=\mathcal{H}-\mu J^{0}=\frac{1}{2}\pi_{i}\pi_{i}+\frac{1}{2} \vec{\nabla}\varphi_{i}\cdot\vec{\nabla}\varphi_{i}+\frac{m^{2}}{2}\varphi_{ i}\varphi_{i}-\mu\,\epsilon_{ij}\,\pi_{i}\varphi_{j}\;. \tag{7}\] The corresponding Lagrangian--the Lagrangian at finite chemical potential--is just the Legendre transform of this, and reads \[\mathcal{L}_{\mu}=\frac{1}{2}\,\partial_{\nu}\varphi_{i}\,\partial^{\nu} \varphi_{i}-\frac{1}{2}(m^{2}-\mu^{2})\varphi_{i}\varphi_{i}+\mu\,\epsilon_{ ij}\,\dot{\varphi}_{i}\varphi_{j}\;. \tag{8}\] After integration by parts, this can be equivalently expressed in the matrix form \[\mathcal{L}_{\mu}=\frac{1}{2}\begin{pmatrix}\varphi_{1}&\varphi_{2}\end{pmatrix} \cdot K\cdot\begin{pmatrix}\varphi_{1}\\ \varphi_{2}\end{pmatrix}, \tag{9}\] with \[K=\begin{pmatrix}-\Box-m^{2}+\mu^{2}&-2\mu\partial_{t}\\ 2\mu\partial_{t}&-\Box-m^{2}+\mu^{2}\end{pmatrix}\;. \tag{10}\] By looking at the zeroes of the determinant of \(K\) in momentum space, one can read off the poles of our fields' propagators. 
Thinking for the moment only about the small \(\mu\) case, \(\mu<m\), we have positive energy solutions \[\omega_{\pm}^{(+)}=\omega_{k}\mp\mu\;, \tag{11}\] where \(\omega_{k}\) is the standard relativistic expression \[\omega_{k}=\sqrt{k^{2}+m^{2}}\;,\qquad k^{2}\equiv\left|\vec{k}\right|^{2}\;, \tag{12}\] and negative energy ones, \[\omega_{\pm}^{(-)}=-\omega_{k}\mp\mu\;. \tag{13}\] To explain the notation and gain some intuition, consider again our modified Hamiltonian, \(\mathcal{H}_{\mu}=\mathcal{H}-\mu Q\). As we will explain in detail, for \(\mu<m\) the ground state is still the Poincare invariant vacuum, and the excitations are still the standard Fock states. So, we have particles with charge \(q=1\) and anti-particles with charge \(q=-1\). Their energies as measured by \(\mathcal{H}_{\mu}\) are thus shifted by \(\mp\mu\) compared to their standard ones. So, going back to the frequencies above: \(\omega_{+}^{(+)}\) is the positive energy (superscript (\(+\))) solution for the positively charged (subscript \(+\)) particle, and so on. ### Path integral formulation and \(i\varepsilon\) terms Consider now the path integral formulation of the theory, in particular that for time-ordered correlation functions on the ground state of the modified Hamiltonian \(\hat{H}_{\mu}=\int d^{d-1}x\,\hat{\mathcal{H}}_{\mu}\). We can project onto that state by introducing an appropriate \(i\varepsilon\) term in the Hamiltonian path integral. Following the standard procedure, we define the partition function \[Z(\mu)=\int D\varphi D\pi\,e^{i\int\mathrm{d}^{d}x\,I(\pi_{i},\varphi_{i})}\;, \tag{14}\] with \[I(\pi_{i},\varphi_{i}) =\pi_{i}\dot{\varphi}_{i}-\mathcal{H}_{\mu}+i\varepsilon\mathcal{ H}_{\mu}\] \[=\pi_{i}\dot{\varphi}_{i}-\frac{1}{2}(1-i\varepsilon)\Big{[}\pi_ {i}\pi_{i}+\vec{\nabla}\varphi_{i}\cdot\vec{\nabla}\varphi_{i}+m^{2}\varphi_{ i}\varphi_{i}-2\mu\,\epsilon_{ij}\,\pi_{i}\varphi_{j}\Big{]}\;. \tag{15}\] Since the exponent in the path integral (14) is a quadratic polynomial in the momenta, we can play the usual game of solving for the momenta and derive a Lagrangian version of the path integral. After some straightforward manipulations and keeping up to first order in \(\varepsilon\), we arrive at \[Z(\mu)=\int D\varphi\,\exp\Big{\{}i\int\mathrm{d}^{d}x\,\big{(}\mathcal{L}_{ \mu}(\varphi_{i},\partial\varphi_{i})+i\varepsilon\,\mathcal{E}_{\mu}(\varphi _{i},\partial\varphi_{i})\big{)}\Big{\}}\;, \tag{16}\] where \(\mathcal{L}_{\mu}\) is the same Lagrangian at finite \(\mu\) as above, and \[\mathcal{E}_{\mu}(\varphi_{i},\partial\varphi_{i})\equiv\frac{1}{2}\left[ \dot{\varphi}_{i}\dot{\varphi}_{i}+\vec{\nabla}\varphi_{i}\cdot\vec{\nabla} \varphi_{i}+(m^{2}-\mu^{2})\varphi_{i}\varphi_{i}\right]\;. \tag{17}\] The Lagrangian \(i\varepsilon\) term (17) is thus weighted by the Hamiltonian of a free complex scalar with squared mass (\(m^{2}-\mu^{2}\)). In particular, it does not contain any mixing terms between the two components \(\varphi_{i}\). Reading off its positivity properties is thus straightforward: * For \(\mu<m\), \(\mathcal{E}_{\mu}\) is a positive definite quadratic form in the space of functions over which we are integrating. The path-integral is thus convergent, and, upon a Wick rotation, is equivalent to the Euclidean one. In particular, in this case our \(i\varepsilon\) term is equivalent to the usual one, \(i\varepsilon\int\frac{1}{2}\varphi_{i}\varphi_{i}\). 
* For \(\mu>m\), the mass term in \(\mathcal{E}_{\mu}\) is negative definite, which signals that the path integral is not convergent. This, as we will see, is related to a ghost-like instability. In a free theory there is no cure. With self-interactions instead, this signals that we are expanding about the wrong saddle point. * For \(\mu=m\), the mass term in \(\mathcal{E}_{\mu}\) vanishes. This makes the zero mode of \(\varphi\) a flat direction, which, in a free theory, is associated with the phenomenon of Bose-Einstein condensation. So, in a free theory only \(\mu\leq m\) is allowed. When we add an interaction potential term \(-V_{\mathrm{int}}(\varphi)\) to the original Lagrangian, all manipulations above go through unaltered, and the end result is that now \({\cal L}_{\mu}\) is supplemented by the same term \(-V_{\rm int}(\varphi)\), while the \(i\varepsilon\) term includes a \(+V_{\rm int}(\varphi)\) piece. Its positivity properties in the vicinity of \(\varphi=0\) are thus the same as above. However, when \(\varphi=0\) becomes unstable, that is for \(\mu\geq m\), the interaction potential can make \({\cal E}_{\mu}\) positive definite about a different, but constant, field configuration \(\varphi\neq 0\). This will be the new saddle point one has to expand about, which will lead to SSB. Repeating the analysis of the previous section, we find that the \(i\varepsilon\) term shifts the poles from the real line to the second and fourth quadrants of the complexified frequency plane, for all values of \(\mu\). As a result, the analytic continuation from Minkowski to Euclidean space can be performed without crossing any singularity. ### The effective potential and the ground state In order to study the properties of the ground state in an interacting QFT it is often convenient to use functional methods [28; 29; 30]. This approach allows us to extend the semiclassical approximation beyond tree level in a systematic and well-defined way and to include the dynamical effects of external sources. We briefly review this approach in order to highlight the differences introduced by the chemical potential \(\mu\) and the role of the \(i\varepsilon\) term. We start from the Lagrangian path-integral representation of the partition function in the presence of sources \(j_{i}(x)\), \[Z[j_{i};\mu]\equiv\int D\varphi\exp\left\{i\int{\rm d}^{\rm d}x\,({\cal L}_{ \mu}[\varphi]+j_{i}(x)\varphi_{i}(x)+i\varepsilon\,{\cal E}_{\mu}[\varphi]\, )\right\}\;. \tag{18}\] We treat \(\mu\) as a constant parameter on the same footing as the other couplings.3\(Z[j_{i};\mu]\) is a functional of the sources and its functional derivatives generate all the time-ordered Green's functions of \(\hat{\varphi}_{i}\) in the presence of the sources \(j_{i}(x)\). It is often more convenient to work with the generating functional of connected Green's functions, Footnote 3: We shall see, however, that, differently from the mass and the self-interaction couplings, the chemical potential \(\mu\) does not need to be renormalized. \[W[j;\mu]=-i\log Z[j;\mu]. \tag{19}\] The so-called _classical field_ is defined as the expectation value of \(\hat{\varphi}(x)\) in the presence of the source \(j(x)\): \[\varphi_{cl}(x)=\frac{\delta W[j;\mu]}{\delta j(x)}=\langle\Omega|\hat{ \varphi}(x)|\Omega\rangle_{j,\mu}\;. 
\tag{20}\] The quantum effective action \(\Gamma[\varphi_{cl};\mu]\) is defined through the Legendre transform \[\Gamma[\varphi_{cl};\mu]=W[j;\mu]-\int{\rm d}^{d}x\,j(x)\varphi_{cl}(x), \tag{21}\] where \(j(x)\) is understood as a functional of \(\varphi_{cl}(x)\), through the inverse of equation (20). \(\Gamma[\varphi_{cl};\mu]\) generates one-particle-irreducible (1PI) Green's functions, as can be proved by taking appropriate functional derivatives. From the definition of the Legendre transform it follows that \[\frac{\delta\Gamma[\varphi_{cl};\mu]}{\delta\varphi_{cl}(x)}=-j(x). \tag{22}\] In particular, the expectation value of \(\hat{\varphi}(x)\) for vanishing external source must be a stationary point of the effective action, _i.e._ it obeys \(\delta\Gamma[\varphi_{cl};\mu]/\delta\varphi_{cl}(x)=0\). The quantum effective action admits a loop expansion, and, perhaps more importantly, a derivative expansion. The lowest order in the latter corresponds to constant field values, and, in that limit, the quantum effective action reduces to just an effective potential term, \(-\int\mathrm{d}^{d}x\,V_{\mathrm{eff}}(\varphi_{cl};\mu)\), which generates correlation functions with vanishing external momenta. At tree level, the effective potential is just the ordinary potential \(V(\varphi_{cl};\mu)\), while at all orders it has a representation in terms of a path-integral over quantum fluctuations about \(\varphi_{cl}\): \[e^{-i\int\mathrm{d}^{d}x\,V_{\mathrm{eff}}(\varphi_{cl};\mu)}=\int_{1\mathrm{PI}}D\delta\varphi\,e^{i\int\mathrm{d}^{d}x\,\big{(}\mathcal{L}_{\mu}[\varphi_{cl}+\delta\varphi]+i\varepsilon\,\mathcal{E}_{\mu}[\varphi_{cl}+\delta\varphi]\big{)}}\;, \tag{23}\] where, in a diagrammatic expansion, the path-integral is restricted to 1PI diagrams only. In the absence of sources, _if_ the ground state is translationally invariant, it must have an expectation value for \(\hat{\varphi}\) that minimizes the effective potential, \[\langle\Omega|\hat{\varphi}(x)|\Omega\rangle_{\mu}=\bar{\varphi}\,\qquad \frac{\partial V_{\mathrm{eff}}(\varphi_{cl};\mu)}{\partial\varphi_{cl}^{i}}\bigg{|}_{\bar{\varphi}}=0. \tag{24}\] All this is absolutely standard, but now we come to two technical subtleties that, though important for our study, once understood can be safely ignored: 1. The path-integral expression (23) makes no sense for certain values of \(\varphi_{cl}\): if the quadratic terms in the expansion of \(\mathcal{E}_{\mu}\) about a certain \(\varphi_{cl}\) are _not_ positive definite, then the (perturbative) path integral does not converge. This is physically associated with the fact that, even in the presence of a suitable source \(j(x)\), the state with that \(\varphi_{cl}\) as expectation value for \(\hat{\varphi}\) is unstable. This is easier to understand in pictures than in words (or formulae)--see fig. 1. Figure 1: _The instability discussed in the text. If one starts with a \(\mathrm{U}(1)\)-invariant potential with SSB (left), adding a linear source (right) can destabilize the equilibrium position if this is “on the wrong side.” As a result, the effective potential is not formally defined in the region inside the valley of minima, because there are no values for the source that can lead to such expectation values._ Technically, even in cases when \(W[j;\mu]\) is well defined for all sources \(j(x)\), its Legendre transform might not exist for all classical fields \(\varphi_{\mathrm{cl}}(x)\). The responsible thing to do would then be to compute \(W[j;\mu]\) rather than the effective action or the effective potential. However, in the usual perturbative computations
of the effective potential one uses the standard \(i\varepsilon\) prescription, _i.e._, \(+i\varepsilon\) at the denominator of Feynman propagators. Then, the subtlety just alluded to shows up as an imaginary part for the effective potential in the "forbidden" range of \(\varphi_{cl}\), which then one interprets, correctly, as an instability. We will take this pragmatic approach, keeping in mind that, to be safe, one should approach the minima of the effective potential from the stable side--typically, larger field values, in absolute value. 2. To study whether or not the system features SSB, we will study the expectation value of the field doublet \(\varphi_{i}\). Now, in our path-integral formulation of the partition function, eq. (18), the only ingredient that breaks the SO(2) symmetry is the coupling to the source \(j_{i}(x)\). So, apparently, at zero source, whatever we compute from the partition function must be symmetric, and so there cannot be SSB. The zero source limit, however, is more delicate than that. Recall that, for any value of the source, the partition function yields the expectation value of the field via eq. (20). The fact that the source is the only symmetry breaking ingredient in the partition function implies that, at least for sources that are constant in spacetime, the expectation value thus obtained will be aligned with the source, with a symmetry-preserving coefficient \[\langle\vec{\varphi}\,\rangle_{\mu,j}=v\big{(}|\vec{j}\,|;\mu\big{)}\,\frac{\vec{j}}{|\vec{j}\,|}\;,\qquad\vec{j}(x)=\text{const}.\] (25) So, from this viewpoint it is clear that SSB is equivalent to the statement that the function \(v\) has a nonzero limit for \(\vec{j}\) going to zero: an external source or perturbation, however small, will determine the _direction_ in which the symmetry is broken, but the amount of breaking remains finite even for arbitrarily small sources, and it is thus an intrinsic property of the system. This approach to SSB is, in fact, quite physical, and resolves the apparent paradox alluded to above.4 Footnote 4: It is sometimes called the Bogolyubov approach [9]. At the technical level, this means that SSB is associated with a non-analyticity of the generating functional \(W[j;\mu]\) at zero \(\vec{j}\), because an analytic behavior consistent with the symmetries, \(W=\text{const}+|\vec{j}|^{2}+\mathcal{O}(|\vec{j}|^{4})\), would imply vanishing derivatives at zero \(\vec{j}\), and thus vanishing expectation values for the field \(\vec{\varphi}\). In particular, a finite limit for eq. (25) requires \[W[j;\mu]\sim|\vec{j}\,|\;,\qquad\vec{j}\to 0\;.\] (26) Once again, we can bypass this subtlety and just study the effective potential--in particular, its minima: approaching a minimum of the effective potential is equivalent to sending a source to zero. If that minimum corresponds to a nonzero field value, the system features SSB. We are now in a position to phrase our physical QFT question in the functional framework just described. ### The fundamental question Consider our U(1)-invariant complex scalar QFT.
Its effective potential \(V_{\rm eff}(\varphi;\mu)\), at generic values of the chemical potential \(\mu\), must be U(1)-invariant. So, as far as \(\varphi\) is concerned, it can only depend on the absolute value \(\phi=|\vec{\varphi}|\), \[V_{\rm eff}=V_{\rm eff}(\phi=|\vec{\varphi}|;\mu). \tag{27}\] From \(V_{\rm eff}\) we can compute many physical properties of the ground state at finite \(\mu\). In particular, the expectation value of \(\vec{\varphi}\) must have absolute value \[\left|\langle\vec{\varphi}\,\rangle_{\mu}\right|=\phi_{\rm min}(\mu)\;, \tag{28}\] where \(\phi_{\rm min}(\mu)\) is a, possibly \(\mu\)-dependent, minimum of \(V_{\rm eff}\), \[\left.\frac{\partial V_{\rm eff}(\phi;\mu)}{\partial\phi}\right|_{\phi_{\rm min }(\mu)}=0. \tag{29}\] On the other hand, from the Hamiltonian path-integral definition of the partition function at vanishing source, see eqs. (14) and (7), and from its relationship to the effective potential, it follows that the ground state's charge density is \[Q(\mu)\equiv\langle\hat{J}^{0}\rangle_{\mu}=-i\frac{{\rm d}}{{\rm d}\mu}\log Z (\mu)=-\frac{{\rm d}V_{\rm eff}(\phi_{\rm min}(\mu);\mu)}{{\rm d}\mu}=-\frac{ \partial V_{\rm eff}(\phi;\mu)}{\partial\mu}\bigg{|}_{\phi_{\rm min}(\mu)}\;, \tag{30}\] where we used that working at zero source is equivalent to working at \(\phi_{\rm min}\) and that \(V_{\rm eff}\) is stationary there. (To avoid clutter, we are suppressing spacetime volume factors. If needed, these can be reinstated by dimensional analysis.) _If_ at zero chemical potential there is no SSB and the vacuum is the usual Poincare invariant one for relativistic field theories, then: \[\phi_{\rm min}(0)=0\;,\qquad Q(0)=0\;. \tag{31}\] Imagine now turning on a (positive) chemical potential. What happens? We like to think of the chemical potential as a control parameter for the charge density. But, in fact, in free theory, or at the classical level for an interacting theory, _nothing_ happens for a finite range of \(\mu\), specifically, for \(\mu<m\), where \(m\) is the mass of our scalar particles: the ground state is still the Poincare invariant vacuum, with zero charge density and unbroken U(1) symmetry. In free theory, when \(\mu\) reaches \(m\) the system develops a charge density _and_ SSB at the same time, and for \(\mu>m\) it is unstable. So, in bosonic free theory, the chemical potential is not a good control parameter at all. Things are better behaved in the classical interacting theory: for \(\mu\geq m\) the system exhibits both a charge density and SSB, and the charge density \(Q(\mu)\) and symmetry breaking scale \(\phi_{\rm min}(\mu)\) are both increasing functions of \(\mu\). So, our fundamental question is whether and how this story gets modified at the quantum level in the interacting theory. In particular: _1) Does \(Q(\mu)\) remain zero for a finite range of \(\mu\)'s and, if so, what is the critical \(\mu\) above which it becomes nonzero, and_ _2) Is there a range of \(\mu\)'s for which \(Q(\mu)\) is nonzero but the symmetry is unbroken (\(\phi_{\rm min}(\mu)=0\))?_ ## 3 Quantum Mechanics: the point particle in a central potential As a warm up, we can explore all these ideas in quantum mechanics. Let us consider a point particle moving in a plane in a central potential. In the presence of a large enough chemical potential for the angular momentum, the naive semiclassical configuration in which the particle sits at rest at the center of the potential becomes unstable. 
At this point, the system develops both a nonzero angular momentum and a symmetry-breaking expectation value for the position. In fact, as is well known, there is no SSB in quantum mechanics at the non-perturbative level. In our formalism, this property will show up as a breakdown of perturbation theory for such a symmetry-breaking expectation value. To see all this, consider first the case of a quadratic potential, corresponding to a two-dimensional harmonic oscillator. The generalized Hamiltonian is:5 Footnote 5: We use the letter \(m\) to denote the frequency of the oscillator in order to have a uniform notation with the field theoretical case. This parameter should not be confused with the mass of the point particle, which is taken to be unity. \[H_{\mu}=\frac{1}{2}\,\vec{p}\,^{2}+\frac{m^{2}}{2}\vec{q}\,^{2}+\mu\,\epsilon_{ij}q_{i}p_{j}\;. \tag{3.1}\] This Hamiltonian can be diagonalized via a time-independent canonical transformation, \[\begin{split} Q_{1}&=\sqrt{\frac{m}{2}}q_{1}+\sqrt{\frac{1}{2m}}p_{2}\;,\quad Q_{2}=\sqrt{\frac{m}{2}}q_{2}+\sqrt{\frac{1}{2m}}p_{1}\;,\\ P_{1}&=-\sqrt{\frac{m}{2}}q_{2}+\sqrt{\frac{1}{2m}}p_{1}\;,\quad P_{2}=-\sqrt{\frac{m}{2}}q_{1}+\sqrt{\frac{1}{2m}}p_{2}\;,\end{split} \tag{3.2}\] leading to \[H_{\mu}=\frac{1}{2}(m+\mu)\big{(}P_{1}^{2}+Q_{1}^{2}\big{)}+\frac{1}{2}(m-\mu)\big{(}P_{2}^{2}+Q_{2}^{2}\big{)}\;. \tag{3.3}\] In these variables it becomes transparent that: * For \(\mu<m\) we have two independent harmonic oscillators, with frequencies \(\omega_{\pm}=m\pm\mu\). The ground state corresponds to vanishing occupation numbers for both of them, regardless of the value of \(\mu\) (within this range). Since the canonical transformation that we performed does not depend on \(\mu\) either, then this ground state is just the standard \(\mu=0\) ground state of the system. That is, as anticipated, nothing happens for \(\mu<m\). * For \(\mu>m\) we have a _ghost_ instability: the Hamiltonian of the second oscillator is negative definite. To study the fate of this instability, we now go back to the original \((\vec{q},\vec{p}\,)\) variables and consider anharmonicities in the potential. For definiteness, consider a quartic potential: \[V(q)=\frac{m^{2}}{2}\vec{q}^{\,2}+\frac{\lambda}{4}(\vec{q}^{\,2})^{2}. \tag{3.4}\] The Hamiltonian can no longer be diagonalized with a simple canonical transformation, and in order to study the properties of the ground state it is useful to switch to the Lagrangian formalism and study the effective potential. The finite \(\mu\) Lagrangian is \[L_{\mu}=\frac{1}{2}\dot{\vec{q}}^{\,2}-\frac{(m^{2}-\mu^{2})}{2}\vec{q}^{\,2}-\frac{\lambda}{4}\big{(}\vec{q}^{\,2}\big{)}^{2}+\mu\,\epsilon_{ij}\dot{q}_{i}q_{j}\;, \tag{3.5}\] and we now apply (2.23) at one loop: we expand the action to quadratic order about a generic but constant classical position \(\vec{q}_{cl}=(\bar{q}_{1},\bar{q}_{2})\), \[\begin{split}& S_{(2)}[\delta\vec{q}\,,\vec{q}_{\rm cl}]=-\frac{1}{2}\int\mathrm{d}t\;\delta\vec{q}\,(t)\cdot K(\vec{q}_{cl})\cdot\delta\vec{q}\,(t)\;,\\ & K(\vec{q}_{cl})=\begin{pmatrix}\partial_{t}^{2}+m^{2}-\mu^{2}+3\lambda\bar{q}_{1}^{2}+\lambda\bar{q}_{2}^{2}&-2\mu\partial_{t}+2\lambda\bar{q}_{1}\bar{q}_{2}\\ 2\mu\partial_{t}+2\lambda\bar{q}_{1}\bar{q}_{2}&\partial_{t}^{2}+m^{2}-\mu^{2}+\lambda\bar{q}_{1}^{2}+3\lambda\bar{q}_{2}^{2}\end{pmatrix}\;,\end{split} \tag{3.6}\] and evaluate the Gaussian path integral that yields the one-loop correction to the effective potential, \(\Delta V^{\rm 1L}_{\rm eff}\).
Following standard functional methods [31], \[e^{-i\int\!dt\,\Delta V^{\rm 1L}_{\rm eff}[\vec{q}_{cl};\mu]} =\int\!D\delta q(t)\,e^{i\,S_{(2)}[\delta\vec{q}]} \tag{3.7}\] \[=\big{(}\mathrm{Det}\,K\big{)}^{-1/2}\] (3.8) \[=\exp\Big{\{}-\frac{1}{2}\int\!dt\int\!\frac{d\omega}{2\pi}\log \det\tilde{K}(\vec{q}_{cl})\Big{\}}\;, \tag{3.9}\] where 'Det' is a functional determinant, 'det' an ordinary one, and \(\tilde{K}\) the Fourier-space version (\(\partial_{t}\to-i\omega\)) of the \(K\) matrix in (3.6). So, the one-loop correction to the effective potential reads \[\Delta V^{\rm 1L}_{\rm eff}(\vec{q}_{cl};\mu)=-\frac{i}{2}\int\frac{\mathrm{d} \omega}{2\pi}\log\left[\left(\omega^{2}-\omega_{-}^{2}\right)\left(\omega^{2}- \omega_{+}^{2}\right)\right]\;, \tag{3.10}\] where \(\omega_{\pm}\) are the poles of the \(\delta\vec{q}\,\) propagators, which depend on the classical radial distance \(r\equiv|\vec{q}_{cl}|\) and on \(\mu\): \[\omega_{\pm}^{2}(r;\mu)=m^{2}+\mu^{2}+2\lambda r^{2}\pm\sqrt{4m^{2}\mu^{2}+8 \lambda\mu^{2}r^{2}+\lambda^{2}r^{4}}\;. \tag{3.11}\] Computing the integral in Euclidean space, renormalizing away an \(r\)-independent and \(\mu\)-independent zero-point energy, and including the tree-level contributions, we arrive at the final one-loop result \[V_{\rm eff}(r;\mu)=\frac{1}{2}(m^{2}-\mu^{2})r^{2}+\frac{\lambda}{4}r^{4}+ \frac{1}{2}\big{(}\omega_{+}(r;\mu)+\omega_{-}(r;\mu)\big{)}. \tag{3.12}\] Let us start by asking for what values of \(\mu\) the origin is a stable (or metastable) configuration. This is determined by the sign of the second \(r\)-derivative of \(V_{\rm eff}\) at \(r=0\). We have \[\frac{\partial^{2}V_{\rm eff}(r;\mu)}{\partial r^{2}}\bigg{|}_{r=0}=m^{2}-\mu^{ 2}+\frac{2\lambda}{m} \tag{3.13}\] So, the origin is stable for \[\mu^{2}<m^{2}+\frac{2\lambda}{m}\;, \tag{3.14}\] and unstable otherwise. Since in this system there is no wave-function renormalization at one loop, the second derivative of the potential at the origin and at \(\mu=0\) happens to be the 'pole mass' \(m_{\rm pole}^{2}\)--the renormalized energy of the first excited states at vanishing chemical potential. So, the origin is stable for \[\mu^{2}<m_{\rm pole}^{2}\;, \tag{3.15}\] and unstable otherwise. Within this stability range, we can ask whether the system does feature a nonzero angular momentum at finite \(\mu\). This is determined by the \(\mu\)-derivative of \(V_{\rm eff}\) at the origin. However, we have \[V_{\rm eff}(0;\mu)=m\;, \tag{3.16}\] and so its \(\mu\)-derivative vanishes for all \(\mu\)'s. We thus reach the same conclusion as in the classical (or free quantum) theory: nothing happens until \(\mu\) crosses a critical value, given by the pole mass. Beyond that value, the system develops both a symmetry breaking average position and a nonzero angular momentum. However, this is where things go wrong in perturbation theory, as anticipated above. Let us start with the classical (_i.e._, tree-level) limit. For \(\mu>m\) the expectation value of the position and of the angular momentum are determined by the minimum of the classical potential (3.4), and read \[r_{\rm min}^{2}=\frac{\mu^{2}-m^{2}}{\lambda}\;,\qquad J=-\frac{\partial V}{ \partial\mu}\bigg{|}_{r_{\rm min}}=\frac{\mu(\mu^{2}-m^{2})}{\lambda}\qquad \qquad\mbox{(tree level)}\;. \tag{3.17}\] Now, it so happens that the 1-loop correction in (3.12) is singular at \(r=r_{\rm min}\), making it impossible to correct the value of \(r_{\rm min}\) order by order in perturbation theory. 
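A quick numerical illustration of this singular behavior is the following sketch (Python with numpy; the values of \(m\), \(\mu\), \(\lambda\) are arbitrary and purely illustrative). It checks that \(\omega_{-}\) computed from eq. (3.11) vanishes at \(r_{\rm min}\) and grows like the square root of the distance from it, anticipating eq. (3.18) below:

```python
import numpy as np

# One-loop pole frequency omega_- of eq. (3.11) for the 2d anharmonic
# oscillator at chemical potential mu (arbitrary illustrative values).
m, mu, lam = 1.0, 1.5, 0.1           # mu > m: symmetry-broken regime

def omega_minus_sq(r):
    r2 = r**2
    return m**2 + mu**2 + 2*lam*r2 - np.sqrt(
        4*m**2*mu**2 + 8*lam*mu**2*r2 + lam**2*r2**2)

r_min = np.sqrt((mu**2 - m**2)/lam)  # tree-level minimum, eq. (3.17)
print(omega_minus_sq(r_min))         # ~0 up to rounding error

# omega_- ~ sqrt(r - r_min): the ratio below approaches a constant
for eps in (1e-2, 1e-4, 1e-6):
    w = np.sqrt(omega_minus_sq(r_min + eps))
    print(eps, w/np.sqrt(eps))
```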
To see this, it is enough to notice that for \(\mu>m\), the pole frequency \(\omega_{-}\) displays a singularity at \(r=r_{\rm min}\): \[\omega_{-}\sim\sqrt{r-r_{\rm min}}\;,\quad r\to r_{\rm min}\qquad\qquad(\mu>m)\;. \tag{3.18}\] As explained around eq. (2.26), some form of non-analyticity is to be expected for SSB to take place. However, the one above for \(\omega_{-}\) is too strong. One can check that it corresponds to a one-loop generating functional scaling as \[W[j;\mu]\sim\sqrt{|\vec{j}\,|}\;,\qquad\vec{j}\to 0\;, \tag{3.19}\] which is not regular at zero source. We take this as a sign of a breakdown of perturbation theory. As mentioned above, the fact that perturbation theory breaks down in this case is consistent with the fact that there should not be SSB in quantum mechanics. We devote another paper to a detailed study of this fact and of the analogous one in \(d=2\) with functional methods [26]. As is well known, these obstructions to SSB do not apply for QFT in \(d>2\), which we study next. However, as a check of our methods, in Appendix A we also generalize the QM analysis to the case of a quantum mechanical rigid rotor, rederiving the well-known quantization condition \(E_{J}=J(J+1)/2I\) for its energy. ## 4 QFT in \(d>2\) dimensions We now consider the case of a complex scalar field in \(d>2\) dimensions, where \(d\) includes both space and time and will be taken as an arbitrary parameter. We consider the case in which the tree-level potential includes the mass term and arbitrary \(\mathrm{U}(1)\)-invariant self-interactions, so that at finite \(\mu\) the kinetic term and the tree-level potential are, respectively, \[\begin{split}&\mathcal{L}_{\mu,\mathrm{kin}}=(\partial_{\nu}\Phi)^{\dagger}(\partial^{\nu}\Phi)+i\mu\,\Phi^{\dagger}\overset{\leftrightarrow}{\partial}_{0}\Phi\\ & V(\phi;\mu)=(m^{2}-\mu^{2})\phi^{2}+V_{\mathrm{int}}(\phi)\;.\end{split} \tag{38}\] The only assumption we impose for the interactions is regularity, that is, that \(V_{\mathrm{int}}(\phi)\) admits a Taylor expansion around \(\phi=0\) that starts at quartic or higher order. For simplicity we restrict to interaction potentials that are growing functions of \(\phi\), so that the full potential \(V(\phi;\mu)\) is minimized at the origin for \(\mu<m\) and develops a single symmetry-breaking minimum for \(\mu>m\), as depicted in fig. 2. More general interactions can be considered, in which case the full potential can have more extrema. Figure 2: _For simplicity we restrict to potentials \(V(\phi;\mu)\) that, as a function of \(\phi\equiv|\Phi|\), only feature one minimum. For \(\mu<m\) the minimum is at the origin, while for \(\mu>m\) it is at a nonzero value of \(\phi\)._ ### Free scalar for \(\mu<m\) Let us first treat the familiar case of free massive bosons, obtained by setting \(V_{\rm int}(\phi)=0\). The system is unstable for \(\mu>m\), and we want to check that for \(\mu<m\) the system is equivalent to a system of free bosons at zero density, which implies that the only non-trivial case is then the degenerate case \(\mu=m\). Carrying out the procedure detailed in Appendix C, the path integral in Euclidean space yields \[\log Z(\mu)=-\frac{i}{2}{\rm Vol}\int\frac{{\rm d}^{d}p}{(2\pi)^{d}}\left[\log\left(p^{2}+m^{2}+\xi_{0}^{2}+2\xi_{0}p_{0}\right)+\log\left(p^{2}+m^{2}+\xi_{0}^{2}-2\xi_{0}p_{0}\right)\right]\;, \tag{4.2}\] where \(\xi_{0}=i\mu\) and we neglected a \(\mu\)-independent additive constant.
Let us focus on the first term and use dimensional regularization to compute: \[I_{+}=\int\frac{{\rm d}^{d}p}{(2\pi)^{d}}\log\left(p^{2}+m^{2}+\xi_{0}^{2}+2 \xi_{0}p_{0}\right). \tag{4.3}\] Completing the square for the time component of the momentum, \[I_{+}=\int\frac{{\rm d}^{d}p}{(2\pi)^{d}}\log\left((p_{0}+\xi_{0})^{2}+|\vec{p }\,|^{2}+m^{2}\right), \tag{4.4}\] and shifting the integration variable \(p_{0}\) we end up with \[I_{+}=\int\frac{{\rm d}^{d}\tilde{p}}{(2\pi)^{d}}\log\left(\tilde{p}^{2}+m^{2} \right)\;, \tag{4.5}\] which, clearly, does not depend on \(\mu\). Analogous manipulations apply to the second term in \(Z(\mu)\) above, and we thus reach the conclusion \[Z(\mu<m)=Z(0)\;, \tag{4.6}\] which, upon deriving w.r.t. \(\mu\), implies that the charge density vanishes for all \(\mu<m\). Although not manifest in our derivation, this condition is crucial to make the computation in Euclidean space well defined--as emphasized in section 2.2, the \(i\varepsilon\) terms are such that for \(\mu>m\) the Euclidean path integral does not converge. Another technical subtlety is that in the derivation above we had to rely on a momentum-shift in the imaginary direction. In Appendix B we provide an alternative derivation of this result which does not need such a shift. ### Effective potential at finite \(\mu\) We now introduce self-interactions and consider the U(1)-invariant Lagrangian (4.1). The analysis of the \(i\varepsilon\) term (2.17) suggested that, at tree level, for \(\mu>m\) the semiclassical configuration \(\vec{\varphi}(x)\equiv 0\) is unstable. As we discussed in section 2.3, with our choice of field variables and generalized Hamiltonian, in order to find the correct semiclassical ground-state configuration at finite \(\mu\), it is sufficient to consider the effective potential and study its minima. The effective potential can be computed by functional methods [29; 30]. By working in an arbitrary number of dimensions and using the general results derived in Appendix C, the one-loop effective potential at finite \(\mu\) for our case is \[V_{\rm eff}^{(1)}(\varphi;\mu)=\frac{1}{2}\int\frac{{\rm d}^{d}p}{(2\pi)^{d}} \log\Big{[}(p^{2}+M^{2})^{2}-4(p\cdot\xi)^{2}-g^{2}\Big{]}\,, \tag{4.7}\] where \(p\) is the Euclidean momentum and we defined \[\begin{split}&\xi\equiv(i\mu,\vec{0}\,),\\ & M^{2}=M^{2}(\phi;\mu)\equiv\frac{1}{4\,\phi}\Big{(}V^{\prime}( \phi;\mu)+\phi\,V^{\prime\prime}(\phi;\mu)\Big{)},\\ & g^{2}=g^{2}(\phi)\equiv\frac{1}{16\,\phi^{2}}\Big{(}V^{\prime}( \phi;\mu)-\phi\,V^{\prime\prime}(\phi;\mu)\Big{)}^{2},\end{split} \tag{4.8}\] where primes denote derivatives with respect to \(\phi\). In what follows, when needed, we shall denote by \(g\) the positive square root of \(g^{2}\). Notice that, as proved in Appendix C in full generality, the combination \((M^{2}+\mu^{2})\) is \(\mu\) independent, while \(g^{2}\) is \(\mu\)-independent and vanishes for \(\phi=0\). 
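These properties are immediate to verify symbolically. A minimal sketch (Python with sympy; the quartic \(V_{\rm int}=\lambda\phi^{4}\) is just a representative choice, not an assumption of the general argument):

```python
import sympy as sp

# Check of the statements below eq. (4.8): for V = (m^2-mu^2)*phi^2 + V_int,
# the combinations M^2 + mu^2 and g^2 are mu-independent, and g^2 vanishes
# as phi -> 0. V_int = lam*phi**4 is a representative example.
phi, m, mu, lam = sp.symbols('phi m mu lam', positive=True)

V = (m**2 - mu**2)*phi**2 + lam*phi**4
Vp, Vpp = sp.diff(V, phi), sp.diff(V, phi, 2)

M2 = sp.simplify((Vp + phi*Vpp)/(4*phi))        # eq. (4.8)
g2 = sp.simplify((Vp - phi*Vpp)**2/(16*phi**2))  # eq. (4.8)

print(sp.simplify(M2 + mu**2))   # m**2 + 4*lam*phi**2   (mu drops out)
print(g2)                        # 4*lam**2*phi**4       (mu drops out)
print(sp.limit(g2, phi, 0))      # 0
```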
Sometimes we shall specialize to the physical dimensions \(d=3\) and \(d=4\), or consider the explicit case of a complex scalar with \(\phi^{\alpha}\) interactions, \[V_{\rm int}(\phi)=\lambda\phi^{\alpha}, \tag{4.9}\] in which case \[\begin{split}& M^{2}=m^{2}-\mu^{2}+\frac{\alpha^{2}}{4}\lambda\,\phi^{\alpha-2}=m^{2}-\mu^{2}+\frac{\alpha}{(\alpha-2)}g,\\ & g^{2}=\lambda^{2}\,\frac{\alpha^{2}(\alpha-2)^{2}}{16}\,\phi^{2\alpha-4}.\end{split} \tag{4.10}\] This includes as particular cases the renormalizable \(\lambda\phi^{4}\) model in \(d=4\) and the \(\lambda\phi^{6}\) model in \(d=3\), as we shall discuss in what follows, reproducing and generalizing some results previously obtained in other works. One might worry that the argument of the log in eq. (4.7) can become negative for some values of \(\phi\) and \(p^{2}\), generating an imaginary part for the effective potential. This is indeed the case, in general, to the left of its minimum. It is straightforward to check that for \(p=0\) the argument of the log is positive for \(\phi>\phi_{0}\), where \(\phi_{0}\) is the minimum of the tree-level, finite-\(\mu\) potential, and can be negative for \(\phi\) close to, but smaller than, \(\phi_{0}\). This follows by noticing that \[M^{4}-g^{2}=\frac{V^{\prime}V^{\prime\prime}}{4\phi}\;, \tag{4.11}\] which changes sign precisely at the minimum of \(V\). Stronger positivity properties hold for nonzero Euclidean momenta, ensuring that the one-loop effective potential is real for \(\phi>\phi_{0}\), as expected from the general considerations of section 2.2. There are two useful ways of organizing the calculation, which consist in expanding the logarithm either in powers of \(g\) (subsection 4.3) or in powers of \(\mu\) (subsection 4.5).6 Footnote 6: A third way to organize the computation allows one to rewrite the effective potential in closed form. This is achieved by first integrating over frequency, similarly to the QM case (3.12), and then computing the spatial momentum integral in terms of \(\vec{p}^{2}\), up to a shift. For integer \(d\), the resulting integrals are Abelian integrals: of the (hyper-)elliptic type for even \(d\); expressible in terms of elementary functions for odd \(d\). We find the approaches described in the main text better suited for our purposes. ### Expanding in powers of \(g\) We use the decomposition \[(p^{2}+M^{2})^{2}-4(p\cdot\xi)^{2}=A\cdot B, \tag{4.12}\] with \[\begin{split} A&=p^{2}+M^{2}+2p\cdot\xi=(p+\xi)^{2}+M^{2}-\xi^{2},\\ B&=p^{2}+M^{2}-2p\cdot\xi=(p-\xi)^{2}+M^{2}-\xi^{2},\end{split} \tag{4.13}\] and write the log as \[\begin{split}\log\left[(p^{2}+M^{2})^{2}-4(p\cdot\xi)^{2}-g^{2}\right]&=\log A+\log B+\log\left(1-\frac{g^{2}}{A\cdot B}\right)=\\ &=\log A+\log B-\sum_{n=1}^{\infty}\frac{1}{n}\left(\frac{g^{2}}{A\cdot B}\right)^{n}.\end{split} \tag{4.14}\] In order to compute the integral in (4.7) we introduce a Feynman parameter through the identity \[\frac{1}{A^{n}B^{n}}=\int_{0}^{1}\mathrm{d}x\frac{[x(1-x)]^{n-1}}{(xA+(1-x)B)^{2n}}\frac{\Gamma(2n)}{\Gamma(n)^{2}}\, \tag{4.15}\] and complete the square in the denominator as: \[xA+(1-x)B=(p+(2x-1)\xi)^{2}+M^{2}-(2x-1)^{2}\xi^{2}.
\tag{4.16}\] Evaluating the integrals in an arbitrary number of dimensions we obtain the general result for the dimensionally regularized one-loop effective potential in \(d\) dimensions: \[V_{\text{eff}}^{(1)}(\phi;\mu)=-\frac{\Gamma(-d/2)}{(4\pi)^{d/2}}(M^{2}+\mu^{2})^{d/2}\\ -\frac{1}{2}\sum_{n=1}^{\infty}\frac{\Gamma(2n-d/2)}{(4\pi)^{d/2}\Gamma(n)^{2}}\frac{g^{2n}}{n}\int_{0}^{1}\mathrm{d}x\frac{[x(1-x)]^{n-1}}{(M^{2}+(2x-1)^{2}\mu^{2})^{2n-d/2}}. \tag{4.17}\] In odd dimensions, \(d=2k+1\), at one loop there are no logarithmic divergences and the one-loop effective potential is regular for \(d\to 2k+1\). Therefore, in this case the dimensionally regularized expression (4.17) is finite and no counterterms are needed in the minimal subtraction scheme. The series can be resummed in closed form for any odd dimension in terms of a hypergeometric function. For example, in \(d=3\) we arrive at the result \[V_{\rm eff}^{(1)}(\phi;\mu)\Big{|}_{d=3}=-\frac{1}{6\pi}(M^{2}+\mu^{2})^{3/2}-\frac{g^{2}}{16\pi}\int_{0}^{1}{\rm d}x\,\sqrt{y}\;{}_{2}F_{1}\left[\left.\begin{matrix}\frac{1}{4},&\frac{3}{4}\\ &2\end{matrix}\right|\,4g^{2}x(1-x)\,y^{2}\right]\;,\qquad y\equiv\frac{1}{M^{2}+(2x-1)^{2}\mu^{2}}\;. \tag{4.18}\] Using this result and the facts that \(g^{2}=0\) and \((M^{2}+\mu^{2})=m^{2}\) for \(\phi=0\) (see Appendix C for a general derivation), it follows that \[V_{\rm eff}^{(1)}(\phi=0;\mu)\Big{|}_{d\;{\rm odd}}=-\frac{\Gamma(-d/2)}{(4\pi)^{d/2}}m^{d}, \tag{119}\] for arbitrary interactions. In particular, \(V_{\rm eff}^{(1)}(\phi=0;\mu)\) is \(\mu\) independent. It follows that in the unbroken phase in which \(\varphi=0\) the system cannot have finite charge density -- see eq. (30). From this result we can conclude that, in general, in a system describing a complex scalar at finite density in odd dimension, the global U(1) symmetry is always spontaneously broken. That is: finite density is always accompanied by spontaneous symmetry breaking, not only at the classical level but also at one loop.
The case of even dimension \(d=2k\) requires more care due to the presence of logarithmic divergences. For \(d=4\), renormalizing the theory in the \(\overline{\rm MS}\) scheme for an arbitrary potential -- eq. (104) -- and using again the facts that \(g^{2}=0\) and \((M^{2}+\mu^{2})=m^{2}\) for \(\phi=0\), we obtain \[V_{\rm eff}^{(1)}(\phi=0;\mu)\Big{|}_{d=4}=-\frac{3}{64\pi^{2}}m^{4}+\frac{1}{32\pi^{2}}m^{4}\log(m^{2}), \tag{120}\] which is manifestly \(\mu\) independent. Again, we conclude that for complex scalar fields, at one loop and in four dimensions finite density for a U(1) charge is always accompanied by spontaneous symmetry breaking. We stress that there is no ambiguity in this conclusion related to the renormalization scheme, since the theory can be renormalized at \(\mu=0\) and no additional counterterms are needed to renormalize the finite \(\mu\) theory. This is true even in the case of non-renormalizable field theories, where an infinite number of counterterms is needed to renormalize the theory at arbitrary loop level.7 The computation of the Coleman-Weinberg potential at finite \(\mu\) nonetheless allows us to make definite low-energy predictions for finite density properties of the system, since the counterterms are determined independently of the infrared deformation induced by \(\mu\). Footnote 7: As usual, only a finite number of counterterms is necessary at a fixed loop order. Having established the relationship between finite density and symmetry breaking, an interesting question to address is: what is the critical value of \(\mu\) above which the system can support a finite density state? The analysis of Ref. [10] hinted that at one loop the critical value \(\mu_{\rm crit}\) coincides with the pole mass of the scalar in the \(\mu=0\) theory, assuming \(m_{\rm pole}^{2}>0\). This suggestive result was obtained by studying the consistency of the low-energy effective theory for the superfluid phonons in an explicit \(\lambda\phi^{4}\) model. Demanding stability and subluminality of the phonon perturbations, one finds that the theory is well-behaved only for \(\mu^{2}>m_{\rm pole}^{2}\). A full understanding, however, can only be obtained by studying the UV theory with the inclusion of the radial mode. This is what we shall do in this section, by analyzing the finite \(\mu\) one-loop effective potential, for arbitrary interaction potential. Let us work at one loop and denote by \(V_{\rm eff}\) the finite \(\mu\) effective potential including both the tree-level and loop contributions. From eq. (30) we have \(Q=-\frac{{\rm d}}{{\rm d}\mu}V_{\rm eff}(\phi_{\rm min};\mu)\) so that: \[Q>0\iff-\frac{\mathrm{d}}{\mathrm{d}\mu}V_{\mathrm{eff}}(\phi_{\mathrm{min}};\mu)>0. \tag{101}\] Given the relationship with SSB, the critical value of \(\mu\) corresponds to the limit \(\phi_{\mathrm{min}}\to 0^{+}\). We can then expand \(V_{\mathrm{eff}}(\phi_{\mathrm{min}};\mu)\) around \(0\): \[V_{\mathrm{eff}}(\phi_{\mathrm{min}};\mu)=V_{\mathrm{eff}}(0;\mu)+V^{{}^{\prime}}_{\mathrm{eff}}(0;\mu)\phi_{\mathrm{min}}+V^{{}^{\prime\prime}}_{\mathrm{eff}}(0;\mu)\frac{\phi_{\mathrm{min}}^{2}}{2}+\ldots, \tag{102}\] where primes denote derivatives with respect to \(\phi\). The constant term is \(\mu\)-independent as we proved in the previous section, so it drops out of eq. (101). In dimensions \(d=3\) and \(4\), and for \(\mathrm{U}(1)\) invariant interaction potential, it is easy to show from eqs. (100), (101) and (103) that \(V^{{}^{\prime}}_{\mathrm{eff}}(0;\mu)=0\). This follows simply from the property that \(g^{2},\partial_{\phi}g^{2},\partial_{\phi}M^{2}\) all vanish at \(\phi=0\), and is a consequence of the \(\mathrm{U}(1)\) invariance of the theory.
Therefore, in the limit \(\phi_{\mathrm{min}}\to 0^{+}\) the condition (101) becomes: \[-\frac{\mathrm{d}}{\mathrm{d}\mu}\left(V^{{}^{\prime\prime}}_{\mathrm{eff}}(0;\mu)\frac{\phi_{\mathrm{min}}^{2}}{2}\right)>0. \tag{103}\] Using the fact that \(\frac{\mathrm{d}}{\mathrm{d}\mu}\phi_{\mathrm{min}}^{2}>0\), for \(\phi_{\mathrm{min}}\to 0^{+}\) we arrive at the condition: \[-V^{{}^{\prime\prime}}_{\mathrm{eff}}(0;\mu)>0. \tag{104}\] As a final step we separate the contributions from tree level and one loop. At tree level one has simply \(V^{{}^{\prime\prime}}(0;\mu)=m^{2}-\mu^{2}\). At one loop, using again eqs. (101) and (103), together with the property that \(g^{2},\partial_{\phi}^{2}g^{2}\) vanish for \(\phi=0\) and the property that \(M^{2}+\mu^{2}\) is \(\mu\)-independent, it follows that \(V^{(1){}^{\prime\prime}}_{\mathrm{eff}}(0;\mu)\equiv V^{(1){}^{\prime\prime}}_{\mathrm{eff}}(0;\mu=0)\). Since there is no wave-function renormalization at one loop in the theory, this is nothing but the one-loop contribution to the pole mass of the complex scalar in the \(\mu=0\) theory. Therefore we arrive at the final result: \[\mu^{2}>V^{{}^{\prime\prime}}_{\mathrm{eff}}(0;0)=m_{\mathrm{pole}}^{2}. \tag{105}\] This proves in general at one loop that \(\mu_{\mathrm{crit}}^{2}=m_{\mathrm{pole}}^{2}\), as suggested by the stability analysis of the low-energy effective theory. Notice that at the technical level, the condition (101) corresponds to \(\partial_{X}P(X)>0\) in the superfluid effective theory. However, in the low-energy effective theory the relationship between this condition and the pole mass is obscured. Moreover, we stress that the condition on \(\mu\) derived in the effective theory is a necessary condition for spontaneous symmetry breaking; that it is also sufficient becomes apparent only through the study of the effective potential. ### Expanding in powers of \(\mu\) and the superfluid EFT Alternatively, to evaluate (100) we can use the decomposition: \[(p^{2}+M^{2})^{2}-g^{2}=C\cdot D, \tag{106}\] with \[\begin{split} C&=p^{2}+M^{2}-g,\\ D&=p^{2}+M^{2}+g,\end{split} \tag{108}\] and write the log as: \[\begin{split}\log\left[(p^{2}+M^{2})^{2}-g^{2}-4(p\cdot\xi)^{2}\right]&=\log C+\log D+\log\left(1-\frac{4(p\cdot\xi)^{2}}{C\cdot D}\right)=\\ &=\log C+\log D-\sum_{n=1}^{\infty}\frac{1}{n}\left(\frac{4(p\cdot\xi)^{2}}{C\cdot D}\right)^{n}.\end{split} \tag{109}\] As before, to compute the integral in (103) we introduce a Feynman parameter through the identity: \[\frac{1}{C^{n}D^{n}}=\int_{0}^{1}\mathrm{d}x\frac{[x(1-x)]^{n-1}}{(xC+(1-x)D)^{2n}}\frac{\Gamma(2n)}{\Gamma(n)^{2}}, \tag{110}\] and complete the square in the denominator as: \[xC+(1-x)D=p^{2}+M^{2}+g-2g\,x. \tag{111}\] Evaluating the integrals in an arbitrary number of dimensions by using the results summarized in Appendix D, we obtain an alternative general expression for the dimensionally regularized one-loop effective potential in \(d\) dimensions: \[\begin{split} V^{(1)}_{\text{eff}}(\phi;\mu)=&-\frac{1}{2}\frac{\Gamma(-d/2)}{(4\pi)^{d/2}}(M^{2}-g)^{d/2}-\frac{1}{2}\frac{\Gamma(-d/2)}{(4\pi)^{d/2}}(M^{2}+g)^{d/2}\\ &-\frac{1}{2}\sum_{n=1}^{\infty}(-1)^{n}\left(\frac{\Gamma(n+1/2)\Gamma(n-d/2)}{(4\pi)^{d/2}\Gamma(n)^{2}\Gamma(1/2)}\right)\frac{(2\mu)^{2n}}{n}\int_{0}^{1}\mathrm{d}x\frac{[x(1-x)]^{n-1}}{(M^{2}+g-2g\,x)^{n-d/2}}.\end{split} \tag{112}\]
Similarly to the case of eq. (4.17), in odd dimensions \(d=2k+1\) this result is finite and no counterterms are needed in the minimal subtraction scheme. The series can again be resummed, but we shall not do so for the moment, as the integral in \(\mathrm{d}x\) cannot be computed in closed form in general. In even dimensions \(d=2k\), as before, more care is needed as some of the terms in (112) are divergent for \(d\to 2k\) and renormalization is required. We carry out the explicit computation in \(d=4\), where all the terms with \(n\geq 3\) in the series are regular. As expected by consistency, summing up the divergent contributions we find again \[V^{(1)}_{\text{eff}}(\phi;\mu)\Big{|}_{d=4,\text{div}}=\frac{g^{2}+(M^{2}+\mu^{2})^{2}}{16\pi^{2}}\frac{1}{(d-4)}, \tag{113}\] so that the counterterms are the same as those previously mentioned. By renormalizing the divergences in the \(\overline{\text{MS}}\) scheme we find: \[\begin{split} V^{(1)}_{\text{eff}}(\phi;\mu)\Big{|}_{d=4}=&\,\frac{1}{384\pi^{2}g^{3}}\Bigg{[}-18g^{5}+6g\mu^{4}M^{4}-2g^{3}\left(8\mu^{4}+9M^{4}+18\mu^{2}M^{2}\right)\\ &+3\left(M^{2}-g\right)^{2}\left(2g^{3}-2g^{2}\mu^{2}+2g\mu^{4}+\mu^{4}M^{2}\right)\log\left(M^{2}-g\right)\\ &+3\left(M^{2}+g\right)^{2}\left(2g^{3}+2g^{2}\mu^{2}+2g\mu^{4}-\mu^{4}M^{2}\right)\log\left(M^{2}+g\right)\Bigg{]}\\ &-\frac{1}{32\pi^{2}}\sum_{n=3}^{\infty}(-1)^{n}\frac{\Gamma(n+1/2)\Gamma(n-2)}{\Gamma(n)^{2}\Gamma(1/2)}\frac{(2\mu)^{2n}}{n}\int_{0}^{1}\mathrm{d}x\frac{[x(1-x)]^{n-1}}{(M^{2}+g-2g\,x)^{n-2}},\end{split} \tag{100}\] where the renormalization scale \(\bar{\mu}\) is set to 1 for simplicity. This result allows us to derive the Goldstone low-energy effective action that describes the superfluid phase of the theory and, correspondingly, the one-loop equation of state for such a superfluid. In this way we shall generalize the independent result of Ref. [10], and reproduce a result of Ref. [27] derived using a completely different approach. As argued in [32] and easily seen from symmetry arguments, the low-energy quantum effective action for superfluid phonons, at leading order in derivatives, takes the form \(\Gamma_{\text{eff}}\left[X\right]=P(X)\), where \(X=(D_{\nu}\pi)(D^{\nu}\pi)\). The covariant derivative \(D_{\nu}\pi=\partial_{\nu}\pi+\mu\delta_{\nu}^{0}\) acts non-linearly on \(\pi\), being associated with a non-linearly realized global symmetry. Assuming that the effective action is extremized for a constant \(\pi\) configuration, as dictated by the \(i\varepsilon\) term, we end up with the relationship \[P(X)\Big{|}_{X=\mu^{2}}=-V_{\text{eff}}(\phi_{\text{min}}(\mu);\mu)\qquad(\pi=\text{const})\;. \tag{101}\] At one loop, it is sufficient to set \(\phi\) to the value at which the tree-level potential is minimized, denoted by \(\phi_{0}\), thanks to the fact that \(V^{\prime}(\phi_{0})=0\). As proved in general in Appendix C, one has \[M^{2}\Big{|}_{\text{min}}=g\Big{|}_{\text{min}}=\frac{V^{\prime\prime}(\phi_{0})}{4}\equiv g_{\text{min}}. \tag{102}\] Consider then the \(\mathrm{d}x\) integral in the general expression (101), \[I(n,d;\phi)\equiv\int_{0}^{1}\mathrm{d}x\frac{[x(1-x)]^{n-1}}{(M^{2}+g-2g\,x)^{n-d/2}}.
\tag{103}\] At \(\phi=\phi_{0}\) the integrand simplifies and we can compute \(I(n,d;\phi_{0})\) analytically in terms of the Euler beta function (or equivalently in terms of Gamma functions): \[\begin{split} I(n,d;\phi_{0})&=\left(\frac{1}{2g_{\text{min}}}\right)^{n-d/2}\int_{0}^{1}\mathrm{d}x\,x^{n-1}(1-x)^{d/2-1}\\ &=\left(\frac{1}{2g_{\text{min}}}\right)^{n-d/2}B\left(n,\frac{d}{2}\right)=\left(\frac{1}{2g_{\text{min}}}\right)^{n-d/2}\frac{\Gamma\left(n\right)\Gamma\left(d/2\right)}{\Gamma\left(n+d/2\right)}.\end{split} \tag{104}\] Using this identity, the relation (104) and resumming the series, we can compute the Goldstone effective action for arbitrary potential. In odd dimensions \(d=2k+1\), from eq. (103) it follows \[\begin{split} P(X)\Big{|}_{d\,\text{odd}}=&-V(\phi_{0};\mu=X^{1/2})\\ &+(-1)^{(d+1)/2}\,\frac{\pi}{2\,\Gamma\left(d/2+1\right)}\left(\frac{g_{\text{min}}}{2\pi}\right)^{d/2}\,{}_{2}F_{1}\left[\left.\begin{matrix}\frac{1}{2},&-\frac{d}{2}\\ &\frac{d}{2}\end{matrix}\right|-\frac{2X}{g_{\text{min}}}\right]\,\end{split} \tag{105}\] where \(g_{\text{min}}\) should be seen as a function of \(\mu\) upon the substitution \(\mu=X^{1/2}\). The hypergeometric function of interest can be expressed in a more familiar way in terms of polynomials, square roots and hyperbolic functions. In particular, in \(d=3\) it takes the explicit form: \[\begin{split} P(X)\Big{|}_{d=3}=&-V(\phi_{0};\mu=X^{1/2})\\ &+\frac{1}{48\pi}\left[\left(5g_{\text{min}}+4X\right)\sqrt{2g_{\text{min}}+4X}+3\left(\frac{g_{\text{min}}^{2}}{\sqrt{X}}\right)\,\text{arcsinh}\left(\sqrt{\frac{2X}{g_{\text{min}}}}\right)\right].\end{split} \tag{106}\] The inverse hyperbolic function \(\text{arcsinh}(z)\) can be equivalently expressed as \(\log(z+\sqrt{z^{2}+1})\). On the other hand, in \(d=4\), from the \(\overline{\text{MS}}\) renormalized result of eq. (103) it follows that \[\begin{split} P(X)\Big{|}_{d=4}=&-V(\phi_{0};\mu=X^{1/2})\\ &+\frac{1}{192\pi^{2}}\Bigg{(}-4X^{2}+\left(X^{2}+2g_{\text{min}}X+2g_{\text{min}}^{2}\right)\left(9-6\log(2g_{\text{min}})\right)\\ &-\frac{5X^{3}}{2g_{\text{min}}}\,{}_{3}F_{2}\left[\left.\begin{matrix}1,&1,&\frac{7}{2}\\ &4,&5\end{matrix}\right|-\frac{2X}{g_{\text{min}}}\right]\ \Bigg{)}.\end{split} \tag{107}\] This result matches exactly the independent computation of [10] in the \(\lambda\phi^{4}\) theory in \(d=4\), where \(m_{\text{eff}}^{2}=2g_{\text{min}}=2(X-m^{2})\), and provides an additional non-trivial consistency check. From these results we see that the one-loop contribution to \(P(X)\) takes a universal form in a fixed number of spacetime dimensions and depends only on the curvature \(g_{\text{min}}\) of the classical finite \(\mu\) potential at its minimum, given by eq. (102). Correspondingly, the same observation holds for the one-loop equation of state of a zero-temperature relativistic superfluid, which is obtained by setting \(X=\mu^{2}\) and identifying \(P(\mu^{2})\) with the pressure \(p\). The results (106) and (107) can be used to derive the universal behavior of the one-loop free energy at the finite density phase transition, determining in particular its order. Denote \(\Delta=\mu^{2}-m_{\text{pole}}^{2}\). We are interested in the behavior of the free energy \(f(\mu)=P(\mu^{2})\) for \(\Delta\to 0^{+}\). For a smooth and regular potential, the tree-level contribution is analytic around \(\Delta=0\), for every \(d\).
Moreover, the term in \(f(\mu)\) of order \(\Delta^{0}\) is a constant independent of \(\mu\), equal to (minus) the effective potential at \(\phi_{\text{min}}=0\) as computed in section 4.4, and just corresponds to the cosmological constant contribution (computed for \(\mu=0\)). The phase transition is therefore of second or higher order. The non-analytic terms originate from the one-loop contribution, which depends only on \(\bar{X}=\mu^{2}=m_{\rm pole}^{2}+\Delta\) and \(g_{\rm min}=c_{1}\Delta+c_{2}\Delta^{2}+\dots\). Expanding for \(\Delta\to 0^{+}\) (and setting the renormalization scale \(\bar{\mu}=m_{\rm pole}\)) we find that \[\begin{split} f(\mu)\Big{|}_{d=3}&=-\frac{c_{1}^{2}}{32\pi\,m_{\rm pole}}\Delta^{2}\log\left(\Delta\right)+{\cal O}(\Delta^{3}\log\left(\Delta\right))+\mbox{analytic in $\Delta$},\\ f(\mu)\Big{|}_{d=4}&=-\frac{\sqrt{2}c_{1}^{5/2}}{15\pi^{2}\,m_{\rm pole}}\Delta^{\frac{5}{2}}+{\cal O}(\Delta^{\frac{7}{2}})+\mbox{analytic in $\Delta$}.\end{split} \tag{4.45}\] The leading non-analyticity has therefore a universal behavior in \(d=3,4\), with coefficient \(c_{1}=1\) in the presence of a quartic coupling. The phase transition is in both cases of third order, since the free energy as a function of \(\mu\) has singular third derivative for \(\Delta\to 0^{+}\). ## 5 Quantifying spontaneous symmetry breaking We have shown that, at least at one loop, a system of scalars at finite density for some internal charge necessarily breaks the corresponding U(1) symmetry. However, this statement is not particularly meaningful unless we can quantify, or at least put a lower bound on, the amount of symmetry breaking. After all, there could exist a parametric limit for which the charge density remains constant while all the physical effects of symmetry breaking go to zero. An obvious candidate to quantify SSB is \(\phi_{\rm min}\), the expectation value of \(\Phi\) itself, and in particular its relationship to the charge density. At tree level one has \[Q=2\mu\,\phi_{\rm min}^{2}\qquad\qquad\mbox{(tree level)}\;, \tag{5.1}\] and one might wonder whether at loop level such a relationship survives, or perhaps gets corrected in some universal way. However, beyond one loop \(\phi\) itself is not particularly meaningful from a physical standpoint--for instance, it is subject to wave-function renormalization. It would be better to find a more direct characterization of the symmetry breaking scale, one that is directly related to observable quantities. It is possible to do so by considering the superfluid effective theory, whose general one-loop form was discussed in section 4.5. Let us consider the case in which the U(1) global symmetry is linearly realized in the UV theory at \(\mu=0\), so that \(m_{\rm pole}^{2}>0\). We know from our previous results that for \(\mu^{2}>m_{\rm pole}^{2}\) the system is in the superfluid phase and we can consider the low-energy quantum effective action for the superfluid phonons \(P(X)\), where \(X=\partial_{\nu}\psi\partial^{\nu}\psi\).8 Expanding the effective action in terms of background \(\bar{\psi}=\mu t\) and phonon perturbations \(\pi(x)\), one finds the quadratic Lagrangian Footnote 8: The distinction between the classical and the quantum effective action for a superfluid is irrelevant up to one-loop order in dimensional regularization [10]. To all orders, the quantum effective action is related to the equation of state of the relativistic superfluid and provides a physical definition of the observable symmetry breaking scale.
\[S_{(2)}=\frac{1}{2}\int{\rm d}^{d}x\,f_{\pi}^{d-2}\bigg{[}\frac{\dot{\pi}^{2}}{c_{s}^{2}}-\big{(}\vec{\nabla}\pi\big{)}^{2}\bigg{]}\;, \tag{5.2}\] where \[f_{\pi}^{d-2}(\mu)=2P^{\prime}(\mu^{2})\;,\qquad c_{s}^{2}(\mu)=\frac{P^{\prime}(\mu^{2})}{2P^{\prime\prime}(\mu^{2})\mu^{2}+P^{\prime}(\mu^{2})}\;. \tag{100}\] The quantity \(f_{\pi}\), which has units of energy, can be taken as a concrete and physical definition of the symmetry breaking scale, in analogy with the pion decay constant in the QCD chiral Lagrangian. In fact, at tree level \(f_{\pi}^{d-2}\) coincides with \(2\phi_{\rm min}^{2}\). Once the normalization of \(\pi(x)\) is fixed, for instance by demanding that it be an angular variable of period \(2\pi\) (with apologies for the inconsistent usage of '\(\pi\)'), \(f_{\pi}\) is completely unambiguous, at the non-perturbative level. It can be thought of as a measure of the static rigidity of the ground state, in the sense that it controls the zero-frequency limit of the Goldstone response function. We shall refer to it as 'the symmetry breaking scale'. To relate \(f_{\pi}\) to the charge density, we notice that the Noether current associated with the U(1) (shift) symmetry of the effective action \(P(X)\) is \[J^{\nu}=2P^{\prime}(X)\partial^{\nu}\psi, \tag{101}\] so that on the background \(\bar{\psi}=\mu t\) we have \[Q(\mu)=2P^{\prime}(\mu^{2})\mu\;. \tag{102}\] From (100) we thus have \[f_{\pi}^{d-2}(\mu)=Q(\mu)/\mu. \tag{103}\] We now have to eliminate \(\mu\) in favor of \(Q\). Before we try to do so for general \(Q\), recall that there is a threshold value for \(\mu\), the pole mass, below which there is no charge density and no SSB. So, when \(\mu\) crosses that threshold but is still very close to it, the charge density is very small and the above relationship simply becomes \[f_{\pi}(Q)\simeq\left(\frac{Q}{m_{\rm pole}}\right)^{\frac{1}{d-2}}\qquad\mbox{for $Q\to 0$}\;. \tag{104}\] This is a universal, non-perturbative prediction for the symmetry breaking scale in a very dilute superfluid made up of scalar bosons with physical mass \(m_{\rm pole}\). For larger values of \(Q\), the most useful way we have found to eliminate \(\mu\) from (103) is to differentiate with respect to \(Q\) and use the above relationships involving derivatives of \(P\) w.r.t. \(\mu^{2}\). After straightforward algebra we find the ODE \[(d-2)\,\frac{\mathrm{d}\log f_{\pi}}{\mathrm{d}\log Q}=\left(1-c_{s}^{2}(Q)\right), \tag{105}\] where \(c_{s}\) is a function of \(Q\) through \(\mu\). Such an ODE is to be supplemented by the boundary condition (104). The only solution is thus \[f_{\pi}^{d-2}(Q)=\left(\frac{Q}{m_{\rm pole}}\right)\exp\left(-\int_{0}^{Q}\frac{c_{s}^{2}(Q^{\prime})}{Q^{\prime}}\mathrm{d}Q^{\prime}\right). \tag{106}\] The integral is always convergent thanks to the low-\(Q\) behavior of \(c_{s}^{2}\) (see Appendix F), \[c_{s}^{2}(Q)=\frac{1}{4m_{\rm pole}^{3}P^{\prime\prime}(m_{\rm pole}^{2})}Q+\ldots \tag{107}\] The small \(Q\) expansions of \(f_{\pi}\) and \(\mu\) thus are \[f_{\pi}^{d-2}(Q)=\frac{Q}{m_{\rm pole}}-\frac{1}{4m_{\rm pole}^{4}P^{\prime\prime}(m_{\rm pole}^{2})}Q^{2}+\ldots, \tag{116}\] \[\mu(Q)=m_{\rm pole}+\frac{1}{4m_{\rm pole}^{2}P^{\prime\prime}(m_{\rm pole}^{2})}Q+\ldots, \tag{117}\] where we used (109). Like the leading order term discussed above, the next-to-leading one is also universal, but it involves a new independent parameter, which we can take to be \({\rm d}c_{s}^{2}/{\rm d}Q\) evaluated at \(Q=0\).
This is certainly an observable quantity, but with perhaps a less familiar interpretation. In the \(\phi^{4}\) model it is determined by the quartic coupling or, equivalently, the scattering length. The derivation of eq. (116) is completely general, valid beyond the one-loop order, if we replace \(m_{\rm pole}\) with \(\mu_{\rm crit}\), the critical value of the chemical potential. We expect the identification of these two scales to hold beyond the one-loop level but we have no rigorous proof of this at the moment. All these functions of \(Q\), being associated with a phase transition, are not analytic at \(Q=0\). In \(d=4\), non-analyticities appear starting at order \(Q^{5/2}\) in \(f_{\pi}^{2}\), and at order \(Q^{3/2}\) in \(c_{s}^{2}\) and \(\mu\), as a consequence of the non-analytic terms in the low-density limit of \(P(X)\)[10]. We can also derive a universal scaling relation for \(f_{\pi}\) at high density (\(Q\to\infty\)) under the assumption that the (\(\mu=0\)) theory flows to a Conformal Field Theory (CFT) at high energies, and that the superfluid EFT at high densities is that of a conformal superfluid up to scaling violations (see also [33]). This is a nontrivial assumption, being violated for instance in the case of super-renormalizable theories. Indeed, even if the \(\mu=0\) theory flows towards a free theory in the UV, the chemical potential term always destabilizes the free theory ground state, giving a prominent role to the interaction terms. If only relevant couplings are present, as in a super-renormalizable theory, the superfluid EFT will not flow to the EFT of a conformal superfluid, but will display a different scaling behavior (see for instance the case of the O(\(N\)) model in \(d=3\) of section 6). In the case of an almost conformal superfluid, the sound speed can be expressed as: \[c_{s}^{2}(Q)=\frac{1}{d-1}+\Delta c_{s}^{2}(Q), \tag{118}\] where [10]: \[\Delta c_{s}^{2}=-\frac{1}{(d-1)}\frac{T^{\prime}(\mu^{2})}{T^{ \prime}(\mu^{2})+(d-1)P^{\prime}(\mu^{2})}\;, \tag{119a}\] \[T(\mu^{2})\equiv{T^{\mu}}_{\mu}(\mu^{2})=2P^{\prime}(\mu^{2})\mu ^{2}-d\,P(\mu^{2}), \tag{119b}\] so that at large densities (large \(\mu\)), scaling violations, being proportional to the beta function of the couplings, are small, and \(\Delta c_{s}^{2}\ll 1\). Neglecting scaling violations, solving eq. (107) we get the universal behavior \[f_{\pi}\sim Q^{\frac{1}{d-1}}\qquad\mbox{for $Q\to\infty$}\;, \tag{120}\] which in fact just follows from dimensional analysis, since in a conformal superfluid \(Q\) is the only independent dimensionful quantity. We can be more precise. Using (5.13) in (5.9) we can write \[f_{\pi}^{d-2}(Q)=e^{-K_{0}}\,\frac{Q_{0}^{\frac{1}{d-1}}}{m_{\rm pole}}\,Q^{ \frac{d-2}{d-1}}\Big{(}1+\mathcal{O}(\epsilon)\Big{)},\qquad\qquad Q\to\infty \tag{5.16}\] where \[K_{0}=K(Q_{0})\equiv\int_{0}^{Q_{0}}c_{s}^{2}\,\frac{\mathrm{d}Q}{Q}+\int_{Q_{ 0}}^{\infty}\Delta c_{s}^{2}\,\frac{\mathrm{d}Q}{Q},\qquad\epsilon\equiv\int_ {Q}^{\infty}\Delta c_{s}^{2}\,\frac{\mathrm{d}Q^{\prime}}{Q^{\prime}}, \tag{5.17}\] and \(Q_{0}\) is an arbitrary reference charge density. These quantities are always finite, thanks to the assumed asymptotic behavior of \(c_{s}^{2}(Q)\). Despite appearances, the above expression for \(f_{\pi}\) is independent of the choice of \(Q_{0}\), as can be easily checked by taking a derivative with respect to \(Q_{0}\). 
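The closed-form expression for \(f_{\pi}(Q)\) derived above can be checked against its defining ODE numerically. A minimal sketch (Python with numpy/scipy; the profile \(c_{s}^{2}(Q)\) below is a toy model with the correct small-\(Q\) and conformal large-\(Q\) limits, not derived from any specific potential):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Consistency check: integrate (d-2) dlog f_pi / dlog Q = 1 - c_s^2(Q)
# and compare with the closed-form quadrature solution. The profile
# c_s2 is a toy: linear at small Q, approaching 1/(d-1) at large Q.
d, m_pole = 4, 1.0
c_s2 = lambda Q: (Q/(1.0 + Q))/(d - 1)

def f_closed(Q):
    integral, _ = quad(lambda q: c_s2(q)/q, 0.0, Q)  # integrand finite at 0
    return (Q/m_pole*np.exp(-integral))**(1.0/(d - 2))

# ODE in u = log Q, y = log f_pi, with initial condition at very small Q0
Q0, Q1 = 1e-6, 10.0
sol = solve_ivp(lambda u, y: [(1 - c_s2(np.exp(u)))/(d - 2)],
                [np.log(Q0), np.log(Q1)], [np.log(f_closed(Q0))],
                rtol=1e-10, atol=1e-12)
print(np.exp(sol.y[0, -1]), f_closed(Q1))   # the two agree
```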
One convenient choice of \(Q_{0}\), call it \(\bar{Q}\), can be defined by the condition \(K(\bar{Q})=0\), corresponding to \(\bar{Q}=e^{-(d-1)K(Q_{0})}Q_{0}\), for any other \(Q_{0}\),9 so that eq. (5.16) simply becomes Footnote 9: This implies that the function \(K(x)\) satisfies the functional equation \(K(x\,e^{-(d-1)K(x)})=0\). Defining \(h(x)=x\,e^{-(d-1)K(x)}\) this corresponds to \(h(h(x))=h(x)\), which has \(h(x)=\mathrm{const}\) or \(h(x)=x\) as smooth solutions, as can be readily proved by taking a derivative. The constant solution corresponds to the physical property we already noticed: eq. (5.16) is independent of \(Q_{0}\). \[f_{\pi}^{d-2}(Q)\simeq\frac{\big{(}\bar{Q}Q^{d-2}\big{)}^{\frac{1}{d-1}}}{m_{\rm pole}}\;,\qquad\qquad Q\to\infty \tag{5.18}\] Another particularly natural choice (\(Q_{\star}\)) is suggested by dimensional analysis. In \(d\) dimensions, if there is only one marginal coupling \(\lambda\phi^{\alpha}\), where \(\alpha=2d/(d-2)\), and one mass scale (the pole mass \(m_{\rm pole}\)), keeping track of units of action \([S]\), one has \[\begin{cases}[\phi]=[m]^{(d-2)/2}[S]^{1/2},\\ [\lambda]=[S]^{-2/(d-2)},\end{cases}\implies\qquad\begin{cases}f_{\star}^{d-2}=\left(\dfrac{m_{\rm pole}}{\sqrt{\lambda}}\right)^{d-2},\\ Q_{\star}=\dfrac{m_{\rm pole}^{d-1}}{\lambda^{(d-2)/2}}.\end{cases} \tag{5.19}\] As we shall see in the explicit example of \(\lambda\phi^{4}\) in \(d=4\), these are exactly the values of \(f_{\pi}\) and \(Q\) where the scaling behavior changes. Choosing \(Q_{0}=Q_{\star}\) and plugging these values into eq. (5.16) we arrive at \[\frac{f_{\pi}}{f_{\star}}\simeq e^{-\frac{1}{d-2}K_{\star}}\left(\frac{Q}{Q_{\star}}\right)^{\frac{1}{d-1}},\quad\text{or equivalently}\quad f_{\pi}\simeq e^{-\frac{1}{d-2}K_{\star}}\left(\frac{Q}{\sqrt{\lambda}}\right)^{\frac{1}{d-1}},\qquad Q\to\infty, \tag{5.20}\] where \(K_{\star}=K(Q_{\star})\). A redefinition of \(\lambda\) is always compensated by a change of the value of \(K_{\star}\) and a particularly physical choice can be made by defining the coupling in terms of a physical scattering amplitude at threshold [10]. On very general grounds, positivity and subluminality of \(c_{s}^{2}\) can be used directly in (5.8) to derive strict bounds on the behavior of \(f_{\pi}\) as a function of \(Q\). The positivity of \(c_{s}^{2}\) immediately implies the general upper bound \[f_{\pi}^{d-2}(Q)\leq\frac{Q}{m_{\rm pole}}. \tag{5.21}\] Subluminality, \(c_{s}^{2}\leq 1\), can instead be used to derive a monotonicity bound, \[f_{\pi}(Q_{1})\geq f_{\pi}(Q_{2})\qquad\text{for $Q_{1}>Q_{2}$}\;. \tag{108}\] More in detail, \(f_{\pi}(Q)\) is a monotonically increasing function of \(Q\) with bounded derivative: \[0\leq\frac{\mathrm{d}\log f_{\pi}}{\mathrm{d}\log Q}\leq\frac{1}{d-2}\;. \tag{109}\] In particular, \((\mathrm{d}f_{\pi}^{d-2}/\mathrm{d}Q)\leq 1/m_{\mathrm{pole}}\) is everywhere satisfied and is a strict bound everywhere except at \(Q=0\), where it is saturated. ## 6 The \(\mathrm{O}(N)\) model at large \(N\) We have seen that the relation between finite density and symmetry breaking holds quite generally at one loop for \(\mathrm{U}(1)\) symmetric theories of a complex scalar field. We wish now to give additional support for the conjectural relationship between these two phenomena by proving that it holds non-perturbatively in the large \(N\) limit for an \(\mathrm{O}(N)\) scalar theory.
The model is that of \(N\) real scalar fields transforming in the vector representation and interacting with quartic interactions in \(d=3\).10 Related discussions on \(\mathrm{O}(N)\) vector models at finite \(\mu\) and their relation to the large charge expansion have appeared, for instance, in Refs. [38; 39; 40]. Footnote 10: This theory and its quantum effective potential for \(\mu=0\) have been analyzed at large \(N\) in [34]. The related \(\mathrm{O}(N)\) model with quartic interactions in four dimensions is known to be plagued by a tachyonic instability [34; 35], see also [36]. For a textbook treatment see _e.g._[37]. The finite \(\mu\) Lagrangian can be computed along the lines previously described. To be concrete let us denote by \(\bar{\Sigma}\) a set of \(N\) scalar fields transforming in the vector (fundamental) representation of \(\mathrm{O}(N)\), and assume that we have chosen a basis such that we have a chemical potential for the charge associated with rotations in the \((\Sigma_{1},\Sigma_{2})\) plane. It is convenient to adopt the following notation: \(\Sigma_{I}=(s_{1},s_{2},S_{i})\), where \(I=1,\ldots,N\) and \(i=3,\ldots,N\). In this notation we have: \[\begin{split}\mathcal{L}_{\mu}=&\frac{1}{2}(\partial _{\nu}s_{1})^{2}+\frac{1}{2}(\partial_{\nu}s_{2})^{2}+\mu\,(\dot{s}_{1}s_{2}- \dot{s}_{2}s_{1})-\frac{1}{2}(m^{2}-\mu^{2})(s_{1}^{2}+s_{2}^{2})-\frac{ \lambda_{N}}{4N}(s_{1}^{2}+s_{2}^{2})^{2}\\ &+\frac{1}{2}(\partial_{\nu}S_{i})^{2}-\frac{1}{2}\left[m^{2}+ \lambda(s_{1}^{2}+s_{2}^{2})\right]S_{i}^{2}-\frac{\lambda_{N}}{4N}S_{i}^{4}, \end{split} \tag{110}\] where we introduced \(\lambda_{N}=\lambda N\). We work in the limit of large \(N\) and small \(\lambda\), with \(\lambda_{N}\) fixed, but to all orders in \(\lambda_{N}\). Moreover, for simplicity let us assume \(m^{2}\geq 0\) (the \(m^{2}<0\) case can be treated similarly). This model can be solved at large \(N\) introducing an auxiliary field \(\chi\)[34; 35]. To do this we add to the Lagrangian the Gaussian term \[\delta\mathcal{L}=\frac{N}{4\lambda_{N}}\left(\chi-m^{2}-\frac{\lambda_{N}}{N }\Sigma_{I}^{2}\right)^{2}, \tag{111}\] which has the only effect of changing the normalization of the path integral by a constant. Carrying out the algebra we see that the quartic interaction terms are canceled by the new auxiliary term, and that the only residual interactions are trilinear couplings with the auxiliary field \(\chi\): \[\begin{split}\mathcal{L}_{\mu}=&\,\frac{1}{2}(\partial_ {\nu}s_{1})^{2}+\frac{1}{2}(\partial_{\nu}s_{2})^{2}+\mu\left(\dot{s_{1}}s_{2}- \dot{s_{2}}s_{1}\right)-\frac{1}{2}(\chi-\mu^{2})(s_{1}^{2}+s_{2}^{2})\\ &+\,\frac{1}{2}(\partial_{\nu}S_{i})^{2}-\frac{1}{2}\chi S_{i}^{ 2}+\frac{N}{4\lambda_{N}}\left(\chi-m^{2}\right)^{2}.\end{split} \tag{103}\] We are interested in computing the effective potential as a function of \(s_{1,2}\), \(S_{i}\) and \(\chi\) in the large \(N\) limit. From the new Lagrangian, we see that the \(\chi\) propagator is suppressed by a factor \(1/N\). As a consequence, it is easy to see from diagrammatic arguments that the only quantum contributions to the effective potential that are of the same order as the tree level terms, in the \(1/N\) expansion, arise from one-loop graphs with \(s_{1,2}\) and \(S_{i}\) internal propagators, and no internal \(\chi\).11 These contributions can be computed exactly, since the action written in terms of the auxiliary field is quadratic in \(s_{1,2}\) and \(S_{i}\). 
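The claimed cancellation of the quartic terms is easy to verify symbolically. The following sympy sketch (a check, not part of the original derivation) keeps a single representative \(S\) of the \(S_{i}\) fields, which is sufficient since only the \(\mathrm{O}(N)\) invariant combination enters the potential:

```python
import sympy as sp

s1, s2, S, chi, m, mu, lamN, N = sp.symbols('s1 s2 S chi m mu lambda_N N', positive=True)
ts, tS = s1**2 + s2**2, S**2

# potential part of the original finite-mu Lagrangian (V = -L restricted to non-derivative terms)
V_orig = sp.Rational(1, 2)*(m**2 - mu**2)*ts + sp.Rational(1, 2)*m**2*tS + lamN/(4*N)*(ts + tS)**2
# the Gaussian auxiliary term enters the Lagrangian with a plus sign, hence the potential with a minus
aux = -N/(4*lamN)*(chi - m**2 - lamN/N*(ts + tS))**2
# potential part of the auxiliary-field Lagrangian
V_new = sp.Rational(1, 2)*(chi - mu**2)*ts + sp.Rational(1, 2)*chi*tS - N/(4*lamN)*(chi - m**2)**2

print(sp.expand(V_orig + aux - V_new))   # 0: quartics cancel, only couplings to chi remain
```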
The full result at leading order in \(1/N\) in the \(\overline{\text{MS}}\) scheme is Footnote 11: The first line in eq. (103) is formally of order 1, while the second line is of order \(N\) (with \(\chi\) of order 1). We are interested in the leading order contributions in the effective potential for both \(s_{1,2}\) and \(S_{i}\), and we neglect consistently subleading orders in \(1/N\) generated when including loop diagrams with virtual \(\chi\)’s. In fact, in the symmetry broken phase it will turn out that \(s_{i}^{2}\sim N\). \[V_{\text{eff}}(s,S,\chi;\mu)=\frac{1}{2}(\chi-\mu^{2})(s_{1}^{2}+s_{2}^{2})+\frac{1}{2}\chi S_{i}^{2}-\frac{N}{4\lambda_{N}}\left(\chi-m^{2}\right)^{2}-\frac{N}{12\pi}\chi^{3/2}. \tag{104}\] The finite \(\mu\) ground state of the theory corresponds to a stationary point of \(V_{\text{eff}}(s,S,\chi;\mu)\), which is determined by the conditions \[\left(s_{1}^{2}+s_{2}^{2}\right)+S_{i}^{2}=\frac{N}{\lambda_{N}}\left(\chi-m^{2}\right)+\frac{N}{4\pi}\sqrt{\chi}, \tag{105a}\] \[\left(\chi-\mu^{2}\right)s_{1,2}=0,\] (105b) \[\chi\,S_{i}=0. \tag{105c}\] From eq. (105a) it follows that in the \(\text{O}(N)\) symmetric state with \(s_{1,2}=S_{i}=0\), \(\chi\) is a function of \((m,\lambda_{N},N)\) and is independent of \(\mu\). As a consequence, the value of the potential (104) for such a state is independent of \(\mu\) and cannot support finite density. Therefore finite density is always accompanied by spontaneous symmetry breaking. More in detail, as we discuss in Appendix G, when \(\mu^{2}<\mu_{\text{crit}}^{2}\) the potential is minimized for \(s_{1,2}=S_{i}=0\), the \(\text{O}(N)\) symmetry is unbroken and the charge density is zero. On the other hand, whenever \(\mu>\mu_{\text{crit}}\) the minimum of the effective potential is attained for12 Footnote 12: The global stability of this minimum is valid in \(d=3\), as discussed in Appendix G. The similar result in \(d=4\) is plagued by an instability at large values of \(\chi\). \[\left(s_{1}^{2}+s_{2}^{2}\right)=\frac{N}{\lambda_{N}}\left(\mu^{2}-m^{2}\right)+\frac{N}{4\pi}\mu, \tag{106a}\] \[\chi=\mu^{2},\] (106b) \[S_{i}=0. \tag{106c}\] The critical value of \(\mu\) can be found from eq. (106a) and the condition \(\left(s_{1}^{2}+s_{2}^{2}\right)\geq 0\). We find \[\mu_{\rm crit}^{2}=\left(\sqrt{\frac{\lambda_{N}^{2}}{64\pi^{2}}+m^{2}}-\frac{\lambda_{N}}{8\pi}\right)^{2}, \tag{104}\] which coincides with the pole mass \(m_{\rm pole}^{2}\) in the unbroken phase of the UV theory with \(\mu=0\), as shown in Appendix G. Notice that this result is non-perturbative in \(\lambda_{N}\). For \(\mu>\mu_{\rm crit}\), the value of the effective potential at its minimum is \[V_{\rm min}(\mu)=-\frac{N}{4\lambda_{N}}\left(\mu^{2}-m^{2}\right)^{2}-\frac{N}{12\pi}\mu^{3}, \tag{105}\] so that from eq. (30) the charge density is \[Q=\frac{N}{\lambda_{N}}\mu(\mu^{2}-m^{2})+\frac{N}{4\pi}\mu^{2}. \tag{106}\] In particular, the following relation is satisfied: \(Q=\mu\left(s_{1}^{2}+s_{2}^{2}\right)\), where \(s_{1,2}\) denote the expectation values of the corresponding quantum operators on the ground state. From the same result we obtain the low-energy effective action for the superfluid phase of this model at leading order in \(N\): \[P(X)\Big{|}_{N\to\infty}=\frac{N}{4\lambda_{N}}\left(X-m^{2}\right)^{2}+\frac{N}{12\pi}X^{3/2}+\mathcal{O}(1)\qquad\qquad(d=3). 
\tag{107}\] We can compare this result with that obtained by repeating the computation of [10] in \(d=3\), working at finite \(N\) but only at one loop in \(\lambda_{N}\) (or equivalently \(\lambda\)): \[\begin{split} P(X)\Big{|}_{\rm one-loop}=&\,\frac{N} {4\lambda_{N}}\left(X-m^{2}\right)^{2}\\ &+\frac{1}{48\pi}\left[\left(9X-5m^{2}\right)\sqrt{6X-2m^{2}}+3 \frac{\left(X-m^{2}\right)^{2}}{\sqrt{X}}\arcsinh\left(\sqrt{\frac{2X}{X-m^{2} }}\right)\right]\\ &+\left(\frac{N-2}{12\pi}\right)X^{3/2},\end{split} \tag{108}\] where the first line is the tree-level contribution, the second line is the one-loop correction coming from integrating out the radial mode and the third line is that arising from integrating out the \((N-2)\) gapped Goldstones. We see that at large \(N\) the effective Lagrangian coincides with that of eq. (107): at leading order in \(1/N\) the large \(N\) result is one-loop exact. The critical value of \(\mu\) (104) matches that inferred from a consistency analysis of the low-energy \(P(X)\) theory. Requiring stability and subluminality of the phonon perturbations [10] gives the conditions \[P^{\prime}(\mu^{2})>0\quad\text{and}\quad P^{\prime\prime}(\mu^{2})>0, \tag{109}\] which are satisfied exactly for \(\mu>\mu_{\rm crit}\). We can now analyze this result in light of the scaling relations of section 5. From (6.10) we have \[f_{\pi}=2P^{\prime}(X)=\frac{N}{\lambda_{N}}\left(\mu^{2}-m^{2}\right)+\frac{N}{ 4\pi}\mu, \tag{6.13}\] with \(Q\) and \(\mu\) related by eq. (6.9). The small \(Q\) limit gives \(f_{\pi}\simeq Q/m_{\rm pole}\) as expected. The high density limit of this model is peculiar in that the theory we are studying is super-renormalizable in \(d=3\). As a consequence, the superfluid EFT we obtain does not approach the EFT of a conformal superfluid even if the \(\mu=0\) theory is conformal in the UV. More explicitly, we find \[P(X)\Big{|}_{N\to\infty}\simeq\frac{N}{4\lambda_{N}}X^{2},\quad f_{\pi}\simeq \frac{N^{1/3}}{\lambda_{N}^{1/3}}Q^{2/3},\qquad Q\to\infty, \tag{6.14}\] where the coupling \(\lambda_{N}\) has mass dimension one, so that the \(P(X)\) and the asymptotic scaling are not those of a \(d=3\) conformal superfluid. We can formally carry out the same analysis also in \(d=4\), by considering the symmetry breaking minimum of the effective potential. This is only a local minimum, as the true ground state is the symmetric one [36]. However the instability is non-perturbative, so that the symmetry breaking ground state is metastable, at least in the limit of small \(\lambda_{N}\). Repeating the previous analysis, we find: \[V_{\rm eff}(s,S,\chi;\mu)=\frac{1}{2}(\chi-\mu^{2})(s_{1}^{2}+s_{2}^{2})+\frac {1}{2}\chi S_{i}^{2}-\frac{N}{4\lambda_{N}}\left(\chi-m^{2}\right)^{2}-\frac{N }{384\pi^{2}}\chi^{2}(9-6\log(\chi)), \tag{6.15}\] \[V_{\rm min}(\mu)=-\frac{N}{4\lambda_{N}}\left(\mu^{2}-m^{2}\right)^{2}-\frac{N }{384\pi^{2}}\mu^{4}(9-6\log(\mu^{2})), \tag{6.16}\] so that the superfluid EFT at large \(N\) is \[P(X)\Big{|}_{N\to\infty}=\frac{N}{4\lambda_{N}}\left(X-m^{2}\right)^{2}+\frac {N}{384\pi^{2}}X^{2}(9-6\log(X))+\mathcal{O}(1)\qquad(d=4). \tag{6.17}\] At high density we can resum the logs using the running coupling, along the lines of [10]. The result is \[P(X)\Big{|}_{N\to\infty}\simeq\frac{N}{4\lambda_{N}(X)}X^{2}\qquad Q\to\infty \qquad\qquad(d=4). \tag{6.18}\] The scaling of \(f_{\pi}\) thus is \[f_{\pi}^{2}\simeq\frac{N^{1/3}}{\lambda_{N}^{1/3}}Q^{2/3}. 
\tag{6.19}\] This suggests that in the (conformal) superfluid phase of a \(d=4\) CFT at large \(N\) (see _e.g._[38; 39] for some examples in \(d=3\)) the symmetry breaking scale \(f_{\pi}\) scales as \(f_{\pi}^{2}\sim N^{1/3}\), where \(N\) is a parameter counting the number of species charged under the U(1) symmetry associated to \(\mu\). Our calculation is valid to all orders in \(\lambda_{N}\) (at leading order in \(1/N\)), suggesting that this scaling could be valid also in strongly coupled CFTs. It would be interesting to explore the regime of validity of such a relation in the framework of Conformal Field Theory (and its generalization to other dimensions), and understand how \(N\) is encoded in the CFT data (for instance if it is related to a combination of OPE coefficients in the current-current OPE).13

## 7 Applications and relation to previous works

### Conformal superfluid in \(d=3\): massless \(\lambda\phi^{6}\)

In order to make contact with existing results and confirm the validity of our computations we can consider some special cases. Let us start from the three-dimensional theory of a massless complex scalar with \(\lambda\phi^{6}\) interactions. At the classical level this theory is scale (and conformal) invariant, since the \(\lambda\phi^{6}\) interaction is marginal and the field is massless. This property continues to hold also at one loop, for any perturbative value of \(\lambda\) and in \(d=3\), as pointed out in [27], since the running of \(\lambda\) first arises at two loops. This theory has also been considered as an interesting benchmark model in relation to positivity bounds in theories with non-linearly realized Lorentz invariance [33]. In our approach, the absence of running at one loop is a consequence of the absence of logarithmic divergencies, see eqs. (4.17) and (4.35). More generally, we see that this holds for arbitrary potentials in odd dimensions. We therefore consider the \(\lambda\phi^{6}\) theory at finite \(\mu\) in \(d=3\), with classical potential \[V(\phi;\mu)=-\mu^{2}\phi^{2}+\lambda\phi^{6}, \tag{7.1}\] for which \(g_{\rm min}=2\mu^{2}\) and \(\left(\phi_{0}\right)^{4}=\mu^{2}/(3\lambda)\). From eq. (4.43) we immediately obtain \[P(X)\Big{|}_{d=3,\lambda\phi^{6}}=\left(\frac{2}{3^{3/2}\,\sqrt{\lambda}}+\frac{7\sqrt{2}+3\,{\rm arcsinh}(1)}{12\pi}\right)X^{3/2}, \tag{7.2}\] which has the form expected for a conformal superfluid.14 We can compare this result with that of [27] by taking into account the different conventions for the coupling constant.15 Denoting their coupling as \(\hat{\lambda}\), the relation is \(\lambda=\hat{\lambda}^{2}/36\), and we find that in terms of their notation Footnote 14: Notice that \({\rm arcsinh}(1)\) can be equivalently expressed as \(\log(1+\sqrt{2})\). Footnote 15: There is a minus sign misprint in eq. (34) of Ref. [27]. We thank A. Monin for double checking. \[\alpha_{1}=\frac{4}{\sqrt{3}}+\frac{7\sqrt{2}+3\,{\rm arcsinh}(1)}{12\pi}\hat{\lambda}=\frac{4}{\sqrt{3}}+0.33273\cdots\times\hat{\lambda}, \tag{7.3}\] to be compared with \(\alpha_{1}=4/\sqrt{3}+0.3326\;\hat{\lambda}\), where \(P(X)=\alpha_{1}X^{3/2}/\hat{\lambda}\).16 Footnote 16: We neglect corrections to \(\alpha_{1}\) of order \(\mathcal{O}(\hat{\lambda}^{2})\) or higher. 
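The quoted decimals can be reproduced directly (a short numerical check, not new material):

```python
import numpy as np

coef = (7 * np.sqrt(2) + 3 * np.arcsinh(1.0)) / (12 * np.pi)
print(coef)               # 0.332729..., the O(lambda-hat) coefficient in alpha_1, vs 0.3326 of [27]
print(coef * np.pi / 16)  # 0.0653313... = (7*sqrt(2)+3*arcsinh(1))/192, reappearing in c_{3/2} below
```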
As discussed in [41; 8; 42] (see [43] for a review), from the one-loop effective action for the superfluid phase of \(\lambda\phi^{6}\) it is possible to extract the scaling dimension of the lowest dimensional charge-\(n\) operator (denoted as \(\Delta_{\phi^{n}}\)), in the limit of large charge \(n\). In the notation of [27]: \[\Delta_{\phi^{n}}=\left(\frac{\hat{\lambda}n}{\sqrt{3}\pi}\right)^{3/2}\left[c_{3/2}+\mathcal{O}\left(\frac{\sqrt{3}\pi}{\hat{\lambda}n}\right)\right], \tag{7.4}\] and \(\hat{\lambda}\,c_{3/2}=\pi/(3^{3/4}\sqrt{\alpha_{1}})\). We find \[c_{3/2}=\frac{\sqrt{3}\pi}{6\hat{\lambda}}-\frac{7\sqrt{2}+3\,{\rm arcsinh}(1)}{192}=\frac{\sqrt{3}\pi}{6\hat{\lambda}}-0.0653313\ldots, \tag{7.5}\] in perfect numerical agreement with the result of [27] within its accuracy.

### Quartic potential in \(d=4\)

In section 4.3 we computed the effective potential for a complex scalar field with arbitrary interaction potential \(V_{\rm int}(\phi)\), in generic \(d>2\) dimension, by expanding in powers of \(g\) defined in eq. (4.8). Here, we want to go back to our result (4.21) and specialize to the particular case \[V_{\rm int}(\phi)=\lambda\phi^{4},\qquad\qquad(d=4). \tag{7.6}\] In this case one has \(M^{2}=m^{2}-\mu^{2}+4\lambda\phi^{2}\) and \(g^{2}=4\lambda^{2}\phi^{4}\). It is straightforward to check that the counterterms needed to cancel UV divergencies (4.20) are in perfect agreement with the standard dimensional regularization results for the \(\phi^{4}\) theory with \(\mu=0\). Working in the \(\overline{\rm MS}\) scheme (with \(\bar{\mu}=1\) for simplicity) we obtain the result: \[\begin{split} V_{\rm eff}^{(1)}(\phi;\mu)\Big{|}_{d=4}=&-\frac{3}{64\pi^{2}}\left(m^{2}+4\lambda\phi^{2}\right)^{2}\left(1-\frac{2}{3}\log\left(m^{2}+4\lambda\phi^{2}\right)\right)\\ &-\frac{\lambda^{2}}{4\pi^{2}}\phi^{4}\left(1-\frac{1}{2}\log\left(m^{2}+4\lambda\phi^{2}\right)-\frac{M}{\mu}\arctan\left(\frac{\mu}{M}\right)\right)\\ &-\frac{\lambda^{2}}{64\pi^{2}}\phi^{4}\int_{0}^{1}{\rm d}x\,y\;{}_{3}F_{2}\left[\begin{array}{ccc}1,&1,&\frac{3}{2}\\ &2,&3\end{array};\,y\right],\end{split} \tag{7.7}\] where now \[y=\frac{16x(1-x)\lambda^{2}\phi^{4}}{(M^{2}+(1-2x)^{2}\mu^{2})^{2}}. \tag{7.8}\] Expanding for \(\lambda\phi^{2}\ll m^{2}\), assuming \(\mu\neq m\), and adding the tree-level contribution, we obtain the effective potential at one loop: \[\begin{split} V_{\rm eff}(\phi)=&-\frac{3m^{4}}{64\pi^{2}}\left(1-\frac{2}{3}\log m^{2}\right)+\left(m^{2}-\mu^{2}-\frac{\lambda m^{2}}{4\pi^{2}}\left(1-\log m^{2}\right)\right)\phi^{2}+\\ &+\left(1-\frac{\lambda}{8\pi^{2}}\left(2-5\log m^{2}-2\frac{\sqrt{m^{2}-\mu^{2}}}{\mu}\arctan\left(\frac{\mu}{\sqrt{m^{2}-\mu^{2}}}\right)\right)\right)\lambda\phi^{4}+{\cal O}(\phi^{6}).\end{split} \tag{7.9}\] Some comments are in order: first, notice that at \(\phi=0\) the effective potential is \(\mu\)-independent, in agreement with the general results of section 4.4; second, the quadratic term identifies the critical value of \(\mu\) with the pole mass computed in the \(\overline{\rm MS}\) scheme as expected from the argument of section 4.4. The \(P(X)\) one-loop effective theory for the superfluid phonons can be obtained from eq. 
(4.44), where now \(g_{\rm min}=(X-m^{2})\): \[\begin{split} P(X)\Big{|}_{\phi^{4}}=&\frac{(X-m^{2})^{2}}{4\lambda}\\ &+\frac{1}{192\pi^{2}}\Bigg{(}-4X^{2}+\left(X^{2}+2g_{\rm min}X+2g_{\rm min}^{2}\right)(9-6\log(2g_{\rm min}))\\ &\qquad\qquad-\frac{5X^{3}}{2g_{\rm min}}\;{}_{3}F_{2}\left[\begin{array}{ccc}1,&1,&\frac{7}{2}\\ &4,&5\end{array};\,-\frac{2X}{g_{\rm min}}\right]\Bigg{)}\end{split} \tag{7.10}\] and matches exactly the independent computation of Ref. [10], as we already noticed in section 4.5. We can also consider the low-density limit, corresponding to \(X\simeq m_{\rm pole}^{2}\), from which we obtain (setting the renormalization \(\bar{\mu}=m_{\rm pole}\)): \[P(X)=\frac{(X-m_{\rm pole}^{2})^{2}}{4\hat{\lambda}}-\frac{\sqrt{2}}{15\pi^{2}}\frac{(X-m_{\rm pole}^{2})^{5/2}}{m_{\rm pole}}-\frac{1}{12\pi^{2}}\frac{(X-m_{\rm pole}^{2})^{3}}{m_{\rm pole}^{2}}+\ldots, \tag{111}\] where \(\hat{\lambda}\) is defined in terms of the coupling at threshold [10] \[\frac{1}{\hat{\lambda}}=\frac{1}{\lambda_{\rm thr}}-\frac{1}{6\pi^{2}},\qquad\qquad\lambda_{\rm thr}=\lambda_{\overline{\rm MS}}-\frac{5}{4\pi^{2}}\lambda_{\overline{\rm MS}}^{2}\left(\frac{1}{3}-\log m\right), \tag{112}\] where \(\lambda_{\rm thr}\) is the two-to-two elastic scattering amplitude for identical particles at threshold in the \(\mu=0\) theory:17 Footnote 17: This quantity can be easily rewritten in terms of the scattering length often used in the study of low-energy quantum mechanical scattering. \[i\mathcal{M}_{2\to 2}(s=4m_{\rm pole}^{2},t=0,u=0)\equiv-6i\,\lambda_{\rm thr}. \tag{113}\] It can be instructive to study the scaling relations of section 5 in this explicit example. We notice that \(P^{\prime\prime}(m_{\rm pole}^{2})=1/(2\hat{\lambda})\), so that at low densities we have \[c_{\rm s}^{2}(Q)=\frac{\hat{\lambda}}{2m_{\rm pole}^{3}}Q+\frac{\hat{\lambda}^{5/2}}{2^{3/2}\pi^{2}m_{\rm pole}^{9/2}}Q^{3/2}+\ldots, \tag{114a}\] \[f_{\pi}^{2}(Q)=\frac{Q}{m_{\rm pole}}-\frac{\hat{\lambda}}{2m_{\rm pole}^{4}}Q^{2}-\frac{\hat{\lambda}^{5/2}}{3\sqrt{2}\pi^{2}m_{\rm pole}^{11/2}}Q^{5/2}+\ldots,\] (114b) \[\mu(Q)=m_{\rm pole}+\frac{\hat{\lambda}}{2m_{\rm pole}^{2}}Q+\frac{\hat{\lambda}^{5/2}}{3\sqrt{2}\pi^{2}m_{\rm pole}^{7/2}}Q^{3/2}+\ldots, \tag{114c}\] where we only reported the leading order non-analyticities arising at one loop from the \((X-m_{\rm pole}^{2})^{5/2}\) term. The subleading results can be computed straightforwardly (see appendix F). On the other hand, at high densities we obtain the expected scaling for a superfluid theory which is approximately that of a conformal superfluid. Eq. (116) simplifies for the choice of \(Q_{\star}\) defined in eq. (119), since \(K_{\star}=0\) at tree level, and the high density scaling takes the simple form \[f_{\pi}(Q)\simeq\left(\frac{Q}{\sqrt{\hat{\lambda}}}\right)^{\frac{1}{3}},\qquad\qquad Q\to\infty. \tag{115}\]

### The Lee-Huang-Yang relation and its relativistic extension

As a last application, we shall compute the energy density of our \(d=4\), \(\phi^{4}\) theory in the limit of low charge density, which, for a massive theory, is a non-relativistic limit. In fact, dilute systems of interacting bosons have been studied extensively in the non-relativistic limit. 
Lee, Huang and Yang [44; 45] showed that at low densities the energy density is organized as an expansion in powers of \(\sqrt{Qa^{3}}\), where \(Q\) is the charge density and \(a\) is the scattering length, and computed the first correction to the leading order result for a dilute gas of hard spheres.18 More generally, the Lee-Huang-Yang relation gives a rigorous lower bound on the energy density of dilute systems of non-relativistic interacting bosons, provided that the potential satisfies some mild regularity conditions [49; 50]. We shall reproduce here the Lee-Huang-Yang relation and compute relativistic corrections to it. Footnote 18: For a modern perspective and a discussion of higher order corrections see _e.g._[46; 47] and the review [48]. The energy density can be obtained from the time component of the energy-momentum tensor: \[\rho=T^{00}=2P^{\prime}(X)X-P(X)=Q\mu-P(\mu^{2}). \tag{111}\] Before discussing the explicit result, let us make some general comments based on dimensional analysis that can help in understanding the systematics of the low density expansion. We reintroduce factors of the speed of light \(c\), and look for dimensionless combinations of the three quantities at our disposal (\(\hat{\lambda}\), \(m_{\rm pole}\), \(Q\)). It is convenient to trade \(\hat{\lambda}\) for a length scale \(\hat{a}\) (related to the scattering length by an order one number), defining \(\hat{\lambda}=\hat{a}\,m_{\rm pole}\,c\). There is only one dimensionless combination that is independent of the speed of light \(c\), and another one can be taken to parametrize relativistic corrections. By studying the structure of the loop expansion one can identify the two expansion parameters as \[\kappa=\sqrt{Q\hat{a}^{3}},\quad\xi=\frac{\sqrt{Q\hat{a}}}{m_{\rm pole}\,c}, \tag{112}\] where \(\kappa\) acts as a loop counting parameter and \(\xi\) parametrizes relativistic corrections.19 From (110) and (111c), we obtain Footnote 19: In order to arrive at this result notice that the relativistic expansion parameter \(\xi^{2}\) can be identified as the leading order non-relativistic chemical potential \(\mu_{\rm NR}=\hat{a}Q/2m_{\rm pole}\) normalized by the rest energy \(\xi^{2}=2\mu_{\rm NR}/m_{\rm pole}c^{2}\). Then \(\kappa=\hat{\lambda}\xi\), and its origin as loop counting parameter is manifest. \[\rho=Qm_{\rm pole}c^{2}+\frac{\hat{a}Q^{2}}{4m_{\rm pole}}\left(1-\frac{1}{2} \xi^{2}+\left(\frac{4\sqrt{2}}{15\pi^{2}}+\frac{1}{3\pi^{2}}\xi\right)\kappa+ \mathcal{O}(\kappa^{2})\right), \tag{113}\] where the first term represents the rest energy contribution, while the \(\xi\) terms are relativistic corrections. Discarding the rest energy and setting \(\xi=0\) we recover the result of Lee, Huang and Yang [44; 45] upon mapping \(\hat{a}\to 8\pi a\), where \(a\) is the rigid sphere scattering length: \[\rho_{\rm NR}=\frac{2\pi aQ^{2}}{m_{\rm pole}}\left(1+\frac{128}{15}\sqrt{ \frac{Qa^{3}}{\pi}}+\mathcal{O}\left(Qa^{3}\right)\right)\;. \tag{114}\] The coefficient of the subleading term is a non-trivial consistency check of our formalism, signaling that in the non-relativistic limit the \(\lambda\phi^{4}\) model admits an effective description in terms of rigid spheres. This fact suggests that the low density physics has some degree of universality -- see eq. (110). We hope to come back to this issue in the future. We are not aware of previous computations of relativistic corrections to the Lee-Huang-Yang relation. 
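The mapping \(\hat{a}\to 8\pi a\) between the two forms of the expansion can be checked with a short symbolic computation (a verification sketch, not new material):

```python
import sympy as sp

Q, a, m = sp.symbols('Q a m', positive=True)
ahat = 8 * sp.pi * a                       # rigid-sphere identification quoted above
kappa = sp.sqrt(Q * ahat**3)               # loop-counting parameter

lead = ahat * Q**2 / (4 * m)               # leading term at xi = 0
corr = 4 * sp.sqrt(2) / (15 * sp.pi**2) * kappa   # first quantum correction at xi = 0

print(sp.simplify(lead - 2 * sp.pi * a * Q**2 / m))                          # -> 0
print(sp.simplify(corr - sp.Rational(128, 15) * sp.sqrt(Q * a**3 / sp.pi)))  # -> 0
```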
It would be interesting to understand the general status of these relativistic corrections, and in particular if they provide a lower bound on the relativistic energy density along the lines of what has been proved in the non-relativistic case [49; 50].

## 8 Outlook: towards a non-perturbative understanding

For U(1)-symmetric scalar field theories in generic spacetime dimensions \(d>2\), we have given strong evidence for these two statements:

1. The ground state at finite chemical potential cannot develop a charge density unless it (spontaneously) breaks the U(1) symmetry;
2. Assuming there is no SSB at zero chemical potential, the system develops a charge density and SSB only when the chemical potential exceeds the pole mass of the charged particles in the Poincaré invariant vacuum.

We have proved these facts in perturbation theory at one loop for generic non-derivative interactions, and non-perturbatively in the O(\(N\)) model with quartic interactions to leading order in \(1/N\) (in \(d=3\) and 4). It is interesting to consider how to go beyond these limits. Can we prove these facts in general, non-perturbatively? If _i)_ the theory's lightest states are spinless charged particles of nonzero mass \(m_{\rm pole}\), and _ii)_ these interact only through short range interactions, then a possible argument for very small charge densities goes as follows: if the density is so low that the average distance between the particles is much longer than the interactions' range and the particles' Compton wavelength, then one can assume that the particles are free; to leading order in these approximations, the results of a free scalar QFT should then apply. In particular, a nonzero density cannot arise for \(\mu<m_{\rm pole}\). For \(\mu=m_{\rm pole}\), Bose-Einstein condensation kicks in, leading to both a nonzero density and the spontaneous breaking of the U(1) symmetry [9]. There is probably some truth in this argument, but it is not completely convincing to us, because, as we reviewed, the chemical potential is not a good control parameter in the free-theory case, since nothing happens for \(\mu<m_{\rm pole}\), and there is no ground state for \(\mu>m_{\rm pole}\). So, from this viewpoint the free-theory case is a degenerate limit and interactions, however weak or short range they might be, are important to stabilize the system for \(\mu>m_{\rm pole}\). However, this might also suggest that working at fixed chemical potential rather than fixed density is a bad choice for certain questions. A different approach is a functional one: consider the finite-\(\mu\) path-integral representation for the generating functional \(W[J;\mu]\), where \(J(x)\) is the source for \(\Phi\). It is easy to see that, upon changing the integration variable as \(\Phi=\Phi^{\prime}e^{i\mu t}\), one can move the chemical potential to the source, \[W[J;\mu]=W[Je^{i\mu t};0]\;, \tag{104}\] as can also be understood by noticing that introducing a chemical potential is equivalent to modifying the generator of time translation as \(H\to H-\mu Q\). Now, this ties the charge density--the derivative of \(W\) with respect to \(\mu\)--to the expectation value of \(\Phi\)--the derivative of \(W\) with respect to \(J\): the former cannot be nonzero if the latter vanishes. This argument, however, must be too simplistic, because it would work essentially unaltered in the free Fermi gas case, where we know the ground state, and we know that there is no SSB at finite density. 
Indeed, for a free fermionic path integral at finite \(\mu\), consider introducing a source \(J(x)\) for a scalar charged operator, such as the Majorana mass combination \(\psi\psi\equiv\psi^{T}\cdot i\gamma^{2}\cdot\psi\), which has charge two. Then, the same manipulations on \(W[J]\) as above would lead to the same conclusion: by redefining the integration variable as \(\psi=\psi^{\prime}e^{i\mu t}\), one can move the chemical potential to the source, \(W[J;\mu]=W[Je^{2i\mu t};0]\), showing that there cannot be a nonzero charge density if the expectation value of \(\psi\psi\) vanishes. This conclusion conflicts with reality for the free Fermi gas, which suggests that this functional argument must be neglecting some important technical details. Finally, one could consider the connection of our phenomenon with the spontaneous breaking of boosts. The Goldstone theorem associated with these exhibits important differences with more standard ones, and can be obeyed in unconventional ways [51; 52; 53]. Perhaps one can show that, under reasonable assumptions, if the ground state of a relativistic system breaks boosts but no rotations or spatial translations, the gapless excitations required by the Goldstone theorem can only be of two types: the particle-hole continuum of a Fermi liquid, or the phonons of a superfluid. This would prove our conjecture, because a finite density certainly breaks boosts, and so in a homogeneous and isotropic bosonic system this would imply a superfluid-like low-energy spectrum. We hope to make progress in these directions in the near future. ###### Acknowledgements. It is a pleasure to thank Paolo Creminelli, Gabriel Cuomo, Luca Delacretaz, Lorenzo Di Pietro, Paolo Glorioso, Hofie Hannesdottir, Oliver Janssen, Austin Joyce, and Riccardo Rattazzi for useful discussions. We are especially grateful to Austin Joyce for early collaboration, to Paolo Creminelli for prompting the analysis of the \(\lambda\phi^{6}\) model in \(d=3\), and to Lorenzo Di Pietro for a question that inspired our study of the O(\(N\)) model. We also thank Jens Andersen, Gabriel Cuomo, Luca Delacretaz and Sean Hartnoll for comments on a preliminary version of this work. AP is grateful to the audiences of the Sapienza University of Rome, ICTP, APC Paris, Saclay and Perimeter seminars, where part of this work was presented, for interesting comments and questions. The work of AP is supported by the grant DOE DE-SC0011941. LS is supported by the Centre National de la Recherche Scientifique (CNRS). ## Appendix A The spinning rigid rotor and its ground state energy In this appendix we generalize the one-loop quantum mechanical effective potential to the case of a particle moving in three dimensional space, and use this result to rederive the well-known quantization condition for the ground state energy of a quantum mechanical rigid rotor at fixed angular momentum \(\hat{L}_{3}=J\): \[E_{0}=\frac{J(J+1)}{2I}, \tag{104}\] where \(I\) is the moment of inertia. More generally, we shall see that in an \((N+1)\)-dimensional space, the \(N-1\) gapped Goldstones with fixed mass gap \(\mu\) reproduce the eigenvalues of the Laplacian operator on the \(N\)-sphere \(S^{N}\), _i.e._\(J(J+N-1)\). A similar check was carried out in Ref. [41] in the large charge limit \(J\gg 1\). 
Starting with the three-dimensional case, we consider a point particle parametrized by three coordinates \(\vec{q}\), with canonical kinetic energy and a quartic potential expressed in the convenient form \[V(q)=\frac{m^{2}}{2}\left(\left|\vec{q}\,\right|^{2}-\ell^{2}\right)^{2}. \tag{110}\] In the limit \(m\to\infty\), this corresponds to a quantum mechanical particle confined on the 2-sphere: \(\left|\vec{q}\,\right|^{2}=\ell^{2}\). The Hamiltonian is symmetric under SO(3) rotations and the associated current is \(J^{a}=p_{i}\epsilon^{a}_{ij}q_{j}\). Choosing a fixed direction \(\bar{a}=3\), we add a chemical potential term for \(J^{\bar{a}}\) to obtain the generalized Hamiltonian \(H_{\mu}=H-\mu J^{3}\). After straightforward computations the finite \(\mu\) Lagrangian can be expressed as: \[L_{\mu}=\frac{1}{2}\dot{q}_{i}\dot{q}^{i}+\mu\epsilon^{3}_{\ ij}\dot{q}_{i}q_{j}- \frac{m^{2}}{2}\left(\left|\vec{q}\,\right|^{2}-\ell^{2}\right)^{2}+\frac{1}{ 2}\mu^{2}\left(q_{1}^{2}+q_{2}^{2}\right). \tag{111}\] For positive \(m^{2}\) the ground state of the classical potential \(V(q)\) is obtained for20 Footnote 20: Without loss of generality, we choose the ground state to be aligned along the \(q_{1}\) direction. \[q_{1,\text{min}}=\ell\frac{m^{2}}{m^{2}-\mu^{2}}\xrightarrow[m\to+\infty]{}q_ {1,\text{min}}=\ell,\qquad q_{2,\text{min}}=q_{3,\text{min}}=0. \tag{112}\] The corresponding moment of inertia is given by \(I=\vec{q}_{\text{min}}^{\,2}=\ell^{2}\), where we used that the classical Lagrangian \(L_{\mu}\) describes the motion of a point particle with unit mass. The average angular momentum and the corresponding ground state energy (expressed as a function of \(J\) after a Legendre transform) are: \[\begin{split}& J^{\text{tree}}_{m\to\infty}=-\frac{\partial}{ \partial\mu}V_{\text{tree}}(\vec{q}_{\text{min}};\mu)=\mu\ell^{2},\\ & E^{\text{tree}}_{0}(J)=J\mu(J)+V_{\text{tree}}(\vec{q}_{\text{ min}};\mu(J))=\frac{J^{2}}{2\ell^{2}}+\text{const},\end{split} \tag{113}\] so that up to a zero-point energy renormalization, the leading order relation \(E_{0}=J^{2}/2I\) is recovered. Moreover, we see that the large charge limit corresponds to the limit \(\mu\ell^{2}\to\infty\), where we are varying \(\mu\) while keeping \(\ell\) fixed. The subleading result is reproduced by the one-loop computation. The analysis of section 3 goes through with minor modifications and the one-loop contribution is given by the frequencies \(\omega_{i}\), the poles of the propagator of the quadratic action obtained by expanding around the tree level minimum \(q_{\text{min}}\). We find: \[\omega_{1}=0,\qquad\qquad\omega_{2}=\mu,\qquad\qquad\omega_{3}=m. \tag{114}\] The first two poles correspond to the gapless and the gapped Goldstone excitations [54], whereas the third one is the massive radial mode, as expected on general grounds. The value of the one-loop effective potential at the minimum is \[V_{\text{eff}}(\vec{q}_{\text{min}};\mu)=V_{\text{tree}}(\vec{q}_{\text{min}} ;\mu)+\frac{1}{2}\sum_{i}\omega_{i}\xrightarrow[m\to+\infty]{}-\frac{1}{2} \ell^{2}\mu^{2}+\frac{m}{2}+\frac{\mu}{2}, \tag{115}\] which implies \[\begin{split}& J_{m\to\infty}=-\frac{\partial}{\partial\mu}V_{ \rm eff}(\vec{q}_{\rm min};\mu)=\mu\ell^{2}-\frac{1}{2},\\ & E_{0}(J)=J\mu(J)+V_{\rm eff}(\vec{q}_{\rm min};\mu(J))=\frac{J( J+1)}{2\ell^{2}}+{\rm const},\end{split} \tag{104}\] where as before \(\ell^{2}=I\). 
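The Legendre transform in this last step is simple enough to verify symbolically (a check; the constant \(m/2\) zero-point piece is dropped):

```python
import sympy as sp

mu, J, ell = sp.symbols('mu J ell', positive=True)

Veff = -sp.Rational(1, 2) * ell**2 * mu**2 + mu / 2   # one-loop V_eff at the minimum, m/2 dropped
Jofmu = -sp.diff(Veff, mu)                            # = mu*ell**2 - 1/2
muJ = sp.solve(sp.Eq(Jofmu, J), mu)[0]                # mu(J) = (J + 1/2)/ell**2
E0 = sp.expand(J * muJ + Veff.subs(mu, muJ))

print(sp.simplify(E0 - J * (J + 1) / (2 * ell**2)))   # 1/(8*ell**2): a J-independent constant
```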
This reproduces the well-known quantization condition for the eigenvalues of a quantum mechanical rigid rotor, and connects our approach to the (dual) large charge approach of [41]. Notice that in the approach of the present paper, with fixed chemical potential, the correct quantization condition is obtained without taking the large charge limit. More generally, if we consider a rigid rotor confined on an \(N\)-sphere \(S^{N}\) and introduce a chemical potential for one component of the current associated to the SO(\(N+1\)) symmetry, from the counting of Goldstone bosons of Ref. [54] we can immediately infer that there will be one gapless Goldstone, \(N-1\) gapped Goldstones with fixed gap \(\mu\), plus the massive radial mode. Repeating the previous analysis we obtain \[E_{0}(J)=\frac{J(J+N-1)}{2I}+{\rm const}, \tag{105}\] reproducing the eigenvalues of the Laplacian of the \(N\)-sphere. ## Appendix B Alternative derivation of the path integral for \(\mu<m\) We provide here an alternative derivation of the \(\mu\) independence of the path integral for \(\mu<m\). We need to compute the integral in eq. (102). Expanding the log as a sum of two terms we have: \[I_{+}=\int\frac{{\rm d}^{d}p}{(2\pi)^{d}}\left[\log\left(p^{2}+m^{2}+\xi_{0}^{ 2}\right)+\log\left(1+\frac{2\xi_{0}p_{0}}{p^{2}+m^{2}+\xi_{0}^{2}}\right) \right]. \tag{106}\] We can Taylor expand the second term21 to obtain: Footnote 21: The inequality \((p_{0}-\xi_{0})^{2}\geq 0\Rightarrow p_{0}^{2}+\xi_{0}^{2}\geq 2\xi_{0}p_{0}\) guarantees that we are inside the radius of convergence of the expansion. \[I_{+}=\int\frac{{\rm d}^{d}p}{(2\pi)^{d}}\log\left(p^{2}+m^{2}+\xi_{0}^{2} \right)+\int\frac{{\rm d}^{d}p}{(2\pi)^{d}}\sum_{n=1}^{\infty}\frac{(-1)^{n+1 }}{n}\left(\frac{2\xi_{0}p_{0}}{p^{2}+m^{2}+\xi_{0}^{2}}\right)^{n}. \tag{107}\] Exchanging the integral and the series, and transforming the \((p_{0})^{n}\) integral into a spherically symmetric integral by introducing a compensating factor (see Appendix D): \[I_{+} =\int\frac{{\rm d}^{d}p}{(2\pi)^{d}}\log\left(p^{2}+m^{2}+\xi_{0} ^{2}\right)+\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}(2\xi_{0})^{n}\int\frac{{ \rm d}^{d}p}{(2\pi)^{d}}\left(\frac{p_{0}}{p^{2}+m^{2}+\xi_{0}^{2}}\right)^{n}\] \[=\int\frac{{\rm d}^{d}p}{(2\pi)^{d}}\log\left(p^{2}+m^{2}+\xi_{0} ^{2}\right)-\sum_{k=1}^{\infty}\frac{1}{2k}(2\xi_{0})^{2k}\frac{\Gamma(\frac{ d}{2})\Gamma(k+\frac{1}{2})}{\Gamma(\frac{1}{2})\Gamma(k+\frac{d}{2})}\int\frac{{ \rm d}^{d}p}{(2\pi)^{d}}\frac{p^{2k}}{(p^{2}+m^{2}+\xi_{0}^{2})^{2k}}\] \[=-\frac{\Gamma(2-\frac{d}{2})}{\frac{d}{2}(\frac{d}{2}-1)}\frac{ 1}{(4\pi)^{d/2}}\Lambda^{4-d}(m^{2}+\xi_{0}^{2})^{\frac{d}{2}}\] \[\qquad\qquad-\sum_{k=1}^{\infty}\frac{1}{2k}(2\xi_{0})^{2k}\frac {\pi^{d/2}}{(2\pi)^{d}}\frac{\Gamma(k+1/2)\Gamma(k-d/2)}{\Gamma(1/2)\Gamma(2k )}\frac{\Lambda^{4-d}}{(m^{2}+\xi_{0}^{2})^{k-d/2}}. \tag{108}\] Resumming and taking the limit \(d\to 4-2\varepsilon\) we obtain \[I_{+}=\frac{m^{4}\left(\log\left(\frac{m^{4}}{\Lambda^{4}}\right)+2\gamma-3-\log( 16\pi^{2})\right)}{64\pi^{2}}-\frac{m^{4}}{32\pi^{2}\epsilon}, \tag{114}\] in agreement with the result obtained previously. ## Appendix C One-loop path integral at finite \(\mu\) for arbitrary potential In this appendix we derive a general expression for the determinant relevant for the computation of the one-loop effective potential of a complex scalar field at finite \(\mu\) with arbitrary interactions in \(d\) dimensions. 
We use a notation in terms of real components for the scalar fields \(\varphi_{1},\varphi_{2}\) and consider the finite \(\mu\) Lagrangian derived in section 2: \[\mathcal{L}_{\mu}=\frac{1}{2}(\partial_{\nu}\varphi_{1})^{2}+\frac{1}{2}(\partial_{\nu}\varphi_{2})^{2}+\mu\left(\dot{\varphi}_{1}\varphi_{2}-\dot{\varphi}_{2}\varphi_{1}\right)-V(\varphi;\mu), \tag{115}\] with an arbitrary \(\mathrm{U}(1)\) invariant interaction potential \[V(\varphi;\mu)=\frac{m^{2}-\mu^{2}}{2}(\varphi_{1}^{2}+\varphi_{2}^{2})+V_{\mathrm{int}}(\varphi). \tag{116}\] We shall denote partial derivatives \(\partial_{\varphi_{i}}\) of the function \(V(\varphi;\mu)\) with a subscript \(i\) and suppress the argument \(\mu\) for ease of notation. Up to boundary terms, the quadratic action for the perturbations \(\delta\varphi_{i}(x)\) around an arbitrary constant and homogeneous field background \(\varphi_{i}\) is: \[\delta S^{(2)}[\delta\varphi_{i}(x)]=-\frac{1}{2}\int\mathrm{d}^{d}x\,\delta\vec{\varphi}^{\,t}\,\mathbf{K}\,\delta\vec{\varphi}, \tag{117}\] where in position space \[\mathbf{K}(x)=\begin{pmatrix}\partial_{\nu}\partial^{\nu}+V_{11}(\varphi)&-2\mu\partial_{t}+V_{12}(\varphi)\\ 2\mu\partial_{t}+V_{12}(\varphi)&\partial_{\nu}\partial^{\nu}+V_{22}(\varphi)\end{pmatrix}, \tag{118}\] while in momentum space \[\mathbf{K}(p)=\begin{pmatrix}-p^{2}+V_{11}(\varphi)&2i\mu\,p_{0}+V_{12}(\varphi)\\ -2i\mu\,p_{0}+V_{12}(\varphi)&-p^{2}+V_{22}(\varphi)\end{pmatrix}. \tag{119}\] We can now compute the one-loop effective action through the path integral as \[e^{i\,\Gamma(\varphi)}=e^{-i\,\mathrm{Vol}\cdot V_{\mathrm{eff}}(\varphi)}=\underset{\delta\varphi_{i}(\pm\infty)=0}{\int[\mathcal{D}\delta\varphi_{i}(x)]}e^{i\,\delta S^{(2)}[\delta\varphi_{i}(x)]}=\left[\det\left(\frac{i}{\pi}\mathbf{K}\right)\right]^{-1/2}. \tag{120}\] As discussed in section 2.2, the \(i\varepsilon\) term projects on the ground state of \(V(\varphi;\mu)\), and the structure of the poles is such that the integration contours can be deformed continuously, without crossing any singularity, by performing a Wick rotation and going to Euclidean space. We can therefore set \(p^{0}_{E}=-ip^{0}\) and \(p^{2}_{E}=-p^{2}\). Dropping the \(E\) subscript and defining \(\xi=(i\mu,\vec{0})\) to maintain a formally Lorentz invariant notation, and up to an additive constant, the one-loop effective potential is given by \[V^{(1)}_{\rm eff}(\varphi;\mu)=\frac{1}{2}\int\frac{\mathrm{d}^{d}p}{(2\pi)^{d}}\log\Big{[}(p^{2}+M^{2})^{2}-4(p\cdot\xi)^{2}-g^{2}\Big{]}\,, \tag{111}\] where we defined \[\begin{split}& M^{2}=\frac{1}{2}\Big{(}V_{11}(\varphi)+V_{22}(\varphi)\Big{)},\\ & g^{2}=\frac{1}{4}\Big{(}V_{11}(\varphi)+V_{22}(\varphi)\Big{)}^{2}-\Big{(}V_{11}(\varphi)\Big{)}\Big{(}V_{22}(\varphi)\Big{)}+\Big{(}V_{12}(\varphi)\Big{)}^{2}.\end{split} \tag{112}\] It is straightforward to check that the same result is recovered when using a notation in terms of the complex scalar \(\Phi\). 
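The determinant behind this result can be verified symbolically in Minkowski signature (the Euclidean continuation \(p^{2}\to-p_{E}^{2}\), \(p_{0}\to ip_{E}^{0}\) then produces the \(-4(p\cdot\xi)^{2}\) form with \(\xi=(i\mu,\vec{0})\)); a minimal sympy sketch:

```python
import sympy as sp

p2, p0, mu, V11, V22, V12 = sp.symbols('p2 p0 mu V11 V22 V12', real=True)

K = sp.Matrix([[-p2 + V11,            2*sp.I*mu*p0 + V12],
               [-2*sp.I*mu*p0 + V12, -p2 + V22]])

M2 = (V11 + V22) / 2
g2 = sp.Rational(1, 4)*(V11 + V22)**2 - V11*V22 + V12**2

# det K = (p^2 - M^2)^2 - g^2 - 4 mu^2 p0^2 before the Wick rotation
print(sp.expand(K.det() - ((p2 - M2)**2 - g2 - 4*mu**2*p0**2)))   # -> 0
```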
It is useful to express these quantities in terms of the interaction potential: \[\begin{split}& M^{2}=m^{2}-\mu^{2}+\frac{1}{2}\Big{(}\partial^{2}_ {\varphi_{1}}V_{\rm int}(\varphi)+\partial^{2}_{\varphi_{2}}V_{\rm int}( \varphi)\Big{)},\\ & g^{2}=\frac{1}{4}\Big{(}\partial^{2}_{\varphi_{1}}V_{\rm int}( \varphi)+\partial^{2}_{\varphi_{2}}V_{\rm int}(\varphi)\Big{)}^{2}-\Big{(} \partial^{2}_{\varphi_{1}}V_{\rm int}(\varphi)\Big{)}\Big{(}\partial^{2}_{ \varphi_{2}}V_{\rm int}(\varphi)\Big{)}+\Big{(}\partial_{\varphi_{1}}\partial _{\varphi_{2}}V_{\rm int}(\varphi)\Big{)}^{2},\end{split} \tag{113}\] from which it follows that the combination \((M^{2}+\mu^{2})\) is \(\mu\) independent, and \(g^{2}\) is \(\mu\) independent and vanishes for \(\varphi_{i}=0\). Another, particularly useful, representation is obtained by expressing \(M^{2}\) and \(g^{2}\) in terms of \(\phi=[(\varphi_{1}^{2}+\varphi_{2}^{2})/2]^{1/2}\). After straightforward manipulation we find: \[\begin{split}& M^{2}=\frac{1}{4\,\phi}\Big{(}V^{\prime}(\phi)+ \phi\,V^{\prime\prime}(\phi)\Big{)},\\ & g^{2}=\frac{1}{16\,\phi^{2}}\Big{(}V^{\prime}(\phi)-\phi\,V^{ \prime\prime}(\phi)\Big{)}^{2},\end{split} \tag{114}\] where primes denote derivatives with respect to \(\phi\). Denoting by \(\phi_{0}\) the value of \(\phi\) at which the tree level finite \(\mu\) potential (110) is minimized -- such that \(V^{\prime}(\phi_{0})=0\) -- it follows that: \[M^{2}\Big{|}_{\rm min}=g\Big{|}_{\rm min}=\frac{V^{\prime\prime}(\phi_{0})}{4}. \tag{115}\] This fact proves to be extremely useful in finding a closed form expression for the one-loop low-energy effective action for the superfluid phase of our UV scalar field theory, which can be obtained by evaluating the effective potential for the UV theory at finite \(\mu\) at its minimum. ## Appendix D Some useful identities in dimensional regularization The following general identities on dimensionally regularized integrals turn out to be very useful. For a thorough introduction to the methods and properties of dimensional regularization see for instance [55]. \[\int\frac{\mathrm{d}^{d}p}{(2\pi)^{d}}\log\left(p^{2}+\Delta\right) =-\frac{\Gamma(-d/2)}{(4\pi)^{d/2}}\Delta^{d/2} \tag{115}\] \[\int\frac{\mathrm{d}^{d}p}{(2\pi)^{d}}\frac{p^{\mu_{1}}p^{\nu_{1}}\ldots p^{\mu _{n}}p^{\nu_{n}}}{\left(p^{2}+\Delta\right)^{m}} =\frac{\Gamma(d/2)\Gamma(n+1/2)}{\Gamma(1/2)\Gamma(n+d/2)}\,\eta^ {(\mu_{1}\nu_{1}}\ldots\eta^{\mu_{n}\nu_{n})}\int\frac{\mathrm{d}^{d}p}{(2\pi) ^{d}}\frac{p^{2n}}{\left(p^{2}+\Delta\right)^{m}} \tag{116}\] \[\int\frac{\mathrm{d}^{d}p}{(2\pi)^{d}}\frac{p^{2n}}{(p^{2}+\Delta)^{m}} =\frac{1}{(4\pi)^{d/2}}\frac{\Gamma(m-n-d/2)\Gamma(n+d/2)}{\Gamma(m) \Gamma(d/2)}\left(\frac{1}{\Delta}\right)^{m-n-d/2} \tag{117}\] ## Appendix E Divergencies and counterterms at finite \(\mu\) As we discussed in section 4.3 the divergent terms arising in the one-loop effective potential in \(d=4\) are \(\mu\) independent. In particular, defining \[\Xi^{(1)}(\phi;\mu)\Big{|}_{d,\mathrm{div}}\equiv V^{(1)}_{\mathrm{eff}}(\phi ;\mu)\Big{|}_{d,\mathrm{div}}-V^{(1)}_{\mathrm{eff}}(\phi;0)\Big{|}_{d, \mathrm{div}}, \tag{118}\] one finds that \(\Xi^{(1)}=0\) in \(d=4\). Starting from \(d=6\), on the other hand, the divergencies turn out to be \(\mu\) dependent, see Table 1. In order to understand this, consider the \(\mu=0\) theory. We take \(V_{\mathrm{int}}(\phi)\) to be an arbitrary \(\mathrm{U}(1)\) invariant polynomial potential, built from even powers of \(\phi\). 
Since we are considering (possibly) non-renormalizable interactions, the counterterms needed to renormalize the theory will in general include derivative operators, even if the interaction terms we started from are non-derivative. Introducing a chemical potential is equivalent to promoting ordinary derivatives to \(\mu\) dependent covariant derivatives, so that derivative counterterms will generate \(\mu\) dependent counterterms as well.22 By a simple power counting argument it is easy to see that no derivative counterterms are needed at one loop in \(d=4\), for arbitrary interactions. In order to see this notice that the (amputated) diagrams which depend on external momenta, \(a_{n}\)) with \(n\geq 2\) in Fig. 3, are convergent or at most logarithmically divergent in \(d=4\). As a consequence, the corresponding divergencies are independent of external momenta and can be renormalized by non-derivative polynomial counterterms. This is a generalization of the familiar statement that at one loop there is no wave-function renormalization in \(d=4\). In the case of even (U(1) symmetric) interactions the wave-function renormalization is still absent at one-loop for \(d=2n\), but starting from \(d=6\) derivative interaction counterterms are needed. Notice that interaction terms are always irrelevant in \(d\geq 6\), so that the theories we consider are non-renormalizable and the derivative operators we are discussing are always generated. Footnote 22: We are grateful to Riccardo Rattazzi for a discussion on this point.

\begin{table} \begin{tabular}{c|c|c} & \(\varepsilon\cdot V^{(1)}_{\mathrm{eff}}(\phi;\mu)\Big{|}_{d,\mathrm{div}}\) & \(\varepsilon\cdot\Xi^{(1)}(\phi;\mu)\Big{|}_{d,\mathrm{div}}\) \\ \hline \hline \(d=4+\varepsilon\) & \(\frac{g^{2}+(M^{2}+\mu^{2})^{2}}{16\pi^{2}}\) & \(0\) \\ \hline \(d=6+\varepsilon\) & \(-\frac{(M^{2}+\mu^{2})^{3}+g^{2}(3M^{2}+\mu^{2})}{192\pi^{3}}\) & \(\frac{g^{2}}{96\pi^{3}}\mu^{2}\) \\ \hline \(d=8+\varepsilon\) & \(\frac{5g^{4}+5(M^{2}+\mu^{2})^{4}+g^{2}(30M^{4}+20M^{2}\mu^{2}+6\mu^{4})}{15360\pi^{4}}\) & \(-\frac{g^{2}(M^{2}+\mu^{2})}{384\pi^{4}}\mu^{2}+\frac{g^{2}}{960\pi^{4}}\mu^{4}\) \\ \end{tabular}
\end{table} Table 1: _Divergent terms in dimensional regularization in even dimensions \(d=2n+\varepsilon\) (with \(n>1\)), and their \(\mu\) dependent part. See eqs. (114) and (115) for the expression of \(M^{2}\) and \(g^{2}\) in terms of the interaction potential \(V_{\mathrm{int}}(\phi)\) and their \(\mu\) dependence._

In order to check our diagrammatic argument, consider the example of the \(\lambda\phi^{4}\) theory in \(d=6\). The \(\mu\) dependent divergence in the one-loop effective potential can be read from Tab. 1 and is given by \[\Xi^{(1)}(\phi;\mu)\Big{|}_{d,{\rm div}}=\frac{1}{\varepsilon}\frac{\lambda^{2}}{24\pi^{3}}\mu^{2}\phi^{4}. \tag{112}\] On the other hand, in the \(\mu=0\) theory, the only diagram that can have divergencies dependent on external momenta is \(a_{2}\)) in Fig. 3. Computing the divergent part of this diagram we find that the following derivative counterterm is needed for \(\mu=0\): \[\mathcal{L}_{\rm ct}\supset-\frac{1}{\varepsilon}\frac{\lambda^{2}}{24\pi^{3}}(\partial_{\nu}\Phi)^{\dagger}(\partial^{\nu}\Phi)\Phi^{\dagger}\Phi, \tag{113}\] where we only wrote the divergent part. Promoting derivatives to \(\mu\) dependent covariant derivatives \(D_{\nu}=\partial_{\nu}-i\mu\delta^{0}_{\nu}\), we find that a \(\mu\) dependent counterterm is generated that exactly cancels the divergence of eq. (112). 
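The \(\Xi^{(1)}\) column of Table 1 also follows from the \(\mu\) independence of \(M^{2}+\mu^{2}\) and \(g^{2}\) alone, noted in Appendix C. A sympy sketch for the \(d=6\) row (writing \(A=M^{2}+\mu^{2}\) for the \(\mu\)-independent combination):

```python
import sympy as sp

A, g2, mu = sp.symbols('A g2 mu', positive=True)   # A = M^2 + mu^2 and g2 do not depend on mu
M2 = A - mu**2

Vdiv = -((M2 + mu**2)**3 + g2*(3*M2 + mu**2)) / (192*sp.pi**3)   # d = 6 entry (times epsilon)
Xi = sp.simplify(Vdiv - Vdiv.subs(mu, 0))
print(Xi)   # g2*mu**2/(96*pi**3), matching the last column of the table
```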
Figure 3: _One loop Feynman diagrams up to four interaction points. Diagrams \(a_{0})\) and \(a_{1})\) are divergent in dimension \(d>2\), but the corresponding divergence is always independent of external momenta. Diagram \(a_{n})\) is marginally (logarithmically) divergent in \(d=2n\) dimensions. Derivative counterterms in the \(\mu=0\) theory arise from diagrams \(a_{n})\) with \(n\geq 2\) when they are at least quadratically divergent. This corresponds to \(d\geq 6\)._

As a last remark, we note that the classic argument on the non-renormalization of conserved currents (see _e.g._[55], p. 162) advocated in [16] to justify the absence of \(\mu\) dependent divergencies in \(d=4\) is not applicable at finite \(\mu\). In fact, at finite chemical potential, an identically conserved current can be constructed with the aid of the object \(\mu\delta^{0}_{\nu}\), invalidating the non-renormalization theorem. The current is simply given by \(\tilde{J}^{\nu}\ =\ (\partial^{\nu}\partial_{0}-\delta^{\nu}_{0}\Box)\phi^{2}\).

## Appendix F More details on the derivation of the scaling relations of section 5

In this appendix the variable \(X\) will be always set at its background value \(X\to\bar{X}=\mu^{2}\). We start from the relations \(f_{\pi}^{2}(Q)=Q/\mu=2P^{\prime}(X)\) and \(Q=2P^{\prime}(X)\sqrt{X}\), and the sound speed \[c_{s}^{2}=\frac{P^{\prime}(X)}{P^{\prime}(X)+2P^{\prime\prime}(X)X}. \tag{112}\] The derivatives of \(f_{\pi}^{2}\) are: \[\frac{\mathrm{d}f_{\pi}^{2}}{\mathrm{d}Q}=\frac{\mathrm{d}(2P^{\prime}(X))}{\mathrm{d}X}\frac{\mathrm{d}X}{\mathrm{d}Q}=2P^{\prime\prime}(X)\frac{1}{2P^{\prime\prime}(X)\sqrt{X}+P^{\prime}(X)/\sqrt{X}}=\frac{1-c_{s}^{2}}{\sqrt{X}}, \tag{113}\] \[\frac{\mathrm{d}^{2}f_{\pi}^{2}}{\mathrm{d}Q^{2}}=-\frac{\mathrm{d}c_{s}^{2}}{\mathrm{d}Q}\frac{1}{\sqrt{X}}+(1-c_{s}^{2})\frac{\mathrm{d}}{\mathrm{d}X}\left(\frac{1}{\sqrt{X}}\right)\frac{\mathrm{d}X}{\mathrm{d}Q}=-\frac{\mathrm{d}c_{s}^{2}}{\mathrm{d}Q}\frac{1}{\sqrt{X}}-\frac{c_{s}^{2}}{Q}\frac{(1-c_{s}^{2})}{\sqrt{X}}. \tag{114}\] The derivative of the sound speed is instead: \[\frac{\mathrm{d}c_{s}^{2}}{\mathrm{d}Q}=\frac{\mathrm{d}c_{s}^{2}}{\mathrm{d}X}\frac{\mathrm{d}X}{\mathrm{d}Q}=\frac{\mathrm{d}c_{s}^{2}}{\mathrm{d}X}\frac{1}{2P^{\prime\prime}(X)\sqrt{X}+P^{\prime}(X)/\sqrt{X}}. \tag{115}\] In the limit \(Q\to 0\), using that \(2P^{\prime}(X)=\mathcal{O}(Q)\), we obtain: \[\frac{\mathrm{d}c_{s}^{2}}{\mathrm{d}X}=\frac{P^{\prime\prime}(X)}{P^{\prime}(X)+2P^{\prime\prime}(X)X}+\mathcal{O}(Q)=\frac{1}{2X}+\mathcal{O}(Q)\implies\frac{\mathrm{d}c_{s}^{2}}{\mathrm{d}Q}=\frac{1}{4X^{3/2}P^{\prime\prime}(X)}+\mathcal{O}(Q). \tag{116}\] Using that for \(Q\to 0\), \(\sqrt{X}=\mu_{\mathrm{crit}}+\mathcal{O}(Q)=m_{\mathrm{pole}}+\mathcal{O}(Q)\) we arrive at the result: \[\frac{\mathrm{d}c_{s}^{2}}{\mathrm{d}Q}=\frac{1}{4m_{\mathrm{pole}}^{3}P^{\prime\prime}(m_{\mathrm{pole}}^{2})}+\mathcal{O}(Q). \tag{117}\] Higher order terms can be straightforwardly computed by rewriting \(c_{s}^{2}\) as \[c_{s}^{2}=\frac{Q}{Q+4P^{\prime\prime}(\mu^{2})\mu^{3}}, \tag{118}\] and then expressing \(\mu(Q)\) recursively through \[\mu(Q)=m_{\mathrm{pole}}\exp\left(\int_{0}^{Q}\frac{c_{s}^{2}}{Q^{\prime}}\mathrm{d}Q^{\prime}\right). \tag{119}\]

## Appendix G Ground state and stability for the \(\mathrm{O}(N)\) model with quartic interactions

We want to study the minimum of the large \(N\) effective potential in the \(\mathrm{O}(N)\) model with quartic interactions in \(d=3\), given by eq. (100). 
The conditions for minimization are given in eqs. (101). First, it is easy to see that the minimum of the effective potential always corresponds to \(S_{i}=0\). Moreover it is convenient to define the \(\mathrm{SO}(2)\) invariant variable \(t=(s_{1}^{2}+s_{2}^{2})/N\). The stationary points of \(V_{\mathrm{eff}}(s_{i})\) as a function of \(s_{i}\) correspond through the chain rule to either \(t=0\) or to real stationary points of \(V_{\mathrm{eff}}(t)\), satisfying \(V_{\mathrm{eff}}^{\prime}(t)=0\). As already remarked in [34], it can be convenient to solve the condition (101) for \(\chi\) and plug back in eq. (100) to study the ordinary effective potential, without the auxiliary parameter \(\chi\). From eq. (101) we have: \[f(t)\equiv\sqrt{\chi}=\frac{\lambda_{N}}{8\pi}\left(\sqrt{1+\frac{64\pi^{2}}{ \lambda_{N}}\left(t+\frac{m^{2}}{\lambda_{N}}\right)}-1\right), \tag{102}\] where we selected the solution with \(\sqrt{\chi}>0\). We obtain \[\frac{V_{\mathrm{eff}}(t)}{N}=\frac{1}{2}\left(f^{2}-\mu^{2}\right)t-\frac{ \left(f^{2}-m^{2}\right)^{2}}{4\lambda_{N}}-\frac{f^{3}}{12\pi}, \tag{103}\] from which it follows \[\frac{V_{\mathrm{eff}}^{\prime}(t)}{N}=\frac{1}{2}\left(f^{2}-\mu^{2}\right)+ ff^{\prime}t-\frac{(f^{2}-m^{2})}{\lambda_{N}}ff^{\prime}-\frac{f^{2}f^{\prime}}{4 \pi}=\frac{1}{2}\left(f^{2}-\mu^{2}\right), \tag{104}\] where we suppressed the explicit \(t\) dependence in \(f(t)\) and used \(f^{2}-m^{2}=\lambda_{N}t-\lambda_{N}f/4\pi\). The stationary points of \(V_{\mathrm{eff}}(t)\) correspond to real (hence physical) values of \(t\) only when \[\mu^{2}\geq\mu_{\mathrm{crit}}^{2}=f(0)^{2}, \tag{105}\] from which eq. (102) follows. For values of \(\mu^{2}\leq\mu_{\mathrm{crit}}^{2}\) the potential \(V_{\mathrm{eff}}(s_{i})\) is minimized for \(t=0\), corresponding to the unbroken phase. Moreover we see that in \(d=3\) and for \(\mu^{2}>\mu_{\mathrm{crit}}^{2}\) the potential has only one stationary point for \(t>0\) (a minimum) and is always bounded from below, thus stable. The situation is different in \(d=4\), where due to logarithmic terms the potential is unbounded from below, signalling an instability [34; 35]. Notice also that by rescaling eq. (104) we can also derive the value of the pole mass in the unbroken phase of the UV theory at \(\mu=0\). This is given by \[m_{\mathrm{pole}}^{2}=\frac{2}{N}V_{\mathrm{eff}}^{\prime}(t;\mu=0)=f(0)^{2}= \left(\sqrt{\frac{\lambda_{N}^{2}}{64\pi^{2}}+m^{2}}-\frac{\lambda_{N}}{8\pi} \right)^{2}. \tag{106}\] In particular, it follows that \(\mu_{\mathrm{crit}}^{2}=m_{\mathrm{pole}}^{2}\). This result is non-perturbative in \(\lambda_{N}\), at leading order in the \(1/N\) expansion.
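Both the simplification of the derivative and the identification \(\mu_{\rm crit}^{2}=f(0)^{2}=m_{\rm pole}^{2}\) can be verified symbolically; a minimal sympy sketch:

```python
import sympy as sp

t, m, mu, lamN = sp.symbols('t m mu lambda_N', positive=True)

# sqrt(chi) solved from the gap equation, as a function of t
f = lamN/(8*sp.pi) * (sp.sqrt(1 + 64*sp.pi**2/lamN * (t + m**2/lamN)) - 1)
Veff = sp.Rational(1, 2)*(f**2 - mu**2)*t - (f**2 - m**2)**2/(4*lamN) - f**3/(12*sp.pi)  # V_eff(t)/N

# the derivative collapses to (f^2 - mu^2)/2, as used in the text
print(sp.simplify(sp.diff(Veff, t) - sp.Rational(1, 2)*(f**2 - mu**2)))   # -> 0

# the threshold f(0)^2 reproduces the closed form for mu_crit^2 = m_pole^2
mpole2 = (sp.sqrt(lamN**2/(64*sp.pi**2) + m**2) - lamN/(8*sp.pi))**2
print(sp.simplify(f.subs(t, 0)**2 - mpole2))                              # -> 0
```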
2306.00742
The Galerkin method beats Graph-Based Approaches for Spectral Algorithms
Historically, the machine learning community has derived spectral decompositions from graph-based approaches. We break with this approach and prove the statistical and computational superiority of the Galerkin method, which consists in restricting the study to a small set of test functions. In particular, we introduce implementation tricks to deal with differential operators in large dimensions with structured kernels. Finally, we extend on the core principles beyond our approach to apply them to non-linear spaces of functions, such as the ones parameterized by deep neural networks, through loss-based optimization procedures.
Vivien Cabannes, Francis Bach
2023-06-01T14:38:54Z
http://arxiv.org/abs/2306.00742v3
# Going Deeper with Spectral Embeddings

###### Abstract

To make sense of millions of raw data and represent them efficiently, practitioners rely on representation learning. Recently, deep connections have been shown between these approaches and the spectral decompositions of some underlying operators. Historically, explicit spectral embeddings were built from graphs constructed on top of the data. In contrast, we propose two new methods to build spectral embeddings: one based on functional analysis principles and kernel methods, which leads to algorithms with theoretical guarantees, and the other based on deep networks trained to optimize principled variational losses, which yields practically efficient algorithms. Furthermore, we provide a new sampling algorithm that leverages learned representations to generate new samples in a single step.

## 1 Introduction

A long-standing problem in artificial intelligence has been to extract good features from raw data that, once fed to supervised learning algorithms, would ease the learning process. For example, images are stored as arrays of pixels in computer memory, but learning to recognize patterns or segment objects based on these raw data is quite challenging. Historically, features were manually engineered using domain-expert knowledge. However, with the advent of larger datasets and the increase of computing resources, generic methods have emerged to process large amounts of data, deduce a structure underlying them, and design new embeddings of the data. From a mathematical standpoint, many representation learning algorithms can be seen as estimating the eigenfunctions of some base operators. Embeddings that are linked to the spectral decomposition of an operator are known as spectral embeddings. In the machine learning literature, spectral embeddings are often understood thanks to graph theory as the eigenvectors of some graph built on top of the data. However, we will argue that a richer picture can be offered by functional analysis perspectives that set aside graph theory. Our contributions are two-fold.

1. Firstly, we recall how certain operators, if approximated properly, can be used to efficiently embed input data (i.e. "solve" representation learning). Additionally, we introduce a new "pushforward sampler" that leverages those embeddings to generate plausible new data (i.e. "solve" some aspect of generative AI).
2. Secondly, we present principles for estimating these operators and provide two procedures to approximate them. One approach is based on neural networks and generalizes recent algorithms that are empirically effective. The other approach is based on kernel methods and generalizes recent algorithms that offer theoretical guarantees. For this kernel approach, we derive two algorithms to use Laplacian regularization, with a number of floating point operations comparable to that of the Nystrom method. This derivation enables the future use of derivatives in kernel regression essentially for free.

The prototypical example of spectral embedding is given by a Laplacian operator. Laplacians are helpful for representation learning as they encode some sort of "modes" on the input space, together with a notion of complexity of those (e.g. think of Fourier modes where complexity increases with frequency). They are also used in generative AI (which aims to generate new samples that look like the original data) as they describe Langevin diffusion dynamics toward any specified stationary measure. 
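To make this last point concrete, here is a minimal unadjusted Langevin sketch (the double-well potential and step size are illustrative choices, not tuned):

```python
import numpy as np

rng = np.random.default_rng(0)
grad_V = lambda x: x**3 - x   # V(x) = x**4/4 - x**2/2, so the target density is p(x) ~ exp(-V(x))

x, dt = rng.standard_normal(10_000), 1e-2
# Euler-Maruyama discretization of the overdamped Langevin SDE dX = -grad V(X) dt + sqrt(2) dW,
# whose stationary measure is exactly p
for _ in range(5_000):
    x += -grad_V(x) * dt + np.sqrt(2 * dt) * rng.standard_normal(x.size)
# a histogram of x now approximates the bimodal density p
```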
Overall, this paper offers efficient algorithms and a principled framework to estimate spectral embeddings, and unveils new links between diffusion models and representation learning. Related work.The usefulness of Laplacian spectral embeddings for representation learning has been discussed extensively over the last two decades. Laplacians were introduced in the realm of computer science through graph-Laplacians (Chung, 1997), and were integrated soon after into machine learning (Zhu et al., 2003; Belkin and Niyogi, 2003), before theory was derived regarding their convergence towards real Laplacian operators (Hein et al., 2007; Singer, 2006). Recently, new insights were provided by Cabannes et al. (2021); Pillaud-Vivien and Bach (2023). This serves as a strong inspiration for Algorithm 4. In the meantime, deep learning practitioners have developed similar ideas, beginning with the TangentProp algorithm of Simard et al. (1991). This stream of research has led to self-supervised learning methods that have become the state-of-the-art for representation learning (Chen et al., 2020; Zbontar et al., 2021; Bardes et al., 2022). Recent works have drawn deep links between those new methods and spectral embeddings (HaoChen et al., 2021; Balestriero and LeCun, 2022). This serves as a strong inspiration for Algorithm 3, which can be seen as generalizing recent ideas in the physics community (Zhang et al., 2022). Laplacian operators are also known as the generators of the overdamped Langevin dynamics, which can be used as a diffusion process for sampling (see e.g. Bakry et al., 2014). Discretizing the diffusion leads to the Langevin Monte Carlo or Metropolis-adjusted Langevin algorithm (Grenander and Miller, 1994; Roberts and Tweedie, 1996; Andrieu et al., 2003), which has been used to generate samples given access to an unnormalized density (see e.g. Chewi, 2023). In machine learning, the picture is quite different, as the goal is to generate fresh samples that look like previous examples. However, recent advances have shown the usefulness of diffusion models in those generative settings (e.g. Ho et al., 2022; Rombach et al., 2022), notably when accessing their infinitesimal generators (Liu and Wang, 2016). Reciprocally, diffusion processes can be used to compute spectral embeddings in a power-method fashion (Han et al., 2020). The "pushforward sampler" approach considered here was not directly inspired by any of those papers, but derived from scratch based on simple theoretical observations for SDEs. ## 2 Why do spectral embeddings matter? This section reminds the reader of the concept of spectral embedding, introduces an illustrative Laplacian operator, and explains its usefulness for representation learning and for sampling. ### Definition and examples Machine learning settings start with some data that are assumed to have been collected and stored as raw vectors \(x\in\mathcal{X}=\mathbb{R}^{d}\), which will be used as inputs for computer programs. The collection process is idealized as sampling from an underlying distribution \(\rho\in\Delta_{\mathcal{X}}\), which generates \(n\) independent variables \((X_{i})\sim\rho^{\otimes n}\), i.e. \(n\) input samples. Spectral embeddings consist in defining embeddings \(\varphi:\mathcal{X}\rightarrow\mathbb{R}^{m}\), or representations of the data, based on the spectral decomposition of an operator \(\mathcal{L}\) on \(L^{2}(\rho)\), be it its eigenvalue or its singular value decomposition.
E.g., if a positive symmetric operator \(\mathcal{L}\) is diagonalized in \(L^{2}(\rho)\) as \[\mathcal{L}=\sum_{i\in\mathbb{N}}\lambda_{i}f_{i}\otimes f_{i}, \tag{1}\] for \(\lambda_{i}>0\) the eigenvalues of \(\mathcal{L}\) sorted in increasing order and \(f_{i}\in L^{2}(\rho)\) the corresponding eigenfunctions, those embeddings could be (see e.g. Coifman and Lafon, 2006; Cabannes et al., 2023b) \[\varphi_{\mathrm{diff.maps}}=(\exp(-\beta\lambda_{i})f_{i})_{i\in[m]},\qquad \varphi_{\mathrm{VICReg}}=(\sqrt{(1-\beta\lambda_{i})_{+}}f_{i})_{i\in[m]}, \tag{2}\] for some parameter \(\beta>0\) that assimilates to an inverse temperature. The Laplacian example.To ground the discussion, a prototypical example is provided by a Laplacian operator. The Laplace operator, also known as the Laplacian, is arguably the most natural differential symmetric operator. It is defined as the sum of unmixed second-order partial derivatives. Equivalently, it is characterized as the divergence of the gradient; the divergence being itself characterized as the adjoint of the gradient. Those two observations unveil generalizations of the Laplace operator into a rich family of operators also named Laplacians. Laplacians are differentiated by two factors: the geometry to define adjunction, and the manifold to define gradients. Those operators describe many physical phenomena, such as heat diffusion, relations between electrostatic (resp. gravitational) potentials and charge (resp. mass) distributions, or wave equations. The running example of this study will be the Laplacian \(\mathcal{L}=\nabla^{*}\nabla\) where \(\nabla\) is the Euclidean gradient, and the adjoint is taken with respect to the \(L^{2}(\rho)\) geometry.1 In other terms, \(\mathcal{L}:L^{2}(\rho)\to L^{2}(\rho)\) is defined for \(f\) and \(g\) in \(L^{2}(\rho)\) as Footnote 1: Usually, Laplacians are defined as negative self-adjoint operators \(-\nabla^{*}\nabla\), as for the usual Laplacian \(\Delta=\sum_{i}\partial_{i}^{2}\), which corresponds to adjunction with respect to \(L^{2}(\mathrm{d}x)\) endowed with the Lebesgue measure. This paper rather uses the convention that Laplacians are positive self-adjoint operators. \[\left\langle f,\mathcal{L}g\right\rangle_{L^{2}(\rho)}=\mathbb{E}[\left\langle \nabla f(X),\nabla g(X)\right\rangle]. \tag{3}\] The quadratic form associated with \(\mathcal{L}\) is known as the Dirichlet energy; it reads \[\mathcal{E}_{\mathcal{L}}(f)=\left\langle f,\mathcal{L}f\right\rangle_{L^{2}(\rho)}=\mathbb{E}[\|\nabla f(X)\|^{2}].\] For simplicity, we will often consider the setting where \(\rho\) has a density \(p\) expressed through a Gibbs potential \(V\), whose gradient is the opposite of the "score" \(\nabla\log p\) in the generative AI vocabulary (Hyvarinen, 2005). Formally, \(\rho\ll\mathrm{d}x\) with \(\mathrm{d}x\) the Lebesgue measure, and for \(x\in\mathcal{X}\), \[p(x)=\frac{\rho(\mathrm{d}x)}{\mathrm{d}x}=\frac{\mathrm{d}\rho}{\mathrm{d}x}(x)=\frac{1}{Z}\exp(-V(x)),\qquad V(x)=-\log p(x)+C, \tag{4}\] where \(Z\) is a normalization constant known as the partition function, and \(C=-\log(Z)\). Under mild assumptions on \(p\), \(\mathcal{L}^{-1}\) is compact and \(\mathcal{L}\) is characterized as, see Appendix A.1, \[\mathcal{L}f=-\Delta f+\left\langle\nabla V,\nabla f\right\rangle. \tag{5}\]
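To make these objects concrete, here is a minimal numerical sketch (our own illustration, not code from the paper): it estimates the Dirichlet energy (3) from samples with a finite-difference gradient, and builds the embeddings of (2) from given eigenpairs. The function names are hypothetical helpers.

```python
import numpy as np

def dirichlet_energy(f, xs, h=1e-5):
    """Monte-Carlo estimate of the Dirichlet energy (3), E[||grad f(X)||^2],
    from samples xs of rho, with a central finite-difference gradient."""
    total = 0.0
    for x in xs:
        grad = np.array([
            (f(x + h * e) - f(x - h * e)) / (2 * h)
            for e in np.eye(len(x))
        ])
        total += float(grad @ grad)
    return total / len(xs)

def spectral_embedding(eigvals, eigfun_values, beta=1.0, kind="diff_maps"):
    """Embeddings (2) from the first m eigenpairs of L: eigvals has shape (m,)
    sorted increasingly, eigfun_values is the vector (f_i(x))_{i in [m]}."""
    if kind == "diff_maps":
        weights = np.exp(-beta * eigvals)
    else:  # VICReg-style weights
        weights = np.sqrt(np.clip(1.0 - beta * eigvals, 0.0, None))
    return weights * eigfun_values
```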
Self-supervised learning.The operator in (3) was the operator of choice for machine learning in the pre-deep-learning era. Interestingly, many self-supervised learning algorithms can be seen as using variations of this operator. For example, the VICReg algorithm of Bardes et al. (2022) is built from data augmentations \(\xi,\xi^{\prime}\) generated conditionally on \(X\) and is stylized (when disregarding deep learning engineering tricks) as estimating spectral embeddings based on the operator \(\mathcal{L}_{\mathrm{aug}}\) such that \[\left\langle f,\mathcal{L}_{\mathrm{aug}}g\right\rangle=\mathbb{E}_{X\sim p}\left[\mathbb{E}_{\xi,\xi^{\prime}}\left[\left\langle f(\xi)-f(\xi^{\prime}),g(\xi)-g(\xi^{\prime})\right\rangle|X\right]\right], \tag{6}\] where \(\xi\) and \(\xi^{\prime}\) are two independent identically distributed random transformations of the input \(X\). In essence, this corresponds to the introduction of a new geometry for the \(\nabla\) operator, which could be represented as a signed measure on \(\mathcal{X}^{2}\), \[\nabla_{\mathrm{aug}}f(x)=\mathbb{E}_{\xi,\xi^{\prime}}\left[(f(\xi)-f(\xi^{\prime}))\delta_{\xi,\xi^{\prime}}\left|X=x\right]\in\mathrm{Span}\,\Delta_{\mathcal{X}^{2}}.\] Once again, \(\mathcal{L}_{\mathrm{aug}}\) is a self-adjoint operator; indeed \(\mathcal{L}_{\mathrm{aug}}=\nabla_{\mathrm{aug}}^{*}\nabla_{\mathrm{aug}}\) is a Laplacian. As mentioned earlier, spectral embeddings could also be defined through the singular value decomposition of an operator that is not necessarily self-adjoint or diagonalizable in \(L^{2}(\rho)\). This is notably useful to model the large-scale limits of the CLIP model (see the work of Lee et al., 2021, for a characterization of spectral embeddings behind CLIP). ### Usefulness of spectral embeddings The usefulness of the Laplacians, and more generally of spectral embeddings, for representation learning has been a long-standing subject of discussion. In essence, Laplacian regularization, i.e. adding a regularization term \(\mathcal{E}_{\mathcal{L}}(f)\) to an objective to learn a function \(f\), enforces the variations of \(f\) to be localized on low-density regions of the input space. The prior is that when inputs are close to one another they should have similar outputs, while if data are separated by regions with no data, they could have different outputs. We refer the reader to the vast literature on the subject (Bousquet et al., 2003; Cabannes et al., 2023a). In terms of applications, it seems that the refined embedding defined by Coifman and Lafon (2006) has had the greatest impact. Their method, known as diffusion maps, impacted many different fields, such as molecular simulation (Glielmo et al., 2021), acoustics (Bianco et al., 2019) or the study of gene interaction (van Dijk et al., 2018) to name a few. More recently, the VICReg algorithm of Bardes et al. (2022) was shown to be a highly efficient way to get embeddings of raw image data consisting of arrays of pixels with neural network architectures. Some useful remarks for sampling.While there exist many perspectives and algorithms to perform sampling, recent advances have showcased the usefulness of diffusion models in the realm of machine learning (see e.g. Dockhorn et al., 2022). One of the simplest diffusion models is the overdamped Langevin dynamics \[\mathrm{d}X_{t}=-\nabla V(X_{t})\,\mathrm{d}t+\sqrt{2\beta}\,\mathrm{d}B_{t}.
\tag{7}\] In this equation, a random particle \(X\) is initialized at \(X_{0}\) and follows the evolution where \(X_{t+\mathrm{d}t}\) equals \(X_{t}\) plus some random noise added by a Brownian motion \(\mathrm{d}B_{t}\) scaled by a temperature parameter \(\beta\), together with a drift \(-\nabla V(X_{t})\,\mathrm{d}t\) which derives from a potential \(V\) and attracts \(X_{t}\) towards small values of \(V\). Appendix A.2 details the evolution of the law of the particle. In essence, for \(\beta=1\), if \(\mu_{t}\) is the law of \(X_{t}\), the particle at time \(t\), and if \(\nu=\mu_{0}\) has a density against \(\rho\), then \(\mu_{t}\) follows the Fokker-Planck evolution \[\mathrm{d}\frac{\mathrm{d}\mu_{t}}{\mathrm{d}\rho}=-\mathcal{L}\frac{\mathrm{d}\mu_{t}}{\mathrm{d}\rho}\,\mathrm{d}t, \tag{8}\] where \(\mathcal{L}\) is defined as per (5), and \(\rho\) is the Gibbs density associated with the potential \(V\). This leads to the following characterization of the Langevin dynamics, which can be seen as the dual of Monte-Carlo Markov chains and diffusion models, where rather than working on particles one works on densities. **Proposition 1**.: _The Langevin dynamics identifies as \(X_{t}\sim(\psi_{t})_{\#}\nu\), where \(\nu\) is the law of \(X_{0}\) (assumed to have a density) and \(\psi_{t}:\mathcal{X}\to\mathcal{X}\) is initialized at the identity \(\psi_{0}=\mathrm{id}\) and follows the gradient flow of the variational objective_ \[\psi\in\operatorname*{arg\,min}_{\psi:\mathcal{X}\to\mathcal{X}}\sum_{i\in\mathbb{N}}\lambda_{i}\mathbb{E}_{Z\sim\nu}[f_{i}(\psi(Z))]^{2}, \tag{9}\] _where \((\lambda_{i},f_{i})\) is the spectral decomposition of \(\mathcal{L}\) in (3), \(\psi:\mathcal{X}\to\mathcal{X}\) is a pushforward map to be learned (recall that \(X\sim\psi_{\#}\nu\) means \(X=\psi(Z)\) for \(Z\sim\nu\)) and the differentiation is understood with respect to \(\mathrm{d}\psi_{\#}\nu/\,\mathrm{d}\rho\) in \(L^{2}(\rho)\). Moreover, the solution of (9) verifies \(\rho=\psi_{\#}\nu\) for any \(\psi\) in the set of arguments of the minima, where \(\rho\propto\exp(-V)\,\mathrm{d}x\) is the Gibbs density associated with \(V\)._ In simple terms, Proposition 1 states that \(X_{t}\) will converge in law towards \(X\sim\rho\). Moreover, it implicitly shows how the mixing time of the Langevin diffusion is governed by \(\lambda_{1}^{-1}\), which is known as the Poincare constant, and corresponds to the time \(t\) needed to divide by a factor at least \(e\) the energy of \((\psi_{t})_{\#}\nu\) on the span of the \((f_{i})_{i\geq 1}\), the bottleneck being due to the energy on low frequencies, i.e. the eigenfunctions associated with small eigenvalues. In practice, it might be preferable to follow a different dynamic, which could be done e.g. by replacing \(\lambda_{i}\) by \(h(\lambda_{i})\) in (9) for \(h\) a decreasing function, assuming that the inductive bias of the architecture that parameterizes \(\psi\) will easily remove the energy on the high-frequency functions (i.e. the \(f_{i}\) for large \(i\in\mathbb{N}\)).
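For reference, the dynamics (7) can be simulated with the standard Euler-Maruyama discretization; this is a minimal sketch under the assumption that `grad_V` evaluates \(\nabla V\), not an algorithm from this paper.

```python
import numpy as np

def simulate_langevin(grad_V, x0, n_steps=1000, dt=1e-2, beta=1.0, seed=0):
    """Euler-Maruyama discretization of (7):
    x_{t+dt} = x_t - grad_V(x_t) * dt + sqrt(2 * beta * dt) * noise."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - grad_V(x) * dt + np.sqrt(2.0 * beta * dt) * noise
    return x  # approximately distributed according to rho for large n_steps
```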
The well-known fact that \(\rho\) is the stationary measure of the Langevin dynamics can be seen as a consequence of a more generic characterization of distributions stated by Theorem 1. While we are not aware of any prior links between representation learning and sampling, this will provide a useful characterization of distributions in order to turn data representations into sampling procedures. **Theorem 1** (Pushforward sampler learned by function matching in a contrastive fashion).: _Let \(\rho\) be a distribution deriving from a Gibbs potential \(V\), and let \(\varphi\in L^{2}(\rho)^{\mathbb{N}}\) be a representation of the data under the form \(\varphi=U(c_{i}f_{i})_{i\in\mathbb{N}}\) for \((f_{i})\) an orthogonal basis of \(L^{2}(\rho)\), \(f_{0}=1\) the constant function, \((c_{i}\neq 0)\) a set of non-null weights and \(U\) an orthogonal matrix on \(\ell^{2}\). Then \(\rho\) is characterized for any \(p\in\mathbb{N}^{*}\) and any nonatomic distribution \(\nu\in\Delta_{\mathbb{R}^{p}}\) in \(\mathbb{R}^{p}\) as_ \[\rho=\psi_{\#}(\nu);\qquad\psi\in\operatorname*{arg\,min}_{\psi:\mathbb{R}^{p}\rightarrow\mathcal{X}}\mathbb{E}_{Z_{1},Z_{2}\sim\nu}[\langle\varphi(\psi(Z_{1})),\varphi(\psi(Z_{2}))\rangle], \tag{10}\] _where equation (10) holds for any \(\psi\) in the set of minimizers._ A new family of sampling algorithms.Theorem 1 unveils a simple procedure to generate new samples when accessing a spectral representation \(\varphi\) built from functions \(f_{i}\) coming from the spectral decomposition of an operator in \(L^{2}(\rho)\), as long as constant functions are in the null space of \(\mathcal{L}\), as for the examples provided previously (in which case our sampling procedure assimilates to Stein's method (Stein, 1986)). Based on an initial random variable \(Z\sim\nu\) for \(\nu\) chosen by the practitioner (e.g. a Gaussian variable in \(\mathbb{R}^{p}\)), one can solve for \(\psi\) according to equation (10). Then new samples are generated _in one step_ as \(X=\psi(Z)\) for \(Z\sim\nu\). Note that while the base distribution \(\nu\) could be any continuous distribution _a priori_, its choice will condition the hardness of learning the pushforward map \(\psi\). In terms of analogy with existing methods, the objective (10) could be seen as the opposite of an auto-encoder: given an encoder \(\varphi\), one wants to learn a decoding \(\psi\) so that \((\varphi\circ\psi)_{\#}\nu\) maps to \(\varphi_{\#}\rho\), which can also be seen as statistics matching (per analogy with moment matching). It could also be seen as some form of non-adversarial GAN, where \(\varphi\) is some critic, and \(\psi\) wants to make sure that the evaluation of the critic averages to zero. ``` Data: Representation \(\varphi:\mathcal{X}\rightarrow\mathbb{R}^{m}\) (2), initial distribution \(\nu\in\Delta_{\mathcal{Z}}\) for any sample space \(\mathcal{Z}\). Result: Samples \((x_{i})\) from the plausible underlying distribution \(\rho\). Fit a model \(\psi_{\theta}:\mathcal{Z}\rightarrow\mathcal{X}\) by minimizing the objective \(\mathbb{E}_{Z_{1},Z_{2}\sim\nu}[\langle\varphi(\psi_{\theta}(Z_{1})),\varphi(\psi_{\theta}(Z_{2}))\rangle]\); Generate samples \((z_{i})\) from \(\nu\) and set \(x_{i}=\psi_{\theta}(z_{i})\). ``` **Algorithm 1**Learning pushforward sampler by implicit function matching
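A minimal PyTorch sketch of Algorithm 1, assuming a fixed representation `phi` and a trainable network `psi`; pairing two independent batches gives an unbiased estimate of the objective in (10). All names are illustrative, not from the paper's codebase.

```python
import torch

def fit_pushforward(phi, psi, sample_nu, n_iters=1000, batch=256, lr=1e-3):
    """Fit psi by stochastic gradient descent on an unbiased estimate of
    E_{Z1,Z2 ~ nu}[<phi(psi(Z1)), phi(psi(Z2))>] from (10)."""
    opt = torch.optim.Adam(psi.parameters(), lr=lr)
    for _ in range(n_iters):
        z1, z2 = sample_nu(batch), sample_nu(batch)   # two independent batches
        loss = (phi(psi(z1)) * phi(psi(z2))).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return psi

# One-step sampling afterwards: x = psi(sample_nu(n)) gives n fresh samples.
```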
## 3 Principles to build spectral embeddings This section gathers different ideas from existing literature that can be used to estimate the spectral decomposition of an operator, or at least build representations as in Theorem 1 for \((f_{i})\) the eigenfunctions of \(\mathcal{L}\) and \((c_{i})\) defined from the corresponding eigenvalues. It focuses on operators \(\mathcal{L}:L^{2}(\rho)\to L^{2}(\rho)\) defining quadratic forms as expectations over the data, i.e. \[\mathcal{E}_{\mathcal{L}}(f,g)=\langle f,\mathcal{L}g\rangle=\mathbb{E}_{X}[F(f,g,X)]\] for some evaluable base operation \(F:L^{2}(\rho)\times L^{2}(\rho)\times\mathcal{X}\rightarrow\mathbb{R}\) that is supposed to be linear in both \(f\) and \(g\). This is notably the form of the VICReg operator (6) and the Laplacian of (3), where \(F\) is defined as \(F(f,g,X)=\langle\nabla f(X),\nabla g(X)\rangle\). For the sake of exposition clarity, some technical details are relegated to Appendix B. ### Empirical variational objectives Based on the Courant-Fischer min-max principle, one could search for the first \(m\) left and right singular functions of \(\mathcal{L}\) by optimizing the quantity, for \(\varphi^{\text{left}},\varphi^{\text{right}}:\mathcal{X}\rightarrow\mathbb{R}^{m}\) with coordinates \(\varphi_{i}\), \[\arg\max_{\varphi^{\text{left}}}\min_{\varphi^{\text{right}}}\mathcal{E}_{\mathcal{L},m}(\varphi^{\text{left}},\varphi^{\text{right}}):=\sum_{i\in[m]}\mathcal{E}_{\mathcal{L}}(\varphi_{i}^{\text{left}},\varphi_{i}^{\text{right}})=\mathbb{E}_{X\sim\rho}\Big{[}\sum_{i\in[m]}F(\varphi_{i}^{\text{left}},\varphi_{i}^{\text{right}},X)\Big{]},\] together with the constraint that both the \((\varphi_{i}^{\text{left}})\) and \((\varphi_{i}^{\text{right}})\) should be orthonormal families in \(L^{2}(\rho)\). For simplicity, we will focus on the symmetric case where \(F(f,g,\cdot)=F(g,f,\cdot)\), and more particularly on the case where \(\mathcal{L}\) is the Laplacian of (3). When \(F\) is not symmetric, the principles would work similarly, yet one would have to replace "eigen" with "singular", and search for both the left and the right singular functions simultaneously. In the symmetric case, one could search for the eigenfunctions of \(\mathcal{L}\) with \(\varphi:\mathcal{X}\to\mathbb{R}^{m}\) by minimizing \[\arg\min_{\varphi}\mathcal{E}_{\mathcal{L},m}(\varphi):=\sum_{i\in[m]}\left\langle\varphi_{i},\mathcal{L}\varphi_{i}\right\rangle_{L^{2}(\rho)}=\mathbb{E}_{X\sim\rho}\,[F(\varphi,\varphi,X)]=\mathbb{E}_{X\sim\rho}[\|D\varphi(X)\|^{2}], \tag{11}\] with the constraint that the \((\varphi_{i})\) should be orthogonal to one another in \(L^{2}(\rho)\), where \(D\) denotes the Jacobian, the norm being the Frobenius norm on matrices. In machine learning settings, one does not access the distribution \(\rho\) but \(n\) samples \((x_{i})_{i\in[n]}\) thought of as \(n\) independent realizations of the random variable \(X\sim\rho\), hence approximation rules have to be found to estimate the objective (11). The case of graph-Laplacians.Graph-Laplacians are the classical way to estimate Laplacian-related objectives in machine learning (Zhu et al., 2003; Belkin and Niyogi, 2003). Although there exist many variants, those methods mainly consist in approximating the Laplacian operator \(\mathcal{L}\) with finite differences. For \(\varphi:\mathcal{X}\to\mathbb{R}\), \[\mathbb{E}[\|\nabla\varphi(X)\|^{2}]\approx\sum_{i,j\in[n]}w_{ij}(\varphi(x_{i})-\varphi(x_{j}))^{2}, \tag{12}\] where the \(w_{ij}\) are a set of weights, usually taken as \((w_{ij})=D^{-1/2}\tilde{W}D^{-1/2}\), where \(\tilde{w}_{ij}\) is given by a Gaussian kernel \(\tilde{w}_{ij}\propto\exp(-\alpha\|x_{i}-x_{j}\|^{2})\) with \(\alpha\) a scale parameter, and \(D=\operatorname{diag}(\sum_{j}\tilde{w}_{ij})\) is the degree matrix (associated with the weighted graph built on the \(n\) points with weights \(\tilde{w}_{ij}\)). The approximation of the Laplacian operator with finite differences could be justified when \(\varphi\) is thought of as a non-parametric model.
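As a concrete reference, here is a minimal NumPy sketch of the finite-difference estimate (12) with the Gaussian weights just described; the function name is ours and the construction is the plain-vanilla variant, not a specific library routine.

```python
import numpy as np

def graph_laplacian_energy(f_vals, xs, alpha=1.0):
    """Finite-difference estimate (12) of the Dirichlet energy.

    f_vals: array (n,) of function values f(x_i).
    xs: array (n, d) of data points.
    """
    sq_dists = ((xs[:, None, :] - xs[None, :, :]) ** 2).sum(-1)
    w_tilde = np.exp(-alpha * sq_dists)            # Gaussian affinities W~
    deg = w_tilde.sum(axis=1)                      # node degrees
    w = w_tilde / np.sqrt(np.outer(deg, deg))      # D^{-1/2} W~ D^{-1/2}
    diffs = f_vals[:, None] - f_vals[None, :]
    return float((w * diffs ** 2).sum())
```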
However, this method is known to suffer from the curse of dimensionality (Bengio et al., 2006; Hein et al., 2007; Singer, 2006). Generic sub-sampling.When \(\varphi\) is actually parameterized as \(\varphi_{\theta}\), one might simply replace the expectation by the empirical expectation, and approximate \(\mathcal{E}_{\mathcal{L}}\) through the sub-sampling scheme \[\mathbb{E}[F(\varphi_{\theta},\varphi_{\theta},X)]\approx n^{-1}\sum_{i\in[n]}F(\varphi_{\theta},\varphi_{\theta},x_{i})=n^{-1}\sum_{i\in[n]}\|D\varphi_{\theta}(x_{i})\|^{2}\,, \tag{13}\] where the Jacobian has to be understood with respect to the input variable \(x\in\mathcal{X}\). By replacing the finite-differences approximation in (12) by the empirical summation of (13), one can actually get stronger guarantees on the statistical convergence of the estimated eigenfunctions towards the real ones (cf. Pillaud-Vivien and Bach, 2023, for the Laplacian case). Low-rank approximation.The \(m\) gradients \(\nabla(\varphi_{\theta})_{i}\) of the Jacobian matrix in the objective (13) are functions from \(\mathcal{X}\) to the tangent spaces of \(\mathcal{X}\), which can be very high-dimensional, and hence inconvenient to encode on a computer without low-rank approximation techniques. Those low-rank approximation techniques could be projections of the Jacobian onto some directions, e.g. leveraging the formula that for \(u\) a uniformly random direction on the sphere \(\mathcal{S}^{d-1}\) and any \(x\) in \(\mathcal{X}=\mathbb{R}^{d}\), \[\|D\varphi_{\theta}(x)\|^{2}=c^{-1}\,\mathbb{E}_{u\sim\mathcal{U}}[\|D\varphi_{\theta}(x)u\|^{2}],\qquad c=\mathbb{E}[u_{1}^{2}]=1/d,\] or by selecting certain particular directions such as the ones defined by \(\xi^{\prime}-\xi\) for \(\xi\) and \(\xi^{\prime}\) two random augmentations of \(x\), which would increase the constant \(c\) in the latter objective, and accelerate convergence of stochastic gradient descent (SGD) on this objective (recall how SGD speed is determined by the standard deviation of the gradients, which depends linearly on \(c^{-1}\) in this case, cf. Bubeck (2015)). For a linear parameterization of each coordinate of \(\varphi\) under the form \(\varphi_{\theta}(x)=\langle\theta,\kappa(x)\rangle\) for some Hilbert space \(\mathcal{H}\) and a feature map \(\kappa:\mathcal{X}\to\mathcal{H}\), one could consider the reparameterization of \(\theta\) with \(a\in\mathbb{R}^{p}\) through \(\theta=\sum_{i\in[p]}a_{i}\kappa(y_{i})\) for some predefined representers \((y_{i})\in\mathcal{X}^{p}\) and a small \(p\). In molecular dynamics, this is similar to the Galerkin method, while in machine learning it resembles Nystrom subsampling. One could furthermore refer to the Rayleigh-Ritz method in numerical analysis. The approximation (13) becomes, with \(k(x,y)=\kappa(x)^{\top}\kappa(y)\), \[\langle\varphi_{\theta},\mathcal{L}\varphi_{\theta}\rangle\approx a^{\top}La,\qquad L=(n^{-1}\sum_{l\in[n]}F(k(\cdot,y_{i}),k(\cdot,y_{j}),x_{l}))_{ij}\in\mathbb{R}^{p\times p}. \tag{14}\] This will lead to a small eigenvalue problem in \(\mathbb{R}^{p}\), although building \(L\) might require \(O(np^{2}c)\) operations, where \(c\) is the cost of computing the \(F(k(\cdot,y_{i}),k(\cdot,y_{j}),x_{l})\). This cost is typically \(c=d\) in the Laplacian case (Cabannes et al., 2021), which could be quite prohibitive for large-scale applications.
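To illustrate the construction of \(L\) in (14) and the \(O(np^{2}d)\) cost it refers to, here is a naive sketch for the Gaussian kernel, using \(\nabla_{x}k(x,y)=-2\alpha(x-y)k(x,y)\); this is only the baseline that Algorithm 2 below is designed to speed up, and the function name is ours.

```python
import numpy as np

def galerkin_laplacian(xs, ys, alpha=1.0):
    """Naive O(n p^2 d) construction of L in (14) for the Gaussian kernel
    k(x, y) = exp(-alpha * ||x - y||^2), where
    F(k(., y_i), k(., y_j), x) = <grad_x k(x, y_i), grad_x k(x, y_j)>."""
    n = len(xs)
    p = len(ys)
    L = np.zeros((p, p))
    for x in xs:
        d = x[None, :] - ys                        # (p, d) differences x - y_i
        k = np.exp(-alpha * (d ** 2).sum(-1))      # (p,) kernel values
        g = -2.0 * alpha * d * k[:, None]          # (p, d) gradients in x
        L += g @ g.T                               # accumulate F over samples
    return L / n
```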
A main algorithmic contribution of this paper is to reduce it to \(c=1\) for dot-product and translation-invariant kernels, as detailed by Algorithms 2 and 4 and proven in Appendix C. This seemingly innocuous improvement allows matching the sample complexity of Nystrom kernel ridge regression, meaning that practitioners can now use Laplacian regularization while only multiplying the total number of floating point operations of their methods by a small constant factor (Meanti et al., 2020). ### Orthogonality constraints As mentioned earlier, the minimization of the variational objective (11) demands enforcing orthogonality constraints on the recovered eigenfunctions. Many methods could be proposed, such as naive iterative methods based on the Courant-Fischer min-max principle; or using unitary-invariant objectives together with a Cholesky decomposition at each gradient update with bilevel optimization (Pfau et al., 2019). This section proposes two efficient methods: an advanced one for kernel methods, and a simple one for neural network architectures. Generalized eigenvalue decomposition.Classically, eigenfunctions are searched as parametric functions \(\varphi_{\theta}:x\mapsto\theta^{\top}\kappa(x)\) through a mapping \(\kappa:\mathcal{X}\rightarrow\mathcal{H}\) for \(\mathcal{H}\) a separable Hilbert space, and a parameter \(\theta\in\mathcal{H}\) (Ham et al., 2004). Those "linear" models parameterize rich spaces of functions known as reproducing kernel Hilbert spaces. The full geometry of those spaces being captured by the kernel \(k(x,y)=\kappa(x)^{\top}\kappa(y)\), those functions can actually be described through the sole use of a (reproducing) kernel \(k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\) (Scholkopf and Smola, 2001). Many \(k\), such as the Gaussian kernel \(k(x,y)=\exp(-\left\|x-y\right\|^{2})\) on \(\mathcal{X}=\mathbb{R}^{d}\), are known to possess universal approximation properties, i.e. to enable the learning of any function (Micchelli et al., 2006). Figure 1: Learning spherical harmonics with polynomials of degree three (with \(k(x,y)=(1+x^{\top}y)^{3}\), which corresponds to \(\kappa(x)\) concatenating all the multivariate monomials of degree smaller or equal to \(D=3\)). Like many diffusion operators, when \(\rho\) is uniform on the sphere, the operator \(\mathcal{L}\) is diagonalized by polynomials of increasing degrees (Bakry et al., 2021). The polynomial kernel of degree \(D\) allows learning all the spherical harmonics of \(s\)-th kind for \(s\) smaller or equal to \(D\) (the ones of higher kind are polynomials of higher degree that cannot be reconstructed with polynomials of degree \(D\), as illustrated with the fourth kind on the Figure). Some of the learned eigenfunctions are represented on the top row, while some ground truths are represented on the bottom row. Our method learned perfectly valid harmonics, although, for eigenvalues that are repeated, it does not learn the canonical ones, but any basis of the different eigenspaces (which can be observed with the harmonics of the second kind on the Figure). It is useful to think with the mapping \(S:\mathcal{H}\to L^{2}(\rho);\theta\mapsto\varphi_{\theta}\) (Caponnetto and De Vito, 2006). In particular, the search for the spectral decomposition solving the system \(\mathcal{L}\varphi=\lambda\varphi\) can be carried out through \(\mathcal{H}\) as \(\mathcal{L}S\theta=\lambda S\theta\). Moreover, projecting this latter system into \(\mathcal{H}\) leads to \(S^{*}\mathcal{L}S\theta=\lambda S^{*}S\theta\).
Two central operators on \(\mathcal{H}\) appear there, which, as explained in Appendix B, are simply defined as \[\Sigma_{\mathcal{H}}=S^{*}S=\mathbb{E}_{X}[\kappa(X)\otimes\kappa(X)],\qquad L_{\mathcal{H}}=S^{*}\mathcal{L}S=\mathbb{E}_{X}\big{[}\sum_{i=1}^{d}\partial_{i}\kappa(X)\otimes\partial_{i}\kappa(X)\big{]}. \tag{15}\] The spectral decomposition of \(\mathcal{L}\) can be solved with the generalized spectral decomposition of \((L_{\mathcal{H}},\Sigma_{\mathcal{H}})\). In particular, under suitable assumptions, if \((\lambda_{i},\theta_{i})\) is the generalized eigendecomposition of \((L_{\mathcal{H}},\Sigma_{\mathcal{H}})\), then \((\lambda_{i},\varphi_{\theta_{i}})\) is the eigendecomposition of \(\mathcal{L}\). In particular, the orthogonality constraints on the \((\varphi_{\theta_{i}})\) are enforced by the fact that \[\big{\langle}\varphi_{\theta_{i}},\varphi_{\theta_{j}}\big{\rangle}=(S\theta_{i})^{\top}S\theta_{j}=\theta_{i}^{\top}\Sigma_{\mathcal{H}}\theta_{j}=\delta_{ij},\] where the last equality is due to properties of the generalized eigendecomposition (Golub and Loan, 2013). In practice, the eigenfunctions of \(\mathcal{L}\) might not belong to our parametric models, and one might need to regularize the system as \((L_{\mathcal{H}}+\varepsilon I,\Sigma_{\mathcal{H}})\) or \((L_{\mathcal{H}},\Sigma_{\mathcal{H}}+\varepsilon I)\) for a small regularizer \(\varepsilon>0\), to avoid solutions \(\theta_{i}\) diverging to infinity. For the Galerkin method described previously (14), the orthogonality constraints are cast with \(\Sigma=n^{-1}K^{\top}K\in\mathbb{R}^{p\times p}\) where \(K=(k(x_{i},y_{j}))\in\mathbb{R}^{n\times p}\), and the spectral decomposition of \(\mathcal{L}\) is approximated from the generalized eigenvalue decomposition (GEVD) of \((L,\Sigma)\). Interestingly, this property of the GEVD seems to have never been utilized for graph-Laplacians. ``` Data: Datapoints \((x_{i})\in\mathcal{X}^{n}\), a dot-product kernel \(k(x,y)=q(x^{\top}y)\), \(q:\mathbb{R}\to\mathbb{R}\). Result: Estimate \((\lambda_{i},f_{i})\) of the spectral decomposition of \(\mathcal{L}\). Compute \(X=(x_{i}^{\top}x_{j})\in\mathbb{R}^{p\times n}\); Compute \(\Sigma=q(X)q(X)^{\top}\in\mathbb{R}^{p\times p}\) where \(q(X)\) is understood elementwise; Compute \(L=(q^{\prime}(X)q^{\prime}(X)^{\top})\); Update \(L_{ij}\gets X_{ij}L_{ij}\) for all \(i,j\in[p]\); Solve the generalized eigenvalue problem \((\lambda_{i},(a_{ij})_{j\in[p]})_{i\in[p]}\leftarrow\mathrm{GEVD}(L,\Sigma)\); Set \(f_{i}(x):=\sum_{j\in[p]}a_{ij}k(x,x_{j})\). ``` **Algorithm 2**Fast Laplacian spectral embeddings algorithm for dot-product kernels
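A minimal NumPy/SciPy sketch of Algorithm 2 with the data points used as representers (\(p=n\)); the small ridge `eps` implements the regularization \((L,\Sigma+\varepsilon I)\) mentioned above, and the \(1/n\) factors only rescale the generalized eigenproblem. Function names are ours.

```python
import numpy as np
from scipy.linalg import eigh

def fast_dot_product_spectral(xs, q, q_prime, eps=1e-8):
    """Algorithm 2 for a dot-product kernel k(x, y) = q(x^T y).
    q, q_prime: vectorized callables for q and its derivative q'.
    Returns (eigenvalues, coefficients a) with f_i(x) = sum_j a[j, i] k(x, x_j)."""
    n = len(xs)
    X = xs @ xs.T                              # matrix of inner products x_i^T x_j
    Q = q(X)
    Sigma = Q @ Q.T / n                        # Sigma = n^{-1} K^T K with K = q(X)
    L = (q_prime(X) @ q_prime(X).T) * X / n    # L_ij <- X_ij * (q'(X) q'(X)^T)_ij
    lam, a = eigh(L, Sigma + eps * np.eye(n))  # generalized eigenvalue problem
    return lam, a
```

For instance, the polynomial kernel of Figure 1 corresponds to `q = lambda t: (1 + t) ** 3` and `q_prime = lambda t: 3 * (1 + t) ** 2`.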
Plain orthogonality constraints.In the era of deep learning, fancy linear algebra considerations such as generalized spectral decompositions can be replaced by simple losses. In particular, when minimizing (11) with \(\varphi:\mathcal{X}\to\mathbb{R}^{m}\) parameterizing \(m\) eigenfunctions, orthogonality constraints could be enforced through the regularizer \[\mathcal{R}(\varphi)=\|\mathbb{E}_{X\sim\rho}[\varphi(X)\varphi(X)^{\top}]-I\|^{2}. \tag{16}\] This regularizer relates to the Variance-Covariance regularizer (VCReg) in the realm of self-supervised learning (Bardes et al., 2022). In terms of spectral decomposition retrieval, for any \(\beta>0\), Lemma 2 of Cabannes et al. (2023b) states that the consequent energy minimization satisfies \[\operatorname*{arg\,min}_{\varphi:\mathcal{X}\to\mathbb{R}^{m}}2\beta\sum_{i\in[m]}\left\langle\varphi_{i},\mathcal{L}\varphi_{i}\right\rangle+\mathcal{R}(\varphi)=\left\{U(\sqrt{(1-\beta\lambda_{i})_{+}}f_{i})_{i\in[m]}\,\middle|\,U\in\mathbb{R}^{m\times m};UU^{\top}=I\right\}, \tag{17}\] where \((\lambda_{i},f_{i})\) is the eigenvalue decomposition of \(\mathcal{L}\), with \(\lambda_{i}\) in increasing order. In essence, the parameter \(\beta\) acts as a threshold parameter on the eigenvalues of \(\mathcal{L}\), forbidding the retrieval of eigenfunctions associated with an eigenvalue that is bigger than \(\beta^{-1}\). While the spectral decomposition of \(\mathcal{L}\) is only retrieved up to a rotation matrix \(U\) and a function \(\lambda\mapsto\sqrt{(1-\beta\lambda)_{+}}\), the solution could be rectified with PCA on the embedding space \(U\varphi^{\star}(\mathcal{X})\) endowed with the push-forward distribution \((U\varphi^{\star})_{\#}\rho\). However, for both representation learning and the pushforward sampler algorithm, the decomposition is only needed up to such a rotation, so this indeterminacy is not an issue in practice.
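A minimal PyTorch sketch of the regularized objective of (17), combining the empirical energy (11)-(13) with the regularizer (16); `jac_sq_norms` is assumed to be computed separately (e.g. with `torch.func.jacrev` or the low-rank projections above), and the function name is ours.

```python
import torch

def regularized_spectral_loss(phi_batch, jac_sq_norms, beta=1.0):
    """2*beta*Energy + R(phi): empirical version of the objective in (17).
    phi_batch: (b, m) values of phi on a minibatch of X ~ rho.
    jac_sq_norms: (b,) values of ||D phi(X)||^2, the integrand of (11)."""
    b, m = phi_batch.shape
    energy = jac_sq_norms.mean()             # estimate of sum_i <phi_i, L phi_i>
    cov = phi_batch.T @ phi_batch / b        # estimate of E[phi(X) phi(X)^T]
    reg = ((cov - torch.eye(m)) ** 2).sum()  # ||E[phi phi^T] - I||_F^2 as in (16)
    return 2.0 * beta * energy + reg
```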
## 4 Experiments Representation learning.First of all, one can check that our techniques do learn eigenfunctions of the Laplacian operator \(\mathcal{L}\) (3) in settings where the ground truth is known. To this end, let us consider the sphere in \(\mathbb{R}^{3}\) with the uniform distribution, where the operator \(\mathcal{L}\) (3) identifies with the square of the orbital angular momentum (Condon and Shortley, 1935), whose eigenfunctions are known to be the spherical harmonics. Figure 1 illustrates how our Galerkin approach enables the learning of spherical harmonics. Moreover, the different methods could be compared by looking at the error made when estimating the corresponding eigenvalues, which is reported on Figure 2. Sampling.Once again to check for soundness of our methods, one can consider settings where many quantities can be derived in closed form. For example, when \(\mathcal{X}\) is one-dimensional and \(\nu\) is taken as the uniform distribution on \([0,1]\), an optimal \(\psi\) in (10) is provided by the quantile function, in which case the consequent sampling procedure identifies with inverse transform sampling. Reciprocally, when \(\rho\) is taken as the uniform distribution on \([0,1]\), then an optimal \(\psi\) is provided by the cumulative distribution function. As a proof of concept, Figure 2 illustrates how, when choosing \(\nu=\mathcal{N}(0,1)\) and \(\rho\) the uniform distribution on \([0,1]\), one can learn the error function. In this latter setting, \(\mathcal{L}\) assimilates to \(-\Delta\) and its spectral decomposition is \(f_{j}=\cos(2\pi(j-1)x)\), \(\lambda_{j}=4\pi^{2}(j-1)^{2}\). Additional experiments and details are provided in Appendix D. Figure 2: (Left) Testing error when learning the first 25 “spherical harmonics” eigenvalues with the exponential kernel (blue), the Gaussian kernel (orange), the graph-Laplacian (green), and the exponential kernel with 100 “Galerkin” representers (red). We notice the superiority of subsampling over the graph-Laplacian (green vs all), the usefulness of taking a good kernel (orange vs blue), and the efficiency of the Galerkin method (red vs blue). (Right) Learning the error function as the optimal mapping \(\psi\) to map the Gaussian distribution \(\mathcal{N}(0,1)\) to the uniform distribution on \([0,1]\) with the objective (10) for \(\varphi\) defined by the first ten eigenfunctions of \(\mathcal{L}\) in (3), i.e. \(f_{j}(x)=\cos(2\pi(j-1)x)\) with \(j\in[10]\) and \(c_{j}=1\). The ground truth, which relates to the error function, is plotted in dashed orange, while the learned CDF is plotted in blue. Experimental setups and reproducibility specifications are detailed in Appendix D. ## 5 Conclusion This paper's first aim was to convince the reader of the usefulness of spectral embeddings, notably the ones linked to the Laplacian operator \(\mathcal{L}\) in (3). With this perspective in mind, it introduces a new algorithm, the "pushforward sampler", which has the advantage of providing fast sampling at inference time (sample \(Z\sim\nu\) and apply \(\psi\)). This algorithm provides new avenues to link what is learned about the underlying structure of the data in generative AI and in representation learning. We have then turned our attention to the estimation of those embeddings while only being given samples \((X_{i})\sim\rho^{\otimes n}\) rather than the full distribution \(\rho\). We distinguish two techniques. * Kernel-method techniques that offer strong statistical guarantees, extending recent algorithms (Pillaud-Vivien and Bach, 2023) and improving them as per Algorithms 2 and 4. * Deep learning techniques that were proven successful in representation learning, extrapolating the underlying structure behind VICReg (Bardes et al., 2022) as per Algorithm 3. Limitations.Although the losses to estimate spectral embeddings in this paper were designed to be convex on the cone of positive matrices \(\varphi(X)\varphi(X)^{\top}\), when models are not convex, training dynamics might exhibit robustness and stability issues, and proper hyperparameter tuning might be required to induce behaviors of interest in neural networks. Those deep learning engineering considerations were not addressed in this paper. Moreover, this paper heavily relies on linear structures in abstract spaces (operators on Hilbert spaces). While those structures are both powerful and easy to work with, it is not clear if they really capture what makes things work in practice. Finally, empirical validations on large datasets with high-dimensional inputs are missing, and it is not clear yet if the "pushforward sampler" could make a name for itself in the generative AI landscape. Indeed, it might find applications elsewhere, for example to define better initial distributions for the Metropolis-adjusted Langevin algorithm. In the meantime, spectral embeddings could yield good reaction coordinates (Wang et al., 2019), which could be used to modify Langevin dynamics to lower its mixing time (Comer et al., 2014; Dalalyan and Riou-Durand, 2018). Future work.The introduction of the pushforward sampler raises many questions, both theoretical and empirical. In practice, we expect it to be useful to stack several features into \(\varphi\): e.g. on images, some features could have been learned with self-supervised learning, while others could be engineered to enforce super-resolution. In theory, we would like to deepen the duality where, rather than transporting particles, one learns a transport map directly. In particular, while gradient flow with a specific objective can be seen as following Langevin dynamics on measures, one may unveil links between other transports and gradient flows over objectives that might evolve over time, e.g.
\(\mathcal{L}=\mathcal{L}_{t}\) could be the Laplacian linked to some mixed distribution \(t\rho+(1-t)\mathcal{N}(0,I)\) as in diffusion models (Song et al., 2021). More generally, the pushforward sampler presented in this paper might be one of many pushforward learning algorithms by function matching, and in practice it might be more interesting to learn \(\psi\) by matching characteristic functions with \[\mathbb{E}_{Z_{1},Z_{2}\sim\nu}\left[\int_{\mathbb{R}}(e^{it\psi(Z_{1})}-\mathbb{E}_{X\sim\rho}[e^{itX}])(e^{it\psi(Z_{2})}-\mathbb{E}_{X\sim\rho}[e^{itX}])\mu(\mathrm{d}t)\right],\] for some measure \(\mu\) on \(\mathbb{R}\), which might lead to better-conditioned gradients when performing backpropagation, since the characteristic function only takes values that are bounded by one in modulus. Indeed, as explained in Appendix D, the objective used for Figure 2 can be seen as minimizing the real part of this objective for \(\mu\) the counting measure on \([10]/2\). Reproducibility.The code to reproduce the experiments is available on the author's GitHub at [https://github.com/VivienCabannes/](https://github.com/VivienCabannes/); the kernel Laplacian repository is also available on PyPI as the "klap" package ($ pip install klap). Acknowledgement.The author would like to thank Loucas Pillaud-Vivien, Alberto Bietti, Carles Domingo-Enrich, Aram-Alexandre Pooladian, Valentin de Bortoli, Ricky Chen, Yann Lecun and Leon Bottou for useful inputs.
2307.07304
Breaking the $3/4$ Barrier for Approximate Maximin Share
We study the fundamental problem of fairly allocating a set of indivisible goods among $n$ agents with additive valuations using the desirable fairness notion of maximin share (MMS). MMS is the most popular share-based notion, in which an agent finds an allocation fair to her if she receives goods worth at least her MMS value. An allocation is called MMS if all agents receive at least their MMS value. Since MMS allocations need not exist when $n>2$, a series of works showed the existence of approximate MMS allocations with the current best factor of $\frac34 + O(\frac{1}{n})$. However, a simple example in [DFL82, BEF21, AGST23] showed the limitations of existing approaches and proved that they cannot improve this factor to $3/4 + \Omega(1)$. In this paper, we bypass these barriers to show the existence of $(\frac{3}{4} + \frac{3}{3836})$-MMS allocations by developing new reduction rules and analysis techniques.
Hannaneh Akrami, Jugal Garg
2023-07-14T12:29:31Z
http://arxiv.org/abs/2307.07304v2
# Breaking the \(3/4\) Barrier for Approximate Maximin Share ###### Abstract We study the fundamental problem of fairly allocating a set of indivisible goods among \(n\) agents with additive valuations using the desirable fairness notion of maximin share (MMS). MMS is the most popular share-based notion, in which an agent finds an allocation fair to her if she receives goods worth at least her MMS value. An allocation is called MMS if all agents receive at least their MMS value. Since MMS allocations need not exist when \(n>2\), a series of works showed the existence of approximate MMS allocations with the current best factor of \(\frac{3}{4}+O(\frac{1}{n})\). However, a simple example in [1, 2, 1] showed the limitations of existing approaches and proved that they cannot improve this factor to \(3/4+\Omega(1)\). In this paper, we bypass these barriers to show the existence of \((\frac{3}{4}+\frac{3}{3836})\)-MMS allocations by developing new reduction rules and analysis techniques. ## 1 Introduction Fair allocation of resources (goods) is a fundamental problem in the intersection of computer science, economics, and social choice theory. This age-old problem arises naturally in a wide range of real-life settings and was formally introduced in the seminal work of Steinhaus in the 1940s [10]. Depending on what properties the goods have and what notion of fairness is considered, one can address a wide range of problems. Extensive work has been done for the case of _divisible_ goods, where goods can be fractionally allocated, e.g., [12, 13, 14, 15]. More recently, fair division of indivisible goods has received significant attention due to its applications in various multi-agent settings. Formally, an instance of fair division of indivisible goods consists of a set \(N=\{1,2,\ldots,n\}\) of agents, a set \(M\) of \(m\) indivisible goods, and a valuation vector \(\mathcal{V}=(v_{1},\ldots,v_{n})\) where \(v_{i}:2^{M}\to\mathbb{R}_{\geq 0}\) is the valuation function of agent \(i\). The goal is to find an allocation \(A=\langle A_{1},A_{2},\ldots,A_{n}\rangle\), in which agent \(i\) gets \(A_{i}\), and \(A\) satisfies some fairness criteria. Two main categories of fairness are envy-based notions and share-based notions. Roughly speaking, in envy-based notions, an agent finds an allocation fair by comparing her bundle with other agents' bundles. Under allocation \(A\), if certain conditions are met for all agents (e.g., \(v_{i}(A_{i})\geq v_{i}(A_{j})\) for all \(i,j\in N\) in the case of envy-freeness), then \(A\) is fair. Popular examples of envy-based notions are envy-freeness (EF) and its relaxations envy-freeness up to any good (EFX) [13], and envy-freeness up to one good (EF1) [12]. In share-based notions, an agent finds an allocation fair only through the value she obtains from her bundle (irrespective of what others receive). For each agent \(i\), if the value \(i\) receives is at least some threshold \(t_{i}\), then the allocation is said to be fair. An example of a share-based notion is proportionality. An allocation \(A\) is proportional if all agents receive their proportional share, i.e., \(v_{i}(A_{i})\geq v_{i}(M)/n\) for all agents \(i\in N\). It is easy to see that proportionality is too strong to be satisfied in the discrete setting.* This necessitates studying relaxed fairness notions when goods are indivisible. Footnote *: As a counter-example, consider two agents and one good with a positive utility to both of the agents.
Note that no matter how we allocate this good, one agent receives \(0\) utility, which rules out the existence of proportional allocations and any approximation of proportionality. In this paper, we consider a natural relaxation of proportionality called _maximin share (MMS)_, introduced by Budish [1]. It is also preferred by participating agents over other notions, as shown in real-life experiments by [1]. The maximin share of an agent is the maximum value she can guarantee to obtain if she divides the goods into \(n\) bundles (one for each agent) and receives a bundle with the minimum value. Basically, for an agent \(i\), assuming that all agents have \(i\)'s valuation function, the maximum value one can guarantee for all the agents is \(i\)'s maximin share, denoted by \(\mathrm{MMS}_{i}\). Formally, for a set \(S\) of goods and any positive integer \(d\), let \(\Pi_{d}(S)\) denote the set of all partitions of \(S\) into \(d\) bundles. Then, \[\mathrm{MMS}_{i}^{d}(S):=\max_{P\in\Pi_{d}(S)}\min_{j=1}^{d}v_{i}(P_{j}).\] For all agents \(i\), \(\mathrm{MMS}_{i}=\mathrm{MMS}_{i}^{n}(M)\). An allocation is MMS if all agents value their bundles at least as much as their MMS values. Formally, allocation \(A\) is MMS if \(v_{i}(A_{i})\geq\mathrm{MMS}_{i}\) for all agents \(i\in N\). Since MMS allocations do not always exist when there are three or more agents with additive valuations [12, 13], the focus shifted to studying approximations of MMS. An allocation \(A\) is \(\alpha\)-MMS if \(v_{i}(A_{i})\geq\alpha\cdot\mathrm{MMS}_{i}\) for all agents \(i\in N\). We note that the MMS notion is closely related to the popular max-min objective or the classic Santa Claus problem \((\max_{A}\min_{i}v_{i}(A_{i}))\) [1]. Unlike the max-min objective, the (\(\alpha\)-)MMS objective satisfies the desirable scale-invariance property. In the case of agents with identical valuations, an exact MMS allocation exists, and in this case, finding an \(\alpha\)-MMS allocation is equivalent to an \(\alpha\)-approximation of the Santa Claus problem. The best approximation factor known for the max-min objective under additive valuations is \(\tilde{O}(m^{\varepsilon})\) for any \(\varepsilon>0\) [1]. For the MMS problem, Procaccia and Wang [12] showed the existence of \(2/3\)-MMS allocations. Many follow-up works have improved the approximation factor [1, 1, 1, 1, 2, 3] with the current best result of \(\alpha=\frac{3}{4}+\min(\frac{1}{36},\frac{3}{16n-4})\) [1]. However, since the work of Ghodsi et al. [1], the best known constant approximation factor for MMS has remained \(3/4\) for large \(n\). In this work, we break this \(3/4\) wall by proving the existence of \((\frac{3}{4}+\frac{3}{3836})\)-MMS allocations. After Ghodsi et al. [1] proved the existence of \(3/4\)-MMS allocations and gave a PTAS to compute one, Garg and Taki [13] gave a simple algorithm with a complicated analysis, proving the existence of \((\frac{3}{4}+\frac{1}{12n})\)-MMS allocations and also computing a \(3/4\)-MMS allocation in polynomial time. Very recently, Akrami et al. [1] simplified the analysis of (a slight modification of) the Garg-Taki algorithm significantly and proved the existence of \((\frac{3}{4}+\min(\frac{1}{36},\frac{3}{16n-4}))\)-MMS allocations. However, a simple example in [1, 2, 1] shows that no constant factor better than \(3/4\) can be obtained for approximate MMS using the Garg-Taki algorithm. In Section 3, we discuss the barriers of known techniques in more detail and how our algorithm overcomes them.
The complementary problem is to find upper bounds on the largest \(\alpha\) for which \(\alpha\)-MMS allocations exist. Feige et al. [13] constructed an example with three agents and nine goods for which no allocation is better than \(39/40\)-MMS. For \(n\geq 4\), their construction gives an example for which no allocation is better than \((1-n^{-4})\)-MMS. Table 1 summarizes all these results. We note that most of these existence results can be easily converted into a PTAS for finding such an allocation using the PTAS for finding the MMS values [10]. ### Further related work **Special cases.** There has been a line of work on instances with a limited number of agents or goods. When \(m\leq n+3\), an MMS allocation always exists [1]. Feige et al. [13] improved this bound to \(m\leq n+5\). For \(n=2\), MMS allocations always exist [1]. For \(n=3\), the MMS approximation was improved from \(3/4\) [2] to \(7/8\) [1] to \(8/9\) [1], and then to \(11/12\) [14]. For \(n=4\), Ghodsi et al. [1] showed the existence of \(4/5\)-MMS allocations. For \(n\geq 5\), the best known factor is the general \((\frac{3}{4}+\min(\frac{1}{36},\frac{3}{16n-4}))\) bound given by Akrami et al. [1]. **Ordinal approximation.** An alternative way of relaxing MMS is guaranteeing the \(1\)-out-of-\(d\) maximin share for \(d>n\), which is the maximum value that an agent can ensure by partitioning the goods into \(d\) bundles and choosing the least preferred bundle. This notion only depends on the bundles' ordinal ranking and is not affected by a small perturbation in the value of every single good (as long as the ordinal ranking of the bundles does not change). A series of works studied this notion [1, 2], with the state-of-the-art being the existence of \(1\)-out-of-\(\lfloor\frac{3n}{2}\rfloor\) MMS allocations for goods [1]. **Chores.** MMS can be analogously defined for fair division of chores. MMS allocations do not always exist for chores [1], which motivated the study of approximate MMS [1, 2], with the current best approximation ratio being very recently improved from \(11/9\) [1] to \(13/11\) [1]. In the case of \(n=3\), \(19/18\)-MMS allocations exist [14]. MMS in the chores setting is closely related to the well-studied variants of bin-packing and job-scheduling problems. In particular, the recent paper [1] utilizes the Multifit algorithm for makespan minimization to obtain the best approximation factor. Therefore, many ideas which are already developed are proven to be useful when dealing with chores. On the other hand, when dealing with goods, the related variants of bin-packing and scheduling problems do not make much sense, as the objective would become to maximize the number/capacity of bins or to maximize the minimum processing time of a machine while allocating all the items. Therefore, new ideas specific to this problem are required. Furthermore, although the explicit study of MMS for goods started much before chores, the advancement in approximate MMS for chores has been faster. Also, the current best factor for chores \((13/11)\) is much better than the analogous factor for goods \((3/4+3/3836)\), despite the extensive work by many researchers on the goods problem. For ordinal approximation, the best-known factor for existence is \(1\)-out-of-\(\lfloor\frac{3n}{4}\rfloor\) MMS allocations for chores. \begin{table} \begin{tabular}{|c|l|l|} \hline & **Existence** & **Non-existence** \\ \hline \hline \(n=3\) & \(11/12\) [14] & \(>39/40\) [13] \\ \hline \(n=4\) & \(4/5\) [1, 1, 1] & \(>67/68\) [13] \\ \hline general \(n\) & \(2/3\) [2], \(\frac{3}{4}+\min(\frac{1}{36},\frac{3}{16n-4})\) [1] & \(>1-n^{-4}\) [13] \\ \hline \end{tabular} \end{table} Table 1: Summary of known existence and non-existence results for \(\alpha\)-MMS allocations.
The discrepancy carries on to the ordinal approximations of MMS. While the best known \(d\) for which \(1\)-out-of-\(d\) MMS allocations exist in the goods setting is \(\lfloor 3n/2\rfloor\) [1], the analogous factor for the chores setting is \(\lfloor 3n/4\rfloor\) [1]. **Other settings.** The MMS notion has also been studied when agents have more general valuations than additive, e.g., [1, 1, 14, 15]. Generalizations have also been studied where restrictions are imposed on the set of feasible allocations, such as matroid constraints [1], cardinality constraints [1], and graph connectivity constraints [1, 13]. Strategyproof versions of fair division have also been studied [1, 1, 1, 1, 2]. MMS has also inspired other notions of fairness, like weighted MMS [1, 1], AnyPrice Share (APS) [1], Groupwise MMS [1, 1, 1], \(1\)-out-of-\(d\) share [1], and self-maximizing shares [1]. MMS has also been studied in best-of-both-worlds settings, where both ex-ante and ex-post guarantees are sought [1]. ## 2 Preliminaries For all \(n\in\mathbb{N}\), let \([n]=\{1,2,\ldots,n\}\). A fair division instance \(\mathcal{I}=(N,M,\mathcal{V})\) consists of a set of agents \(N=[n]\), a set of goods \(M=[m]\) and a vector of valuation functions \(\mathcal{V}=(v_{1},v_{2},\ldots,v_{n})\) such that for all \(i\in[n]\), \(v_{i}:2^{M}\to\mathbb{R}_{\geq 0}\) indicates how much agent \(i\) likes each subset of the goods. In this paper, we assume the valuation functions are additive, i.e., for all \(i\in[n]\) and \(S\subseteq M\), \(v_{i}(S)=\sum_{g\in S}v_{i}(\{g\})\). For ease of notation, for all \(g\in M\), we use \(v_{i}(g)\) or \(v_{i,g}\) instead of \(v_{i}(\{g\})\). For a set \(S\) of goods and any positive integer \(d\), let \(\Pi_{d}(S)\) denote the set of all partitions of \(S\) into \(d\) bundles. Then for any valuation function \(v\), \[\mathrm{MMS}^{d}_{v}(S):=\max_{P\in\Pi_{d}(S)}\min_{j=1}^{d}v(P_{j}). \tag{1}\] When the instance \(\mathcal{I}=(N,M,\mathcal{V})\) is clear from the context, we denote \(\mathrm{MMS}^{n}_{v_{i}}\) by \(\mathrm{MMS}_{i}(\mathcal{I})\) or \(\mathrm{MMS}_{i}\) for all \(i\in[n]\). For each agent \(i\), let \(P^{i}=(P^{i}_{1},P^{i}_{2},\ldots,P^{i}_{n})\) be a partition of \(M\) into \(n\) bundles admitting the MMS value of agent \(i\). Formally, \(\mathrm{MMS}_{i}=\min_{j\in[n]}v_{i}(P^{i}_{j})\). We call such a partition an MMS partition of agent \(i\). An allocation \(X\) is MMS if for all agents \(i\in N\), \(v_{i}(X_{i})\geq\mathrm{MMS}_{i}\). Similarly, for any \(0<\alpha\leq 1\), an allocation \(X\) is \(\alpha\)-MMS if \(v_{i}(X_{i})\geq\alpha\cdot\mathrm{MMS}_{i}\) for all agents \(i\in N\). **Definition 1** (Ordered instance).: _An instance \(\mathcal{I}=(N,M,\mathcal{V})\) is ordered if there exists an ordering of the goods \((g_{1},g_{2},\ldots,g_{m})\) such that for all agents \(i\in N\), \(v_{i}(g_{1})\geq v_{i}(g_{2})\geq\ldots\geq v_{i}(g_{m})\)._ It is known that the hardest instances of MMS are the ordered instances [1]. We use the notation of [1].
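To make definition (1) concrete, here is a brute-force sketch of ours (exponential-time, only for tiny instances; recall that computing MMS values is NP-hard, although a PTAS exists):

```python
from itertools import product

def mms_value(values, n):
    """Brute-force MMS_v^n(S) from (1): enumerate all assignments of goods
    to n bundles and maximize the minimum bundle value.

    values: list of v(g) for each good g (additive valuation)."""
    best = 0.0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0.0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best
```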
**Definition 2** ([1]).: _For the fair division instance \(\mathcal{I}=([n],[m],\mathcal{V})\), \(\mathtt{order}(\mathcal{I})\) is defined as the instance \(([n],[m],\mathcal{V}^{\prime})\), where for each \(i\in[n]\) and \(j\in[m]\), \(v^{\prime}_{i}(j)\) is the \(j^{\text{th}}\) largest number in the multiset \(\{v_{i}(g)\mid g\in[m]\}\)._ The transformation \(\mathtt{order}\) is \(\alpha\)-_MMS-preserving_, i.e., for a fair division instance \(\mathcal{I}\), given an \(\alpha\)-MMS allocation of \(\mathtt{order}(\mathcal{I})\), one can compute an \(\alpha\)-MMS allocation of \(\mathcal{I}\) in polynomial time [1]. Given any ordered instance \(\mathcal{I}=([n],[m],\mathcal{V})\), without loss of generality, we assume \(v_{i}(1)\geq v_{i}(2)\geq\ldots\geq v_{i}(m)\) for all \(i\in[n]\). **Lemma 1** ([1]).: _Given an instance \(\mathcal{I}\) and an \(\alpha\)-MMS allocation of \(\mathtt{order}(\mathcal{I})\), one can compute an \(\alpha\)-MMS allocation of \(\mathcal{I}\) in polynomial time._ **Definition 3** (Normalized instance).: _An instance \(\mathcal{I}=(N,M,\mathcal{V})\) is normalized if for all \(i,j\in[n]\), \(v_{i}(P_{j}^{i})=1\)._ Note that since \(v_{i}\) is additive, if \(\mathcal{I}\) is normalized, then for all MMS partitions of \(i\) like \(Q=(Q_{1},\ldots,Q_{n})\) and for all \(j\in[n]\) we have \(v_{i}(Q_{j})=1\). [1] shows that given any instance \(\mathcal{I}=(N,M,\mathcal{V})\), one can compute a normalized instance \(\mathcal{I}^{\prime}=(N,M,\mathcal{V}^{\prime})\) such that any \(\alpha\)-MMS allocation for \(\mathcal{I}^{\prime}\) is an \(\alpha\)-MMS allocation for \(\mathcal{I}\). Their algorithm converting an instance to a normalized instance is shown in Algorithm 1. We note that since finding an agent's MMS value is NP-hard, this is not a polynomial-time algorithm, but a PTAS exists. **Lemma 2** ([1]).: _Let \(\mathcal{I}^{\prime}=(N,M,\mathcal{V}^{\prime})=\mathtt{normalize}(\mathcal{I}=(N,M,\mathcal{V}))\). Then for any allocation \(A\), \(v_{i}(A_{i})\geq v_{i}^{\prime}(A_{i})\cdot\mathrm{MMS}_{i}(\mathcal{I})\) for all \(i\in N\)._ ``` 1:for\(i\in N\)do 2: Compute agent \(i\)'s MMS partition \(P^{i}\). 3:\(\forall j\in N\), \(\forall g\in P_{j}^{i}\), let \(v_{i,g}^{\prime}\gets v_{i,g}/v_{i}(P_{j}^{i})\). 4:endfor 5:return\((N,M,\mathcal{V}^{\prime})\). ``` **Algorithm 1**\(\mathtt{normalize}(N,M,\mathcal{V})\) Lemma 2 implies that \(\mathtt{normalize}\) is \(\alpha\)-MMS-preserving, since if \(A\) is an \(\alpha\)-MMS allocation for the normalized instance \((N,M,\mathcal{V}^{\prime})\), then \(A\) is also an \(\alpha\)-MMS allocation for the original instance \((N,M,\mathcal{V})\). [1] gives some structural properties of ordered normalized instances, which we repeat here in Lemma 3. For completeness, we repeat its proof in Appendix A. **Lemma 3**.: _[_1_]_ _Let \(([n],[m],\mathcal{V})\) be an ordered and normalized fair division instance. For all \(k\in[n]\) and agent \(i\in[n]\), if \(v_{i}(k)+v_{i}(2n-k+1)>1\), then \(v_{i}(2n-k+1)\leq 1/3\) and \(v_{i}(k)>2/3\)._
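For illustration, the \(\mathtt{order}\) transformation of Definition 2 is a per-agent sort, and Algorithm 1 divides each value by the value of its bundle in an MMS partition. A minimal sketch, where `mms_partition` is a hypothetical oracle returning an (approximate) MMS partition of agent `i`:

```python
def order_instance(V):
    """order(I) from Definition 2: agent i's j-th value becomes her
    j-th largest value; V is a list of per-agent value lists."""
    return [sorted(v, reverse=True) for v in V]

def normalize_instance(V, mms_partition):
    """Algorithm 1: rescale v_{i,g} by v_i(P_j^i) for the bundle P_j^i
    containing g, so every bundle of agent i's MMS partition has value 1."""
    V_prime = []
    for i, v in enumerate(V):
        v_new = list(v)
        for bundle in mms_partition(i):      # bundles P_1^i, ..., P_n^i
            total = sum(v[g] for g in bundle)
            for g in bundle:
                v_new[g] = v[g] / total
        V_prime.append(v_new)
    return V_prime
```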
_Then \(R\) is a "valid \(\alpha\)-reduction" if_

1. \(v_{i}(S)\geq\alpha\cdot\mathit{MMS}_{v_{i}}^{|N|}(M)\)_, and_
2. _for all_ \(j\in N^{\prime}\)_,_ \(\mathit{MMS}_{v_{j}}^{|N|-1}(M^{\prime})\geq\mathit{MMS}_{v_{j}}^{|N|}(M)\)_._

_Furthermore, a reduction rule \(R\) is a "valid reduction for agent \(j\in N^{\prime}\)" if \(\mathit{MMS}_{v_{j}}^{|N|-1}(M^{\prime})\geq\mathit{MMS}_{v_{j}}^{|N|}(M)\), where \(N^{\prime}\) and \(M^{\prime}\) are the sets of remaining agents and remaining goods, respectively, after the reduction._

Note that if \(R\) is a valid \(\alpha\)-reduction and an \(\alpha\)-MMS allocation \(A\) exists for \(R(\mathcal{I})\), then an \(\alpha\)-MMS allocation exists for \(\mathcal{I}\). Such an allocation can be obtained by allocating \(S\) to \(i\) and allocating the rest of the goods as they are allocated under \(A\).

**Lemma 4**.: _Given an instance \(\mathcal{I}=(N,M,\mathcal{V})\), let \(S\subseteq M\) be such that \(v_{i}(S)\leq\text{MMS}_{i}\) and \(|S|\leq 2\). Then allocating \(S\) to an arbitrary agent \(j\neq i\) is a valid reduction for agent \(i\)._

Proof.: Let \(P=(P_{1},P_{2},\ldots,P_{n})\) be an MMS partition of \(M\) for agent \(i\). Let \(g_{1},g_{2}\in S\); in case \(|S|=1\), \(g_{1}=g_{2}\). Without loss of generality, we assume \(g_{1}\in P_{1}\). If \(g_{2}\in P_{1}\), then \((P_{2},\ldots,P_{n})\) is a partition of a subset of \(M\setminus S\) into \(n-1\) bundles with minimum value at least \(\text{MMS}_{v_{i}}^{n}(M)\). Therefore, \(\text{MMS}_{v_{i}}^{n-1}(M\setminus S)\geq\text{MMS}_{v_{i}}^{n}(M)\). In case \(g_{2}\notin P_{1}\), without loss of generality, let us assume \(g_{2}\in P_{2}\). Then \(v_{i}\bigl((P_{1}\cup P_{2})\setminus S\bigr)=v_{i}(P_{1})+v_{i}(P_{2})-v_{i}(S)\geq\text{MMS}_{v_{i}}^{n}(M)\). Therefore, \(\bigl((P_{1}\cup P_{2})\setminus S,P_{3},\ldots,P_{n}\bigr)\) is a partition of \(M\setminus S\) into \(n-1\) bundles with minimum value at least \(\text{MMS}_{v_{i}}^{n}(M)\). Hence also in this case, \(\text{MMS}_{v_{i}}^{n-1}(M\setminus S)\geq\text{MMS}_{v_{i}}^{n}(M)\). Thus, allocating \(S\) to an arbitrary agent \(j\neq i\) is a valid reduction for agent \(i\).

Now we define four reduction rules that we use in our algorithm.

**Definition 5**.: _For an ordered instance \(\mathcal{I}=(N,M,\mathcal{V})\) and \(\alpha>0\), reduction rules \(R_{1}^{\alpha}\), \(R_{2}^{\alpha}\), \(R_{3}^{\alpha}\) and \(R_{4}^{\alpha}\) are defined as follows._

* \(R_{1}^{\alpha}(\mathcal{I}):\) _If_ \(v_{i}(1)\geq\alpha\) _for some_ \(i\in N\)_, allocate_ \(\{1\}\) _to agent_ \(i\) _and remove_ \(i\) _from_ \(N\)_._
* \(R_{2}^{\alpha}(\mathcal{I}):\) _If_ \(v_{i}(\{2n-1,2n,2n+1\})\geq\alpha\) _for some_ \(i\in N\)_, allocate_ \(\{2n-1,2n,2n+1\}\) _to agent_ \(i\) _and remove_ \(i\) _from_ \(N\)_._
* \(R_{3}^{\alpha}(\mathcal{I}):\) _If_ \(v_{i}(\{3n-2,3n-1,3n,3n+1\})\geq\alpha\) _for some_ \(i\in N\)_, allocate_ \(\{3n-2,3n-1,3n,3n+1\}\) _to agent_ \(i\) _and remove_ \(i\) _from_ \(N\)_._
* \(R_{4}^{\alpha}(\mathcal{I}):\) _If_ \(v_{i}(\{1,2n+1\})\geq\alpha\) _for some_ \(i\in N\)_, allocate_ \(\{1,2n+1\}\) _to agent_ \(i\) _and remove_ \(i\) _from_ \(N\)_._

We note that \(R_{1}^{\alpha}\), \(R_{2}^{\alpha}\), and \(R_{4}^{\alpha}\), in addition to one more rule that allocates \(\{n,n+1\}\) to an agent, are used in [1, 1]. Our algorithm does not use the rule of allocating \(\{n,n+1\}\). Moreover, \(R_{3}^{\alpha}\) (allocating \(\{3n-2,3n-1,3n,3n+1\}\)) is used in our work and not elsewhere.
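To make Definition 5 concrete, the following Python sketch (ours; the function name and data layout are assumptions, not part of the paper) checks the rules on an ordered instance, trying \(R_{1},\ldots,R_{4}\) in that order across all agents. It assumes \(m\geq 3n+1\) so that every index below exists; a real implementation would guard for shorter instances.

```python
def first_applicable_rule(V, alpha):
    """Return (rule, agent, goods) for the first applicable rule, else None.

    V[i][j] is agent i's value for the (j+1)-th most valuable good in an
    ordered instance with n = len(V) agents, so good g in the text is
    index g - 1 here. Assumes m >= 3n + 1 so that every index exists.
    """
    n = len(V)
    rules = [
        ("R1", lambda v: [0] if v[0] >= alpha else None),
        ("R2", lambda v: [2 * n - 2, 2 * n - 1, 2 * n]
                         if sum(v[2 * n - 2:2 * n + 1]) >= alpha else None),
        ("R3", lambda v: list(range(3 * n - 3, 3 * n + 1))
                         if sum(v[3 * n - 3:3 * n + 1]) >= alpha else None),
        ("R4", lambda v: [0, 2 * n] if v[0] + v[2 * n] >= alpha else None),
    ]
    for name, rule in rules:  # try the rules in order: R1, R2, R3, R4
        for i, v in enumerate(V):
            goods = rule(v)
            if goods is not None:
                return name, i, goods
    return None
```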
**Lemma 5**.: _Given any \(\alpha>0\) and an ordered instance \(\mathcal{I}\), \(R_{1}^{\alpha}\), \(R_{2}^{\alpha}\), and \(R_{3}^{\alpha}\) are valid reductions for all the remaining agents._

Proof.: For a remaining agent \(i\), let \(P=(P_{1},\ldots,P_{n})\) be an MMS partition of \(M\) for \(i\). It suffices to prove that after each of these reduction rules, there exists a partition of a subset of the remaining goods into \(n-1\) bundles with minimum value at least \(\text{MMS}_{v_{i}}^{n}(M)\) for agent \(i\).

* \(R_{1}^{\alpha}\): Let \(1\in P_{k}\). Then removing \(P_{k}\) from \(P\) results in a partition of a subset of \(M\setminus\{1\}\) into \(n-1\) bundles, each of value at least \(\text{MMS}_{v_{i}}^{n}(M)\) for agent \(i\).
* \(R_{2}^{\alpha}\): By the pigeonhole principle, there exists \(k\) such that \(|P_{k}\cap\{1,2,\ldots,2n+1\}|\geq 3\). Let \(g_{1},g_{2},g_{3}\in P_{k}\cap\{1,2,\ldots,2n+1\}\) and \(g_{1}<g_{2}<g_{3}\). Replace \(g_{1}\) with \(2n-1\), \(g_{2}\) with \(2n\), and \(g_{3}\) with \(2n+1\), and remove \(P_{k}\) from \(P\). Note that the value of the remaining bundles can only increase. Thus, the result is a partition of a subset of \(M\setminus\{2n-1,2n,2n+1\}\) into \(n-1\) bundles with minimum value at least \(\text{MMS}_{v_{i}}^{n}(M)\) for agent \(i\).
* \(R_{3}^{\alpha}\): The proof is very similar to the \(R_{2}^{\alpha}\) case. By the pigeonhole principle, there exists \(k\) such that \(|P_{k}\cap\{1,2,\ldots,3n+1\}|\geq 4\). Let \(g_{1},g_{2},g_{3},g_{4}\in P_{k}\cap\{1,2,\ldots,3n+1\}\) and \(g_{1}<g_{2}<g_{3}<g_{4}\). Replace \(g_{1}\) with \(3n-2\), \(g_{2}\) with \(3n-1\), \(g_{3}\) with \(3n\), and \(g_{4}\) with \(3n+1\), and remove \(P_{k}\) from \(P\). Note that the value of the remaining bundles can only increase. Thus, the result is a partition of a subset of \(M\setminus\{3n-2,3n-1,3n,3n+1\}\) into \(n-1\) bundles with minimum value at least \(\text{MMS}_{v_{i}}^{n}(M)\) for agent \(i\).

**Proposition 1**.: _If \(\mathcal{I}\) is ordered and for a given \(\alpha\geq 0\), none of the rules \(R_{1}^{\alpha}\), \(R_{2}^{\alpha}\) or \(R_{3}^{\alpha}\) is applicable, then for all agents \(i\),_

1. _for all_ \(k\geq 1\)_,_ \(v_{i}(k)<\alpha\)_, and_
2. _for all_ \(k>2n\)_,_ \(v_{i}(k)<\alpha/3\)_, and_
3. _for all_ \(k>3n\)_,_ \(v_{i}(k)<\alpha/4\)_._

Proof.: We prove each case separately.

1. Since \(R_{1}^{\alpha}\) is not applicable, \(v_{i}(k)\leq v_{i}(1)<\alpha\) for all agents \(i\) and all \(k\geq 1\).
2. Since \(R_{2}^{\alpha}\) is not applicable, \(3v_{i}(k)\leq 3v_{i}(2n+1)\leq v_{i}(2n-1)+v_{i}(2n)+v_{i}(2n+1)<\alpha\) for all agents \(i\) and all \(k>2n\). Therefore, \(v_{i}(k)<\alpha/3\).
3. Similar to the former case, since \(R_{3}^{\alpha}\) is not applicable, \(4v_{i}(k)\leq 4v_{i}(3n+1)\leq v_{i}(3n-2)+v_{i}(3n-1)+v_{i}(3n)+v_{i}(3n+1)<\alpha\) for all agents \(i\) and all \(k>3n\). Therefore, \(v_{i}(k)<\alpha/4\).

**Definition 6** (\(\alpha\)-irreducible and \(\delta\)-ONI).: _We call an instance \(\mathcal{I}\) \(\alpha\)-irreducible if none of the rules \(R_{1}^{\alpha}\), \(R_{2}^{\alpha}\), \(R_{3}^{\alpha}\) or \(R_{4}^{\alpha}\) is applicable. Moreover, we call an instance \(\delta\)-ONI if it is ordered, normalized, and \((3/4+\delta)\)-irreducible._

## 3 Technical overview

Most algorithms for approximating MMS, especially those with a factor of at least \(3/4\) [1, 1, 1], utilize two simple tools: valid reductions and bag filling. Although these tools are easy to use in a candidate algorithm, the novelty of these works is in the analysis, which is challenging.
Like previous works, the analysis is the most difficult part of our algorithm based on these tools. Unlike previous works, we also need to use a new reduction rule and initialize bags differently, both of which are counterintuitive.

First, we discuss the algorithm given by [1], which is a slight modification of the algorithm in [1]. For \(\alpha\leq 3/4\), [1] showed how to obtain an ordered normalized \(\alpha\)-irreducible instance from any arbitrary instance such that the transformation is \(\alpha\)-MMS-preserving.† That is, given an \(\alpha\)-MMS allocation for the resulting ordered normalized irreducible instance, one can obtain an \(\alpha\)-MMS allocation for the original instance. In the first phase of their algorithm, they obtain an ordered normalized \(\alpha\)-irreducible instance \(\hat{\mathcal{I}}\), and in the second phase, they compute an \(\alpha\)-MMS allocation for \(\hat{\mathcal{I}}\). Let \(\hat{\mathcal{I}}=([n],[m],\mathcal{V})\). Without loss of generality, we can assume that \(m\geq 2n\) (Observation 2).

Footnote †: [1] uses \(R_{1}^{\alpha}\), \(R_{2}^{\alpha}\), \(R_{4}^{\alpha}\) and one more rule as reduction rules. However, all that matters in their proof is that the applied reduction rules are valid \(\alpha\)-reduction rules.

In the second phase, they initialize \(n\) bags with the first \(2n\) goods as follows:

\[B_{k}:=\{k,2n-k+1\}\text{ for }k\in[n]. \tag{2}\]

See Figure 1 for intuition. As long as some agent \(i\) values a bag \(B_{k}\) at least \(\alpha\), allocate \(B_{k}\) to \(i\) and remove \(B_{k}\) and \(i\). Then, as long as an unallocated bag exists (and thus a remaining agent), pick an arbitrary remaining bag \(B_{k}\) and add unassigned goods \(g>2n\) to it until some remaining agent \(i\) values it at least \(\alpha\). Then, allocate \(B_{k}\) to \(i\) and continue. The second phase is called the bag-filling phase. Algorithm 2 shows the pseudocode of the bag-filling phase of [1].

To prove that the algorithm's output is \(\alpha\)-MMS, it suffices to prove that we never run out of goods in the bag-filling phase or, equivalently, that all agents receive a bag at some point during the algorithm. To prove this, they categorize agents into two groups. Let \(N^{1}=\{i\in N\mid\forall k\in[n]:v_{i}(B_{k})\leq 1\}\) and \(N^{2}=N\setminus N^{1}=\{i\in N\mid\exists k\in[n]:v_{i}(B_{k})>1\}\). We note that the sets \(N^{1}\) and \(N^{2}\) are defined based on the instance \(\hat{\mathcal{I}}\) at the beginning of phase 2, and they do not change throughout the algorithm.

**Agents in \(N^{1}\).** Proving that all agents in \(N^{1}\) receive a bag is easy. Using the fact that at the beginning of phase 2 the instance is ordered, normalized, and \(\alpha\)-irreducible, they prove \(v_{i}(g)<1/4\) for all \(i\in N\) and all \(g\in M\setminus[2n]\). This helps to prove that any bag which is not assigned to an agent \(i\in N^{1}\) while \(i\) was available has a value of at most 1 to \(i\). Therefore, since \(v_{i}(M)=n\), running out of goods is impossible before agent \(i\) receives a bag.

**Agents in \(N^{2}\).** The main bulk and difficulty of the analysis of [12] is to prove that all agents in \(N^{2}\) receive a bag. By normalizing the instance, [1] managed to simplify this argument significantly. [1] prove \(v_{i}(g)<1/12\) for all \(i\in N^{2}\) and all \(g\in M\setminus[2n]\). This helps to bound the value of the bags that receive some goods in the bag-filling phase by \(5/6\) for all available \(i\in N^{2}\).
Again, if the number of such bags is high enough, it is easy to prove that the algorithm does not run out of goods in the bag-filling phase. The difficult case is when the total value of the bags which are of value more than \(1\) to some agent \(i\in N^{2}\) is large. Roughly speaking, in this case it seems that the bags which receive goods in the bag-filling phase, whose values are bounded by \(5/6\), cannot compensate for the large value of the bags that do not require any goods in the bag-filling phase. This is where the normalized property of \(\hat{\mathcal{I}}\) simplifies the matter significantly. Intuitively, there are many goods with a high value that happened to be paired in the same bag in the bag-initialization phase. Since the instance is normalized, we know that in the MMS partition of \(i\), these goods cannot be in the same bundle. This implies that many bundles in the MMS partition of \(i\) have at most \(1\) good in common with the goods in \([2n]\). This means that the value of the remaining goods (the goods in \(M\setminus[2n]\)) must be large, since they fill the bundles of the MMS partition so that the value of each bundle equals \(1\). Hence, enough goods remain in \(M\setminus[2n]\) to fill the bags.

There are two main obstacles to generalizing this algorithm to obtain \(\alpha\)-MMS allocations when \(\alpha>3/4\). The first obstacle lies in the first phase of the algorithm. \(R^{\alpha}_{4}\) is a valid \(\alpha\)-reduction when \(\alpha\leq 3/4\) and \(R^{\alpha}_{1}\) and \(R^{\alpha}_{2}\) are not applicable. This no longer holds when \(\alpha>3/4\). In this case, the MMS value of the agents can indeed decrease after applying \(R^{\alpha}_{4}\). When \(\alpha=3/4+\mathcal{O}(1/n)\), [12] and [1] managed to resolve this issue by adding some dummy goods after each iteration of \(R^{\alpha}_{4}\) and proving that the total value of these dummy goods is negligible. Essentially, since we only need to guarantee the last agent a value of \(\alpha\), the idea is to divide the excess \(1-\alpha\) among all agents and improve the factor. However, this can only improve the factor by at most \(\mathcal{O}(1/n)\). If \(\alpha>3/4+\epsilon\) for a constant \(\epsilon>0\), the same technique does not work, since the value of the dummy goods cannot be reasonably bounded.

We resolve this issue in Section 4. Unlike the previous works, we allow the MMS values of the remaining agents to drop. Although the MMS values of the agents can drop, we show that they remain at least a \((1-4\epsilon)\) fraction of their original values after an arbitrary number of applications of \(R^{3/4+\epsilon}_{k}\) for \(k\in[4]\). Basically, while for \(\alpha\leq 3/4\) one can get \(\alpha\)-irreducibility for free (i.e., without losing any approximation factor on MMS), for \(\alpha=3/4+\epsilon\) and \(\epsilon>0\), we lose a multiplicative factor of \((1-4\epsilon)\).

The second obstacle is that for goods in \(M\setminus[2n]\), we do not get the neat bound of \(v_{i}(g)<1/4\) for \(i\in N\). Instead, we get this bound only up to an additive factor of \(\mathcal{O}(\epsilon)\). This complicates even the analysis for agents in \(N^{1}\), which was trivial in [1]. Furthermore, a tight example in [12, 1] shows that this algorithm cannot do better than \(3/4+\mathcal{O}(1/n)\), and all the agents are in \(N^{1}\) in this example. To overcome this hurdle, we further categorize the agents in \(N^{1}\).
One group consists of the agents with a reasonable bound on the value of good \(2n+1\); the other agents, the _problematic_ ones, do not admit such a bound. We break the problem into two cases depending on the number of these problematic agents. In Section 5.1, we consider the case when the number of problematic agents is not too large. In this case, we work with a slight modification of the algorithm in [1], and using an involved analysis, we show that it gives a \((3/4+\epsilon)\)-MMS allocation. Otherwise, we introduce a new reduction rule that allocates the two most valuable goods to an agent. Although allocating these goods seems counterintuitive, surprisingly, it appears to be the only way to obtain a \((3/4+\epsilon)\)-MMS allocation for the tight example in [12, 1]. In Section 5.2, we give another algorithm to handle the case where the number of problematic agents is too large. In this case, we first apply the reduction rules (including the new one), and then, unlike the previous works, initialize the bags with three goods. Precisely, we set \(C_{k}:=\{k,2n-k+1,2n+k\}\) and then do bag-filling.

To summarize, the structure of the rest of the paper is as follows. In Section 4, given any instance \(\mathcal{I}=(N,M,\mathcal{V})\) and \(\epsilon>0\), for \(\delta\geq 4\epsilon/(1-4\epsilon)\) we obtain an ordered normalized \((3/4+\delta)\)-irreducible (\(\delta\)-ONI) instance \(\mathcal{I}^{\prime}=(N^{\prime},M^{\prime},\mathcal{V}^{\prime})\) such that \(N^{\prime}\subseteq N\), \(M^{\prime}\subseteq M\), and all agents in \(N\setminus N^{\prime}\) receive a bag of value at least \((3/4+\epsilon)\text{MMS}_{i}(\mathcal{I})\). Moreover, we prove that from any \((3/4+\delta)\)-MMS allocation for \(\mathcal{I}^{\prime}\), one can obtain a \(\min\left(3/4+\epsilon,(3/4+\delta)(1-4\epsilon)\right)\)-MMS allocation for \(\mathcal{I}\). In Section 5, we prove that a \((3/4+\delta)\)-MMS allocation exists for all \(\delta\)-ONI instances for any \(\delta\leq 3/956\). Therefore, for \(4\epsilon/(1-4\epsilon)\leq\delta\leq 3/956\), a \(\min\left(3/4+\epsilon,(3/4+\delta)(1-4\epsilon)\right)\)-MMS allocation exists for all instances. Setting \(\delta=3/956\) and \(\epsilon=\delta/(4(\delta+1))=3/3836\), there always exists a \((3/4+3/3836)\)-MMS allocation.

## 4 Reduction to \(\delta\)-ONI instances

In this section, for any \(\epsilon>0\) and \(\delta\geq 4\epsilon/(1-4\epsilon)\), we show how to obtain a \(\delta\)-ONI instance \(\mathcal{I}^{\prime}\) from any arbitrary instance \(\mathcal{I}\), such that from any \(\alpha\)-MMS allocation for \(\mathcal{I}^{\prime}\), one can obtain a \(\min\left(3/4+\epsilon,(1-4\epsilon)\alpha\right)\)-MMS allocation for \(\mathcal{I}\). To obtain such an allocation, we first obtain a \((3/4+\epsilon)\)-irreducible instance and prove that the MMS value of every remaining agent remains at least a \((1-4\epsilon)\) fraction of its original value. Then, we normalize and order the resulting instance, giving us a \(\delta\)-ONI instance (for \(\delta\geq 4\epsilon/(1-4\epsilon)\)). In the rest of this section, by \(R_{k}\) we mean \(R_{k}^{(3/4+\epsilon)}\) for \(k\in[4]\).

We start by transforming the instance into an ordered one using the order algorithm. Then we scale the valuations such that for all \(i\in N\), \(\text{MMS}_{i}=1\). Then, as long as one of the reduction rules \(R_{1}\), \(R_{2}\), \(R_{3}\), or \(R_{4}\) is applicable, we apply \(R_{k}\) for the smallest possible \(k\). Algorithm 3 shows the pseudocode of this procedure.
```
1: \(\mathcal{I}\leftarrow\texttt{order}(N,M,\mathcal{V})\)
2: for \(i\in N\) do
3:   \(v_{i,g}\gets v_{i,g}/\text{MMS}_{i},\forall g\in[m]\)
4: end for
5: while \(R_{1}^{(3/4+\epsilon)}\) or \(R_{2}^{(3/4+\epsilon)}\) or \(R_{3}^{(3/4+\epsilon)}\) or \(R_{4}^{(3/4+\epsilon)}\) is applicable do
6:   \(\mathcal{I}\gets R_{k}^{(3/4+\epsilon)}(\mathcal{I})\) for the smallest possible \(k\)
7: end while
8: return \(\mathcal{I}\).
```
**Algorithm 3**\(\texttt{reduce}((N,M,\mathcal{V}),\epsilon)\)

In this section, we prove the following two theorems.

**Theorem 1**.: _Given an instance \(\mathcal{I}=(N,M,\mathcal{V})\) and \(\epsilon\geq 0\), let \(\mathcal{I}^{\prime}=(N^{\prime},M^{\prime},\mathcal{V}^{\prime})=\texttt{reduce}(\mathcal{I},\epsilon)\). For all agents \(i\in N^{\prime}\), \(\text{MMS}_{i}(\mathcal{I}^{\prime})\geq 1-4\epsilon\)._

**Theorem 2**.: _Given an instance \(\mathcal{I}\) and \(\epsilon\geq 0\), let \(\hat{\mathcal{I}}=\texttt{order}(\texttt{normalize}(\texttt{reduce}(\mathcal{I},\epsilon)))\). Then \(\hat{\mathcal{I}}\) is ordered, normalized and \((\frac{3}{4}+\frac{4\epsilon}{1-4\epsilon})\)-irreducible (\(\frac{4\epsilon}{1-4\epsilon}\)-ONI). Furthermore, from any \(\alpha\)-MMS allocation of \(\hat{\mathcal{I}}\) one can obtain a \(\min(3/4+\epsilon,(1-4\epsilon)\alpha)\)-MMS allocation of \(\mathcal{I}\)._

Note that once \(R_{1}\) is not applicable, we have \(v_{i}(1)<3/4+\epsilon\) for all remaining agents \(i\). Since we never increase the values, \(R_{1}\) can never become applicable again. So \(\texttt{reduce}(\mathcal{I},\epsilon)\) first applies \(R_{1}\) as long as it is applicable and then applies the rest of the reduction rules. Since \(R_{1}\) is a valid reduction rule for all the remaining agents \(i\) by Lemma 5, \(\text{MMS}_{i}\geq 1\) after the applications of \(R_{1}\). So, to prove Theorem 1, we assume without loss of generality that \(R_{1}\) is not applicable to \(\mathcal{I}=([n],M,\mathcal{V})\).

Let \(\mathcal{I}^{\prime}=(N^{\prime},M^{\prime},\mathcal{V})=\mathtt{reduce}(\mathcal{I},\epsilon)\). For the rest of this section, we fix an agent \(i\in N^{\prime}\). Let \(P=(P_{1},P_{2},\ldots,P_{n})\) be the initial MMS partition of \(i\) (in \(\mathcal{I}\)). We construct a partition \(Q=(Q_{1},Q_{2},\ldots,Q_{|N^{\prime}|})\) of \(M^{\prime}\) such that \(v_{i}(Q_{j})\geq 1-4\epsilon\) for all \(j\in[|N^{\prime}|]\). Let \(G_{2}\), \(G_{3}\), and \(G_{4}\) be the sets of goods removed by applications of \(R_{2}\), \(R_{3}\), and \(R_{4}\), respectively. Also, let \(r_{2}=|G_{2}|/3\), \(r_{3}=|G_{3}|/4\), and \(r_{4}=|G_{4}|/2\) be the number of times each rule is applied, respectively.

Note that in the end, all that matters is that we construct a partition \(Q\) of \(M\setminus(G_{2}\cup G_{3}\cup G_{4})\) into \(n-(r_{2}+r_{3}+r_{4})\) bundles of value at least \(1-4\epsilon\) for \(i\). For this sake, it does not matter in which order the goods are removed. Therefore, without loss of generality, we assume all the goods in \(G_{4}\) are removed first, and then the goods in \(G_{2}\) and \(G_{3}\) are removed in their original order. Note that we are not applying the reduction rules in a different order: we are removing the same goods that would be removed by applying the reduction rules in their original order. Only for the sake of our analysis do we remove these goods in a different order. For better intuition, consider the following example.
Assume \(\mathtt{reduce}(\mathcal{I},\epsilon)\) first applies \(R_{2}\), removing \(\{a_{1},a_{2},a_{3}\}\); then \(R_{4}\), removing \(\{b_{1},b_{2}\}\); then another \(R_{2}\), removing \(\{c_{1},c_{2},c_{3}\}\); and then \(R_{3}\), removing \(\{d_{1},d_{2},d_{3},d_{4}\}\). Without loss of generality, we can assume that first \(\{b_{1},b_{2}\}\) is removed, then \(\{a_{1},a_{2},a_{3}\}\), then \(\{c_{1},c_{2},c_{3}\}\), and then \(\{d_{1},d_{2},d_{3},d_{4}\}\).

We know that when there are \(n\) agents, removing \(\{2n-1,2n,2n+1\}\) (or \(\{3n-2,3n-1,3n,3n+1\}\)) and an agent is a valid reduction for \(i\) by Lemma 5. With the same argument, it is not difficult to see that removing \(\{g_{1},g_{2},g_{3}\}\), where \(g_{1}\geq 2n-1\), \(g_{2}\geq 2n\) and \(g_{3}\geq 2n+1\) (or \(\{g_{1},g_{2},g_{3},g_{4}\}\), where \(g_{1}\geq 3n-2\), \(g_{2}\geq 3n-1\), \(g_{3}\geq 3n\) and \(g_{4}\geq 3n+1\)), and an agent is also a valid reduction for \(i\). For completeness, we prove this in Lemma 6.

**Lemma 6**.: _Let \(\mathcal{I}=(N,M,\mathcal{V})\) be an ordered instance and \(i\in N\)._

1. _Let_ \(g_{1}\geq 2n-1\)_,_ \(g_{2}\geq 2n\) _and_ \(g_{3}\geq 2n+1\)_. Then_ \(\text{MMS}_{v_{i}}^{n-1}(M\setminus\{g_{1},g_{2},g_{3}\})\geq\text{MMS}_{v_{i}}^{n}(M)\)_._
2. _Let_ \(g_{1}\geq 3n-2\)_,_ \(g_{2}\geq 3n-1\)_,_ \(g_{3}\geq 3n\) _and_ \(g_{4}\geq 3n+1\)_. Then_ \(\text{MMS}_{v_{i}}^{n-1}(M\setminus\{g_{1},g_{2},g_{3},g_{4}\})\geq\text{MMS}_{v_{i}}^{n}(M)\)_._

Proof.: Let \(P=(P_{1},\ldots,P_{n})\) be an MMS partition of \(M\) for agent \(i\).

1. By the pigeonhole principle, there exists \(k\) such that \(|P_{k}\cap\{1,2,\ldots,2n+1\}|\geq 3\). Let \(h_{1},h_{2},h_{3}\in P_{k}\cap\{1,2,\ldots,2n+1\}\) and \(h_{1}<h_{2}<h_{3}\). Replace \(h_{1}\) with \(g_{1}\), \(h_{2}\) with \(g_{2}\) and \(h_{3}\) with \(g_{3}\), and remove \(P_{k}\) from \(P\). Note that the value of the remaining bundles can only increase. Thus, the result is a partition of a subset of \(M\setminus\{g_{1},g_{2},g_{3}\}\) into \(n-1\) bundles with minimum value at least \(\text{MMS}_{v_{i}}^{n}(M)\) for agent \(i\).
2. By the pigeonhole principle, there exists \(k\) such that \(|P_{k}\cap\{1,2,\ldots,3n+1\}|\geq 4\). Let \(h_{1},h_{2},h_{3},h_{4}\in P_{k}\cap\{1,2,\ldots,3n+1\}\) and \(h_{1}<h_{2}<h_{3}<h_{4}\). Replace \(h_{1}\) with \(g_{1}\), \(h_{2}\) with \(g_{2}\), \(h_{3}\) with \(g_{3}\) and \(h_{4}\) with \(g_{4}\), and remove \(P_{k}\) from \(P\). Note that the value of the remaining bundles can only increase. Thus, the result is a partition of a subset of \(M\setminus\{g_{1},g_{2},g_{3},g_{4}\}\) into \(n-1\) bundles with minimum value at least \(\text{MMS}_{v_{i}}^{n}(M)\) for agent \(i\).

**Observation 1**.: _Given an ordered instance \(\mathcal{I}=(N,M,\mathcal{V})\), let \(v_{i}(g_{1})\geq\ldots\geq v_{i}(g_{m}),\forall i\in N\). Let \(\mathcal{I}^{\prime}=(N^{\prime},M^{\prime},\mathcal{V})\) be the instance after removing an agent \(i\) and a set of goods \(\{a,b\}\) from \(\mathcal{I}\). Let \(g\in M^{\prime}\) be the \(j^{\text{th}}\) most valuable good in \(M\) and the \(j^{\prime\text{th}}\) most valuable good in \(M^{\prime}\). Then \(j^{\prime}\geq j-2\)._

**Corollary 1** (of Observation 1).: _Given an ordered instance \(\mathcal{I}=(N,M,\mathcal{V})\), let \(\mathcal{I}^{\prime}=(N^{\prime},M^{\prime},\mathcal{V})\) be the instance after removing an agent \(i\) and a set of goods \(\{a,b\}\) from \(\mathcal{I}\). Let \(n=|N|\) and \(n^{\prime}=|N^{\prime}|=n-1\)._
_Let \(g\in M^{\prime}\) be the \(j^{\text{th}}\) most valuable good in \(M\) and the \(j^{\prime\text{th}}\) most valuable good in \(M^{\prime}\). Then,_

* _for any_ \(k\)_, in particular_ \(k\in\{-1,0,1\}\)_, if_ \(j\geq 2n+k\)_, then_ \(j^{\prime}\geq 2n^{\prime}+k\)_, and_
* _for any_ \(k\)_, in particular_ \(k\in\{-2,-1,0,1\}\)_, if_ \(j\geq 3n+k\)_, then_ \(j^{\prime}\geq 3n^{\prime}+k\)_._

Next, assume that at a step where the number of agents is \(n\), \(\{g_{2n-1},g_{2n},g_{2n+1}\}\) (or \(\{g_{3n-2},g_{3n-1},g_{3n},g_{3n+1}\}\)) is removed by an application of \(R_{2}\) (or \(R_{3}\)). Corollary 1 together with Lemma 6 implies that removing \(\{g_{2n-1},g_{2n},g_{2n+1}\}\) (or \(\{g_{3n-2},g_{3n-1},g_{3n},g_{3n+1}\}\)) at a later step where the number of agents is \(n^{\prime}\leq n\) is also valid for agent \(i\). Therefore, all that remains is to prove that after removing the goods in \(G_{4}\) and \(r_{4}\) agents, the MMS value of \(i\) remains at least \(1-4\epsilon\). That is, \(\operatorname{MMS}_{v_{i}}^{n-r_{4}}(M\setminus G_{4})\geq 1-4\epsilon\).

**Lemma 7**.: _Let \((N^{\prime},M^{\prime},\mathcal{V})=\operatorname{\texttt{reduce}}(([n],M,\mathcal{V}),\epsilon)\). Let \(r_{4}\) be the number of times \(R_{4}\) is applied during \(\operatorname{\texttt{reduce}}(\mathcal{I},\epsilon)\) and let \(G_{4}\) be the set of removed goods by applications of \(R_{4}\). Then for all agents \(i\in N^{\prime}\), \(\operatorname{\textsc{MMS}}_{v_{i}}^{n-r_{4}}(M\setminus G_{4})\geq 1-4\epsilon\)._

Proof.: Without loss of generality, assume all the goods in \(G_{4}\) are in \(P_{1}\cup P_{2}\cup\ldots\cup P_{k}\) for some \(k\leq 2r_{4}\). Namely, we have \(P_{j}\cap G_{4}\neq\emptyset\) for all \(j\in[k]\) and \((P_{k+1}\cup\ldots\cup P_{n})\cap G_{4}=\emptyset\). If \(k\leq r_{4}\), then \((P_{k+1},\ldots,P_{n})\) is already a partition of a subset of \(M\setminus G_{4}\) into at least \(n-r_{4}\) bundles, and the lemma follows. So assume \(k>r_{4}\).

In each application of \(R_{4}\), two goods \(h\) and \(\ell\) are removed. Let \(h\) be the more valuable good. We call \(h\) the heavy good and \(\ell\) the light good of this application of \(R_{4}\). By Proposition 1, for all heavy goods \(h\) and light goods \(\ell\) we have \(v_{i}(h)<3/4+\epsilon\) and \(v_{i}(\ell)<1/4+\epsilon/3\). Let \(H\) be the set of all heavy goods and \(L\) be the set of all light goods removed during these reductions. Hence, \(G_{4}=H\cup L\) and \(|H|=|L|=r_{4}\).

We prove that we can partition \((P_{1}\cup\ldots\cup P_{k})\setminus G_{4}\) into \(k-r_{4}\) bundles \(Q_{1},\ldots,Q_{k-r_{4}}\), each of value at least \(1-4\epsilon\), or equivalently, \(\operatorname{MMS}_{v_{i}}^{k-r_{4}}\bigl((P_{1}\cup\ldots\cup P_{k})\setminus G_{4}\bigr)\geq 1-4\epsilon\). Then, \((Q_{1},\ldots,Q_{k-r_{4}},P_{k+1},\ldots,P_{n})\) is a partition of \(M\setminus G_{4}\) into \(n-r_{4}\) bundles, each of value at least \(1-4\epsilon\), and the lemma follows. It suffices to prove the following claim.

**Claim**.: _For all integers \(r,k\) with \(0<r<k\leq 2r\), if \(|(P_{1}\cup P_{2}\cup\ldots\cup P_{k})\cap H|\leq r\) and \(|(P_{1}\cup P_{2}\cup\ldots\cup P_{k})\cap L|\leq r\), then \(\operatorname{\textsc{MMS}}_{v_{i}}^{k-r}\bigl((P_{1}\cup\ldots\cup P_{k})\setminus G_{4}\bigr)\geq 1-4\epsilon\)._

The proof of the claim is by induction on \(k\).
For \(k=2\), we have \(r=1\) and \(v_{i}(P_{1}\cup P_{2})-v_{i}(H\cup L)\geq 2-(\frac{3}{4}+\epsilon)-(\frac{1}{4}+\frac{\epsilon}{3})>1-4\epsilon\), and therefore \(\operatorname{MMS}_{v_{i}}^{1}\bigl((P_{1}\cup P_{2})\setminus G_{4}\bigr)\geq 1-4\epsilon\). Now assume that the statement holds for all values of \(k^{\prime}\leq k-1\), and we prove it for \(k>2\).

First, we prove the claim when at least one of the inequalities is strict. Assume \(|(P_{1}\cup P_{2}\cup\ldots\cup P_{k})\cap H|<r\) and \(|(P_{1}\cup P_{2}\cup\ldots\cup P_{k})\cap L|\leq r\); the proof of the other case is symmetric. If \((P_{1}\cup P_{2}\cup\ldots\cup P_{k})\cap L\neq\emptyset\), without loss of generality assume \(P_{k}\cap L\neq\emptyset\). Therefore, \(|(P_{1}\cup\ldots\cup P_{k-1})\cap H|\leq r-1<k-1\) and \(|(P_{1}\cup\ldots\cup P_{k-1})\cap L|\leq r-1<k-1\). We have

\[\operatorname{MMS}_{v_{i}}^{k-r}\bigl((P_{1}\cup\ldots\cup P_{k})\setminus G_{4}\bigr)\geq\operatorname{MMS}_{v_{i}}^{(k-1)-(r-1)}\bigl((P_{1}\cup\ldots\cup P_{k-1})\setminus G_{4}\bigr)\geq 1-4\epsilon\qquad\text{(by the induction assumption).}\]

Now assume \(|(P_{1}\cup P_{2}\cup\ldots\cup P_{k})\cap H|=r\) and \(|(P_{1}\cup P_{2}\cup\ldots\cup P_{k})\cap L|=r\).

**Case 1: there exists \(j\in[k]\) such that \(P_{j}\cap H\neq\emptyset\) and \(P_{j}\cap L\neq\emptyset\).** Without loss of generality, assume \(P_{k}\cap H\neq\emptyset\) and \(P_{k}\cap L\neq\emptyset\). In this case, \(|(P_{1}\cup\ldots\cup P_{k-1})\cap H|\leq r-1<k-1\) and \(|(P_{1}\cup\ldots\cup P_{k-1})\cap L|\leq r-1<k-1\). Therefore,

\[\operatorname{MMS}_{v_{i}}^{k-r}\bigl((P_{1}\cup\ldots\cup P_{k})\setminus G_{4}\bigr)\geq\operatorname{MMS}_{v_{i}}^{(k-1)-(r-1)}\bigl((P_{1}\cup\ldots\cup P_{k-1})\setminus G_{4}\bigr)\geq 1-4\epsilon\qquad\text{(by the induction assumption).}\]

**Case 2: there exist \(j,\ell\in[k]\) such that \(|P_{j}\cap H|\geq 2\) and \(|P_{\ell}\cap L|\geq 2\).** Similar to the former case, we have

\[\operatorname{MMS}_{v_{i}}^{k-r}\bigl((P_{1}\cup\ldots\cup P_{k})\setminus G_{4}\bigr)\geq\operatorname{MMS}_{v_{i}}^{(k-2)-(r-2)}\bigl((P_{1}\cup\ldots\cup P_{k-2})\setminus G_{4}\bigr)\geq 1-4\epsilon\qquad\text{(by the induction assumption).}\]

**Case 3: neither Case 1 nor Case 2 holds.** For all \(j\in[k]\), we have \(P_{j}\cap H=\emptyset\) or \(P_{j}\cap L=\emptyset\); otherwise, we are in Case 1. Let \(S_{1}:=\{j\in[k]\mid P_{j}\cap L\neq\emptyset\}\) and \(S_{2}=[k]\setminus S_{1}=\{j\in[k]\mid P_{j}\cap H\neq\emptyset\}\). If there exist bundles \(P_{j}\) and \(P_{\ell}\) such that \(|P_{j}\cap H|\geq 2\) and \(|P_{\ell}\cap L|\geq 2\), we are in Case 2. Therefore, for all \(j\in S_{1}\), \(|P_{j}\cap L|=1\), or for all \(j\in S_{2}\), \(|P_{j}\cap H|=1\). Hence, there are \(r\) bundles \(P_{1},\ldots,P_{r}\) such that either \(|P_{j}\cap H|=1\) (and \(|P_{j}\cap L|=0\)) for all \(j\in[r]\), or \(|P_{j}\cap L|=1\) (and \(|P_{j}\cap H|=0\)) for all \(j\in[r]\).

**Case 3.1: \(k>r+1\).** Assume \(|P_{j}\cap H|=1\) for all \(j\in[r]\). (The case where \(|P_{j}\cap L|=1\) for all \(j\in[r]\) is symmetric when \(k>r+1\).) Let \(|P_{k}\cap L|=a\). Then \(|(P_{1}\cup\ldots\cup P_{a}\cup P_{k})\cap H|=a\) and \(|(P_{1}\cup\ldots\cup P_{a}\cup P_{k})\cap L|=a\).
Thus, by the induction assumption, we have

\[\operatorname{MMS}_{v_{i}}^{(a+1)-a}\bigl((P_{1}\cup\ldots\cup P_{a}\cup P_{k})\setminus G_{4}\bigr)\geq 1-4\epsilon.\]

Moreover, \(|(P_{a+1}\cup\ldots\cup P_{k-1})\cap H|\leq r-a\) and \(|(P_{a+1}\cup\ldots\cup P_{k-1})\cap L|\leq r-a\). Thus, by the induction assumption, we have

\[\operatorname{MMS}_{v_{i}}^{(k-a-1)-(r-a)}\bigl((P_{a+1}\cup\ldots\cup P_{k-1})\setminus G_{4}\bigr)\geq 1-4\epsilon.\]

So we can partition \((P_{1}\cup\ldots\cup P_{a}\cup P_{k})\setminus G_{4}\) into one bundle of value at least \(1-4\epsilon\) for \(i\), and we can also partition \((P_{a+1}\cup\ldots\cup P_{k-1})\setminus G_{4}\) into \(k-r-1\) bundles of value at least \(1-4\epsilon\) for \(i\). Thus, the lemma holds.

**Case 3.2: \(k=r+1\).** Let \(B=(P_{1}\cup\ldots\cup P_{k})\setminus G_{4}\). We want to show \(\operatorname{MMS}_{v_{i}}^{1}(B)\geq 1-4\epsilon\); hence it suffices to show \(v_{i}(B)\geq 1-4\epsilon\). We have

\[v_{i}(B)\geq\sum_{j\in[k-1]}v_{i}\left(P_{j}\setminus(H\cup L)\right)=\sum_{j\in[k-1]}\left(v_{i}(P_{j})-v_{i}(P_{j}\cap(H\cup L))\right)>(k-1)\left(1-(\tfrac{3}{4}+\epsilon)\right)=(k-1)(\tfrac{1}{4}-\epsilon)\geq 1-4\epsilon\qquad\text{(for }k>4\text{),}\]

where the strict inequality holds since \(|P_{j}\cap(H\cup L)|=1\) and \(v_{i}(P_{j}\cap(H\cup L))<\frac{3}{4}+\epsilon\).

It remains to prove the claim when \(k=3\) and \(k=4\). If there are two bundles \(P_{1}\) and \(P_{2}\) such that \(|P_{1}\cap L|=|P_{2}\cap L|=1\), then \(v_{i}(B)\geq v_{i}(P_{1}\setminus L)+v_{i}(P_{2}\setminus L)>2\left(1-(\frac{1}{4}+\frac{\epsilon}{3})\right)>1-4\epsilon\). Otherwise, for \(k=3\), there are two bundles \(P_{1}\) and \(P_{2}\) such that \(|P_{1}\cap H|=|P_{2}\cap H|=1\) and \(|P_{3}\cap L|=2\). Then,

\[v_{i}(B)=v_{i}(P_{1}\setminus H)+v_{i}(P_{2}\setminus H)+v_{i}(P_{3}\setminus L)>2\left(1-(\tfrac{3}{4}+\epsilon)\right)+\left(1-2(\tfrac{1}{4}+\tfrac{\epsilon}{3})\right)=1-\tfrac{8\epsilon}{3}>1-4\epsilon.\]

For \(k=4\), we have \(|P_{1}\cap H|=|P_{2}\cap H|=|P_{3}\cap H|=1\) and \(|P_{4}\cap L|=3\). Then,

\[v_{i}(B)=v_{i}(P_{1}\setminus H)+v_{i}(P_{2}\setminus H)+v_{i}(P_{3}\setminus H)+v_{i}(P_{4}\setminus L)>3\left(1-(\tfrac{3}{4}+\epsilon)\right)+\left(1-3(\tfrac{1}{4}+\tfrac{\epsilon}{3})\right)=1-4\epsilon.\qed\]

We are ready to prove Theorem 1 and Theorem 2.

**Theorem 1**.: _Given an instance \(\mathcal{I}=(N,M,\mathcal{V})\) and \(\epsilon\geq 0\), let \(\mathcal{I}^{\prime}=(N^{\prime},M^{\prime},\mathcal{V}^{\prime})=\texttt{reduce}(\mathcal{I},\epsilon)\). For all agents \(i\in N^{\prime}\), \(\text{MMS}_{i}(\mathcal{I}^{\prime})\geq 1-4\epsilon\)._

Proof.: Fix an agent \(i\in N^{\prime}\). Let \(\mathcal{I}^{1}\) be the instance after all applications of \(R_{1}\) and before any further reduction. By Lemma 5, \(\text{MMS}_{i}(\mathcal{I}^{1})\geq 1\). So without loss of generality, let us assume \(\mathcal{I}=\mathcal{I}^{1}\). Let \(G_{2}\), \(G_{3}\), and \(G_{4}\) be the sets of goods removed by applications of \(R_{2}\), \(R_{3}\), and \(R_{4}\), respectively. Also, let \(r_{2}=|G_{2}|/3\), \(r_{3}=|G_{3}|/4\), and \(r_{4}=|G_{4}|/2\) be the number of times each rule is applied, respectively. By Lemma 7, \(\text{MMS}_{v_{i}}^{n-r_{4}}(M\setminus G_{4})\geq 1-4\epsilon\). For an application of \(R_{2}\) (or \(R_{3}\)) at step \(t\), let \(\{a_{1},a_{2},a_{3}\}\) (or \(\{b_{1},b_{2},b_{3},b_{4}\}\)) be the set of goods that are removed. By Lemma 6, removing this set at a step \(t^{\prime}\geq t\) is still a valid reduction for \(i\).
Therefore, removing \(G_{2}\) and \(G_{3}\) and \(r_{2}+r_{3}\) agents does not decrease the MMS value of \(i\). Thus, \(\text{MMS}_{i}(\mathcal{I}^{\prime})\geq 1-4\epsilon\).

**Theorem 2**.: _Given an instance \(\mathcal{I}\) and \(\epsilon\geq 0\), let \(\hat{\mathcal{I}}=\texttt{order}(\texttt{normalize}(\texttt{reduce}(\mathcal{I},\epsilon)))\). Then \(\hat{\mathcal{I}}\) is ordered, normalized and \((\frac{3}{4}+\frac{4\epsilon}{1-4\epsilon})\)-irreducible (\(\frac{4\epsilon}{1-4\epsilon}\)-ONI). Furthermore, from any \(\alpha\)-MMS allocation of \(\hat{\mathcal{I}}\) one can obtain a \(\min(3/4+\epsilon,(1-4\epsilon)\alpha)\)-MMS allocation of \(\mathcal{I}\)._

Proof.: In \(\texttt{reduce}\), as long as \(R_{1}^{(3/4+\epsilon)}\) is applicable, we apply it. Once it is not applicable anymore, for all remaining agents \(i\), \(v_{i}(1)<3/4+\epsilon\). In the rest of the procedure \(\texttt{reduce}\), we do not increase the value of any good for any agent. Therefore, \(R_{1}^{(3/4+\epsilon)}\) remains inapplicable. As long as one of the rules \(R_{k}^{(3/4+\epsilon)}\) is applicable for \(k\in\{2,3,4\}\), we apply it. Therefore, \(\texttt{reduce}(\mathcal{I},\epsilon)\) is \((3/4+\epsilon)\)-irreducible.

Let \(\mathcal{I}^{\prime}=(N^{\prime},M^{\prime},\mathcal{V}^{\prime})=\texttt{reduce}(\mathcal{I},\epsilon)\). Since \(\text{MMS}_{i}(\mathcal{I}^{\prime})\geq 1-4\epsilon\) (by Theorem 1), \(\texttt{normalize}\) can increase the value of each good by a multiplicative factor of at most \(1/(1-4\epsilon)\). Therefore, after ordering the instance, none of the rules \(R_{k}^{\alpha}\) for \(k\in[4]\) is applicable for \(\alpha\geq\frac{3/4+\epsilon}{1-4\epsilon}=\frac{3}{4}+\frac{4\epsilon}{1-4\epsilon}\). Hence, \(\hat{\mathcal{I}}=\texttt{order}(\texttt{normalize}(\texttt{reduce}(\mathcal{I},\epsilon)))\) is \(\alpha\)-irreducible for \(\alpha\geq\frac{3}{4}+\frac{4\epsilon}{1-4\epsilon}\), and it is of course ordered. Since \(\texttt{order}\) does not change the multiset of the values of the goods for each agent, the instance remains normalized.

Now let us assume \(A\) is an \(\alpha\)-MMS allocation for \(\hat{\mathcal{I}}=\texttt{order}(\texttt{normalize}(\texttt{reduce}(\mathcal{I},\epsilon)))\). By Lemma 1, we can obtain an allocation \(B\) which is \(\alpha\)-MMS for \(\texttt{normalize}(\texttt{reduce}(\mathcal{I},\epsilon))\). Lemma 2 implies that \(B\) is \(\alpha\)-MMS for \(\mathcal{I}^{\prime}=(N^{\prime},M^{\prime},\mathcal{V}^{\prime})=\texttt{reduce}(\mathcal{I},\epsilon)\). For all agents \(i\in N^{\prime}\), \(v_{i}^{\prime}(B_{i})=v_{i}(B_{i})/\text{MMS}_{i}(\mathcal{I})\). Therefore,

\[v_{i}(B_{i})=v_{i}^{\prime}(B_{i})\,\text{MMS}_{i}(\mathcal{I})\geq\alpha\,\text{MMS}_{i}(\mathcal{I}^{\prime})\,\text{MMS}_{i}(\mathcal{I})\geq\alpha(1-4\epsilon)\,\text{MMS}_{i}(\mathcal{I}),\]

where the first inequality holds since \(B\) is \(\alpha\)-MMS for \(\mathcal{I}^{\prime}\), and the second since \(\text{MMS}_{i}(\mathcal{I}^{\prime})\geq 1-4\epsilon\) by Theorem 1. Thus, \(B\) gives every agent in \(N^{\prime}\) an \(\alpha(1-4\epsilon)\) fraction of their MMS value. All agents in \(N\setminus N^{\prime}\) receive a \((3/4+\epsilon)\) fraction of their MMS value. Therefore, the final allocation is a \(\min(3/4+\epsilon,(1-4\epsilon)\alpha)\)-MMS allocation of \(\mathcal{I}\).

## 5 \((3/4+\delta)\)-MMS allocation for \(\delta\)-ONI instances

In this section, we prove that for \(\delta\leq 3/956\), there exists a \((3/4+\delta)\)-MMS allocation if the input is a \(\delta\)-ONI instance.
First, we prove that in any \(\delta\)-ONI instance \(\mathcal{I}=([n],[m],\mathcal{V})\), \(m\geq 2n\).

**Observation 2**.: _For any \(\delta\leq 1/4\), if \(\mathcal{I}=([n],[m],\mathcal{V})\) is \(\delta\)-ONI, then \(m\geq 2n\)._

Proof.: Towards a contradiction, assume \(m<2n\). Now for an arbitrary agent \(i\), let \((P_{1},P_{2},\ldots,P_{n})\) be the MMS partition of \(i\). Since \(m<2n\), there must be a bundle \(P_{j}\) such that \(|P_{j}|=1\). Therefore, \(v_{i}(1)\geq v_{i}(P_{j})=1\), which means \(R_{1}^{3/4+\delta}\) is applicable. This contradicts \(\mathcal{I}\) being \((3/4+\delta)\)-irreducible. Thus, \(m\geq 2n\).

We initialize \(n\) bags \(\{B_{1},\ldots,B_{n}\}\) with the first \(2n\) goods as follows:

\[B_{k}:=\{k,2n-k+1\}\text{ for }k\in[n]. \tag{3}\]

See Figure 1 for intuition. Note that by Observation 2, \(m\geq 2n\), so such a bag initialization is possible. Given an instance \(\mathcal{I}=([n],[m],\mathcal{V})\) (with \(m\geq 2n\)), let \(N^{1}(\mathcal{I})=\{i\in[n]\mid\forall k\in[n]:v_{i}(B_{k})\leq 1\}\) and \(N^{2}(\mathcal{I})=\{i\in[n]\mid\exists k\in[n]:v_{i}(B_{k})>1\}\).

**Observation 3**.: _For \(\delta\leq 1/4\) and instance \(\mathcal{I}\), if \(\mathcal{I}\) is \(\delta\)-ONI, then for all agents \(i\in N^{2}(\mathcal{I})\), \(v_{i}(2n+1)<1/12+\delta\)._

Proof.: By the definition of \(N^{2}\), there exists \(k\in[n]\) such that \(v_{i}(k)+v_{i}(2n-k+1)=v_{i}(B_{k})>1\). Therefore, by Lemma 3, \(v_{i}(k)>2/3\). We have

\[v_{i}(2n+1)<\frac{3}{4}+\delta-v_{i}(1)\leq\frac{3}{4}+\delta-v_{i}(k)<\frac{3}{4}+\delta-\frac{2}{3}=\frac{1}{12}+\delta,\]

where the first inequality holds since \(R_{4}^{3/4+\delta}\) is not applicable, the second since \(v_{i}(1)\geq v_{i}(k)\), and the third since \(v_{i}(k)>\frac{2}{3}\). This completes the proof.

We refer to \(N^{1}(\mathcal{I})\) and \(N^{2}(\mathcal{I})\) by \(N^{1}\) and \(N^{2}\), respectively, when \(\mathcal{I}\) is the initial \(\delta\)-ONI instance. Recall that \(N^{1}\) and \(N^{2}\) do not change over the course of our algorithm. Let \(N^{1}_{1}=\{i\in N^{1}\mid v_{i}(2n+1)\geq 1/4-5\delta\}\) and \(N^{1}_{2}=N^{1}\setminus N^{1}_{1}\). Depending on the number of agents in \(N^{1}_{1}\), we run one of \(\texttt{approxMMS1}(\mathcal{I},\delta)\) or \(\texttt{approxMMS2}(\mathcal{I},\delta)\), shown in Algorithms 4 and 5, respectively. Roughly speaking, if the size of \(N^{1}_{1}\) is not too large, we run Algorithm 4 and prioritize agents in \(N^{1}_{1}\). Otherwise, we run Algorithm 5, giving priority to agents in \(N^{1}_{2}\cup N^{2}\). Giving priority to agents in a certain set \(S\) means that when the algorithm is about to allocate a bag \(B\) to an agent, if there is an agent in \(S\) who gets satisfied upon receiving \(B\) (i.e., \(v_{i}(B)\geq 3/4+\delta\) for some \(i\in S\)), then the algorithm gives \(B\) to such an agent and not to someone outside \(S\).

### Case 1: \(|N^{1}_{1}|\leq n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\)

In this case, we run Algorithm 4. For \(k\in[n]\), let \(B_{k}\) and \(\hat{B}_{k}\supseteq B_{k}\) be the \(k^{th}\) bag at the beginning and end of Algorithm 4, respectively.

**Lemma 8**.: _Let \(i\) be any agent who did not receive any bag by the end of Algorithm 4. For all \(k\in[n]\) such that \(v_{i}(B_{k})\leq 1\), we have \(v_{i}(\hat{B}_{k})<1+4\delta/3\)._

Proof.: The claim trivially holds if \(\hat{B}_{k}=B_{k}\). Now assume \(B_{k}\subsetneq\hat{B}_{k}\). Let \(g\) be the last good added to \(\hat{B}_{k}\). We have \(v_{i}(\hat{B}_{k}\setminus g)<3/4+\delta\); otherwise, \(g\) would not be added to \(\hat{B}_{k}\).
Also note that \(g>2n\), and hence \(v_{i}(g)<1/4+\delta/3\) by Proposition 1. Thus, we have

\[v_{i}(\hat{B}_{k})=v_{i}(\hat{B}_{k}\setminus g)+v_{i}(g)<\left(\frac{3}{4}+\delta\right)+\left(\frac{1}{4}+\frac{\delta}{3}\right)=1+\frac{4\delta}{3}.\qed\]

**Lemma 9**.: _For \(\delta\leq\frac{1}{4}\), given a \(\delta\)-ONI instance with \(|N_{1}^{1}|\leq n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\), all agents \(i\in N_{1}^{1}\) receive a bag of value at least \((3/4+\delta)\cdot\text{MMS}_{i}\) at the end of Algorithm 4._

Proof.: It suffices to prove that all agents \(i\in N_{1}^{1}\) receive a bag at the end of Algorithm 4. Towards a contradiction, assume that \(i\in N_{1}^{1}\) does not receive any bag.

**Claim 1**.: _For all bags \(B\) not allocated to an agent in \(N_{1}^{1}\), \(v_{i}(B)<3/4+\delta\)._

Claim 1 holds since the priority is with agents in \(N_{1}^{1}\). Let \(S\) be the set of bags allocated to agents in \(N_{1}^{1}\) and \(\bar{S}\) be the set of the remaining bags. We have

\[v_{i}(M)=\sum_{k\in[n]}v_{i}(\hat{B}_{k})=\sum_{B\in S}v_{i}(B)+\sum_{B\in\bar{S}}v_{i}(B)<|N_{1}^{1}|\left(1+\frac{4\delta}{3}\right)+\left(n-|N_{1}^{1}|\right)\left(\frac{3}{4}+\delta\right)\leq n,\]

where the first inequality follows from Lemma 8 and Claim 1, and the second from \(|N_{1}^{1}|\leq n(\tfrac{1}{4}-\delta)/(\tfrac{1}{4}+\tfrac{\delta}{3})\). This is a contradiction since \(v_{i}(M)=n\). Thus, all agents \(i\in N_{1}^{1}\) receive a bag at the end of Algorithm 4.

**Remark 1**.: _The last inequality in the proof of Lemma 9 is tight for \(|N_{1}^{1}|=n(\tfrac{1}{4}-\delta)/(\tfrac{1}{4}+\tfrac{\delta}{3})\)._

**Lemma 10**.: _For \(\delta\leq\tfrac{1}{4}\), given a \(\delta\)-ONI instance with \(|N_{1}^{1}|\leq n(\tfrac{1}{4}-\delta)/(\tfrac{1}{4}+\tfrac{\delta}{3})\), all agents \(i\in N_{2}^{1}\) receive a bag of value at least \((3/4+\delta)\cdot\text{MMS}_{i}\) at the end of Algorithm 4._

Proof.: It suffices to prove that all agents \(i\in N_{2}^{1}\) receive a bag at the end of Algorithm 4. Towards a contradiction, assume that \(i\in N_{2}^{1}\) does not receive any bag.

**Claim 2**.: _For all \(k\in[n]\), \(v_{i}(\hat{B}_{k})\leq 1\)._

Proof.: The claim trivially holds if \(\hat{B}_{k}=B_{k}\). Now assume \(B_{k}\subsetneq\hat{B}_{k}\). Let \(g\) be the last good added to \(\hat{B}_{k}\). We have \(v_{i}(\hat{B}_{k}\setminus g)<3/4+\delta\); otherwise, \(g\) would not be added to \(\hat{B}_{k}\). Also note that \(g\geq 2n+1\), and hence \(v_{i}(g)\leq v_{i}(2n+1)<1/4-5\delta\) by the definition of \(N_{2}^{1}\). Therefore, we have

\[v_{i}(\hat{B}_{k})=v_{i}(\hat{B}_{k}\setminus g)+v_{i}(g)<(\frac{3}{4}+\delta)+(\frac{1}{4}-5\delta)<1.\]

Thus, the claim holds.

Since agent \(i\) did not receive a bag, there exists an unallocated bag whose value for agent \(i\) is less than \(3/4+\delta<1\). Therefore, \(v_{i}(M)=\sum_{k\in[n]}v_{i}(\hat{B}_{k})<n\), which is a contradiction. Thus, all agents \(i\in N_{2}^{1}\) receive a bag at the end of Algorithm 4.

#### 5.1.1 Agents in \(N^{2}\)

In this section, we prove that all agents in \(N^{2}\) also receive a bag at the end of Algorithm 4. For the sake of contradiction, assume that agent \(i\in N^{2}\) does not receive a bag at the end of Algorithm 4. Let \(A^{+}:=\{k\in[n]\mid v_{i}(B_{k})>1\}\) and \(A^{-}:=\{k\in[n]\mid v_{i}(B_{k})<3/4+\delta\}\) be the indices of the bags satisfying the respective constraints. Also, let \(\ell\) be the smallest integer such that for all \(k\in[\ell+1,n]\), \(v_{i}(k)+v_{i}(2n-k+1+\ell)<1\). See Figure 2, taken from [1].
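The definition of \(\ell\) is easy to misread, so the following small helper (ours, purely illustrative; \(v\) is a 1-indexed list of one agent's ordered values) computes it directly from the definition.

```python
def smallest_ell(v, n):
    """Smallest ell in {0, ..., n} such that v[k] + v[2n - k + 1 + ell] < 1
    for every k in {ell + 1, ..., n}.

    v is one agent's ordered value vector, 1-indexed (v[0] is a dummy),
    with len(v) > 2n. The condition is vacuous for ell = n, so the loop
    always returns.
    """
    for ell in range(n + 1):
        if all(v[k] + v[2 * n - k + 1 + ell] < 1
               for k in range(ell + 1, n + 1)):
            return ell
```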
[1] proved \(\sum_{k\in A^{+}}v_{i}(\hat{B}_{k})<|A^{+}|+\ell(\tfrac{1}{12}+\delta)\). For completeness, we repeat its proof in Appendix A.

**Lemma 11**.: _[_1_]_ \(\sum_{k\in A^{+}}v_{i}(\hat{B}_{k})<|A^{+}|+\ell(\tfrac{1}{12}+\delta)\)_._

**Observation 4**.: _For all \(k\in A^{-}\), \(v_{i}(\hat{B}_{k})<\tfrac{5}{6}+2\delta\)._

Proof.: If \(\hat{B}_{k}=B_{k}\), then \(v_{i}(\hat{B}_{k})<3/4+\delta<5/6+2\delta\). Otherwise, let \(g\) be the last good added to \(\hat{B}_{k}\). Note that \(v_{i}(\hat{B}_{k}\setminus g)<3/4+\delta\); otherwise the algorithm would assign \(\hat{B}_{k}\setminus g\) to agent \(i\) instead of adding \(g\) to it. We have

\[v_{i}(\hat{B}_{k})=v_{i}(\hat{B}_{k}\setminus g)+v_{i}(g)<(\frac{3}{4}+\delta)+v_{i}(2n+1)<(\frac{3}{4}+\delta)+(\frac{1}{12}+\delta)=\frac{5}{6}+2\delta,\]

where \(v_{i}(g)\leq v_{i}(2n+1)\) since \(g>2n\), and \(v_{i}(2n+1)<\frac{1}{12}+\delta\) by Observation 3.

**Observation 5**.: _For all \(k\in[n]\), \(v_{i}(B_{k})>\frac{1}{2}-2\delta\)._

Proof.: Let \(t\) be the smallest index such that \(v_{i}(B_{t})>1\). By Lemma 3, \(v_{i}(t)>\frac{2}{3}\). Therefore, for all \(k\leq t\),

\[v_{i}(B_{k})\geq v_{i}(k)\geq v_{i}(t)>\frac{2}{3}>\frac{1}{2}.\]

Note that \(v_{i}(t)+v_{i}(2n-t+1)>1\) and, by Proposition 1, \(v_{i}(t)<3/4+\delta\). Thus, \(v_{i}(2n-t+1)>1/4-\delta\). For all \(k>t\), we have

\[v_{i}(B_{k})=v_{i}(k)+v_{i}(2n-k+1)\geq 2\cdot v_{i}(2n-t+1)>\frac{1}{2}-2\delta,\]

since \(k<2n-k+1\leq 2n-t+1\) and \(v_{i}(2n-t+1)>\frac{1}{4}-\delta\).

**Observation 6**.: \(v_{i}(M\setminus[2n])>\ell(\frac{1}{4}-\delta)\)_._

Proof.: By the definition of \(\ell\), there exists a \(k\in\{\ell,\dots,n\}\) such that \(v_{i}(k)+v_{i}(2n-k+\ell)>1\). Therefore, for all \(j\leq k\) and \(t\leq 2n-k+\ell\), \(v_{i}(j)+v_{i}(t)>1\). Let \(P=(P_{1},\dots,P_{n})\) be an MMS partition of agent \(i\). For \(j\in[k]\), let \(j\in P_{j}\). Note that for different \(j,j^{\prime}\in[k]\), \(P_{j}\) and \(P_{j^{\prime}}\) are different, since \(v_{i}(j)+v_{i}(j^{\prime})>1=v_{i}(P_{j})\). Also note that for every good \(g\in[2n-k+\ell]\) and \(j\in[k]\) with \(g\neq j\), \(g\notin P_{j}\); otherwise \(v_{i}(P_{j})>1\). Therefore, there are at least \(\ell\) bundles \(P_{j}\) among \(P_{1},\dots,P_{k}\) such that \(P_{j}\cap[2n]=\{j\}\). We have

\[v_{i}(M\setminus[2n])\geq\sum_{j\in[k]}v_{i}(P_{j}\setminus\{j\})\geq\sum_{j\in[\ell]}\left(v_{i}(P_{j})-v_{i}(j)\right)>\sum_{j\in[\ell]}\left(1-(\frac{3}{4}+\delta)\right)=\ell(\frac{1}{4}-\delta),\]

using \(v_{i}(j)<\frac{3}{4}+\delta\) from Proposition 1.

We are now ready to prove Lemma 12.

**Lemma 12**.: _For \(\delta\leq 0.01\), given a \(\delta\)-ONI instance with \(|N^{1}_{1}|\leq n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\), all agents \(i\in N^{2}\) receive a bag of value at least \((\frac{3}{4}+\delta)\cdot\text{MMS}_{i}\) at the end of Algorithm 4._

Proof.: It suffices to prove that all agents \(i\in N^{2}\) receive a bag at the end of Algorithm 4. Towards a contradiction, assume that \(i\in N^{2}\) does not receive any bag. For all \(k\in[n]\setminus(A^{-}\cup A^{+})\), since \(v_{i}(B_{k})\geq 3/4+\delta\) and \(i\) has not received a bag, \(\hat{B}_{k}=B_{k}\). Thus, for all \(k\in[n]\setminus(A^{-}\cup A^{+})\),

\[v_{i}(\hat{B}_{k})=v_{i}(B_{k})\leq 1.
\tag{4}\]

We have

\[n=v_{i}(M)=\sum_{k\in A^{-}}v_{i}(\hat{B}_{k})+\sum_{k\in A^{+}}v_{i}(\hat{B}_{k})+\sum_{k\in[n]\setminus(A^{-}\cup A^{+})}v_{i}(\hat{B}_{k})<|A^{-}|(\frac{5}{6}+2\delta)+\left(|A^{+}|+\ell(\frac{1}{12}+\delta)\right)+\left(n-|A^{-}|-|A^{+}|\right)=n-|A^{-}|(\frac{1}{6}-2\delta)+\ell(\frac{1}{12}+\delta),\]

where the inequality follows from Observation 4, Lemma 11 and Inequality (4). Therefore, we have

\[\frac{|A^{-}|}{\ell}<\frac{1/12+\delta}{1/6-2\delta}. \tag{5}\]

Next, we bound the value of the goods in \(M\setminus[2n]\) and contradict Inequality (5). We have

\[\ell(\frac{1}{4}-\delta)\leq v_{i}(M\setminus[2n])=\sum_{k\in A^{-}}\left(v_{i}(\hat{B}_{k})-v_{i}(B_{k})\right)<|A^{-}|\left((\frac{5}{6}+2\delta)-(\frac{1}{2}-2\delta)\right)=|A^{-}|\cdot(\frac{1}{3}+4\delta),\]

where the first inequality is Observation 6, the equality holds since \(M\setminus[2n]=\bigcup_{k\in A^{-}}(\hat{B}_{k}\setminus B_{k})\), and the second inequality follows from Observations 4 and 5. Thus,

\[\frac{|A^{-}|}{\ell}>\frac{1/4-\delta}{1/3+4\delta}. \tag{6}\]

Inequalities (5) and (6) imply that \(\frac{1/12+\delta}{1/6-2\delta}>\frac{1/4-\delta}{1/3+4\delta}\); cross-multiplying, this is equivalent to \(\frac{4\delta}{3}+2\delta^{2}>\frac{1}{72}\), which is a contradiction for \(\delta\leq 0.01\). Thus, all agents \(i\in N^{2}\) receive a bag at the end of Algorithm 4.

**Theorem 3**.: _Given any \(\delta\leq 0.01\), for all \(\delta\)-ONI instances where \(|N^{1}_{1}|\leq n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\), Algorithm 4 returns a \((\frac{3}{4}+\delta)\)-MMS allocation._

Proof.: Since \(N=N^{1}_{1}\cup N^{1}_{2}\cup N^{2}\), by Lemmas 9, 10 and 12, all agents receive a bag of value at least \((\frac{3}{4}+\delta)\cdot\mathrm{MMS}_{i}\) in Algorithm 4.

### Case 2: \(|N_{1}^{1}|>n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\)

In this case, we run Algorithm 5. Starting from an ordered normalized \((3/4+\delta)\)-irreducible instance, as long as there is a bag \(B_{k}\) with value at least \(3/4+\delta\) for some agent, we give \(B_{k}\) to such an agent. The priority is with agents who initially belonged to \(N_{2}^{1}\cup N^{2}\). Therefore, in the remaining instance, all bags are of value less than \(3/4+\delta\) for all the remaining agents. We introduce one more reduction rule in this section:

* \(R_{5}^{\alpha}:\) If \(v_{i}(1)+v_{i}(2)\geq\alpha\) for some \(i\in N\), allocate \(\{1,2\}\) to agent \(i\) and remove \(i\) from \(N\). The priority is with agents in \(N_{2}^{1}\cup N^{2}\).

Starting from an ordered normalized \((3/4+\delta)\)-irreducible instance, after allocating bags of value at least \(3/4+\delta\) to some agents, we run \(R_{5}^{3/4+\delta}\) as long as it is applicable. For ease of notation, we write \(R_{j}\) instead of \(R_{j}^{3/4+\delta}\) for \(j\in[5]\). Then, we run \(R_{2}\) and \(R_{3}\) as long as they are applicable. Afterwards, for all \(k\in[n]\), we initialize \(C_{k}=\{k,2n-k+1,2n+k\}\).‡ See Figure 3 for intuition. Then, we do bag-filling. Let \(\hat{C}_{k}\) be the result of bag-filling on bag \(C_{k}\). The pseudocode of this algorithm is shown in Algorithm 5.

Footnote ‡: Note that it is without loss of generality to assume \(m\geq 3n\). If \(m<3n\), add dummy goods of value \(0\) to everyone. The MMS value of the agents remains the same, and any \(\alpha\)-MMS allocation in the final instance is an \(\alpha\)-MMS allocation in the original instance after removing the dummy goods.
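For intuition, the following Python sketch (ours; the function name, data layout, and the order in which spare goods are added are assumptions, and the reduction phase is assumed to have already run) mirrors the three-good bag initialization and bag-filling structure of Algorithm 5. The `priority` check mirrors the tie-breaking described above: when several agents qualify for a bag, one from \(N_{2}^{1}\cup N^{2}\) is served first.

```python
def bag_fill_three_good_bags(V, alpha, priority):
    """Sketch of the bag-filling phase with bags C_k = {k, 2n-k+1, 2n+k}.

    V[i][j]: agent i's value for good j (0-indexed, ordered instance),
    alpha = 3/4 + delta, priority = set of agents served first.
    Assumes the reduction rules were already applied and m >= 3n.
    """
    n, m = len(V), len(V[0])
    agents = set(range(n))
    # C_k in 0-indexed terms: goods k-1, 2n-k, 2n+k-1 from the text.
    bags = [[k, 2 * n - k - 1, 2 * n + k] for k in range(n)]
    spare = list(range(3 * n, m))        # goods beyond 3n; order is a choice
    allocation = {}

    def satisfied_agent(bag):
        """A priority agent valuing `bag` at >= alpha, else any such agent."""
        cands = [i for i in agents if sum(V[i][g] for g in bag) >= alpha]
        pri = [i for i in cands if i in priority]
        return (pri or cands or [None])[0]

    for bag in bags:
        i = satisfied_agent(bag)
        while i is None and spare:       # fill until someone is satisfied
            bag.append(spare.pop())
            i = satisfied_agent(bag)
        if i is not None:
            allocation[i] = bag
            agents.discard(i)
    return allocation
```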
**Lemma 13**.: _For all agents \(i\in N_{2}^{1}\cup N^{2}\) and every bag \(B\) allocated to an agent in \(N_{2}^{1}\cup N^{2}\) during Algorithm 5, \(v_{i}(B)<3/2+2\delta\)._

Proof.: We prove the lemma by upper bounding the value of the bags allocated at each step.

**Claim 3**.: _For all bags \(B\) allocated to an agent before or during \(R_{5}\), \(v_{i}(B)<3/2+2\delta\)._

Proof.: Since we start with a \((3/4+\delta)\)-irreducible instance, by Proposition 1, for all goods \(g\), \(v_{i}(g)<3/4+\delta\). Therefore, for all the bags \(B\) of size two, we have \(v_{i}(B)<3/2+2\delta\).

**Claim 4**.: _For all bags \(B\) allocated to an agent during \(R_{2}\), \(v_{i}(B)<3/2+2\delta\)._

Proof.: Note that when we run \(R_{2}\), \(R_{5}\) is not applicable. Therefore, \(v_{i}(1)+v_{i}(2)<3/4+\delta\). Hence, \(v_{i}(\{2n-1,2n,2n+1\})\leq v_{i}(\{1,2\})+v_{i}(2n+1)<2(3/4+\delta)=3/2+2\delta\).

**Claim 5**.: _For all bags \(B\) allocated to an agent during \(R_{3}\), \(v_{i}(B)<3/2+2\delta\)._

Proof.: Note that when we run \(R_{3}\), \(R_{5}\) is not applicable. Therefore, \(v_{i}(1)+v_{i}(2)<3/4+\delta\). Hence, \(v_{i}(\{3n-2,3n-1,3n,3n+1\})\leq 2v_{i}(\{1,2\})<3/2+2\delta\).

**Claim 6**.: _For all bags \(B\) allocated to an agent during the bag-filling phase, \(v_{i}(B)<3/2+2\delta\)._

Proof.: If \(B=\{k,2n-k+1,2n+k\}\), then similar to the claims above, \(v_{i}(B)\leq v_{i}(\{1,2\})+v_{i}(2n+k)<2(3/4+\delta)=3/2+2\delta\). Otherwise, let \(g\) be the last good added to \(B\). We have \(v_{i}(B\setminus g)<3/4+\delta\); otherwise \(g\) would not be added to \(B\). Therefore, we have \(v_{i}(B)=v_{i}(B\setminus g)+v_{i}(g)<2(3/4+\delta)=3/2+2\delta\).

By Claims 3, 4, 5 and 6, all bags that are allocated during Algorithm 5 are of value less than \(3/2+2\delta\). Therefore, the lemma holds.

**Lemma 14**.: _For \(\delta\leq 1/20\), given a \(\delta\)-ONI instance with \(|N_{1}^{1}|>n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\), all agents in \(N_{2}^{1}\cup N^{2}\) receive a bag of value at least \(3/4+\delta\) at the end of Algorithm 5._

Proof.: It suffices to prove that all agents \(i\in N_{2}^{1}\cup N^{2}\) receive a bag at the end of Algorithm 5. Towards a contradiction, assume that \(i\in N_{2}^{1}\cup N^{2}\) does not receive any bag.

**Claim 7**.: _For all bags \(B\) which are either unallocated or allocated to an agent in \(N_{1}^{1}\), \(v_{i}(B)<3/4+\delta\)._

The claim holds since the priority is with agents in \(N_{2}^{1}\cup N^{2}\) and since we allocate every bag that has value at least \(3/4+\delta\) for some remaining agent. Let \(S\) be the set of bags allocated to agents in \(N_{2}^{1}\cup N^{2}\) and \(\bar{S}\) be the set of the remaining bags. We have

\[n=v_{i}(M)=\sum_{B\in S}v_{i}(B)+\sum_{B\in\bar{S}}v_{i}(B)\leq(n-|N_{1}^{1}|)\left(\frac{3}{2}+2\delta\right)+|N_{1}^{1}|\left(\frac{3}{4}+\delta\right)=\left(\frac{3}{4}+\delta\right)(2n-|N_{1}^{1}|)<n(\frac{3}{4}+\delta)(2-\frac{\frac{1}{4}-\delta}{\frac{1}{4}+\frac{\delta}{3}})=3n(\frac{5\delta}{3}+\frac{1}{4}),\]

where the first inequality follows from Lemma 13 and Claim 7, and the second from \(|N_{1}^{1}|>n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\). This implies that \(\frac{5\delta}{3}+\frac{1}{4}>\frac{1}{3}\), which is a contradiction with \(\delta\leq 1/20\). Therefore, all agents \(i\in N_{2}^{1}\cup N^{2}\) receive a bag at the end of Algorithm 5.

#### 5.2.1 Agents in \(N_{1}^{1}\)

In this section, we prove that all agents in \(N_{1}^{1}\) also receive a bag at the end of Algorithm 5.
First, we prove a general lemma that lower bounds the MMS value of an agent after allocating \(2k\) goods to \(k\) other agents. This way, we can lower bound the MMS value of agents in \(N_{1}^{1}\) after the sequence of \(R_{5}\) rules is applied.

**Lemma 15**.: _Given a set of goods \(M\) and a valuation function \(v\), let \(S\subseteq M\) be such that \(|S|=2k\) for some \(k<n\), and let \(x\geq 0\) be such that \(v(g)\leq\mbox{MMS}_{v}^{n}(M)/2+x\) for all \(g\in S\). Then, \(\mbox{MMS}_{v}^{n-k}(M\setminus S)\geq\mbox{MMS}_{v}^{n}(M)-2x\)._

Proof.: We construct a partition of a subset of \(M\setminus S\) into \(n-k\) bundles such that the minimum value of these bundles is at least \(\mbox{MMS}_{v}^{n}(M)-2x\). Let \((P_{1},\ldots,P_{n})\) be an MMS partition of \(M\) according to valuation function \(v\). For all \(j\in[n]\), let \(Q_{j}=P_{j}\cap S\). Without loss of generality, assume \(|Q_{1}|\geq\ldots\geq|Q_{n}|\). Let \(t\) be the largest integer such that for all \(\ell\leq t\), \(\sum_{j\in[\ell]}|Q_{j}|\geq 2\ell\). This implies that \(|Q_{t+1}|\leq 1\).

**Claim 8**.: \(\sum_{j\in[t]}|Q_{j}|=2t\)_._

Proof.: If \(|Q_{t+1}|=1\) and \(\sum_{j\in[t]}|Q_{j}|>2t\), then \(\sum_{j\in[t+1]}|Q_{j}|\geq 2(t+1)\), which is a contradiction with the definition of \(t\). If \(|Q_{t+1}|=0\), then \(\sum_{j\in[t]}|Q_{j}|=2k\). If \(t<k\), then \(\sum_{j\in[t+1]}|Q_{j}|=2k\geq 2(t+1)\), which is again a contradiction with the definition of \(t\). So in this case, \(t=k\) and therefore \(\sum_{j\in[t]}|Q_{j}|=2t\). Hence, Claim 8 holds.

**Claim 9**.: \(Q_{2k-t+1}=\emptyset\)_._

Proof.: If \(Q_{2k-t+1}\neq\emptyset\), then \(|Q_{j}|\geq 1\) for all \(j\leq 2k-t+1\). Therefore,

\[\sum_{j\in[2k-t+1]}|Q_{j}|=\sum_{j\leq t}|Q_{j}|+\sum_{t<j\leq 2k-t+1}|Q_{j}|\geq 2t+(2k-2t+1)=2k+1>2k=|S|,\]

using Claim 8 and \(|Q_{j}|\geq 1\) for \(t<j\leq 2k-t+1\), which is a contradiction. Therefore, Claim 9 holds. \(\blacksquare\)

Now we remove the first \(t\) bundles (i.e., \(P_{1},\ldots,P_{t}\)) and merge the next \(k-t\) pairs of bundles after removing \(S\) (i.e., \((P_{t+1}\setminus S)\) with \((P_{t+2}\setminus S)\), and so on) as follows:

\[\hat{P}=\big{(}(P_{t+1}\cup P_{t+2})\setminus S,(P_{t+3}\cup P_{t+4})\setminus S,\ldots,(P_{2k-t-1}\cup P_{2k-t})\setminus S,P_{2k-t+1},\ldots,P_{n}\big{)}.\]

Claim 9 implies that for all \(j>2k-t\), \(P_{j}=P_{j}\setminus S\). Therefore, \(\hat{P}\) is a partition of the goods in \((M\setminus(P_{1}\cup\ldots\cup P_{t}))\setminus S\subseteq M\setminus S\) into \(n-k\) bundles. For all \(j>2k-t\), we have \(v(P_{j})\geq\operatorname{MMS}_{v}^{n}(M)\geq\operatorname{MMS}_{v}^{n}(M)-2x\). Also, for all \(t<j\leq 2k-t\), we have \(|P_{j}\cap S|\leq 1\) and \(v(g)\leq\operatorname{MMS}_{v}^{n}(M)/2+x\) for all \(g\in P_{j}\cap S\). Therefore,

\[v(P_{j}\setminus S)\geq v(P_{j})-(\operatorname{MMS}_{v}^{n}(M)/2+x)\geq\operatorname{MMS}_{v}^{n}(M)/2-x.\]

Thus, for each merged pair, \(v\bigl((P_{j}\cup P_{j+1})\setminus S\bigr)\geq\operatorname{MMS}_{v}^{n}(M)-2x\). Hence, \(\hat{P}\) is a partition of a subset of \(M\setminus S\) into \(n-k\) bundles with minimum value at least \(\operatorname{MMS}_{v}^{n}(M)-2x\). Therefore, Lemma 15 holds. \(\blacksquare\)

**Lemma 16**.: _Let \(i\in N_{1}^{1}\) be a remaining agent after \(R_{5}\) is no longer applicable. Then, before applying further reduction rules, \(\operatorname{MMS}_{i}\geq 1-12\delta\)._

Proof.: We start by proving the following claim.
**Claim 10**.: _Right before applying any \(R_{5}\), \(v_{i}(1)\leq 1/2+6\delta\)._ Proof.: Right before applying any \(R_{5}\), no bag is of value at least \(\frac{3}{4}+\delta\) to any agent, and in particular to agent \(i\). Therefore, \(v_{i}(1)+v_{i}(2n+1)\leq v_{i}(1)+v_{i}(2n)<3/4+\delta\). Since \(v_{i}(2n+1)>1/4-5\delta\) by the definition of \(R_{5}\), we have \(v_{i}(1)<1/2+6\delta\). Therefore the claim holds. \(\blacksquare\) Consider the step right before applying any \(R_{5}\). Note that until this step, only some \(B_{j}\)'s are allocated. Since \(i\in N^{1}\), \(v_{i}(B_{j})\leq 1\) for all \(j\in[n]\), and since \(|B_{j}|=2\), allocating the \(B_{j}\)'s is a valid reduction for agent \(i\) by Lemma 4. Thus, before applying any \(R_{5}\), \(\operatorname{MMS}_{i}\geq 1\). Now let \(\mathcal{I}^{\prime}=([n^{\prime}],M^{\prime},\mathcal{V})\) be the instance after applying the sequence of \(R_{5}\)'s. Claim 10 and Lemma 15 imply that \(\operatorname{MMS}_{v_{i}}^{n^{\prime}}(M^{\prime})\geq 1-12\delta\). \(\blacksquare\) For the sake of contradiction, assume that agent \(i\in N_{1}^{1}\) does not receive a bag at the end of Algorithm 5. By Lemma 16, \(\operatorname{MMS}_{i}\geq 1-12\delta\) after applying the sequence of \(R_{5}\)'s. By Lemma 5, \(R_{2}\) and \(R_{3}\) are valid reductions for \(i\) and, therefore, \(\operatorname{MMS}_{i}\geq 1-12\delta\) at the beginning of the bag-filling phase. Let us abuse the notation and assume the instance at this step is \(([n],[m],\mathcal{V})\). **Lemma 17**.: _Assuming \(\delta\leq 1/212\), for all \(k\in[n]\), if \(v_{i}(C_{k})\leq 1-12\delta\), then \(v_{i}(\hat{C}_{k})\leq 1-12\delta\)._ Proof.: If \(\hat{C}_{k}=C_{k}\), the claim follows. Otherwise, let \(g\) be the last good allocated to \(\hat{C}_{k}\). We have \(v_{i}(\hat{C}_{k}\setminus g)<3/4+\delta\), otherwise \(g\) would not be added to \(\hat{C}_{k}\). Since \(g>3n\), by Proposition 1, \(v_{i}(g)<3/16+\delta/4\). We have \[v_{i}(\hat{C}_{k})=v_{i}(\hat{C}_{k}\setminus g)+v_{i}(g)<\left(\frac{3}{4}+\delta\right)+\left(\frac{3}{16}+\frac{\delta}{4}\right)=\frac{15}{16}+\frac{5\delta}{4}\leq 1-12\delta,\qquad(\delta\leq 1/212)\] **Lemma 18**.: _If \(\delta\leq 1/212\), there exists \(k\in[n]\) such that \(v_{i}(C_{k})>1-12\delta\)._ Proof.: For the sake of contradiction, assume that for all \(k\in[n]\), \(v_{i}(C_{k})\leq 1-12\delta\). Since \(i\) did not receive a bag at the end of Algorithm 5, there exists an unallocated bag \(\hat{C}_{t}\) such that \(v_{i}(\hat{C}_{t})<3/4+\delta\). We have \[v_{i}(M)=\sum_{k\in[n]}v_{i}(\hat{C}_{k})=\sum_{k\neq t}v_{i}(\hat{C}_{k})+v_{i}(\hat{C}_{t})<(n-1)(1-12\delta)+\Big(\frac{3}{4}+\delta\Big)<n(1-12\delta),\] where the first inequality uses Lemma 17 together with \(v_{i}(\hat{C}_{t})<\frac{3}{4}+\delta\), and the second uses \(\delta\leq 1/212\). Note that \(\operatorname{MMS}_{i}\geq 1-12\delta\) and thus \(v_{i}(M)\geq n(1-12\delta)\), which is a contradiction; therefore, Lemma 18 holds. Let \(t\) be largest such that \(v_{i}(C_{t})>1-12\delta\). **Observation 7**.: _Assuming \(\delta\leq 1/212\), \(t>1\)._ Proof.: For the sake of contradiction, assume \(t=1\).
Since \(1-12\delta\geq 3/4+\delta\), we have \[v_{i}(\hat{C}_{1})=v_{i}(C_{1})=v_{i}(1)+v_{i}(2n)+v_{i}(2n+1)\leq v_{i}(1)+v_{i}(2)+\Big(\frac{1}{4}+\frac{\delta}{3}\Big)<\left(\frac{3}{4}+\delta\right)+\left(\frac{1}{4}+\frac{\delta}{3}\right)=1+\frac{4\delta}{3},\] where the second inequality holds by Proposition 1 and the third since \(R_{5}(3/4+\delta)\) is not applicable. Also, since no bag is allocated to agent \(i\), there must be a bag \(C_{\ell}\) with \(v_{i}(\hat{C}_{\ell})<\frac{3}{4}+\delta\). Hence \[n(1-12\delta)\leq v_{i}(M)=v_{i}(\hat{C}_{1})+\sum_{k\in([n]\setminus\{1,\ell\})}v_{i}(\hat{C}_{k})+v_{i}(\hat{C}_{\ell})<\Big(1+\frac{4\delta}{3}\Big)+(n-2)(1-12\delta)+\frac{3}{4}+\delta<n(1-12\delta),\qquad(\text{Lemma 17, and }\delta\leq 1/212)\] which is a contradiction. Thus, \(t>1\). **Observation 8**.: \(v_{i}(2n+t)>1/4-13\delta\)_._ Proof.: We have \[1-12\delta<v_{i}(C_{t})=v_{i}(t)+v_{i}(2n-t+1)+v_{i}(2n+t)\leq v_{i}(1)+v_{i}(2)+v_{i}(2n+t)<\frac{3}{4}+\delta+v_{i}(2n+t),\] using \(t\geq 1\) and \(2n-t+1\geq 2\) for the second inequality, and that \(R_{5}\) is not applicable for the third. Therefore, \(v_{i}(2n+t)>1/4-13\delta\). **Observation 9**.: \(v_{i}(2n-t+1)>3/8-\delta(12+5/6)\)_._ Proof.: Since \(R_{5}\) is not applicable, \(v_{i}(1)+v_{i}(2)<3/4+\delta\) and therefore \(v_{i}(2)<3/8+\delta/2\). We have \[1-12\delta<v_{i}(C_{t})=v_{i}(t)+v_{i}(2n-t+1)+v_{i}(2n+t)\leq v_{i}(2)+v_{i}(2n-t+1)+\Big(\frac{1}{4}+\frac{\delta}{3}\Big)<\Big(\frac{3}{8}+\frac{\delta}{2}\Big)+v_{i}(2n-t+1)+\Big(\frac{1}{4}+\frac{\delta}{3}\Big)=v_{i}(2n-t+1)+\frac{5}{8}+\frac{5\delta}{6},\] where the second inequality uses \(t\geq 2\) (by Observation 7) and \(v_{i}(2n+t)<\frac{1}{4}+\frac{\delta}{3}\) (by Proposition 1). Therefore, \(v_{i}(2n-t+1)>3/8-\delta(12+5/6)\). Now let \(\ell\) be largest such that \(v_{i}(2n+\ell)\geq\delta(26+2/3)\). **Observation 10**.: _If \(\delta\leq 3/476\), then \(\ell\geq t\)._ Proof.: By Observation 8, \(v_{i}(2n+t)>1/4-13\delta\). For \(\delta\leq 3/476\), we have \(1/4-13\delta\geq\delta(26+2/3)\). Thus, \(\ell\geq t\). **Lemma 19**.: _If \(\delta\leq 3/956\), for all \(k\leq\min(\ell,n)\), \(v_{i}(C_{k})\geq 3/4+\delta\)._ Proof.: By Observation 10, we have \(\ell\geq t\). For all \(k\leq t\) we have \[v_{i}(C_{k})=v_{i}(k)+v_{i}(2n-k+1)+v_{i}(2n+k)\geq v_{i}(2n-t+1)+2v_{i}(2n+t)>\left(\frac{3}{8}-\delta(12+\frac{5}{6})\right)+2\left(\frac{1}{4}-13\delta\right)=\frac{7}{8}-\delta(38+\frac{5}{6})\geq\frac{3}{4}+\delta,\] using \(k\leq 2n-t+1\) and \(2n-k+1<2n+k\leq 2n+t\) for the first inequality, Observations 8 and 9 for the second, and \(\delta\leq 3/956\) for the last. Therefore, no good would be added to \(C_{k}\) for \(k\leq t\). Now assume \(t<k\leq\ell\). We have \[v_{i}(C_{k})=v_{i}(k)+v_{i}(2n-k+1)+v_{i}(2n+k)\geq 2v_{i}(2n-t+1)+v_{i}(2n+\ell)>2\left(\frac{3}{8}-\delta(12+\frac{5}{6})\right)+\delta(26+\frac{2}{3})=\frac{3}{4}+\delta,\] using \(k<2n-k+1<2n-t+1\) and \(2n+k\leq 2n+\ell\) for the first inequality, and Observation 9 together with the definition of \(\ell\) for the second. Note that since \(i\) does not receive a bag by the end of Algorithm 5, there must be a remaining bag \(C_{k}\) such that \(v_{i}(C_{k})<3/4+\delta\). Thus, Lemma 19 implies that \(\ell<n\) when \(\delta\leq 3/956\). **Corollary 2** (of Lemma 19).: _If \(\delta\leq 3/956\), for all \(k\leq\ell\), \(\hat{C}_{k}=C_{k}\)._ **Observation 11**.: \(v_{i}(M\setminus\{1,2,\ldots,2n+\ell\})\geq(n-\ell)(1/4-13\delta)\)_._ Proof.: Consider the set of goods \(\{1,2,\ldots,2n+\ell\}\) in the MMS partition of agent \(i\).
At least \(n-\ell\) bags in the MMS partition have at most two goods in \(\{1,2,\ldots,2n+\ell\}\). Let \(P\) be the set of these bags. For all \(B\in P\), we have \(v_{i}(B\cap\{1,2,\ldots,2n+\ell\})\leq 3/4+\delta\), since \(|B\cap\{1,2,\ldots,2n+\ell\}|\leq 2\) and \(R_{5}\) is not applicable. Therefore, \(v_{i}(B\setminus\{1,2,\ldots,2n+\ell\})\geq(1-12\delta)-(3/4+\delta)=1/4-13\delta\). We have \[v_{i}(M\setminus\{1,2,\ldots,2n+\ell\})\geq v_{i}\Big(\bigcup_{B\in P}B\setminus\{1,2,\ldots,2n+\ell\}\Big)\geq(n-\ell)\Big(\frac{1}{4}-13\delta\Big).\qed\] **Lemma 20**.: _If \(\delta\leq 3/796\), for all \(k>\ell\), \(v_{i}(\hat{C}_{k}\setminus\{k,2n-k+1\})<1/4-13\delta\)._ Proof.: Since \(1/4-13\delta\geq\delta(53+1/3)\) for \(\delta\leq 3/796\), it suffices to prove \(v_{i}(\hat{C}_{k}\setminus\{k,2n-k+1\})<\delta(53+1/3)\). Note that for all \(k>\ell\), \(v_{i}(2n+k)<\delta(26+2/3)\). Therefore, if \(\hat{C}_{k}=C_{k}=\{k,2n-k+1,2n+k\}\), the lemma holds. Moreover, we have \[v_{i}(\{k,2n-k+1\})\geq 2v_{i}(2n-t+1)>2\left(\frac{3}{8}-\delta(12+\frac{5}{6})\right)=\frac{3}{4}-\delta(25+\frac{2}{3}), \tag{7}\] using \(k<2n-k+1\leq 2n-t+1\) for the first inequality and Observation 9 for the second. If \(\hat{C}_{k}\neq C_{k}\), let \(g\) be the last good added to \(\hat{C}_{k}\). Since \(g>3n+1>2n+\ell\), \(v_{i}(g)<\delta(26+2/3)\). We have \(v_{i}(\hat{C}_{k}\setminus g)<3/4+\delta\), otherwise \(g\) would not be added to \(\hat{C}_{k}\). We have \[v_{i}(\hat{C}_{k})=v_{i}(\hat{C}_{k}\setminus g)+v_{i}(g)<\left(\frac{3}{4}+\delta\right)+\delta(26+\frac{2}{3})=\frac{3}{4}+\delta(27+\frac{2}{3}),\] by the definition of \(\ell\). Hence, \[\frac{3}{4}+\delta(27+\frac{2}{3})>v_{i}(\hat{C}_{k})=v_{i}(\{k,2n-k+1\})+v_{i}(\hat{C}_{k}\setminus\{k,2n-k+1\})>\frac{3}{4}-\delta(25+\frac{2}{3})+v_{i}(\hat{C}_{k}\setminus\{k,2n-k+1\}),\] by Inequality (7). Thus, \[v_{i}(\hat{C}_{k}\setminus\{k,2n-k+1\})<\delta(53+\frac{1}{3}).\] We are ready to prove Lemma 21. **Lemma 21**.: _For \(\delta\leq 3/956\), given a \(\delta\)-ONI instance with \(|N_{1}^{1}|>n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\), all agents in \(N_{1}^{1}\) receive a bag of value at least \(3/4+\delta\) at the end of Algorithm 5._ Proof.: It suffices to prove that all agents \(i\in N_{1}^{1}\) receive a bag at the end of Algorithm 5. Towards a contradiction, assume that \(i\in N_{1}^{1}\) does not receive any bag. By Lemma 18, there exists a \(k\in[n]\) such that \(v_{i}(C_{k})>1-12\delta\). Recall that \(\ell\) is largest such that \(v_{i}(2n+\ell)\geq\delta(26+2/3)\). We have \[(n-\ell)(\frac{1}{4}-13\delta)\leq v_{i}(M\setminus\{1,2,\ldots,2n+\ell\})=\sum_{k>\ell}v_{i}(\hat{C}_{k}\setminus\{k,2n-k+1\})<(n-\ell)(\frac{1}{4}-13\delta),\] where the first inequality is Observation 11, the equality holds since \(\hat{C}_{k}=C_{k}\) for \(k\in[\ell]\) by Corollary 2, and the last inequality is Lemma 20. This is a contradiction. **Theorem 4**.: _Given any \(\delta\leq 3/956\), for all \(\delta\)-ONI instances where \(|N_{1}^{1}|>n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\), Algorithm 5 returns a \((\frac{3}{4}+\delta)\)-MMS allocation._ Proof.: Consider any agent \(i\). If \(i\in N_{2}^{1}\cup N^{2}\), then by Lemma 14, \(i\) receives a bag of value at least \(\frac{3}{4}+\delta\), and if \(i\in N_{1}^{1}\), then by Lemma 21, \(i\) receives such a bag. Since \(N=N_{1}^{1}\cup N_{2}^{1}\cup N^{2}\), the theorem follows.
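It is worth recording where the constant \(3/956\) comes from: it is exactly the threshold at which the first estimate in the proof of Lemma 19 remains valid, since \[\frac{7}{8}-\Big(38+\frac{5}{6}\Big)\delta\geq\frac{3}{4}+\delta\iff\frac{1}{8}\geq\Big(39+\frac{5}{6}\Big)\delta=\frac{239}{6}\,\delta\iff\delta\leq\frac{6}{8\cdot 239}=\frac{3}{956}.\]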
## 6 \((3/4+\epsilon)\)-MMS allocations In this section, we give the complete algorithm \(\mathtt{mainApproxMMS}(\mathcal{I},\alpha)\) that achieves an \(\alpha\)-MMS allocation for any instance \(\mathcal{I}\) with additive valuations and any \(\alpha=3/4+\epsilon\) for \(\epsilon\leq 3/3836\). To this end, first we obtain a \(\delta\)-ONI instance for \(\delta=4\epsilon/(1-4\epsilon)\) by running \(\mathtt{order}(\mathtt{normalize}(\mathtt{reduce}(\mathcal{I},\epsilon)))\). Then, depending on whether \(|N_{1}^{1}|\leq n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\) or \(|N_{1}^{1}|>n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\), we run \(\mathtt{approxMMS1}\) or \(\mathtt{approxMMS2}\). The pseudocode of our algorithm \(\mathtt{mainApproxMMS}(\mathcal{I},\alpha)\) is shown in Algorithm 6. **Theorem 5**.: _Given any instance \(\mathcal{I}=(N,M,\mathcal{V})\) where agents have additive valuations and any \(\alpha\leq\frac{3}{4}+\frac{3}{3836}\), \(\mathtt{mainApproxMMS}(\mathcal{I},\alpha)\) returns an \(\alpha\)-MMS allocation for \(\mathcal{I}\)._ Proof.: Let \(\epsilon=\alpha-3/4\) and \(\hat{\mathcal{I}}=\mathtt{order}(\mathtt{normalize}(\mathtt{reduce}(\mathcal{I},\epsilon)))\). Then by Theorem 2, \(\hat{\mathcal{I}}\) is ordered, normalized and \((\frac{3}{4}+\frac{4\epsilon}{1-4\epsilon})\)-irreducible (\(\frac{4\epsilon}{1-4\epsilon}\)-ONI). Since \(\epsilon\leq\frac{3}{3836}\), \(\frac{4\epsilon}{1-4\epsilon}\leq\frac{3}{956}=\delta\). Thus, \(\hat{\mathcal{I}}\) is \(\delta\)-ONI. Furthermore, from any \(\beta\)-MMS allocation of \(\hat{\mathcal{I}}\) one can obtain a \(\min(\frac{3}{4}+\epsilon,(1-4\epsilon)\beta)\)-MMS allocation of \(\mathcal{I}\). By Theorem 3, given any \(\delta\leq 3/956\), for all \(\delta\)-ONI instances where \(|N_{1}^{1}|\leq n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\), \(\mathtt{approxMMS1}\) returns a \((\frac{3}{4}+\delta)\)-MMS allocation. Also, by Theorem 4, for all \(\delta\)-ONI instances where \(|N_{1}^{1}|>n(\frac{1}{4}-\delta)/(\frac{1}{4}+\frac{\delta}{3})\), \(\mathtt{approxMMS2}\) returns a \((\frac{3}{4}+\delta)\)-MMS allocation. Therefore, \(\mathtt{mainApproxMMS}(\mathcal{I},\alpha)\) returns a \(\min(\frac{3}{4}+\epsilon,(1-4\epsilon)(\frac{3}{4}+\delta))\)-MMS allocation of \(\mathcal{I}\). We have \[(1-4\epsilon)(\frac{3}{4}+\delta)\geq(1-\frac{3}{959})(\frac{3}{4}+\frac{3}{956})=\frac{3}{4}+\frac{3}{3836}\geq\frac{3}{4}+\epsilon=\alpha.\] Thus, \(\mathtt{mainApproxMMS}(\mathcal{I},\alpha)\) returns an \(\alpha\)-MMS allocation of \(\mathcal{I}\). ## Appendix A Missing Proofs **Lemma 3**.: _[AGST23] Let \(([n],[m],\mathcal{V})\) be an ordered and normalized fair division instance. For all \(k\in[n]\) and agents \(i\in[n]\), if \(v_{i}(k)+v_{i}(2n-k+1)>1\), then \(v_{i}(2n-k+1)\leq 1/3\) and \(v_{i}(k)>2/3\)._ Proof.: It suffices to prove \(v_{i}(2n-k+1)\leq 1/3\), and then \(v_{i}(k)>2/3\) follows. Let \(P=(P_{1},\ldots,P_{n})\) be an MMS partition of agent \(i\). For \(j\in[k]\) and \(j^{\prime}\in[2n+1-k]\), \(v_{i}(j)+v_{i}(j^{\prime})\geq v_{i}(k)+v_{i}(2n+1-k)>1\), since the instance is ordered. Furthermore, \(j\) and \(j^{\prime}\) cannot be in the same bundle in \(P\) since the instance is normalized. In particular, no two goods from \([k]\) are in the same bundle in \(P\). Hence, assume without loss of generality that \(j\in P_{j}\) for all \(j\in[k]\). For all \(j\in[k]\) and \(j^{\prime}\in[2n-k+1]\setminus\{j\}\), \(j^{\prime}\not\in P_{j}\). Thus, \(\{k+1,\ldots,2n-k+1\}\subseteq P_{k+1}\cup\ldots\cup P_{n}\).
By the pigeonhole principle, there exists a bundle \(B\in\{P_{k+1},\ldots,P_{n}\}\) that contains at least \(3\) goods \(g_{1},g_{2},g_{3}\) in \(\{k+1,\ldots,2n-k+1\}\). Hence, \[v_{i}(2n-k+1)\leq\min_{g\in\{g_{1},g_{2},g_{3}\}}v_{i}(g)\leq\frac{1}{3}\sum_{g\in\{g_{1},g_{2},g_{3}\}}v_{i}(g)\leq\frac{v_{i}(B)}{3}=\frac{1}{3}.\qed\] **Lemma 11**.: _[AGST23]_ \(\sum_{k\in A^{+}}v_{i}(\hat{B}_{k})<|A^{+}|+\ell(\frac{1}{12}+\delta)\)_._ Proof.: Let \(S\subseteq A^{+}\) be the set of the \(\ell\) smallest indices in \(A^{+}\) and \(L\subseteq A^{+}\) be the set of the \(\ell\) largest indices in \(A^{+}\). Since \(\hat{B}_{k}=B_{k}\) for all \(k\in A^{+}\), we have \[\sum_{k\in A^{+}}v_{i}(\hat{B}_{k})=(\sum_{k\in S}v_{i}(k)+\sum_{k\in L}v_{i}(2n-k+1))+(\sum_{k\in A^{+}\setminus S}v_{i}(k)+\sum_{k\in A^{+}\setminus L}v_{i}(2n-k+1)).\] We upper bound \((\sum_{k\in S}v_{i}(k)+\sum_{k\in L}v_{i}(2n-k+1))\) and \((\sum_{k\in A^{+}\setminus S}v_{i}(k)+\sum_{k\in A^{+}\setminus L}v_{i}(2n-k+1))\) in Claims 11 and 12 respectively. **Claim 11**.: \(\sum_{k\in S}v_{i}(k)+\sum_{k\in L}v_{i}(2n-k+1)<\ell(\frac{13}{12}+\delta)\)_._ Proof.: Note that \(v_{i}(k)<3/4+\delta\) by Proposition 1 and \(v_{i}(2n-k+1)\leq 1/3\) by Lemma 3. Thus, \[\sum_{k\in S}v_{i}(k)+\sum_{k\in L}v_{i}(2n-k+1)<\ell(\frac{3}{4}+\delta+\frac{1}{3})=\ell(\frac{13}{12}+\delta).\] Therefore, Claim 11 holds. \(\blacksquare\) **Claim 12**.: \(\sum_{k\in A^{+}\setminus S}v_{i}(k)+\sum_{k\in A^{+}\setminus L}v_{i}(2n-k+1)<|A^{+}|-\ell\)_._ Proof.: Assume \(A^{+}=\{g_{1},\ldots,g_{|A^{+}|}\}\) and \(g_{1}<\ldots<g_{|A^{+}|}\). Then, \(A^{+}\setminus S=\{g_{\ell+1},\ldots,g_{|A^{+}|}\}\) and \(A^{+}\setminus L=\{g_{1},\ldots,g_{|A^{+}|-\ell}\}\). The idea is to pair the goods \(g_{k+\ell}\) and \(2n-g_{k}+1\) and prove that their value is less than \(1\) for agent \(i\). Since \(g_{k+\ell}\geq g_{k}+\ell\), \(v_{i}(g_{k+\ell})+v_{i}(2n-g_{k}+1)<1\) by the definition of \(\ell\). We have \[\sum_{k\in A^{+}\setminus S}v_{i}(k)+\sum_{k\in A^{+}\setminus L}v_{i}(2n-k+1)=\sum_{k\in[|A^{+}|-\ell]}(v_{i}(g_{k+\ell})+v_{i}(2n-g_{k}+1))<|A^{+}|-\ell.\] Therefore, Claim 12 holds. \(\blacksquare\) Claims 11 and 12 together imply Lemma 11. \(\blacksquare\)
2305.09550
Life of PII -- A PII Obfuscation Transformer
Protecting sensitive information is crucial in today's world of Large Language Models (LLMs) and data-driven services. One common method used to preserve privacy is by using data perturbation techniques to reduce overreaching utility of (sensitive) Personal Identifiable Information (PII) data while maintaining its statistical and semantic properties. Data perturbation methods often result in significant information loss, making them impractical for use. In this paper, we propose 'Life of PII', a novel Obfuscation Transformer framework for transforming PII into faux-PII while preserving the original information, intent, and context as much as possible. Our approach includes an API to interface with the given document, a configuration-based obfuscator, and a model based on the Transformer architecture, which has shown high context preservation and performance in natural language processing tasks and LLMs. Our Transformer-based approach learns mapping between the original PII and its transformed faux-PII representation, which we call "obfuscated" data. Our experiments demonstrate that our method, called Life of PII, outperforms traditional data perturbation techniques in terms of both utility preservation and privacy protection. We show that our approach can effectively reduce utility loss while preserving the original information, offering greater flexibility in the trade-off between privacy protection and data utility. Our work provides a solution for protecting PII in various real-world applications.
Ajinkya Deshmukh, Saumya Banthia, Anantha Sharma
2023-05-16T15:48:36Z
http://arxiv.org/abs/2305.09550v2
# Life of PII - A PII Obfuscation Transformer ###### Abstract Protecting sensitive information is crucial in today's world of Large Language Models (LLMs) and data-driven services. One common method used to preserve privacy is by using data perturbation techniques to reduce overreaching utility of (sensitive) Personal Identifiable Information (PII) data while maintaining its statistical and semantic properties. Data perturbation methods often result in significant information loss, making them impractical for use. In this paper, we propose 'Life of PII', a novel Obfuscation Transformer framework for transforming PII into faux-PII while preserving the original information, intent, and context as much as possible. Our approach includes an API to interface with the given document, a configuration-based obfuscator, and a model based on the Transformer architecture, which has shown high context preservation and performance in natural language processing tasks and LLMs. Our Transformer-based approach learns mapping between the original PII and its transformed faux-PII representation, which we call "obfuscated" data. Our experiments demonstrate that our method, called Life of PII, outperforms traditional data perturbation techniques in terms of both utility preservation and privacy protection. We show that our approach can effectively reduce utility loss while preserving the original information, offering greater flexibility in the trade-off between privacy protection and data utility. Our work provides a solution for protecting PII in various real-world applications. ## 1 Introduction The use of LLMs like Chat-GPT is increasing around the world; people from all backgrounds have started to use these tools to keep pace with the world and to make their lives easier. Many financial and insurance companies and investment banks want to use these LLMs to satisfy their requirements. One of the main challenges they face is maintaining data privacy, because these LLMs are not yet hostable on their private servers. Most financial institutions' data is rich in PII and therefore carries an extra risk when sent out. One solution to this problem is to transform and obfuscate this data prior to sending it to LLMs like Chat-GPT, get the response from the LLMs, and then re-transform this obfuscated response to get the final answer. This is the approach we discuss in this paper. In a previous work, we explored transformer-based models for use in Question Answering [1], which gives us the opportunity and scope to provide a solution to this problem. Our approach performs the transformation and re-transformation of data within the organization, maintaining data privacy. It also ensures that the transformed and obfuscated data retains its semantic meaning. This paper focuses on the use of Python libraries and Natural Language Processing techniques to provide these transformation and re-transformation facilities. ## 2 Methodology To transform the data, three transformation techniques were implemented: User Provided Tokens (UPT) Transformation, Named Entity Recognition (NER) Transformation, and Part of Speech (PoS) Transformation; the responses from LLMs were then checked for different combinations of these techniques. The flow for the text transformation and LLM question-and-answering system is shown in Figure 1. Figure 1: Proposed Text Transformation Flow for LLMs Question and Answering The synonym-replacement step is carried out to ensure the transformation of a given word and to avoid different transformations for the same word.
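To make this flow concrete, the sketch below combines a user-provided token map (UPT) with spaCy-based named-entity replacement. All names here (the `PiiObfuscator` class, the example token map, the `en_core_web_sm` model) are illustrative choices rather than the exact implementation:

```python
# Minimal sketch of the transform -> LLM -> re-transform flow.
# Assumes spaCy is installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy


class PiiObfuscator:
    """Illustrative configuration-based obfuscator (UPT + NER)."""

    def __init__(self, user_tokens):
        self.user_tokens = dict(user_tokens)  # UPT: words the user wants to hide
        self.mapping = {}                     # token -> original text
        self.nlp = spacy.load("en_core_web_sm")

    def transform(self, text):
        # UPT: replace the user-provided words first.
        for word, token in self.user_tokens.items():
            if word in text:
                text = text.replace(word, token)
                self.mapping[token] = word
        # NER: replace the remaining named entities with N0, N1, ...
        doc = self.nlp(text)
        for i, ent in enumerate(doc.ents):
            token = f"N{i}"
            text = text.replace(ent.text, token)
            self.mapping[token] = ent.text
        return text

    def retransform(self, text):
        # Undo the substitutions in reverse order of application.
        for token, original in reversed(list(self.mapping.items())):
            text = text.replace(token, original)
        return text


obf = PiiObfuscator({"Eastern Richard": "Meridian", "Krypton": "D202"})
safe = obf.transform("Project Krypton of Eastern Richard has a red status.")
# `safe` is what gets sent to the LLM; the LLM's answer is then passed
# through obf.retransform(...) to recover the original terms.
```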
The transformation techniques used are as below: ### UPT Transformation UPT Transformation consists of providing the words which the user wants to hide inside a configuration, along with tokens for those words. After applying UPT Transformation, these words are replaced with the tokens the user has provided. For example, suppose the user provides the token 'D202' for the word 'Krypton' and the token 'Meridian' for the words 'Eastern Richard'. The result of this transformation will be: \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Original Text & The **Eastern Richard** Company Monthly Status Report states that it is performing good, but Project **Krypton** has a red status. \\ \hline UPT Transformed Text & The **Meridian** Company Monthly Status Report states that it is performing good, but Project **D202** has a red status. \\ \hline \end{tabular} This transformation hides the information the user wanted to hide while preserving semantic meaning: 'Eastern Richard' is hidden behind 'Meridian', and 'Meridian' keeps the semantic meaning, as it still sounds like a company name. ### NER Transformation NER Transformation consists of identifying any named entities in the text [2] and replacing these named entities with tokens. Considering the same example as above, the result of NER transformation will be: \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Original Text & **The Eastern Richard Company Monthly Status Report** states that it is performing good, but **Project Krypton** has a red status. \\ \hline NER Transformed Text & **N0** states that it is performing good, but **N1** has a red status. \\ \hline \end{tabular} It can be observed that the named entities found in the original text are 'The Eastern Richard Company Monthly Status Report' and 'Project Krypton', which were replaced with 'N0' and 'N1'. ### PoS Transformation PoS Transformation consists of identifying parts of speech, such as nouns, in the text [2] and replacing these nouns with tokens. Considering the same example as above, the result of PoS transformation will be: \begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline Original Text & The **Eastern Richard Company Monthly Status** Report states that it is performing good, but **Project Krypton** has a red status. \\ \hline PoS Transformed Text & The **P0** Report states that it is performing good, but **P1** has a red status. \\ \hline \end{tabular} It can be observed that the PoS, i.e., the nouns, found in the original text are 'Eastern Richard Company Monthly Status' and 'Project Krypton', which were replaced with 'P0' and 'P1'. ### IL When the user follows the procedure of applying a transformation technique to the data, sending this data to the LLM, getting the transformed response, applying the re-transformation and obtaining the final response, it is possible that the user loses some information in the final response, compared with the response the user would otherwise have obtained without any transformation, because transformed data was provided to the LLM. This can be understood with an example: suppose 'Response1' is the response the user obtained from the LLM without any transformation, and 'Response2' is the re-transformed response the user obtained from the LLM using some transformation technique.
Response1 = "According to our analysis, the company's revenue for the first quarter of 2023 increased **by 15%** compared to the same period last year, reaching a **total of $10 million.** This growth was driven by a 20 % increase in sales of our flagship product, which accounted for 60% of the total revenue. However, operating expenses also increased **by 10%**, mainly due to higher marketing and research and development costs. As a result, the company's net profit for the quarter was **$1.2 million, a 12% increase** from last year. Overall, the company's performance for the quarter was positive, but we recommend monitoring expenses closely to maintain profitability." Response2 = "According to our analysis, the company's revenue for the first quarter of 2023 increased compared to the same period last year. This growth was driven by a 20% increase in sales of our flagship product, which accounted for 60% of the total revenue. However, operating expenses also increased, mainly due to higher marketing and research and development costs. As a result, the company's net profit for the quarter was positive, but we recommend monitoring expenses closely to maintain profitability." As can be seen from the responses, Response2 lacks some important information that was present in Response1, such as 'company's revenue increased by 15%', 'company's revenue is of $10 million', 'operating expenses also increased by 10%', and 'company's net profit for the quarter was $1.2 million, a 12% increase'. The information that is lost in Response2 compared with Response1 is called 'Information loss'. This information can often be crucial, so transformation techniques need to be chosen in such a way that the user incurs minimum information loss, or at least an information loss that can be tolerated. Information Loss (IL) is expressed as a percentage, e.g., 20% information loss. Two methods were used here for the calculation of IL, combined as IL = (0.5 * Manual information loss) + (0.5 * Similarity based information loss) Similarity based information loss (ILS): This method uses a Hugging Face sentence-transformer model to calculate the cosine similarity between the two responses (e.g., Response1 and Response2). It converts the responses into embeddings, compares how similar these sentences are based on cosine similarity, and returns a similarity score ranging from 0 to 1. ILS is calculated as ILS = 1 - Similarity score Manual information loss (ILM): One drawback of ILS is that it can report an information loss greater than 0% even if the two responses have the same meaning. For example, if the user provides the context "Mango is Fruit" and asks the question "What is Mango?", then one response can be "Mango is a fruit" and the other response can be "A fruit". Both of these responses give the right answer to the question and there is no information loss here, but ILS can still report an information loss greater than 0, because the two responses are not the same and occupy different positions in the vector space when compared via cosine similarity. To overcome this, ILM is also taken into consideration, adding a human into the loop. Manual information loss is the loss obtained when a human analyses both responses (e.g., Response1 and Response2), compares the loss of information, and provides an information loss value ranging from 0 to 1. For this comparison, a person can consider the information he/she finds important; it can be figures, names or anything else.
The method of calculation of ILM is given as ILM = (information lost from the response obtained with the Transformation Technique) / (total important information in the response obtained without the Transformation Technique) Consider Response1 and Response2 to understand ILM. The important information lost from Response2 consists of the figures "15%", "total of $10 million", "by 10%", "$1.2 million" and "a 12% increase", so this count is 5. The total important information in Response1 is "15%", "total of $10 million", "by 10%", "$1.2 million", "a 12% increase", "first quarter of 2023", "20% increase", "flagship product" and "60% of the total revenue", and the count of this is 9. So, ILM is 5/9 ≈ 0.56. ### Transformation cycle The complete process of applying a Transformation Technique to the data, sending this data to the LLM, obtaining the transformed response from the LLM, applying the re-transformation on this response and obtaining the final response is called a 'Transformation cycle'. This term is used in the paper for ease of understanding and to avoid describing this cycle again and again. ### STT When the Transformation cycle is applied using UPT Transformation, and the user has provided tokens that are words on which the LLM was trained, it is possible that the LLM uses its trained understanding in its response, which means it is sensitive to some of the tokens. This can be understood with an example: suppose the user has given the token 'Mango' for the word 'Rose'. The Transformation cycle is as below. \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Original Text & **Rose** is a flowering plant that is widely recognized for its beauty, fragrance, and symbolic significance. It belongs to the family Rosaceae and is native to Asia but is now cultivated in many parts of the world. **Rose** comes in a variety of colors, such as red, pink, yellow, and white, and is commonly used in gardens, bouquets, and various decorative arrangements. \\ \hline UPT Transformed Text & **Mango** is a flowering plant that is widely recognized for its beauty, fragrance, and symbolic significance. It belongs to the family Rosaceae and is native to Asia but is now cultivated in many parts of the world. **Mango** comes in a variety of colors, such as red, pink, yellow, and white, and is commonly used in gardens, bouquets, and various decorative arrangements. \\ \hline \end{tabular} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Question & What is **Rose**? \\ \hline UPT Transformed question along with transformed context & The **Mango** is a flowering plant that is widely recognized for its beauty, fragrance, and symbolic significance. It belongs to the family Rosaceae and is native to Asia but is now cultivated in many parts of the world. **Mango** comes in a variety of colors, such as red, pink, yellow, and white, and is commonly used in gardens, bouquets, and various decorative arrangements. What is **Mango**? \\ \hline \end{tabular} So, the user has provided the information that 'Mango is a flower'. This user-provided knowledge can conflict with the information 'Mango is a fruit' on which the LLM is already trained. When the user asks the LLM 'What is Mango?', it gives the response 'Mango is a fruit'; so even though the user has provided the information that 'Mango is a flower', the LLM is still using the data on which it was pre-trained, which means that it is sensitive to this kind of information.
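A simple way to flag this behaviour automatically is a string-level check of whose knowledge the answer reflects. The following heuristic is an illustrative sketch tied to this Mango/Rose example, not part of our implementation:

```python
# Illustrative STT check for the Mango/Rose example above: if the answer
# reflects pre-trained knowledge ("fruit") instead of the user-provided
# context ("flowering plant"), the transformation technique is sensitive.
def stt_flag(answer: str,
             provided_fact: str = "flowering plant",
             pretrained_fact: str = "fruit") -> int:
    answer = answer.lower()
    if pretrained_fact in answer and provided_fact not in answer:
        return 1  # sensitive: the pre-trained knowledge won
    return 0      # not sensitive

print(stt_flag("Mango is a fruit."))            # 1, i.e., STT of 100%
print(stt_flag("Mango is a flowering plant."))  # 0, i.e., STT of 0%
```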
Likewise, LLMs can be sensitive to many types of words and will give a different response than expected. This is called Sensitivity to Transformation Technique (STT), and it should be taken into account before assigning tokens; otherwise the user will get responses that come from the LLM's trained understanding rather than from the information the user provides to it. LLMs can also give out-of-context answers drawn from the data on which they were trained. STT is expressed in any of these formats: yes or no, 100% or 0%, 1 or 0. An STT of 100% (or 1) means the LLM is sensitive to the Transformation Technique and will use its pre-trained knowledge rather than the provided information to generate the response. ### Prompt Engineering Prompt engineering refers to the process of designing effective prompts or input examples that help LLMs learn to perform a specific task. It can also be seen as a way of communicating more effectively with LLMs, such that the resulting output adheres more closely to the context and constraints within which the problem needs to be addressed. This is a need more than a good-to-have, as LLMs are prone to hallucinating - generating real-looking responses that are factually inaccurate. As part of our technique, we use Prompt Engineering for two main reasons: * LLMs give out-of-context answers in their vanilla state. * We do this by PoS Transformation-prompting, which means adding an instructional prompt at the end of each context and question pair. There are better ways to do this, such as prompt evaluation, which would be beyond the scope of discussion in this paper. ## 3 Experimental Results We compared the responses from LLMs for different combinations of UPT Transformation, NER Transformation and PoS Transformation. In these combinations, a Transformation Technique is either used alone, or different stages are applied, i.e., another Transformation Technique is performed on top of the first one. As an example, UPT+NER means that first UPT Transformation is applied and then NER Transformation. At the time of re-transformation, the same stages are applied in reverse order to get the clarified response: first the NER re-transformation is done and then the UPT re-transformation to obtain the response. We obtained responses for three kinds of questions. 1. Pointed questions: straightforward questions having one- or two-line answers. A sample question is 'Who is the owner of Facebook?' and the answer is 'The owner of Facebook is Mark Zuckerberg.' 2. Key questions: more complex questions having answers of more than two lines. An example question is 'What are the key accomplishments?' 3. Summarizing questions: questions for summarizing reports. An example question is 'Summarize this report'. We asked 40 questions of the above three kinds to the LLM (the LLM considered here is Chat-GPT), calculated STT, ILM, ILS and IL for all these responses, and calculated the average of these measures for each Transformation Technique; the results are presented in Table 1.1. From Table 1.1, IL is observed to be minimum for UPT and maximum for UPT+NER+PoS. This seems obvious, as the latter has more stages of Transformation Technique applied to it. All other techniques give IL in a similar range.
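For reference, the similarity scores behind ILS were obtained from a sentence-transformer model. A minimal sketch of this computation, assuming the `sentence-transformers` package and using `all-MiniLM-L6-v2` as an illustrative checkpoint (the specific model is not fixed by the method), is:

```python
# Minimal sketch of the ILS and IL computations described above.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def ils(response_plain: str, response_retransformed: str) -> float:
    """Similarity-based information loss: ILS = 1 - cosine similarity."""
    emb = model.encode([response_plain, response_retransformed],
                       convert_to_tensor=True)
    return 1.0 - util.cos_sim(emb[0], emb[1]).item()

def il(ilm: float, ils_value: float) -> float:
    """Overall information loss: IL = 0.5*ILM + 0.5*ILS."""
    return 0.5 * ilm + 0.5 * ils_value

# Example with ILM = 5/9 from the Response1/Response2 comparison above:
# print(il(5 / 9, ils(response1, response2)))
```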
From the above information losses, which were obtained based on the final responses from the LLMs, it is observed that we outperform traditional data perturbation techniques in terms of both utility preservation and privacy protection, as the LLMs gave these responses based on our Transformation Techniques. Also, STT is observed for Transformation Techniques like NER, PoS, UPT+NER and NER+PoS. This means that, for these techniques, the LLM was sensitive for a few of the questions, i.e., it gave answers from the data on which it was pre-trained. Table 1.2 shows IL and STT for the final responses to the questions when Prompt Engineering is used. It can be observed that Prompt Engineering has reduced STT to zero for almost all the techniques. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Transformation Technique** & **STT** & **ILM** & **ILS** & **IL** \\ \hline UPT & 0.00\% & 1.28\% & 12.70\% & 6.99\% \\ \hline NER & 7.69\% & 35.90\% & 16.40\% & 26.15\% \\ \hline PoS & 2.56\% & 13.85\% & 14.12\% & 13.98\% \\ \hline UPT + NER & 2.56\% & 28.97\% & 27.94\% & 28.46\% \\ \hline UPT + PoS & 0.00\% & 22.82\% & 19.95\% & 21.39\% \\ \hline NER + PoS & 5.13\% & 32.95\% & 26.72\% & 29.84\% \\ \hline UPT + NER + PoS & 0.00\% & 43.08\% & 33.80\% & 38.44\% \\ \hline \end{tabular} \end{table} Table 1.1: Experiment Result for questions \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Transformation Technique** & **STT** & **ILM** & **ILS** & **IL** \\ \hline UPT & 0.00\% & 1.28\% & 12.70\% & 6.99\% \\ \hline NER & 0.00\% & 34.21\% & 23.63\% & 28.48\% \\ \hline PoS & 2.56\% & 13.85\% & 14.12\% & 13.98\% \\ \hline UPT + NER & 0.00\% & 28.97\% & 29.82\% & 29.39\% \\ \hline UPT + PoS & 0.00\% & 22.82\% & 19.95\% & 21.39\% \\ \hline NER + PoS & 0.00\% & 32.95\% & 31.14\% & 32.04\% \\ \hline UPT + NER + PoS & 0.00\% & 43.08\% & 33.80\% & 38.44\% \\ \hline \end{tabular} \end{table} Table 1.2: Experiment Result for questions with Prompt Engineering ## 4 Conclusion We have presented and tested Life of PII for sending information to LLMs while ensuring the protection of critical information and keeping the semantic meaning, thereby ensuring appropriate responses from the LLMs. The selection of the Transformation Technique to use depends on the type of use case. For use cases where only a few terms or pieces of information need to be protected and minimum information loss is required, UPT can be used. For use cases where protecting most of the information is the top priority and information loss is a lower priority, UPT+NER+PoS can be used. If a balance between protecting information and low information loss is required, then the other Transformation Techniques can be used. Also, to ensure LLMs give responses within context, prompt engineering can be used. Currently, in UPT, tokens are provided manually by the user in order to have full control over the technique, which can make the process a little slower. In the future we will work on automating the token-provision process, to make the process faster and remove the user's task of adding appropriate tokens.
2304.08573
An isomorphism of unitals, and an isomorphism of classical groups
An isomorphism between two hermitian unitals is proved, and used to treat isomorphisms of classical groups that are related to the isomorphism between certain simple real Lie algebras of types A and D (and rank 3).
Markus Johannes Stroppel
2023-04-17T19:30:26Z
http://arxiv.org/abs/2304.08573v1
# An isomorphism of unitals, and an isomorphism of classical groups ###### Abstract An isomorphism between two hermitian unitals is proved, and used to treat isomorphisms of classical groups that are related to the isomorphism between certain simple real Lie algebras of types A and D (and rank 3). In the present paper, we use an isomorphism between two hermitian unitals to treat isomorphisms of classical groups that are related to the isomorphism between the simple real Lie algebras of type \(\mathsf{A}_{3}^{\mathbb{C},\mathbb{I}}\) and \(\mathsf{D}_{3}^{\mathbb{H}}\) (in the notation of Tits [18, pp. 28, 40]; Helgason [6, X § 2.1, § 6.2] denotes the algebras in question by \(\mathfrak{su}(3,1)\) and \(\mathfrak{so}^{*}(6)\), respectively). Our incidence geometric approach complements the algebraic approach used in [12, 2.14] by a geometric explanation for the exceptional isomorphism of classical groups. That algebraic approach works in much greater generality, including certain characteristic two cases where the unital over the quaternions collapses into a line, and cannot be used for our purposes. ## 1 Hermitian unitals We generalize the notion of finite hermitian unital (see [1, p. 104]) to the case of hermitian forms over infinite (and not necessarily commutative) fields, as follows. **1.1 Definitions**.: Let \(K\) be any (not necessarily commutative) field, and let \(\sigma\) be an anti-automorphism of \(K\), with \(\sigma^{2}=\operatorname{id}\neq\sigma\). If \(V\) is a vector space over \(K\), and \(h\colon V\times V\to K\) is a non-degenerate \(\sigma\)-hermitian form, we define the set \(U_{h}:=\{Kv\in\operatorname{Gr}_{1}(V)\,|\;v\perp_{h}v\}=\{P\in\operatorname{Gr}_{1}(V)\,|\;P\leqq P^{\perp_{h}}\}\) of _absolute points_ (_with respect to \(h\)_). If \(d:=\dim V\) is finite, the hermitian form \(h\) defines a polarity \(\pi_{h}\) of the projective space \(\operatorname{PG}(V)\cong\operatorname{PG}(d-1,K)\) (see [2, I, § 5, p. 9 ff], [7, II.6, p. 45 ff]). The set \(U_{h}\) then consists of all points of \(\operatorname{PG}(V)\) that are incident with their image under that polarity. Consider a line \(L\in\operatorname{Gr}_{2}(V)\). If the set \(U_{h}\cap\operatorname{Gr}_{1}(L)\) of absolute points on \(L\) contains more than one point then it is called a _block_ of \(U_{h}\). The set of all these blocks is denoted by \(\mathcal{B}_{h}\). Clearly, any two points of \(U_{h}\) are joined by a unique member of \(\mathcal{B}_{h}\). If the form \(h\) has Witt index \(1\), we call \((U_{h},\mathcal{B}_{h},\in)\) the _hermitian unital with respect to \(h\)_. **1.2 Lemma**.: _Assume that \(h\colon V\times V\to K\) is a non-degenerate \(\sigma\)-hermitian form of Witt index \(1\). If \(h\) is trace-valued then the set of blocks through a given point \(P\in U_{h}\) is_ \[\left\{U_{h}\cap\operatorname{Gr}_{1}(L)\,\Big{|}\,L\in\operatorname{Gr}_{2}(V),P<L\nleqq P^{\perp_{h}}\right\}\,.\] Proof.: We write \(h\colon V\times V\to K\colon(x,y)\mapsto\langle x|y\rangle\). Recall (see [2, I, § 10, p. 19]) that \(h\) is trace-valued if, and only if, the set \(\{\langle v|v\rangle\mid v\in V\}\) is contained in \(\{s+s^{\sigma}\mid s\in K\}\). Consider any line \(L\in\operatorname{Gr}_{2}(V)\) through \(P\in U_{h}\). Then \(P=Kv\) with \(v\in V\smallsetminus\{0\}\) such that \(\langle v|v\rangle=0\). If \(L\leqq P^{\perp_{h}}\) then every \(w\in L\smallsetminus Kv\) satisfies \(\langle w|w\rangle\neq 0\) because \(h\) has Witt index \(1\).
So \(P\) is the unique absolute point in \(L\), and \(U_{h}\cap\operatorname{Gr}_{1}(L)\) contains no block. If \(L\nleqq P^{\perp_{h}}\), we pick any \(x\in L\smallsetminus P\); then \(\langle x|v\rangle\neq 0\). Replacing \(x\) by a suitable scalar multiple, we achieve \(\langle x|v\rangle=-1\). For each \(s\in K\), we now have \(K(sv+x)\in L\) and \(\langle sv+x|sv+x\rangle=\langle sv|sv\rangle+\langle sv|x\rangle+\langle x|sv\rangle+\langle x|x\rangle=s\langle v|v\rangle s^{\sigma}+s\langle v|x\rangle+\langle x|v\rangle s^{\sigma}+\langle x|x\rangle=0-s-s^{\sigma}+\langle x|x\rangle\). If the form \(h\) is trace-valued, we find \(s\) such that \(s+s^{\sigma}=\langle x|x\rangle\), and \(K(sv+x)\) is a second absolute point on \(L\). So \(U_{h}\cap\operatorname{Gr}_{1}(L)\) is indeed a block in that case. From [2, I, § 10, p. 19] we recall that every \(\sigma\)-hermitian form over a field \(K\) with \(\operatorname{char}K\neq 2\) is trace-valued. Also, if \(\sigma\) acts non-trivially on the center of \(K\) (in particular, if \(K\) is commutative) then every \(\sigma\)-hermitian form is trace-valued. **1.3 Examples**.: Let \(C|R\) be a separable quadratic extension of commutative fields, and let \(\sigma\) be the generator of \(\operatorname{Gal}(C|R)\). Then the form \[h\colon C^{3}\times C^{3}\to C\colon\big{(}(x_{0},x_{1},x_{2}),(y_{0},y_{1},y_{2})\big{)}\mapsto x_{0}y_{2}^{\sigma}+x_{1}y_{1}^{\sigma}+x_{2}y_{0}^{\sigma}\] is not degenerate, trace-valued, and has Witt index \(1\). If \(C\) is finite of order \(e\) then the hermitian unital \((U_{h},\mathcal{B}_{h},\in)\) is the finite hermitian unital of order \(e\). **1.4 Definitions**.: Let \(\mathbb{U}:=(U_{h},\mathcal{B}_{h},\in)\) be the hermitian unital with respect to a non-degenerate hermitian form \(h\colon V\times V\to K\) of Witt index \(1\), let \(X\in U_{h}\) be a point of \(\mathbb{U}\), and let \((P,\mathcal{L},I)\) be any incidence structure. A map \(\eta\colon U_{h}\to P\) is called an _isomorphism_ from \(\mathbb{U}\) onto \((P,\mathcal{L},I)\) if \(\eta\) is bijective, for every block \(B\in\mathcal{B}_{h}\) there exists a unique block \(B^{\prime}\in\mathcal{L}\) with \(B^{\eta}=\big{\{}X\in P\,\big{|}\,(X,B^{\prime})\in I\big{\}}\), and the resulting map \(\beta\colon\mathcal{B}_{h}\to\mathcal{L}\colon B\mapsto B^{\prime}\) is a bijection. As usual, an _automorphism_ of \(\mathbb{U}\) is an isomorphism of \(\mathbb{U}\) onto \(\mathbb{U}\) itself. An automorphism of \(\mathbb{U}\) is called a _translation of \(\mathbb{U}\) with center \(X\)_ if it leaves invariant every block through \(X\). We write \(\operatorname{T}_{[X]}\) for the set of all translations of \(\mathbb{U}\) with center \(X\). If \(h\colon V\times V\to K\) is a \(\sigma\)-hermitian form of Witt index \(1\), then clearly the group \(\operatorname{P}\Gamma\mathrm{U}(V,h)\) of collineations induced by semi-similitudes acts by automorphisms of the hermitian unital \((U_{h},\mathcal{B}_{h},\in)\). See 2.4 and 2.7 below for examples of translations. **1.5 Theorem**.: _Consider an anti-automorphism \(\sigma\) of a (not necessarily commutative) field \(K\), with \(\sigma^{2}=\mathrm{id}\neq\sigma\). Let \(h\colon V\times V\to K\colon(v,w)\mapsto\langle v|w\rangle\) be a non-degenerate \(\sigma\)-hermitian form of Witt index \(1\).
If the form is trace-valued (in particular, if \(\mathrm{char}\ K\neq 2\) or if \(K\) is commutative) and \(\dim V\) is finite, then the group \(\operatorname{P}\mathrm{U}(V,h)\) acts two-transitively on \(U_{h}\), and thus transitively both on \(\mathcal{B}_{h}\) and on the set of flags of \((U_{h},\mathcal{B}_{h},\in)\)._ Proof.: As \(h\) has Witt index \(1\), there exists \(a\in V\smallsetminus\{0\}\) with \(\langle a|a\rangle=0\), so \(Ka\) lies in \(U_{h}\). As \(h\) is not degenerate, there exists \(x\in V\) with \(\langle a|x\rangle\neq 0\). In \(L:=Ka+Kx\) there is a second absolute point \(Kb\), see 1.2. Let \(P,Q\) be two arbitrary points in \(U_{h}\). Then there are \(v,w\in V\smallsetminus\{0\}\) with \(\langle v|v\rangle=0=\langle w|w\rangle\) such that \(P=Kv\) and \(Q=Kw\). As \(h\) has Witt index \(1\), we have \(\langle v|w\rangle\neq 0\). Replacing \(v\) by a suitable scalar multiple, we achieve \(\langle v|w\rangle=1\). Now Witt's Theorem (see [2, § 11, p. 21]) asserts that there exists \(A\in\mathrm{U}(V,h)\) with \(aA=v\) and \(bA=w\). The induced collineation \(\lfloor A\rfloor\in\operatorname{P}\mathrm{U}(V,h)\) then maps the pair \((Ka,Kb)\) to \((P,Q)\), and maps the block joining \(Ka\) and \(Kb\) to the block joining \(P\) and \(Q\). **1.6 Lemma**.: _Let \(\mathbb{U}:=(U_{h},\mathcal{B}_{h},\in)\) be the hermitian unital with respect to a non-degenerate \(\sigma\)-hermitian form \(h\colon V\times V\to K\) of Witt index \(1\)._ * _For each point_ \(X\in U_{h}\)_, the set_ \(\mathrm{T}_{[X]}\) _is a subgroup of_ \(\operatorname{Aut}(\mathbb{U})\)_, and a normal subgroup in the stabilizer of_ \(X\) _in_ \(\operatorname{Aut}(\mathbb{U})\)_._ * _For each block_ \(B\in\mathcal{B}_{h}\) _through_ \(X\)_, the subgroup_ \(\mathrm{T}_{[X]}\) _acts transitively on the set_ \(B\smallsetminus\{X\}\)_. In fact, the intersection_ \(\mathrm{T}_{[X]}\cap\operatorname{P}\mathrm{U}(V,h)\) _acts transitively on that set._ Proof.: The set \(\mathrm{T}_{[X]}\) is the kernel of the action of the stabilizer \(\operatorname{Aut}(\mathbb{U})_{X}\) of \(X\) in \(\operatorname{Aut}(\mathbb{U})\) on the set \(\mathcal{B}_{X}\) of all blocks through \(X\). So \(\mathrm{T}_{[X]}\) is a normal subgroup of \(\operatorname{Aut}(\mathbb{U})_{X}\). Pick \(v,w\in V\) such that \(X=Kv\) and \(B=U_{h}\cap\mathrm{Gr}_{1}(L)\), where \(L=Kv+Kw\). Then \(\langle v|v\rangle=0\), and without loss of generality, we may assume \(\langle w|w\rangle=0\) and \(\langle v|w\rangle=1\). An easy computation shows that \(B=\{Kv\}\cup\{K(pv+w)\,|\,p\in K,\ p+p^{\sigma}=0\}\). For each \(p\in K\) with \(p+p^{\sigma}=0\), the linear map \(M^{\prime}\) defined by \(vM^{\prime}=v\) and \(wM^{\prime}=pv+w\) is an isometry of the restriction of \(h\) to \(L\times L\). As that restriction is not degenerate, the space \(L^{\perp}\) is a vector space complement to \(L\) in \(V\). We extend \(M^{\prime}\) to a linear map \(M\) that acts trivially on \(L^{\perp}\). Then \(M\) belongs to \(\mathrm{U}(V,h)\), and induces a collineation \(\lfloor M\rfloor\in\mathrm{T}_{[X]}\cap\operatorname{P}\mathrm{U}(V,h)\) that maps \(Kw\) to \(K(pv+w)\). This shows that \(\mathrm{T}_{[X]}\cap\operatorname{P}\mathrm{U}(V,h)\) is transitive on \(B\smallsetminus\{X\}\), as claimed. ## 2 Two hermitian forms, and their unitals Let \(R\) be a commutative field, and let \(C|R\) be a quadratic field extension. Then the Galois group \(\operatorname{Gal}(C|R)\) has order two, and is generated by an involution \(\sigma\colon x\mapsto\overline{x}\).
We choose an element \(i\in C\smallsetminus\{0\}\) with \(i^{\sigma}=-i\). (If \(\operatorname{char}R=2\) then \(i\) lies in \(R\); we will exclude that case later on.) We assume that there is an _anisotropic_ \(\sigma\)-hermitian form on \(C^{2}\). Without loss of generality (i.e., up to similitude) we may assume that this form has Gram matrix \(N=\left(\begin{smallmatrix}1&0\\ 0&s\end{smallmatrix}\right)\). We consider the quaternion field \[H:=H^{s}_{C|R}=\left\{\begin{pmatrix}a&x\\ -s\overline{x}&\overline{a}\end{pmatrix}\biggm{|}a,x\in C\right\}\,.\] Using \(w:=\left(\begin{smallmatrix}0&1\\ -s&0\end{smallmatrix}\right)\) and the embedding \(c\mapsto\left(\begin{smallmatrix}c&0\\ 0&c\end{smallmatrix}\right)\) of \(C\) into \(H\), we obtain \(H=C+wC\) with the multiplication rule \((a+wb)(c+wd)=ac-s\overline{b}d+w(\overline{a}d+bc)\), for \(a,b,c,d\in C\). **2.1 Lemma**.: _The map \(\alpha\colon a+wb\mapsto\overline{a}+wb\) (where \(a,b\in C\)) is an involutory anti-automorphism of \(H\); the fixed points are those in \(R+wC\)._ _We have \((a+wb)+(a+wb)^{\alpha}=a+\overline{a}+2wb\) and \((a+wb)(a+wb)^{\alpha}=a\overline{a}-sb\overline{b}+2w\overline{a}b\)._ Proof.: In fact, we have \(X^{\alpha}=i^{-1}X^{\kappa}i\) for each \(X\in H\), where \(\kappa\colon a+wb\mapsto\overline{a}-wb\) is the standard involution of \(H\). So \(\alpha\) is the composition of an anti-automorphism (namely, \(\kappa\)) and an (inner) automorphism of \(H\). Straightforward calculations yield the remaining assertions. We note that \(\alpha\) is the standard involution if \(\operatorname{char}R=2\). ### A unital in projective space **2.2 Definitions**.: On \(C^{4}\), we consider the \(\sigma\)-hermitian form \[g\colon C^{4}\times C^{4}\to C\colon\left((x_{0},x_{1},x_{2},x_{3}),(y_{0},y_{1},y_{2},y_{3})\right)\mapsto x_{0}y_{3}^{\sigma}+x_{3}y_{0}^{\sigma}+x_{1}y_{1}^{\sigma}+sx_{2}y_{2}^{\sigma}\,.\] This form has Witt index \(1\) because the norm form of \(H\) is anisotropic. We assume \(\operatorname{char}R\neq 2\) (so \(i\notin R\)), and consider \(\Xi:=\left\{\xi(u,p)\,\big{|}\,u\in C^{2},p\in Ri\right\}\subseteq\operatorname{PGL}(4,C)\), where \[\xi\big{(}(u_{0},u_{1}),p\big{)}:=\begin{vmatrix}1&u_{0}&u_{1}&p-\frac{1}{2}N(u_{0}+wu_{1})\\ 0&1&0&-u_{0}^{\sigma}\\ 0&0&1&-su_{1}^{\sigma}\\ 0&0&0&1\end{vmatrix}\,.\] (For any matrix \(A\in\operatorname{GL}(4,C)\), we denote by \(\lfloor A\rfloor\) the corresponding element in \(\operatorname{PGL}(4,C)\), obtained as the coset modulo scalars.) **2.3 Proposition**.: 1. _We have_ \[U_{g}=\left\{C(0,0,0,1)\right\}\cup\left\{C(1,x_{1},x_{2},x_{3})\,|\,x_{3}+x_{3}^{\sigma}=-x_{1}x_{1}^{\sigma}-sx_{2}x_{2}^{\sigma}\right\}.\] 2. _The set_ \(\Xi\) _is a subgroup of_ \(\mathrm{PSU}(C^{4},g)\)_. That subgroup fixes the point_ \(C(0,0,0,1)\)_, and acts sharply transitively on_ \(U_{g}\smallsetminus\left\{C(0,0,0,1)\right\}\)_._ _In fact, for_ \(u,v\in C^{2}\) _and_ \(p,q\in Ri\) _the product in_ \(\Xi\) _is obtained as_ \(\xi(u,p)\,\xi(v,q)=\xi\left(u+v,p+q+\frac{1}{2}(vMu^{\sigma}-uMv^{\sigma})\right)\)_, where_ \(M=\left(\begin{smallmatrix}1&0\\ 0&s\end{smallmatrix}\right)\)_._ Proof.: Consider \(x=(x_{0},x_{1},x_{2},x_{3})\in C^{4}\smallsetminus\left\{(0,0,0,0)\right\}\) with \(Cx<x^{\perp_{g}}\). If \(x_{0}=0\) then \(0=x_{1}x_{1}^{\sigma}+sx_{2}x_{2}^{\sigma}=N(x_{1}+wx_{2})\), and \(Cx=C(0,0,0,1)\) because the norm form \(N\) is anisotropic.
If \(x_{0}\neq 0\) then we may assume \(x_{0}=1\), and \(x_{3}+x_{3}^{\sigma}=-x_{1}x_{1}^{\sigma}-sx_{2}x_{2}^{\sigma}\) follows, as claimed. It is easy to verify \(\Xi\subseteqq\mathrm{SU}(C^{4},g)\), and that each element of \(\Xi\) fixes the point \(C(0,0,0,1)\). We note \(M(u_{0},u_{1})^{\sigma}=M\binom{u_{0}^{\sigma}}{u_{1}^{\sigma}}=\binom{u_{0}^{\sigma}}{su_{1}^{\sigma}}\). Straightforward calculations now yield \(N(u_{0}+wu_{1})=(u_{0},u_{1})M(u_{0},u_{1})^{\sigma}\), and then \(-\frac{1}{2}(u+v)M(u+v)^{\sigma}+\frac{1}{2}(vMu^{\sigma}-uMv^{\sigma})=-\frac{1}{2}uMu^{\sigma}-\frac{1}{2}vMv^{\sigma}-uMv^{\sigma}\) leads to \[\xi(u,p)\,\xi(v,q)=\begin{vmatrix}1&u&p-\frac{1}{2}N(u_{0}+wu_{1})\\ 0&E&-Mu^{\sigma}\\ 0&0&1\end{vmatrix}\begin{vmatrix}1&v&q-\frac{1}{2}N(v_{0}+wv_{1})\\ 0&E&-Mv^{\sigma}\\ 0&0&1\end{vmatrix}=\begin{vmatrix}1&u+v&p+q+\frac{1}{2}(vMu^{\sigma}-uMv^{\sigma})-\frac{1}{2}N(u_{0}+v_{0}+w(u_{1}+v_{1}))\\ 0&E&-M(u+v)^{\sigma}\\ 0&0&1\end{vmatrix}\,,\] where \(E=\left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right)\). As \(z:=vMu^{\sigma}-uMv^{\sigma}\) satisfies \(z+z^{\sigma}=0\), we obtain \(\xi(u,p)\,\xi(v,q)=\xi\left(u+v,p+q+\frac{1}{2}(vMu^{\sigma}-uMv^{\sigma})\right)\), as claimed. So \(\Xi\) is closed under multiplication. The inverse of \(\xi(u,p)\) is \(\xi(-u,-p)\in\Xi\). Finally, we note that \(\xi(u,p)\) maps \(C(1,0,0,0)\) to \(C\left(1,u_{0},u_{1},p-\frac{1}{2}N(u_{0}+wu_{1})\right)\). This shows that \(\Xi\) acts sharply transitively on \(U_{g}\smallsetminus\{C(0,0,0,1)\}\). **2.4 Remarks**.: The set \(\left\{\xi((0,0),p)\,\big{|}\,p\in Ri\right\}\) forms both the center and the commutator group of the group \(\Xi\). That commutator group is the group \(\mathrm{T}_{[C(0,0,0,1)]}\) of translations of the unital \(\mathbb{U}_{g}=(U_{g},\mathcal{B}_{g},\in)\) with center \(C(0,0,0,1)\). For the point \(C(1,0,0,0)\in U_{g}\), we obtain \[\mathrm{T}_{[C(1,0,0,0)]}=\left\{\begin{vmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ p&0&0&1\end{vmatrix}\;\middle|\;p\in Ri\right\}\,.\] ### A unital in the quaternion plane **2.5 Definitions**.: We continue to assume \(\operatorname{char}R\neq 2\). On \(H^{3}\), we consider the \(\alpha\)-hermitian form \[h\colon H^{3}\times H^{3}\to H\colon\big{(}(X_{0},X_{1},X_{2}),(Y_{0},Y_{1},Y_{2})\big{)}\mapsto X_{0}Y_{2}^{\alpha}+X_{1}Y_{1}^{\alpha}+X_{2}Y_{0}^{\alpha}\,,\] here \(\alpha\) is the involution introduced in 2.1. The form \(h\) has Witt index \(1\). We consider the subset \(\Psi:=\big{\{}\psi(X,p)\bigm{|}X\in H,p\in Ri\big{\}}\) of the group \(\operatorname{PGL}(3,H)\), where \[\psi(X,p):=\begin{bmatrix}1&X&p-\frac{1}{2}XX^{\alpha}\\ 0&1&-X^{\alpha}\\ 0&0&1\end{bmatrix}\,.\] (Again, for \(A\in\operatorname{GL}(3,H)\), we denote by \(\lfloor A\rfloor\) the corresponding element in \(\operatorname{PGL}(3,H)\), obtained as the coset modulo central scalars in this case.) **2.6 Proposition**.: **a.**: _We have_ \[U_{h}=\{H(0,0,1)\}\cup\{H(1,X,Y)\,|\,Y+Y^{\alpha}=-XX^{\alpha}\}\,.\] **b.**: _The set_ \(\Psi\) _is a subgroup of_ \(\operatorname{PU}(H^{3},h)\)_. That subgroup fixes the point_ \(H(0,0,1)\)_, and acts sharply transitively on_ \(U_{h}\smallsetminus\{H(0,0,1)\}\)_._ _The multiplication in_ \(\Psi\) _is given by_ \[\psi(X,p)\,\psi(Y,q)=\psi\left(X+Y,p+q+\tfrac{1}{2}(YX^{\alpha}-XY^{\alpha})\right)\,.\] Proof.: The proof is quite analogous to the proof of 2.3. **2.7 Remarks**.: The center and the commutator group of the group \(\Psi\) both coincide with \(\big{\{}\psi(0,p)\bigm{|}\,p\in Ri\big{\}}\).
That group is the group \(\operatorname{T}_{[H(0,0,1)]}\) of translations of the unital \(\mathbb{U}_{h}=(U_{h},\mathcal{B}_{h},\in)\) with center \(H(0,0,1)\). For the point \(H(1,0,0)\in U_{h}\), we obtain \[\operatorname{T}_{[H(1,0,0)]}=\left\{\begin{bmatrix}1&0&0\\ 0&1&0\\ p&0&1\end{bmatrix}\;\middle|\;p\in Ri\right\}\,.\] **2.8 Remark**.: The groups \(\Xi\) and \(\Psi\) are examples of generalized Heisenberg groups (cp. [15], [5], [11]). In fact, they are both isomorphic to \(\operatorname{GH}(R^{4},R,\beta)\), where \(\beta\) is any non-degenerate alternating form on \(R^{4}\). We give a direct isomorphism explicitly in 2.10 below. ### An isomorphism of unitals **2.9 Definition**.: For each \(u=(u_{0},u_{1})\in C^{2}\) and each \(p\in Ri\), we define the point \[C\left(1,u_{0},u_{1},p-\tfrac{1}{2}N(u_{0}+wu_{1})\right)^{\eta}:=H\left(1,u_{0}+wu_{1},p-\tfrac{1}{2}(u_{0}+wu_{1})(u_{0}+wu_{1})^{\alpha}\right).\] Moreover, we put \(C(0,0,0,1)^{\eta}:=H(0,0,1)\). Thus we obtain a bijection \(\eta\colon U_{g}\to U_{h}\colon P\mapsto P^{\eta}\), see 2.3 and 2.6. **2.10 Theorem**.: _We assume \(\operatorname{char}R\neq 2\), and use the notation introduced in 2.1, 2.2, 2.5, and 2.9 above._ * _The map_ \(\varphi\colon\Xi\to\Psi\colon\xi\big{(}(u_{0},u_{1}),p\big{)}\mapsto\psi\big{(}u_{0}+wu_{1},p\big{)}\) _is an isomorphism of groups._ * _For each_ \(u=(u_{0},u_{1})\in C^{2}\)_, each_ \(p\in Ri\)_, and each point_ \(P\in U_{g}\) _we have_ \(P^{\eta\,\psi(u_{0}+wu_{1},p)}=P^{\xi(u,p)\,\eta}\)_; here_ \(\eta\colon U_{g}\to U_{h}\) _is the map introduced in_ 2.9_._ * _The map_ \(\eta\colon U_{g}\to U_{h}\) _induces an isomorphism of incidence structures from_ \((U_{g},\mathcal{B}_{g},\in)\) _onto_ \((U_{h},\mathcal{B}_{h},\in)\)_._ Proof.: We use the multiplication formulae given in 2.3 and 2.6 to prove assertion a. It suffices to verify \[\begin{array}{l}(v_{0}+wv_{1})(u_{0}+wu_{1})^{\alpha}-(u_{0}+wu_{1})(v_{0}+wv_{1})^{\alpha}\\ =(v_{0}+wv_{1})(\overline{u_{0}}+wu_{1})-(u_{0}+wu_{1})(\overline{v_{0}}+wv_{1})\\ =v_{0}\overline{u_{0}}-u_{0}\overline{v_{0}}+w^{2}(\overline{v_{1}}\,u_{1}-\overline{u_{1}}\,v_{1})\\ =vMu^{\sigma}-uMv^{\sigma};\end{array}\] here we use \(wc=\overline{c}w\) (for \(c\in C\)) and \(w^{2}=-s\). Assertion b is easily checked. As any two points in a hermitian unital are joined by a unique block, it remains to verify that \(B^{\eta}\in\mathcal{B}_{h}\) holds for each block \(B\in\mathcal{B}_{g}\). Using transitivity of \(\Xi\) on \(U_{g}\smallsetminus\{C(0,0,0,1)\}\), we see that it suffices to consider blocks through \(C(0,0,0,1)\), and blocks through \(C(1,0,0,0)\). Any block through \(C(0,0,0,1)\) is of the form \(B=U_{g}\cap L\), where \(L=C(0,0,0,1)+C(1,u_{0},u_{1},u_{2})\). We may assume \(C(1,u_{0},u_{1},u_{2})\in U_{g}\). Then \(u_{2}=p-\tfrac{1}{2}N(u_{0}+wu_{1})\) holds for some \(p\in Ri\). So the block in question is \[B=\big{\{}C(1,u_{0},u_{1},p-\tfrac{1}{2}N(u_{0}+wu_{1}))\,\big{|}\,p\in Ri\big{\}}\cup\{C(0,0,0,1)\},\] and its image \[B^{\eta}=\big{\{}H(1,u_{0}+wu_{1},p-\tfrac{1}{2}(u_{0}+wu_{1})(u_{0}+wu_{1})^{\alpha})\,\big{|}\,p\in Ri\big{\}}\cup\{H(0,0,1)\}\] belongs to \(\mathcal{B}_{h}\). Now consider a block \(B\) through \(C(1,0,0,0)\). There exist \(u=(u_{0},u_{1})\in C^{2}\) and \(x\in Ri\) such that \(C\left(1,u_{0},u_{1},x-\frac{1}{2}N(u_{0}+wu_{1})\right)\in B\smallsetminus\{C(1,0,0,0)\}\). We abbreviate \(n:=N(u_{0}+wu_{1})\).
Every point in \(B\smallsetminus\{C(1,0,0,0)\}\) is of the form \(P_{a}:=C\left(1,au_{0},au_{1},a(x-\frac{n}{2})\right)\), where \(a=a_{0}+a_{1}i\in C\) (with \(a_{0},a_{1}\in R\)) satisfies \[a\overline{a}n+2a_{1}ix-a_{0}n=0\,.\] ( \[*\] ) So \(P_{a}=C\left(1,au_{0},au_{1},y_{a}-a\overline{a}\frac{n}{2}\right)\), with \(y_{a}:=a(x-\frac{n}{2})+a\overline{a}\frac{n}{2}\). Note that \(y_{a}\in Ri\). We abbreviate \(Z:=u_{0}+wu_{1}\), so \(C(1,u_{0},u_{1},x-\frac{n}{2})^{\eta}=H(1,Z,x-\frac{1}{2}ZZ^{\alpha})\). For each \(a\in C\) satisfying condition \((*)\) from above, we obtain \[C\left(1,au_{0},au_{1},a(x-\frac{n}{2})\right)^{\eta} = C\left(1,u_{0}a,u_{1}a,y_{a}-a\overline{a}\frac{n}{2}\right)^{\eta}\] \[= H\left(1,Za,y_{a}-\frac{1}{2}Za(Za)^{\alpha}\right)\] \[= H\left(1,Za,y_{a}-\frac{1}{2}a\overline{a}ZZ^{\alpha}\right)\,.\] Each one of those points is contained in \(U_{h}=U_{g}^{\eta}\). In order to see that it is actually contained in the block \(\left(H(1,0,0)+H(1,Z,x-\frac{1}{2}ZZ^{\alpha})\right)\cap U_{h}\), it remains to check that there exists \(Y\in H\) such that \(Y(Z,x-\frac{1}{2}YY^{\alpha})=(Za,y_{a}-\frac{1}{2}a\overline{a}ZZ^{\alpha})\). The entry on the left yields \(Y=ZaZ^{-1}\). Using \(ZZ^{\alpha}=(u_{0}+wu_{1})(\overline{u_{0}}+wu_{1})=u_{0}\overline{u_{0}}-su_{ 1}\overline{u_{1}}+2w\overline{u_{0}}u_{1}\) and \(ZaZ^{-1}=(u_{0}a+wu_{1}a)(\overline{u_{0}}-wu_{1})\frac{1}{n}=a_{0}+a_{1}i\, \overline{ZZ^{\alpha}}\frac{1}{n}\), we compute \[ZaZ^{-1}(x-\frac{1}{2}ZZ^{\alpha}) = ZaZ^{-1}x-\frac{1}{2}ZaZ^{\alpha}\] \[= a_{0}x+a_{1}i\,\overline{ZZ^{\alpha}}x\frac{1}{n}-\frac{1}{2}a_{ 0}ZZ^{\alpha}-\frac{1}{2}a_{1}in\] \[= a_{0}(x-\frac{1}{2}ZZ^{\alpha})+a_{1}i(\overline{ZZ^{\alpha}}x \frac{1}{n}-\frac{1}{2}n)\] \[= a_{0}(x-\frac{1}{2}ZZ^{\alpha})+a_{1}i(xZZ^{\alpha}\frac{1}{n}- \frac{1}{2}n)\,;\] we have used \(i\,\overline{F}i^{-1}=F^{\alpha}\) and \(x\in Ri\). On the other hand, we find \[y_{a}-\frac{1}{2}a\overline{a}ZZ^{\alpha} = a(x-\frac{n}{2})+a\overline{a}\frac{n}{2}-\frac{1}{2}a\overline {a}ZZ^{\alpha}\] \[= ax-\frac{1}{2}an+\frac{1}{2}a_{0}n-a_{1}ix-\frac{1}{2}(a_{0}-2a_ {1}ix\frac{1}{n})ZZ^{\alpha}\] \[= a_{0}x-a_{1}i\frac{n}{2}-\frac{1}{2}(a_{0}-2a_{1}ix\frac{1}{n}) ZZ^{\alpha}\,,\] and this equals \(ZaZ^{-1}(x-\frac{1}{2}ZZ^{\alpha})\), as required. So we have established that \(B^{\eta}\) is contained in some block \(B^{\prime}\) of \(\mathcal{B}_{h}\), for each \(B\in\mathcal{B}_{g}\). It remains to show that \(B^{\eta}\) fills all of \(B^{\prime}\). To this end, we use the fact that the group \(\mathrm{T}:=\mathrm{T}_{[H(1,0,0)]}=\left\{\left|\begin{array}{cc}1&0&0\\ 0&1&0\\ p&0&1\end{array}\right|\ \right|\ p\in Ri\right\}\) of translations with center \(H(1,0,0)\) acts transitively on \(D\smallsetminus\{H(1,0,0)\}\), for each block \(D\in\mathcal{B}_{h}\) through \(H(1,0,0)\), see 1.6 and 2.7. In particular, we obtain that the block \(B^{\prime}=\left(H(1,0,0)+H(1,Z,x-\frac{1}{2}ZZ^{\alpha})\right)\cap U_{h}\) equals the set \(\{H(1,0,0)\}\cup\big{\{}H(1+(x-\frac{1}{2}ZZ^{\alpha})p,Z,x-\frac{1}{2}ZZ^{\alpha} \,\big{|}\,\,\,p\in Ri\big{\}}\). So it suffices to show that for each \(p\in Ri\) there exists \(a\in C\) satisfying condition \((*)\) and such that \[H\big{(}1+(x-\tfrac{1}{2}ZZ^{\alpha})p,Z,x-\tfrac{1}{2}ZZ^{\alpha}\big{)}=H \big{(}1,Za,ZaZ^{-1}(x-\tfrac{1}{2}ZZ^{\alpha})\big{)};\] the description on the right hand side then yields that the point in question lies in \(B^{\eta}\). 
We need to find \(a\in C\) with \(1+(x-\frac{1}{2}ZZ^{\alpha})p=(ZaZ^{-1})^{-1}=Za^{-1}Z^{-1}\). We write \(b:=a^{-1}\) as \(b=b_{0}+b_{1}i\) with \(b_{0},b_{1}\in R\), and compare \(1+(x-\frac{1}{2}ZZ^{\alpha})p=(1+xp)-\frac{1}{2}ZZ^{\alpha}p\) with \(ZbZ^{-1}=b_{0}+b_{1}i\,\overline{ZZ^{\alpha}}\frac{1}{n}\). Since \(1+xp\) lies in \(R\) and \(\frac{1}{2}ZZ^{\alpha}p\in Ri+wC\), we obtain \(1+xp=b_{0}\) and \(-\frac{1}{2}ZZ^{\alpha}p=b_{1}i\,\overline{ZZ^{\alpha}}\frac{1}{n}\), so \(b_{1}i=-\frac{1}{2}pn\), and \(b=1+xp-\frac{1}{2}pn\). Condition \((*)\) for \(a\) means \(n-2b_{1}ix-b_{0}n=0\), and is easily verified. ## 3 Groups of translations, and an isomorphism of groups **3.1 Definition**.: Let \((P,\mathcal{L},I)\) be an incidence structure such that through any two points in \(P\) there is at most one line in \(\mathcal{L}\) incident with both of those points. An _O'Nan configuration_ in \((P,\mathcal{L},I)\) consists of \(4\) lines meeting in \(6\) points (see Fig. 1 below). In particular, any two of those four lines have a (unique) point in common. These configurations are named in honor of Michael O'Nan, who used the finite case of the following result 3.2 in his study of the automorphisms of finite hermitian unitals, see [13]. In the (axiomatic) context of projective spaces, O'Nan configurations are called Veblen-Young figures. The proof of the following result is taken from [4, 2.2]. **3.2 Proposition**.: _Let \(V\) be a vector space over a commutative field \(F\), and assume that there is a non-trivial involutory automorphism \(\sigma\) of \(F\). Let \(h\colon V\times V\to F\colon(u,v)\mapsto\langle u|v\rangle\) be a non-degenerate \(\sigma\)-hermitian form of Witt index \(1\). Then the hermitian unital \(\mathbb{U}=(U_{\sigma},\mathcal{B}_{\sigma},\in)\) does not contain any O'Nan configurations._ Proof.: Consider an O'Nan configuration in the projective space \(\mathrm{PG}(V)\). Then the six points of the configuration are contained in the projective plane spanned by any two of the lines inside \(\mathrm{PG}(V)\). Therefore, there are linearly independent vectors \(b_{0}\), \(b_{1}\), \(b_{2}\) in \(V\) such that the six points of the configuration are \(Fb_{0}\), \(Fb_{1}\), \(F(b_{0}+b_{1})\), \(Fb_{2}\), \(F(b_{0}+b_{2})\) and \(F(b_{1}-b_{2})\), respectively. If these points belong to \(U_{h}\) then \(\langle b_{n}|b_{n}\rangle=0\) and \(\langle b_{n}|b_{m}\rangle=-\langle b_{m}|b_{n}\rangle\) holds for all \(m<n<3\). The matrix \((\langle b_{m}|b_{n}\rangle)_{m,n<3}\) has determinant \(0\) (here we use that \(F\) is commutative). Hence \(f\) is degenerate, and the restriction of \(h\) to \(Fb_{0}+Fb_{1}+Fb_{2}\) has Witt index at least \(2\). But then the Witt index of \(h\) is greater than \(1\), contradicting our assumption. **3.3 Remark**.: Kestenband [10] claims that 3.2 holds even for hermitian unitals over skew fields. This claim is false. For instance, consider the quaternion field \(\mathbb{H}:=H^{1}_{\mathbb{C}|\mathbb{R}}=\mathbb{C}+j\mathbb{C}\) over the real number field \(\mathbb{R}\), constructed from \(\mathbb{C}=\mathbb{R}+\mathbb{R}i\) with \(j^{2}=-1\), the standard involution \(\kappa\colon x\mapsto\overline{x}\), and the hermitian form given by \[\langle(u_{0},u_{1},u_{2})|(v_{0},v_{1},v_{2})\rangle=u_{0}i\overline{v_{1}}+u _{0}j\overline{v_{2}}-u_{1}i\overline{v_{0}}-u_{1}ji\overline{v_{2}}-u_{2}j \overline{v_{0}}+u_{2}ji\overline{v_{1}}\,.\] That form is not degenerate, and has Witt index \(1\). 
However, the corresponding hermitian unital contains the O'Nan configuration with the points \(\mathbb{H}(1,0,0)\), \(\mathbb{H}(0,1,0)\), \(\mathbb{H}(0,0,1)\), \(\mathbb{H}(1,1,0)\), \(\mathbb{H}(1,0,1)\), and \(\mathbb{H}(0,1,-1)\). **3.4 Proposition**.: _Let \(\mathbb{U}=(U_{h},\mathcal{B}_{h},\in)\) be a hermitian unital, and let \(X\) be any point in \(U_{h}\). If \(\mathbb{U}\) contains no O'Nan configurations then the translation group \(\mathrm{T}_{[X]}\) acts sharply transitively on \(B\smallsetminus\{X\}\), for each block \(B\) through \(X\)._ Proof.: We already know from 1.6 that \(\mathrm{T}_{[X]}\) is transitive on \(B\smallsetminus\{X\}\). If the action is not sharply transitive then there exists \(\tau\in\mathrm{T}_{[X]}\smallsetminus\{\mathrm{id}\}\) such that \(\tau\) fixes some point \(Y\in B\smallsetminus\{X\}\). Let \(Z\) be any point in \(U_{h}\smallsetminus B\) with \(Z^{\tau}\neq Z\). As \(\tau\) is a translation of the unital \(\mathbb{U}\), the block \(B_{Z}\) joining \(X\) and \(Z\) is invariant under \(\tau\), and contains \(Z^{\tau}\). In the block \(D\) joining \(Y\) and \(Z\), choose a third point \(W\). Then \(W^{\tau}\) lies in the intersection of \(D^{\tau}\) and the block \(B_{W}\) joining \(X\) and \(W\). So the six points \(X\), \(Y\), \(Z\), \(Z^{\tau}\), \(W\), \(W^{\tau}\) and the four blocks \(D\), \(D^{\tau}\), \(B_{Z}\), \(B_{W}\) form an O'Nan configuration in the unital, contradicting our assumption. See Fig. 1. **3.5 Corollary**.: _Let \(h\colon V\times V\to K\) be a non-degenerate \(\sigma\)-hermitian form of Witt index \(1\). If the corresponding hermitian unital \(\mathbb{U}\) has no O'Nan configurations then \(\mathrm{T}_{[X]}=\mathrm{T}_{[X]}\cap\mathrm{PU}(V,h)\) holds for each point of the unital._ Figure 1: Constructing an O’Nan configuration from a translation with a fixed point. **3.6 Corollary**.: _Let \(h\colon V\times V\to K\) be a non-degenerate \(\sigma\)-hermitian form of Witt index \(1\). If the corresponding hermitian unital \(\mathbb{U}\) has no O'Nan configurations then every translation of the unital \(\mathbb{U}\) is induced by a transvection of the projective space \(\operatorname{PG}(V)\); in fact, each translation with center \(X=Kv\) is induced by a transvection \(\tau_{\lambda,v}\in\operatorname{U}(V,h)\) with \(\ker\lambda=v^{\perp_{h}}\). _ Explicitly, we obtain for the two unitals considered here: the commutator groups \(\Xi^{\prime}=\big{\{}\xi((0,0),p)\bigm{|}p\in Ri\big{\}}\) and \(\Psi^{\prime}=\big{\{}\psi(0,p)\bigm{|}p\in Ri\big{\}}\) of 2.4 and 2.7 are full translation groups, with centers \(C(0,0,0,1)\) and \(H(0,0,1)\), respectively. **3.7 Theorem**.: _The groups \(\operatorname{PEU}(C^{4},g)\) and \(\operatorname{PEU}(H^{3},h)\) are isomorphic._ Proof.: Recall that the groups \(\operatorname{EU}(C^{4},g)\) and \(\operatorname{EU}(H^{3},h)\), respectively, are generated by all unitary transvections; those transvections induce the translations of the unital. Conjugation by the isomorphism \(\eta\colon\mathbb{U}_{g}\to\mathbb{U}_{h}\) maps \(\operatorname{Aut}(\mathbb{U}_{g})\) onto \(\operatorname{Aut}(\mathbb{U}_{h})\), and maps the group \(\operatorname{T}_{[X]}\) to \(\operatorname{T}_{[X^{\eta}]}\), for each point \(X\in U_{g}\). So conjugation by \(\eta\) induces an isomorphism from \(\operatorname{PEU}(C^{4},g)\) onto \(\operatorname{PEU}(H^{3},h)\). 
**3.8 Example**.: We take the field \(\mathbb{C}\) of complex numbers for \(C\), with the standard involution \(\sigma\colon c\mapsto\overline{c}\) generating \(\operatorname{Gal}(\mathbb{C}|\mathbb{R})\), and the field \(\mathbb{H}=H^{1}_{\mathbb{C}|\mathbb{R}}=\mathbb{C}+j\mathbb{C}\) of Hamilton's quaternions. The involution \(\alpha\) from 2.1 represents the unique class of involutory anti-automorphisms of \(\mathbb{H}\) apart from the standard involution \(\kappa\). For the forms \(g\) and \(h\) introduced in 2.2 and 2.5, respectively, we obtain the groups \(\operatorname{PEU}(\mathbb{C}^{4},g)\cong\operatorname{PSU}_{4}(\mathbb{C},1)\) and \(\operatorname{PEU}(\mathbb{H}^{3},h)\cong\operatorname{PS}\alpha\mathbb{U}_{ 3}(\mathbb{H})\) (in the notation of [14, 94.33], in Tits [18, pp. 28, 40], these occur as the groups of type \(\mathsf{A}_{3}^{\mathbb{C},1}\) and \(\mathsf{D}_{3}^{\mathbb{H}}\), Helgason [6, X SS 2.1, SS 6.2] denotes the corresponding algebras by \(\mathfrak{su}(3,1)\) and \(\mathfrak{so}^{*}(6)\), respectively). **3.9 Remarks**.: For the commutative field \(C\), one knows that \(\operatorname{EU}(C^{4},g)=\operatorname{SU}(C^{4},g)\), so \(\operatorname{PEU}(C^{4},g)=\operatorname{PSU}(C^{4},g)\). Also, it is known that the groups \(\operatorname{PEU}(C^{4},g)\) and \(\operatorname{PEU}(H^{3},h)\) are simple: see [2, II SS 4] for a general result, cf. [17, 10.20] or [3, 11.26] for the case of a commutative ground field. As we restrict our investigation to cases where the characteristic is different from two, all the forms in question are trace valued forms. **3.10 Remarks**.: As the field \(C\) is commutative, the involution \(\sigma\) of \(C\) is an involution of the second kind (in the sense of Dieudonne [2, SS 10, p. 19]). According to [16, 5.6c], every reflection in the group \(\operatorname{PU}(C^{4},g)\) is thus admissible, and we obtain \(\operatorname{Aut}(U_{g},\mathcal{B}_{g},\in)=\operatorname{P\Gamma U}(C^{4},g)\). From our result 2.10 we then also infer \(\operatorname{Aut}(U_{h},\mathcal{B}_{h},\in)\cong\operatorname{P\Gamma U}(C^ {4},g)\). **3.11 Remark**.: Let \(F\) be a commutative field, and let \(Q\) be a quaternion algebra over \(F\). Then \(Q\) is a central simple \(F\)-algebra (cp. [9, 4.5, Lemma 3, p. 232], and every \(F\)-linear automorphism is inner (by the Skolem-Noether Theorem, see [9, p. 222], or see [8, Theorem 2, p. 67] for a direct proof). It then follows that every \(F\)-linear anti-automorphism \(\beta\) is the product of the standard involution and some inner automorphism, say \(x\mapsto i^{-1}xi\) with \(i\in F\setminus\{0\}\), so \(x^{\beta}=i^{-1}\overline{\ast}i\). We obtain that \(\beta\) is an involution precisely if \(i^{2}\in F\), i.e., if either \(i\in F\) or \(\overline{i}=-i\). If \(i\in F\) then \(\beta\) is the standard involution. If \(i\notin F\), we form the quadratic extension \(C=F+Fi\). The restriction \(\sigma\) of the standard involution of \(Q\) then is the generator of \(\operatorname{Gal}(C|F)\), and \(\beta\) is obtained as in 2.1.
2306.09014
Geometric Wide-Angle Camera Calibration: A Review and Comparative Study
Wide-angle cameras are widely used in photogrammetry and autonomous systems which rely on the accurate metric measurements derived from images. To find the geometric relationship between incoming rays and image pixels, geometric camera calibration (GCC) has been actively developed. Aiming to provide practical calibration guidelines, this work surveys the existing GCC tools and evaluates the representative ones for wide-angle cameras. The survey covers camera models, calibration targets, and algorithms used in these tools, highlighting their properties and the trends in GCC development. The evaluation compares six target-based GCC tools, namely, BabelCalib, Basalt, Camodocal, Kalibr, the MATLAB calibrator, and the OpenCV-based ROS calibrator, with simulated and real data for wide-angle cameras described by four parametric projection models. These tests reveal the strengths and weaknesses of these camera models, as well as the repeatability of these GCC tools. In view of the survey and evaluation, future research directions of wide-angle GCC are also discussed.
Jianzhu Huai, Yuan Zhuang, Yuxin Shao, Grzegorz Jozkow, Binliang Wang, Yijia He, Alper Yilmaz
2023-06-15T10:16:00Z
http://arxiv.org/abs/2306.09014v2
# A Review and Comparative Study of Close-Range Geometric Camera Calibration Tools ###### Abstract In many camera-based applications, it is necessary to find the geometric relationship between incoming rays and image pixels, i.e., the projection model, through the geometric camera calibration (GCC). Aiming to provide practical calibration guidelines, this work surveys and evaluates the existing GCC tools. The survey covers camera models, calibration targets, and algorithms used in these tools, highlighting their properties and the trends in GCC development. The evaluation compares six target-based GCC tools, namely, BabelCalib, Basalt, Camodocal, Kalibr, the MATLAB calibrator, and the OpenCV-based ROS calibrator, with simulated and real data for cameras of wide-angle and fisheye lenses described by three traditional projection models. These tests reveal the strengths and weaknesses of these camera models, as well as the repeatability of these GCC tools. In view of the survey and evaluation, future research directions of GCC are also discussed. geometric camera calibration, calibration tool, camera model, calibration target, calibration algorithm. ## I Introduction Cameras are indispensable to a host of applications ranging from remote sensing [1], surveying [2], robotics [3], to endoscopy [4]. These applications usually need the knowledge of the geometric relationship between the real-world points and their images in a camera (Fig. 1). To solve for the geometric mapping, the geometric camera calibration (GCC) is introduced. As one of the converging points of computer vision, internet of things, and robotics, GCC have been extensively studied since 1970s and are still being actively researched today, possibly driven by the evolving needs of various applications. A wide range of cameras have been developed and can be categorized in several ways. With varying operating principles, there are traditional cameras, depth cameras, event cameras, thermal cameras, and so on. This paper focuses on the traditional cameras that measure intensities at pixels of an image due to visible light. Other types of cameras are usually modeled with the same geometric models as traditional cameras. Based on the angle of view (AOV), cameras can be roughly grouped into conventional cameras (typically \(<\)64\({}^{\circ}\)), wide-angle cameras (\(<\)100\({}^{\circ}\)), fisheye cameras, and omnidirectional cameras (\(\geq 180^{\circ}\)), with blurry boundaries between adjacent groups. The conventional and wide-angle cameras are usually well represented by a pinhole model, i.e., the perspective model. The omnidirectional cameras include fisheye cameras with an AOV \(\geq 180^{\circ}\), and catadioptric cameras comprising of lenses and mirrors ("cata" for mirror reflection and "dioptric" for lens refraction). There are also camera rigs consisting of multiple cameras which achieve a great AOV by stitching images. Based on whether all incoming rays pass through a single point, cameras can be divided into central cameras of a single effective viewpoint, i.e., the optical center, and non-central cameras. Central cameras include the conventional cameras, fisheye cameras with an AOV \(\leq\) 195\({}^{\circ}\), and many catadioptric cameras built by combining a pinhole camera and hyperbolic, parabolic, or elliptical mirrors. Instances of non-central cameras include catadiptric cameras built with spherical mirrors. 
As a special class in the non-central cameras, axial cameras have all projection rays intersect a line, e.g., the push-broom cameras on some remote sensing satellites. Numerous geometric camera models [5] have been proposed, ranging from specific global models of a dozen parameters to generic local models of thousands of parameters. Traditional geometric camera models are tailored for specific lens types, and expressed by a closed-form function of usually \(<\)100 parameters. They are global since a parameter's change affects the projection of every incoming ray. These models are well supported by the existing calibration tools, and structure from motion (SfM) packages. By contrast, generic models can model a wide range of cameras by using lots of parameters each of which determines the projection of incoming rays in a Fig. 1: The geometric camera calibration and the evaluated tools. local area, for instance, B-spline models [6]. To the extreme, a local model associates separate ray parameters for each pixel, giving a per-pixel model [7]. These models achieve a continuous mapping between ray directions and image points by interpolation. While they are typically more accurate than global models, they also require more data for calibration. Numerous tools have been developed for carrying out GCC, each with a unique set of features. They are often available as proprietary programs, such as the camera calibrator in MATLAB [8] and Agisoft Metashape [9], or open-source programs, such as Kalibr [10]. As for similarities, existing tools usually support global camera models and calibration with some planar target. Notably, many tools are based on the same underlying packages, e.g., OpenCV [11], thus, they tend to have similar limitations. Moreover, many programs developed independently are very close in functionality, implying a possible duplicate effort. As for practical differences, these tools usually support different sets of camera models and calibration targets. The diverse landscape of camera models and calibration tools on one hand offers ready-to-use solutions in a variety of situations, but on the other hand, it gets overwhelming for practitioners to choose the proper calibration tool. To address this difficulty, quite a few comparative studies have been conducted. For instance, three calibration algorithms were compared in [12] for cameras with large focal lengths. Digital displays and printed targets were compared in [13] for close-range cameras. These reviews usually focus on components of GCC, such as camera models or calibration targets. Overall, there is lack of a qualitative overview and quantitative comparison of existing GCC tools which elucidates choosing the proper camera model and calibration tool. To fill this gap, we extensively review existing GCC tools from several practical aspects and benchmark several popular tools with simulated and real data. To confine the scope while catering to a large audience, this paper focuses on traditional close-range grayscale or color monocular cameras, as we believe they are actively studied and their calibration methods often carry over to other camera types with some adaption. The contributions of this work are summarized as follows: First, this review categorizes camera models, calibration targets, and calibration algorithms as used in GCC tools, providing a concise reference for these aspects. 
We then qualitatively reveal the strengths and similarities of these calibration tools, hopefully preventing repetitive development efforts in the future. Second, an evaluation of six calibration tools is conducted for in-house cameras with varying AOV by simulation and real-data tests to show their accuracy and repeatability. The evaluation clearly shows strengths and weaknesses of three popular global geometric camera models and indicates which calibration tool to use for close-range applications. Third, based on the review and evaluation, we highlight future research directions for GCC. The following text is organized as shown in Fig. 2. Next, Section II briefly reviews related work on comparative studies of GCC. For the available camera calibration tools, Section III sorts out the camera models, the calibration targets, and the calibration algorithms. The GCC tools are reviewed in Section IV. Section V presents experiments of six calibration tools with a range of cameras and three popular global camera models. Finally, conclusions and future research trends are given in Section VI. ## II Related Work This section briefly reviews comparative studies and surveys about GCC from several aspects including camera models, calibration targets, and calibration methods. ### _Camera Models_ Comparative studies about camera models are usually conducted in papers proposing new or enhanced models. For fisheye cameras, in [14], the double sphere (DS) model was proposed and compared with several global models including the Kannala-Brandt (KB) model [15], the extended unified camera model (EUCM) [16], the field of view (FOV) model [17], validating that its accuracy approached that of the KB model with 8 parameters. In [18], a per-pixel generic model was shown to be more accurate than a pinhole camera model with radial distortion. The generic B-spline model [19] was enhanced in [6] with a denser grid of control points for the cubic B-spline surface, and it was shown that generic models led to more accurate results than traditional global models in photogrammetric applications. Authors of [5, 20] extensively reviewed existing camera models and established a taxonomy based on several criteria. In this paper, we survey the camera models commonly found in GCC tools and provide their exact formulations for reference (Section III-A). ### _Calibration Targets_ To achieve high accuracy, GCC is often performed with a set of points of known positions, such as a calibration field used in remote sensing, and calibration targets in close-range applications. The diversity of calibration targets made necessary comparative analyses of these targets. Regarding control point detection in camera calibration, circle grids and checkerboards were studied in [21] and it was found that circles suffered from perspective and distortion biases whereas corner points of checkerboards were invariant to the distortion bias. Schmalz et al. [13] systematically compared the active targets with digital displays to the printed checkerboard for Fig. 2: The structure of this survey. GCC with several combinations of displays, cameras, and lenses. They found that calibration with the active target had much lower reprojection errors, but required compensation for the refraction of the glass plate and multiple images per pose and hence a tripod or the like. 
In an underwater environment, fiducial markers including the ARToolKit [22], the AprilTag [23], and the Aruco [24] were compared in [25] where the AprilTag showed better detection performance but required higher computation. In environments with occlusions and rotations, three markers, the ARTag [26], the AprilTag [23], and the CALTag [27] were compared in [28] and the CALTag empirically achieved the best recognition rate. For pose tracking in surgery, Kunz et al. [29] compared the Aruco and AprilTag markers and found that both could achieve sub-millimeter accuracy at distances up to 1 m. For localization of unmanned aerial systems, four fiducial markers, the ARTag, the AprilTag, the Aruco, and the STag [30], were compared in [31] in terms of detection rate and localization accuracy. The AprilTag, the STag, and the Aruco were shown to have close performance whereas the Aruco was the most efficient in computation. In simulation, Zakiev et al. [32] reported that an Aruco marker had much better detection rate than an AprilTag marker when the marker board rotated along an in-plane axis. For drone landing, several variants of the AprilTag and the circular WhyCode [33] were compared in [34] on an embedded system and the suitable variants were determined. Unlikely above comparative studies about targets, our paper briefly surveys the calibration targets (Section III-B) supported by the available GCC tools. ### _Calibration Algorithms_ The algorithms for GCC are vast, ranging from target-based to self-calibration, from offline calibration to online interactive calibration. Quite a few papers have reviewed the GCC methods in view of different applications. For close-range photogrammetry, an overview of developments of camera calibration methods up to 1995 was provided in [35]. Several calibration techniques up to 1992 for conventional cameras with a pinhole model were reviewed and evaluated in [36]. For close-range applications, several target-based and self-calibration methods were compared in [37] with a 3D target and a checkerboard, showing that the self-calibration methods based on bundle adjustment often achieved good calibration for consumer-grade cameras. For time-of-flight range cameras, three intrinsic calibration methods were compared in [38] for calibrating camera lens parameters and range error parameters by using a multi-resolution planar target. For cameras of large focal lengths (\(\geq\)35 mm), Hieronymus [12] compared three calibration methods, one with a test field of a known geometric pattern, and two methods with devices for generating laser beams. He found that these methods achieved comparable high accuracy for the pinhole model with radial and tangential distortion. For cameras with lenses of focal lengths \(\geq\)50 mm in particle tracking velocimetry, Joshi et al. [39] studied the accuracy of three camera calibration methods, the direct linear transform (DLT) that ignores the distortion [40], a linear least squares method with the rational polynomial coefficient (RPC) model [41] but only using the numerator terms, and Tsai's method which determines the intrinsic and extrinsic parameters in two steps [42]. They found that errors of the Tsai's method were fluctuant due to the unstable nonlinear optimization. For infrared cameras, Usamentiaga et al. [43] compared three calibration methods, a DLT method, an iterative method, and a complete method that considered lens distortion, and unsurprisingly, the last method resulted in best distance measurements. 
For roadside cameras, GCC methods based on vanishing points were compared in [44], assuming no lens distortion. For X-ray cameras ignoring radial distortion, the DLT method [40], Tsai's method [42], and Zhang's method [45] were compared in [46], and the DLT showed superiority in accuracy and operation simplicity. For a camera-projector pair, Tiscareno et al. [47] calibrated the camera with the DLT method, Tsai's method, and Zhang's method, and calibrated the projector with the DLT, through simulation. They found that Zhang's method gave smaller reprojection errors than the others for camera calibration. For zoom-lens cameras with varying focal lengths, calibration methods were reviewed in [48]. Different from the preceding surveys and comparisons focusing on calibration methods, this paper reviews and compares GCC tools for close-range cameras of fixed intrinsic parameters. ## III Geometric Camera Calibration Components This section reviews geometric camera models, targets, algorithms as available in existing calibration tools. Before elaborating GCC, some definitions are clarified here. The focal length is defined to be the distance between the camera's optical center and the sensor as in [20]. Since the optical center is defined only for central cameras, the focal length is not defined for non-central cameras. Accordingly, the focal length can take a range of values including the one when the camera is focused at infinity. We define the principal/optical axis as the line passing through the optical center and orthogonal to the sensor chip. For ease with pinhole cameras, the sensor is often inverted and placed in front of the optical center, forming the image plane [40]. For a catadioptric camera, the mirror axis refers to the symmetry axis of the mirror. We define the AOV of a lens to be the maximum angle formed by rays coming into the lens. Likewise, the AOV of a camera is defined as the maximum angle formed by rays corresponding to the sensor's exposed pixels, along the sensor's horizontal axis, vertical axis, or diagonal, leading to HAOV, VAOV, or DAOV, respectively. Thus, the AOV of a camera depends on both the lens and the sensor. ### _Camera Models_ The following describes the variety of camera models used in close-range applications, which have been adopted in GCC tools surveyed in this paper. Camera models used for remote sensing, such as the affine camera model [40], the RPC model [41], the detector directional model [49], are referred to [50]. We begin with global models for central cameras which dominate the GCC tools, and end with generic models. These global models are typically defined in a (forward) projection manner where image points are formulated given world points or rays, although the same formulae may be used the other way round to obtain a ray given an image point, i.e., backward projection / back-projection / unprojection, for instance, (4) and (8). For local models, however, the backward projection is usually used to express the camera model as the forward projection can be very complex [6]. For the below camera models listed in Fig. 3, we describe either the forward or the backward model unless both are closed-form, with the understanding that going the other way often requires iterative optimization. A set of symbols is defined in order here. We denote a point in the camera frame by \(\mathbf{x}_{c}=[X_{c},Y_{c},Z_{c}]\) with Euclidean coordinates \(X_{c}\), \(Y_{c}\), and \(Z_{c}\). 
The measured image point is denoted by \(\mathbf{u}_{m}=[u_{m},v_{m}]\) with pixel coordinates \(u_{m}\) and \(v_{m}\). The word-to-image forward projection is denoted by \(\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i}):\mathbb{R}^{3}\rightarrow\mathbb{ R}^{2}\) where \(\mathbf{i}\) is the set of intrinsic parameters. Its inverse, the image-to-world inverse projection model is \(\boldsymbol{\pi}^{-1}(\mathbf{u}_{m},\mathbf{i}):\mathbb{R}^{2}\rightarrow \mathbb{S}^{2}\) where \(\mathbb{S}^{2}\) is the set of 3D unit vectors. We denote by \(\theta\) the incidence angle between an incoming ray and the optical axis. We use the subscripts'm', 'd', 'n', and 'c' to indicate measurement, distortion, normalization, and the camera coordinate frame. #### Iii-B1 Global Models for Wide-Angle Cameras Conventional and wide-angle cameras of an AOV \(<\)100\({}^{\circ}\) usually have little distortion and satisfy well the pinhole model. The set of parameters in the pinhole projection without distortion are \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y}]\), including the focal length and the principal point along the image plane's two axes in units of pixels. The distortion-free pinhole model is given by \[\mathbf{u}_{m}=\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i})=\begin{bmatrix}f_{ x}X_{c}/Z_{c}+c_{x}\\ f_{y}Y_{c}/Z_{c}+c_{y}\end{bmatrix}, \tag{1}\] with the closed-form inverse model, \[\boldsymbol{\pi}^{-1}(\mathbf{u}_{\mathbf{m}},\mathbf{i})=\frac{1}{\sqrt{x_{ n}^{2}+y_{n}^{2}+1}}\begin{bmatrix}x_{n}\\ y_{n}\\ 1\end{bmatrix}, \tag{2}\] where \(x_{n}=(u_{m}-c_{x})/f_{x}\) and \(y_{n}=(v_{m}-c_{y})/f_{y}\). To account for lens distortion, a variety of distortion models for pinhole cameras have been proposed. The most popular one is probably the radial-tangential polynomial model, i.e., the plumb bob model or the Brown-Conrady model [51]. Its intrinsic parameters, \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y},k_{1},k_{2},p_{1},p_{2}]\), include the pinhole projection parameters, the radial distortion parameters \(k_{j},j=1,2,\cdots,p\) (the maximum index \(p\) is usually truncated to two in practice), and the tangential distortion parameters \(p_{1},p_{2}\). The pinhole radial tangential model is given by \[\begin{bmatrix}x_{n}\\ y_{n}\end{bmatrix} =\begin{bmatrix}X_{c}/Z_{c}\\ Y_{c}/Z_{c}\end{bmatrix},\qquad r_{n}^{2}=x_{n}^{2}+y_{n}^{2}, \tag{3}\] \[\begin{bmatrix}x_{d}\\ y_{d}\end{bmatrix} =\begin{bmatrix}x_{n}(1+\sum_{j=1}^{p}k_{j}r_{n}^{2j})+\delta_{ud} \\ y_{n}(1+\sum_{j=1}^{p}k_{j}r_{n}^{2j})+\delta_{vd}\end{bmatrix},\] (4) \[\begin{bmatrix}\delta_{ud}\\ \delta_{vd}\end{bmatrix} =\begin{bmatrix}2p_{1}x_{n}y_{n}+p_{2}(r_{n}^{2}+2x_{n}^{2})\\ p_{1}(r_{n}^{2}+2y_{n}^{2})+2p_{2}x_{n}y_{n}\end{bmatrix},\] (5) \[\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i}) =\begin{bmatrix}f_{x}x_{d}+c_{x}\\ f_{y}y_{d}+c_{y}\end{bmatrix}. \tag{6}\] This model usually suits well lenses with an AOV \(<\)120\({}^{\circ}\)[14]. The inverse of (4) has no closed-form solution, and usually requires an iterative procedure. Notably, Drap and Lefevre [52] propose an exact formula involving a power series to invert (4). 
Alternatively, the pinhole radial tangential model can also be defined in a backward manner, i.e., \[\begin{bmatrix}x_{d}\\ y_{d}\end{bmatrix} =\begin{bmatrix}(u_{m}-c_{x})/f_{x}\\ (v_{m}-c_{y})/f_{y}\end{bmatrix},\qquad r_{d}^{2}=x_{d}^{2}+y_{d}^{2}, \tag{7}\] \[\begin{bmatrix}x_{n}\\ y_{n}\end{bmatrix} =\begin{bmatrix}x_{d}(1+\sum_{j=1}^{p}k_{j}r_{n}^{2j})+\delta_{ud} \\ y_{d}(1+\sum_{j=1}^{p}k_{j}r_{n}^{2j})+\delta_{vd}\end{bmatrix},\] (8) \[\begin{bmatrix}\delta_{ud}\\ \delta_{vd}\end{bmatrix} =\begin{bmatrix}2p_{1}x_{d}y_{d}+p_{2}(r_{d}^{2}+2x_{d}^{2})\\ p_{1}(r_{d}^{2}+2y_{d}^{2})+2p_{2}x_{d}y_{d}\end{bmatrix},\] (9) \[\boldsymbol{\pi}^{-1}(\mathbf{u}_{m},\mathbf{i}) =\frac{1}{\sqrt{x_{n}^{2}+y_{n}^{2}+1}}\begin{bmatrix}x_{n}\\ y_{n}\\ 1\end{bmatrix}. \tag{10}\] Obviously, for the same camera, the parameters of the backward model differ from those of the forward model. This backward model is less common but has been used in e.g., the PhotoModeler [53]. The forward pinhole radial tangential model in (4) can be simplified to the division model proposed by [54] which is a radial symmetric model with the set of intrinsic parameters \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y},k_{1}]\), \[\begin{bmatrix}x_{d}\\ y_{d}\end{bmatrix} =\begin{bmatrix}(u_{m}-c_{x})/f_{x}\\ (v_{m}-c_{y})/f_{y}\end{bmatrix},\qquad r_{d}=\sqrt{x_{d}^{2}+y_{d}^{2}}, \tag{11}\] \[\begin{bmatrix}x_{n}\\ y_{n}\end{bmatrix} =\begin{bmatrix}x_{d}/(1+k_{1}r_{d}^{2})\\ y_{d}/(1+k_{1}r_{d}^{2})\end{bmatrix},\] (12) \[\boldsymbol{\pi}^{-1}(\mathbf{u}_{m},\mathbf{i}) =\frac{1}{\sqrt{x_{n}^{2}+y_{n}^{2}+1}}\begin{bmatrix}x_{n}\\ y_{n}\\ 1\end{bmatrix}. \tag{13}\] A backward rational model is proposed in [55], \[\begin{bmatrix}x_{n}\\ y_{n}\end{bmatrix} =\begin{bmatrix}x_{d}\\ y_{d}\end{bmatrix}\frac{1+\sum_{j=1}^{p}k_{j}^{1}r_{d}^{2j}}{1+\sum_{j=1}^{q}k_ {j}^{2}r_{d}^{2j}}, \tag{14}\] \[\boldsymbol{\pi}^{-1}(\mathbf{u}_{m},\mathbf{i}) =\frac{1}{\sqrt{x_{n}^{2}+y_{n}^{2}+1}}\begin{bmatrix}x_{n}\\ y_{n}\\ 1\end{bmatrix}, \tag{15}\] with the intrinsic parameters \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y}]\cup\mathbf{k}^{1}\cup\mathbf{k}^{2}\) where \(\mathbf{k}^{1}=[k_{j}^{1},j=1,2,\cdots,p]\) and \(\mathbf{k}^{2}=[k_{j}^{2},j=1,2,\cdots,q]\). The rational model in OpenCV [11] supports \(p\leq 3\) and \(q\leq 3\). Fig. 3: The camera models reviewed in Section III-A. Furthermore, the thin prism effect is considered in [56] along with radial and tangential distortion, where the model is defined as \[\begin{bmatrix}x_{n}\\ y_{n}\end{bmatrix} =\begin{bmatrix}X_{c}/Z_{c}\\ Y_{c}/Z_{c}\end{bmatrix},\qquad r_{n}^{2}=x_{n}^{2}+y_{n}^{2}, \tag{16}\] \[\begin{bmatrix}x_{d}\\ y_{d}\end{bmatrix} =\begin{bmatrix}x_{n}(1+k_{1}r_{n}^{2})+\delta_{ud}+\delta_{up}\\ y_{n}(1+k_{1}r_{n}^{2})+\delta_{vd}+\delta_{vp}\end{bmatrix},\] (17) \[\begin{bmatrix}\delta_{up}\\ \delta_{vp}\end{bmatrix} =\begin{bmatrix}s_{1}r_{n}^{2}\\ s_{2}r_{n}^{2}\end{bmatrix},\] (18) \[\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i}) =\begin{bmatrix}f_{x}x_{d}+c_{x}\\ f_{y}y_{d}+c_{y}\end{bmatrix}, \tag{19}\] where the tangential distortion \([\delta_{ud},\delta_{vd}]\) is given in (5). Overall, the intrinsic parameter set is \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y},k_{1},p_{1},p_{2},s_{1},s_{2}]\). The OpenCV considers more terms for the thin prism effect by \(\delta_{up}=s_{1}r_{n}^{2}+s_{2}r_{n}^{4}\) and \(\delta_{vp}=s_{3}r_{n}^{2}+s_{4}r_{n}^{4}\). #### Iii-B2 Global Fisheye Camera Models Fisheye cameras typically have an AOV \(\geq 100^{\circ}\), and can reach 280\({}^{\circ}\)1. 
They are quite common but show great distortion, thus, quite a few global models have been proposed. The most popular ones are probably the KB model [15] and the FOV model [17]. Footnote 1: [https://www.back-bone.ca/product/entanija-280/](https://www.back-bone.ca/product/entanija-280/) The full KB model proposed in [15] has 23 parameters where four describe the affine transform (6), five describe an equidistant radial symmetric distortion, and the other 14 describe the asymmetric distortion. The commonly used KB-8 model is radially symmetric and has 8 intrinsic parameters, \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y},k_{1},k_{2},k_{3},k_{4}]\). It is defined by \[\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i}) =\begin{bmatrix}f_{x}d(\theta)X_{c}/r_{c}+c_{x}\\ f_{y}d(\theta)Y_{c}/r_{c}+c_{y}\end{bmatrix}, \tag{20}\] \[r_{c} =\sqrt{X_{c}^{2}+Y_{c}^{2}}=Z_{c}tan(\theta),\] (21) \[d(\theta) =\theta+k_{1}\theta^{3}+k_{2}\theta^{5}+k_{3}\theta^{7}+k_{4} \theta^{9}, \tag{22}\] Unlike the KB-9 in [15], the KB-8 model sets the coefficient of the term \(\theta\) in \(d(\theta)\) to be 1. The KB-8 model can handle an AOV \(\geq 180^{\circ}\), but when it is formulated as an equidistant distortion on top of a pinhole projection as in Kalibr [10] and OpenCV, the projection will fail for points of \(Z_{c}\leq 0\). The Scaramuzzo model [57] for central catadoptric cameras and fisheye cameras up to a 195\({}^{\circ}\) AOV resembles the inverse of the KB-8 model. It is defined in a backward manner for a measured image point \([u_{m},v_{m}]\) as \[\begin{bmatrix}u_{m}\\ v_{m}\end{bmatrix} =\begin{bmatrix}c&d\\ e&1\end{bmatrix}\begin{bmatrix}u_{h}\\ v_{h}\end{bmatrix}+\begin{bmatrix}c_{x}\\ c_{y}\end{bmatrix}, \tag{23}\] \[\boldsymbol{\pi}^{-1}(\mathbf{u}_{m},\mathbf{i}) =\frac{1}{\sqrt{u_{h}^{2}+v_{h}^{2}+w_{h}^{2}(\rho_{h})}}\begin{bmatrix} u_{h}\\ v_{h}\\ w_{h}(\rho_{h})\end{bmatrix},\] (24) \[\rho_{h} =\sqrt{u_{h}^{2}+v_{h}^{2}},\] (25) \[w_{h}(\rho_{h}) =a_{0}+a_{2}\rho_{h}^{2}+a_{3}\rho_{h}^{3}+a_{4}\rho_{h}^{4}, \tag{26}\] where \(u_{h}\), \(v_{h}\) are the ideal coordinates of the image point on a hypothetical plane orthogonal to the mirror axis. The parameter vector for the model is \(\mathbf{i}=[a_{0},a_{2},a_{3},a_{4},c_{x},c_{y},c,d,e]\). Since \(c\) in the 2\(\times\)2 stretch matrix is about one, \(a_{0}\) is similar in role to \(f_{x}\) or \(f_{y}\) in (20). This model is available in the MATLAB camera calibrator [8]. For projecting a world point to the image, a polynomial approximation of the involved forward projection is adopted in [57] to reduce the computation. The FOV model [17] has one distortion parameter and a closed-form inversion. It has been popular for fisheye lenses in consumer products, e.g., Tango phones. With intrinsic parameters \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y},\omega]\), its definition is given by \[\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i}) =\begin{bmatrix}f_{x}X_{c}\frac{r_{d}}{r_{u}}+c_{x}\\ f_{y}Y_{c}\frac{r_{u}^{2}}{r_{u}}+c_{y}\end{bmatrix}, \tag{27}\] \[r_{u} =\sqrt{X_{c}^{2}+Y_{c}^{2}},\] (28) \[r_{d} =\frac{1}{\omega}\mathrm{arctan2}(2r_{u}\tan\frac{\omega}{2},Z_{c }). 
\tag{29}\] For backward projection of an image point, the FOV model has a closed-form solution given by \[\boldsymbol{\pi}^{-1}(\mathbf{u}_{m},\mathbf{i}) =\begin{bmatrix}\frac{x_{d}\sin(r_{d}\omega)}{2r_{d}\tan\frac{ \omega}{2}}&\frac{y_{d}\sin(r_{d}\omega)}{2r_{d}\tan\frac{\omega}{2}}&\cos(r_{ d}\omega)\end{bmatrix}^{\mathsf{T}} \tag{30}\] \[\begin{bmatrix}x_{d}\\ y_{d}\end{bmatrix} =\begin{bmatrix}(u_{m}-c_{x})/f_{x}\\ (v_{m}-c_{y})/f_{y}\end{bmatrix},\] (31) \[r_{d} =\sqrt{x_{d}^{2}+y_{d}^{2}}. \tag{32}\] Despite only one distortion parameter, the FOV model often requires as much computation as the KB-8 model for forward and backward projections due to the trigonometric functions. The DS model [14] fits well large AOV lenses, has a closed-form inversion, and does not involve trigonometric functions, thus making it very efficient. This model contains 6 parameters, \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y},\xi,\alpha]\). In forward projection, a world point is projected consecutively onto two unit spheres of a center offset \(\xi\), and lastly projected onto the image plane using a pinhole model. The projection model is defined by \[\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i}) =\begin{bmatrix}f_{x}\frac{X_{c}}{\alpha d_{2}+(1-\alpha)(\xi d_{ 1}+Z_{c})}+c_{x}\\ f_{y}\frac{X_{c}}{\alpha d_{2}+(1-\alpha)(\xi d_{1}+Z_{c})}+c_{y}\end{bmatrix}, \tag{33}\] \[d_{1} =\sqrt{X_{c}^{2}+Y_{c}^{2}+Z_{c}^{2}},\] (34) \[d_{2} =\sqrt{X_{c}^{2}+Y_{c}^{2}+(\xi d_{1}+Z_{c})^{2}}. \tag{35}\] Its closed-form unprojection is given by \[\boldsymbol{\pi}^{-1}(\mathbf{u}_{m},\mathbf{i}) =\frac{z_{d}\xi+\sqrt{z_{d}^{2}+(1-\xi^{2})r_{d}^{2}}}{z_{d}^{2}+ r_{d}^{2}}\begin{bmatrix}x_{d}\\ y_{d}\\ z_{d}\end{bmatrix}-\begin{bmatrix}0\\ 0\\ \xi\end{bmatrix}, \tag{36}\] \[z_{d} =\frac{1-\alpha^{2}r_{d}^{2}}{\alpha\sqrt{1-(2\alpha-1)r_{d}^{2}}+1 -\alpha},\] (37) \[r_{d}^{2} =x_{d}^{2}+y_{d}^{2}. \tag{38}\] This model has been implemented in Basalt [14] and Kalibr. #### Iii-B3 Global Omnidirectional Camera Models An omnidirectional camera has an HAOV \(\geq 180^{\circ}\) and a DAOV up to \(360^{\circ}\). Several models have been developed for such cameras. The unified camera model (UCM) in [58] can deal with both fisheye cameras and central catadoptric cameras, defined by \[\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i}) =\begin{bmatrix}\gamma_{x}\frac{X_{c}}{\xi\rho+Z_{c}}+c_{x}& \gamma_{y}\frac{Y_{c}}{\xi\rho+Z_{c}}+c_{y}\end{bmatrix}^{\mathsf{T}}, \tag{39}\] \[\rho =\sqrt{X_{c}^{2}+Y_{c}^{2}+Z_{c}^{2}}, \tag{40}\] with intrinsic parameters \(\mathbf{i}=[\gamma_{x},\gamma_{y},c_{x},c_{y},\xi]\). When \(\xi\)=0, the above model degenerates to a pinhole model. The unified model is formulated equivalently in [14] for better numeric stability. The formulation is given by \[\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i})=\begin{bmatrix}f_{x}\frac{X_{c}}{ \alpha\rho+(1-\alpha)Z_{c}}+c_{x}\\ f_{y}\frac{Y_{c}}{\alpha\rho+(1-\alpha)Z_{c}}+c_{y}\end{bmatrix}, \tag{41}\] with intrinsic parameters \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y},\alpha]\) where \[\alpha=\xi/(1+\xi),\quad f_{x}=\gamma_{x}/(1+\xi),\quad f_{y}=\gamma_{y}/(1+ \xi). 
\tag{42}\] The unprojection function for the UCM is given by \[\boldsymbol{\pi}^{-1}(\mathbf{u},\mathbf{i}) =\frac{\xi+\sqrt{1+(1-\xi^{2})r_{d}^{2}}}{1+r_{d}^{2}}\begin{bmatrix} x_{d}\\ y_{d}\\ 1\end{bmatrix}-\begin{bmatrix}0\\ 0\\ \xi\end{bmatrix}, \tag{43}\] \[x_{d} =\frac{u_{m}-c_{x}}{f_{x}(1+\xi)},\quad y_{d}=\frac{v_{m}-c_{y}} {f_{y}(1+\xi)},\] (44) \[r_{d}^{2} =x_{d}^{2}+y_{d}^{2},\quad\xi=\frac{\alpha}{1-\alpha}. \tag{45}\] For better accuracy with the UCM, Mei and Rives [59] also consider the lens distortion, the misalignment and the sensor skew. The Mei model is defined by \[\begin{bmatrix}x_{n}\\ y_{n}\end{bmatrix} =\begin{bmatrix}X_{c}/(Z_{c}+\xi\rho)\\ Y_{c}/(Z_{c}+\xi\rho)\end{bmatrix},\qquad r_{n}=\sqrt{x_{n}^{2}+y_{n}^{2}}, \tag{46}\] \[\begin{bmatrix}x_{d}\\ y_{d}\end{bmatrix} =\begin{bmatrix}x_{n}d(r_{n})+2p_{1}x_{n}y_{n}+p_{2}(r_{n}^{2}+2x_{n }^{2})\\ y_{n}d(r_{n})+p_{1}(r_{n}^{2}+2y_{n}^{2})+2p_{2}x_{n}y_{n}\end{bmatrix},\] (47) \[d(r_{n}) =1+k_{1}r_{n}^{2}+k_{2}r_{n}^{4}+k_{3}r_{n}^{6},\] (48) \[\boldsymbol{\pi}(\mathbf{x}_{c},\mathbf{i}) =\begin{bmatrix}\gamma_{x}(x_{d}+sy_{d})+c_{x}\\ \gamma_{y}y_{d}+c_{y}\end{bmatrix}, \tag{49}\] with the intrinsic parameters \(\mathbf{i}=[\gamma_{x},\gamma_{y},c_{x},c_{y},\xi,k_{1},k_{2},k_{3},p_{1},p_{ 2},s]\) where \(k_{1}\), \(k_{2}\), and \(k_{3}\) are for radial distortion, \(p_{1}\) and \(p_{2}\) for misalignment, and \(s\) for skew. This model is adopted in [60] and Camodocal [61]. As pointed out in [16], \(k_{1}\) of the Mei model is redundant with \(\xi\). The extended unified camera model (EUCM) [16] enhances the UCM by a parameter \(\beta\) to deal with the radial distortion. Its projection model is given by \[\begin{bmatrix}u_{m}\\ v_{m}\end{bmatrix} =\begin{bmatrix}f_{x}\frac{X_{c}}{\alpha\rho+(1-\alpha)Z_{c}}+c_{ x}\\ f_{y}\frac{Y_{c}}{\alpha\rho+(1-\alpha)Z_{c}}+c_{y}\end{bmatrix}, \tag{50}\] \[\rho =\sqrt{\beta(X_{c}^{2}+Y_{c}^{2})+Z_{c}^{2}}, \tag{51}\] with parameters \(\mathbf{i}=[f_{x},f_{y},c_{x},c_{y},\alpha,\beta]\), where \(\alpha\in[0,1]\), \(\beta>0\), and \(\alpha\rho+(1-\alpha)Z_{c}>0\). The unprojection function for the EUCM is given by \[\boldsymbol{\pi}^{-1}(\mathbf{u},\mathbf{i}) =\frac{1}{\sqrt{x_{d}^{2}+y_{d}^{2}+z_{d}^{2}}}\begin{bmatrix}x_{d }\\ y_{d}\\ z_{d}\end{bmatrix}, \tag{52}\] \[\begin{bmatrix}x_{d}\\ y_{d}\end{bmatrix} =\begin{bmatrix}(u_{m}-c_{x})/f_{x}\\ (v_{m}-c_{y})/f_{y}\end{bmatrix},\quad r_{d}^{2}=x_{d}^{2}+y_{d}^{2},\] (53) \[z_{d} =\frac{1-\beta\alpha^{2}r_{d}^{2}}{\alpha\sqrt{1-(2\alpha-1) \beta r_{d}^{2}+1-\alpha}}. \tag{54}\] #### Iii-B4 Local Generic Camera Models The preceding global camera models are available in a variety of GCC tools possibly for their simplicity, but their accuracy is also limited. To push the accuracy limit, generic models with thousands of parameters have been proposed, such as [19, 62]. But loosely speaking, they are still behind the global models in availability among GCC tools and in support by downstream applications. We briefly describe two generic models implemented in [6], a per-pixel model and a B-spline model. The per-pixel model of [7] associates a ray direction to every pixel for a central camera and a ray direction and a 3D point on the ray to every pixel for a non-central camera. Furthermore, interpolation between pixels is used to achieve continuous projection. A B-spline model adopted in [6] associates ray parameters to a sparse set of grid points instead of all pixels. 
These grid points control the cubic B-spline surface which represents the back projection function. Notably, this B-spline model is initialized using the relative camera poses computed with the method [7] developed for the per-pixel model. ### _Calibration Targets_ GCC usually depends on passive or active man-made objects, e.g., ground control points in remote sensing or planar targets in close-range calibrations. Recent self/auto-calibration methods, e.g., [63, 64], use opportunistic environmental features, whereas infrastructure-based methods [61] use a prior landmark map of the environment. Since artificial targets are still commonly used for better accuracy control, this section surveys the targets supported by GCC tools, as listed in Fig. 4. There are a few 3D targets, such as cubes [65] and icosahedrons [66], each of which is usually a composite of multiple planar targets. The accuracy requirements of length and orthogonality complicate their manufacturing and hamper their accessibility. The majority of calibration targets are planar, including surveyed markers on flat walls, and a variety of coded patterns either displayed on digital screens [13, 67, 68] or printed out. The targets based on digital displays usually have accurate size and good flatness and can deal with defocusing [68, 69], but such a target usually requires capturing multiple pattern images at each pose and compensating the refraction of the display's glass plate. So far, the printed boards are the most common targets and are widely supported by GCC tools. They include the checkerboard, the AprilGrid [10], the circle grid, the Charuco [24] board, and the recent deltille board [66], etc., as shown in Fig. 5. Their properties are briefly described below. There are also numerous customized calibration targets tailored for Fig. 4: Categories of targets for geometric camera calibration. specific algorithms, e.g., the random pattern aggregated from noise at multiple scales in [60], the pattern in [6] with dense corners for generic models, the Ecocheck board [70], the PhotoModeler circle board [53]. A custom board can often be created by combining markers to disambiguate orientations, e.g., the AprilTag, and corners invariant to perspective and lens distortion, e.g., formed from repeating squares. Lists of fiducial markers resilient to rotation can be found in [24, 28]. #### Iii-B1 Checkerboard The checkerboard is probably the most common calibration target. It is also known as chessboard. We prefer the name checkerboard which is more general than chessboard. Many checkerboard detection improvements have been proposed, such as [61, 71]. The checkerboard requires that the corners inside the board are fully visible in an image so that their coordinates can be uniquely determined. Though this weakness is reported to be remedied by a few recent methods [72, 73, 74, 66], most current tools have not kept up. To ensure that the pattern does not look the same after a 180\({}^{\circ}\) rotation, a checkerboard with odd rows and even columns or even rows and odd columns is usually used. #### Iii-B2 Circle Grid A circle grid [75] usually consists of an array of circles, symmetrically or asymmetrically distributed (see Fig. 5). The circle centers are target points for calibration, and can be detected from images based on area, circularity, convexity, inertia 2, etc. 
The circle grid has several downsides: first, all circles should be visible in each image; second, the detected circle centers suffer from the eccentricity error due to the perspective effect and lens distortion [21]. The eccentricity error is worth attention especially for lenses of large distortion. Moreover, the symmetric circle grid also has the 180\({}^{\circ}\) ambiguity and thus asymmetric circle grid is generally preferred. Footnote 2: [https://learropencv.com/blob-detection-using-opencv-python-c/](https://learropencv.com/blob-detection-using-opencv-python-c/) #### Iii-B3 Charuco The Charuco board [65] combines the checkerboard and the Aruco tags [24] to deal with inaccurate corner positions and occlusions. As shown in Fig. 5(d), the white squares of checkerboards are occupied by uniquely identifiable Aruco tags. #### Iii-B4 AprilGrid The Aprilgrid is an array of AprilTag markers [23] connected by smaller black squares as shown in Fig. 5(e), developed in the Kalibr package [10]. It is resilient to occlusion due to the AprilTag markers, and has accurate positions of corners which are surrounded by two black squares. #### Iii-B5 Deltille Grid The Deltille grid is a pattern of adjacent regular triangles filled with alternating colors as shown in Fig. 5(f). It is the only other possible tiling with alternating colors besides the checkerboard tiling. Its benefits compared to checkerboards are higher corner density and more accurate corner positions. The wide use of Deltille grids is mainly hindered by the effort to adapt the interfaces of existing calibration tools. ### _Calibration Algorithms_ This section gives a high-level overview of the calibration algorithms as implemented in GCC tools. According to the used solver, GCC algorithms can be grouped into traditional geometric and learning-based ones. Generally speaking, geometric approaches are explainable and accurate, whereas the learning-based approaches are intended to be more robust and flexible, e.g., [76, 77, 78]. According to the type of calibration targets, GCC algorithms can be grouped into those based on artificial targets, those based on mapped natural scenes, and self-calibration algorithms without targets. Calibration with an artificial target is pretty standard and widely supported in GCC packages. It is typically offline, and usually involves two phases, linear initialization and iterative nonlinear refinement. Instances of linear initialization are DLT, [79, 7]. Iterative refinement is exemplified by [42, 45, 80, 81, 10]. We refer to [36] for an overview of artificial-target-based methods. Calibration with natural objects of known geometry includes infrastructure-based calibration methods, such as [61, 82]. Such methods require an accurate 3D reconstruction of the site for calibration and rough values for intrinsic parameters and are suitable for camera systems with motion constraints. Broadly speaking, self camera calibration by using observations of opportunistic landmarks includes recursive refinement methods, methods that recover only camera intrinsic parameters, and methods that recover structure, motion and camera intrinsic parameters. Methods in the first group recursively refine calibration parameters and have to start from coarse parameter values, e.g., [63, 83]. The second group dates back to [84] and is reviewed in [85]. 
Methods in the last group usually rely on bundle adjustment, thus, they typically have the best accuracy among self-calibration methods and are commonly supported in SfM packages, e.g., colmap [86]. Fig. 5: The passive planar calibration targets: (a) 8\(\times\)11 checkerboard, (b) 8\(\times\)11 circle grid, (c) 8\(\times\)11 asymmetric circle grid, (d) 8\(\times\)11 Charuco, (e) 7\(\times\)10 AprilGrid, (f) 10\(\times\)11 Deltille. \begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline **GCC tools** & **Supported models** & **Supported targets** & **Outlier tangents** & **Param. handling** & **std. dev.** & **Language** & **Open source** & **Other features** \\ \hline AppiCal [23] & KB-8; pinhole rad tan & AprilTag grid & No & No & Java & Yes & GUT; interactive \\ \hline \multirow{2}{*}{BabelCalib [87]} & division; division; evasion; & \multirow{2}{*}{agnostic} & \multirow{2}{*}{Huber} & \multirow{2}{*}{No} & \multirow{2}{*}{MATLAB} & \multirow{2}{*}{Yes} & \multirow{2}{*}{allow multiple planar targets} \\ & DS; EUCM; FOV; KB-8; & & & & & & & \\ & pinhole rad; UCM & & & & & & & \\ \hline \multirow{2}{*}{Basalt [14]} & DS; EUCM; FOV; KB-8; UCM & AprilGrid & Huber & No & C++ & Yes & efficient; GUI; modular \\ \hline \multirow{2}{*}{BooFCV [70]} & KB-23; pinhole rad tan; Mei & checkerboard circle Eocheck & No & No & Java & Yes & Android portable; GUI \\ \hline \multirow{3}{*}{calib.io [88]} & B-spline model; division; DS; EUCM; FOV; KB-8; pinhole rational rad tan & Charuco; checkerboard & Huber & Yes &? & No & correlation analysis; GUI; multi-camera; multiple targets \\ \hline \multirow{2}{*}{Calibu [89]} & Calibu by ARFG & FOV; KB-8; pinhole rational rad & circle grid & No & No & C++ & Yes & \\ & pinhole rad tan; KB-8; DS & AprilGrid; Charuco & likely & No &? & No & GUI; multi-camera \\ \hline \multirow{3}{*}{Camodocal [61]} & pinhole rad tan; Mei; KB-8 & checkerboard & Cauchy & No & C++ & Yes & \begin{tabular}{} \end{tabular} \\ \hline \multirow{2}{*}{Colmap [86]} & Colmap & pinhole rad tan prism, KB-8; FOV & N/A & soft L1, Cauchy & No & C++ & Yes & GUI; self-calibration; well-documented \\ \hline \multirow{3}{*}{Generic camera calibration [6]} & central generic; non-central generic; pinhole rational rad tan & custom grid with an AprilTag & Huber & No & C++ & Yes & accurate; board deformation \\ \hline \multirow{2}{*}{Learning CCS [91]} & pinhole rad & checkerboard & trim & No & Python & Yes & \\ & & & & & & & \\ \hline \multirow{2}{*}{libomical [92]} & a centered model; a geometric model; Mei; Scaramuzza & checkerboard & No & No & MATLAB & Yes & \\ \hline \multirow{2}{*}{Kalibr [10] / TartanCalib [93]} & DS; EUCM; FOV; mainhole equidistant; pinhole rad tan; Mei & AprilGrid; checkerboard & trim & Yes & C++ & Yes & multi-camera; board deformation \\ \hline \multirow{2}{*}{MATLAB camera calibrator [57]} & MILAB camera calibrator [57] & pinhole rad tan; Scaramuzza circle grid & \multirow{2}{*}{likely} & Yes & MATLAB & Yes & GUI; modular; stereo camera; well-documented \\ \hline \multirow{2}{*}{MC-Calib [94]} & KB-8; pinhole rad tan & Charuco & Huber & No & C++ & Yes & board deformation; multi-camera \\ \hline \multirow{2}{*}{Metashape calibrator [9]} & a custom fisheye model; pinhole rad tan & checkerboard &? 
& Yes & C++ / Python & No & correlation analysis; GUI \\ \hline \multirow{2}{*}{Metashape [9]} & a custom fisheye model; pinhole rad tan & N/A & Yes & No & C++ & No & GUI; self-calibration \\ \hline \multirow{3}{*}{mrcral [95]} & pinhole rational rad tan & \multirow{3}{*}{checkboard} & \multirow{3}{*}{trim} & No & C++ / Python & Yes & board deformation; uncertainty analysis \\ & thin prism; splined stereographic model & & & & & & \\ \hline \multirow{3}{*}{MRPT camera calib [96] / Open/CV} & pinhole rad tan & checkerboard & No & No & C++ & Yes & GUI; multi-chexerboard detector \\ \cline{1-1} \cline{2-2} \cline{4-6} & Onnil calibrator [59] & Mei & checkerboard & No & Yes & MATLAB & Yes & GUI; manual checkerboard detection \\ \hline \multirow{2}{*}{ROS camera calibrator [7] / OpenCV} & KB-8; pinhole rational rad tan thin prism; Mei & Charuco checkerboard circle grid & No, but trim (Mei) & No & Python & Yes & efficient; GUI; modular; stereo camera; well-documented \\ \hline \multirow{2}{*}{PhotoModeler calibrator [53]} & backward pinhole rad tan & customized circle grid & likely & Yes &? & No & correlation analysis; GUI \\ \hline \multirow{2}{*}{Pix4DMapper [64]} & pinhole rad tan; adapted Scaramuzza & N/A & Yes & No & C++ & No & GUI; self-calibration \\ \hline \multirow{2}{*}{SCNeRF [78]} & KB-6 & N/A & trim & No & Python & Yes & self-calibration \\ \hline \multirow{2}{*}{vidar [98]} & DS; EUCM; UCM & N/A & N/A & No & Python & Yes & self-calibration \\ \hline \end{tabular} ## IV GCC Tools This section reviews tools developed for GCC. These tools mainly realize algorithms using artificial targets or target-free bundle adjustment. Several learning-based GCC tools are also cited as examples from this active research field. Since our focus is on intrinsic calibration, tools solely for extrinsic calibration are left out, e.g., [82, 99, 100]. An extensive list of GCC tools to our knowledge is given in Table I. For brevity, the table only list a few photogrammetric software tools which unanimously allow self-calibration. This table can serve as a reference in choosing a proper GCC tool and hopefully can help prevent duplication of development effort. We assess a GCC tool based on characteristics which are grouped into accessibility and quality evaluation. For accessibility, these characteristics include supported camera models and targets, stereo / multiple camera support, the user interface, source availability, and the coding language. Usually, a graphical user interface (GUI) is more accessible than a command line interface to an average user. When a tool is open-source or modular, it is easy to extend it to other camera models and calibration targets. The coding language usually implies the execution efficiency and the community support. From quality evaluation, we look at the outlier strategy and the availability of covariance output. The outlier strategy dictates how to handle outliers in detected corners which may deviate from their true positions by a few pixels. For quality check, all calibration tools output some metric based on reprojection errors, such as the mean reprojection error and the root mean square (RMS) reprojection error. However, these metrics are highly dependent on the used image corners, and thus are inadequate to compare results from different methods [93]. The covariance output is an quality indicator besides these metrics, and directly links to the correlation analysis [101]. Next, we describe several popular calibration tools in terms of these characteristics. 
### _BabelCalib_ The monocular camera calibrator, BabelCalib, employs a back-projection model as a proxy for a variety of radial-symmetric forward camera models, including the pinhole radial distortion model (4), DS (33), EUCM (50), FOV (27), KB-8 (20), and UCM (39). In practice, the back-projection model, a two-parameter division model of even degrees (12), can be obtained by linear solvers, and then the desired camera models can be regressed from the division model. BabelCalib is agnostic to the calibration targets, supports calibration with multiple targets, and handles outliers with the Huber loss. ### _Basalt_ The Basalt package [14] can carry out monocular camera calibration, supporting camera models including DS, EUCM, FOV, KB-8, and UCM. Its default calibration target is the AprilGrid. A Levenberg-Marquardt algorithm is implemented in Basalt for robust calibration with the Huber loss. With neat use of C++ templates, it is a lean and fast tool. ### _calio.io_ The commercial calibration tool by calio.io comes with an intuitive GUI, supports a variety of camera models, including the pinhole rational radial tangential model with the thin prism effect (19), the division model (12), DS, KB-8, FOV, EUCM, and a B-spline camera model, and supports many calibration targets including the checkerboard and the Charuco board. Moreover, it allows calibrating multiple cameras with multiple targets, and optimizing the target points to deal with board deformation, and deals with outliers with the Huber loss. ### _Camodocal_ The Camodocal package supports monocular and stereo GCC with models including the pinhole radial tangential model (4), KB-8, and Mei (49). By default, it supports the checkerboard, but it is relatively easy to extend to other targets. It uses the Cauchy loss to deal with outliers. ### _Kalibr_ Kalibr is a popular GCC tool that can select informative images for calibration [10]. It supports projection models including pinhole projection (1), UCM, EUCM, and DS, and distortion models including radial tangential distortion, equidistant distortion, and FOV. As mentioned for (20), the KB-8 model in Kalibr discards points of non-positive depth \(Z_{c}\). The supported targets include checkerboards and AprilGrids. Outliers are handled by removing corners of reprojection errors exceeding a certain threshold. This tool has been extended to deal with the rolling shutter effect [102], and to better detect corners in images of high distortion lenses [93]. ### _MATLAB Camera Calibrator_ The MATLAB camera calibrator [8] supports both monocular and stereo camera calibration with both the pinhole radial tangential model (4) and the Scaramuzza model (24). It can be seen as a superset of [103] and [57]. The supported targets by default are checkerboards, circle grids, and AprilTag grids. With its modular design, it is easy to use other calibration targets, e.g., the AprilGrid. The MATLAB calibrator has an easy-to-follow GUI and many visualization functions. ### _ROS Camera Calibrator_ The OpenCV library provides functions for calibrating monocular and stereo cameras with the pinhole rational radial tangential model with the thin prism effect (19), the KB-8 model for fisheye cameras, and the Mei model for omnidirectional cameras. The omnidirectional module in OpenCV also supports a multi-camera setup and can be seen as a reimplementation of the MATLAB tool in [60]. The current KB-8's realization in OpenCV does not support points of non-positive depth. 
The calibration functions in OpenCV do not have outlier handling schemes, but its omnidirectional module removes images of large total reprojection errors in calibration. Several programs have been developed on top of OpenCV, such as the ROS camera calibrator [97] and the MRPT camera calibrator [96]. The ROS camera calibrator is a thin wrap of OpenCV calibration functions, can run in both interactive and batch mode, and supports checkerboards, circle grids, and Charuco boards. Besides wrapping the OpenCV functions, the MRPT camera calibrator extends the checkerboard detection to support multiple checkerboards. ### _Self-Calibration Tools with SfM_ Self-calibration is usually based on a SfM pipeline which is realized in commercial software or open source programs. For space, we limit the discussion to several representatives of the two groups. Professional photogrammetric packages usually support self-calibration, for instance, the Metashape by Agisoft [9], the calibrator in PhotoModeler [53], and the Pix4D mapper [64]. The Metashape realizes both checkerboard-based calibration and self-calibration using natural landmarks within its SfM pipeline. Both methods support the pinhole radial tangential model and a customized fisheye model that is made of the equidistant projection and the radial tangential distortion. The calibration tool in PhotoModeler adopts the inverse pinhole radial tangential model (8), and supports target-based calibration with either multiple boards each of five RAD (Ringed Automatically Detected) tags or a single board of a circle grid with four non-ringed coded tags. When the scene to be reconstructed is much larger than the printed targets, a self-calibration of the camera in the field may be conducted with PhotoModeler. The Pix4D mapper can also estimate the camera intrinsic parameters with a collection of images of natural scenes. It supports the pinhole radial tangential model and an adapted Scaramuzza model. The open-source SfM packages also widely support camera self-calibration, such as the popular colmap, and the recent Self-Calibration package based on the Neural Radiance Field, SCNeRF [78]. Based on geometric bundle adjustment, colmap supports camera models including the pinhole radial tangential model with the thin prism distortion, KB-8, and FOV. The learning-based SCNeRF considers both geometric and photometric consistency in constructing the implicit scene geometry and estimating the camera parameters. ## V Evaluation of Target-Based GCC Tools This section evaluates six popular target-based GCC tools on simulated and real data acquired by cameras of varying AOVs, to show their extensibility and repeatability. ### _Data Acquisition_ The real data were captured by an UI-3251LE-M-GL camera of a 1/1.8" sensor from the IDS Imaging, fitted with six fixed focus lenses listed in Table II, leading to varying camera DAOVs from 90\({}^{\circ}\) to 194\({}^{\circ}\). Notably, in focal length, the 90\({}^{\circ}\) lens resembles lenses on smartphones whose actual focal lengths are about 4 mm. Also, empirically, the calibrated focal lengths are close to the physical focal lengths from Table II in pixels. The camera can capture grayscale images at 25 frames/second and resolution 1600\(\times\)1200 in global shutter mode. Prior to data capture, the exposure time was set to 5 ms to reduce motion blur. For each lens, the camera was gently moved in front of an AprilGrid, passing through a variety of poses. 
We chose the AprilGrid since it is accurate [25], widely used, and resilient to occlusions, among the reviewed calibration targets. Three sequences each of a minute were recorded for each lens. From each sequence, three subsequences each of 200 frames were uniformly drawn without replacement. This resulted in \(54=6\times 3\times 3\) calibration sequences for six lenses. We evaluated six GCC tools on Ubuntu 20.04, including BabelCalib [87], Basalt [14], Camodocal [61], TartanCalib [93] (Kalibr with enhanced corner detection), the Matlab calibrator [8], and the ROS calibrator [97] based on OpenCV. which were chosen for their wide use and easy extension to an alternative type of target and data input. Within these tools, we evaluated several camera models, the pinhole model with the radial tangential distortion for wide-angle cameras, KB-8 for fisheye cameras, and Mei / EUCM for omnidirectional cameras, which were chosen mainly for their wide support by GCC tools and downstream applications. The test plan is Fig. 6: Sample images for S04525, E1M3518, BM4218, BM4018, BT2120, MTV185, in row-major order. shown in Table III which lists GCC tools and camera models for processing particular data. In general, the pinhole model with distortion was used for cameras with a DAOV \(<\)120\({}^{\circ}\), KB-8 for cameras with a DAOV \(\geq\) 100\({}^{\circ}\), and Mei / EUCM for cameras with a DAOV \(\geq\) 120\({}^{\circ}\). The simulation data were generated from the real data with the workflow shown in Fig. 7 (bottom). We first processed the real data by the TartanCalib with proper models according to the test plan. Thus, we obtained the frames of detected corners and their poses, and the estimated calibration parameters, from TartanCalib. As an exception, for simulating observations of the KB-8 model on MTV185 sequences, we first processed them by TartanCalib with the Mei model to get the frame poses, and then estimated the KB-8 parameters by Camodocal on the used corners by TartanCalib. In any case, these frame poses and camera parameters were then used to simulate the corners in images by projecting the target landmarks and adding a Gaussian noise of 0.7 px at both \(x\) and \(y\)-axis. These camera parameters served as the reference in evaluation. ### _Data Processing_ For either real or simulated data, the evaluation pipeline is shown in Fig. 7 (top). For better comparison, all tools except for the ROS calibrator used the same corners. Specifically, we first ran TartanCalib on a (real or simulated) sequence, and save the frames with detected corners and mark the frames used by TartanCalib. The TartanCalib was chosen to extract corners from AprilGrid images since it could identify sufficient corners under large distortion [93]. All frames of corners were provided to the ROS calibrator. But only frames of corners used by TartanCalib were given to the four methods, BabelCalib, Basalt, Camodocal, and the MATLAB calibrator. For these tools, we wrote necessary data loading functions and adapted the calibration initialization with the AprilGrid if needed. Note that TartanCalib / Kalibr always failed for the MTV185 sequences, we gave these four tools the corners of TartanCalib with the Mei model for these sequences. Feeding the four tools by TartanCalib had several other reasons. First, empirically, Kalibr usually chose \(\leq\) 40 informative frames for calibration. This coincided the assertion that global camera models were usually well constrained with 40 frames in [18]. 
Second, BabelCalib often failed to find a solution with too many frames (e.g., \(\geq\) 100), especially for the pinhole model with radial distortion. Third, the MATLAB calibrator took up to an hour to solve for the Scaramuzza model with 100 frames. For the ROS calibrator, we ran it five times, each with a sample of 40 randomly chosen frames without replacement, and kept the run of the minimum RMS reprojection error as the final result. The exclusive treatment of the ROS calibrator was because the OpenCV calibration functions hardly dealt with outliers and often gave poor results on corners used by TartanCalib. Apart from the above, we ran these six calibration tools with their default parameter settings. A test run was considered failed if no solution was found or the recovered focal lengths deviated from the nominal values (for real data) or the reference (in simulation) by \(\geq\) 100 px. Failures in real and simulated tests are marked in Table III, for BabelCalib, Basalt, Kalibr, and the ROS calibrator. BabelCalib failed once when it converged to a wrong focal length. Basalt failed for either converging to a wrong focal length or unable to converge in 100 iterations. Kalibr's failures were due to the unsuitable pinhole equidistant model for large FOV cameras. The ROS calibrator was bothered by outliers and largely unsuccessful on MTV185 sequences for the unsuitable pinhole equidistant model. Oddly, it always aborted with ill-conditioned matrices on sequences of 103\({}^{\circ}\) and 127\({}^{\circ}\) DAOV cameras, perhaps unable to initialize in such cases. Next, we evaluated the GCC tools by looking at the consistency of estimated camera parameters and the RMS reprojection errors for both simulated and real data. The RMS reprojection errors are computed by these tools on all inlier observations. The RMS values should be viewed lightly when comparing across tools since the inlier sets may vary slightly even for the same data. ### _Simulation Results_ The simulated data were processed as described above. The data from cameras with S04525, E1M3518, and BM4218 lenses, were processed by five tools with the pinhole radial tangential model except Basalt which did not support the model. The camera parameter errors and the RMS reprojection errors are shown in Fig. 8, where failed tests were excluded in drawing the box plots. The units are specified in parentheses Fig. 7: (Top) The calibration workflow for either real data or simulated data. (Bottom) The workflow to simulate image points from real data. for all box plot figures. These tools generally gave very similar results close to the reference. The focal lengths and principal points were usually within (-2, 2) px of the true values. Since BabelCalib did not consider the tangential distortion, its estimates had larger errors than other methods, especially for BM4218 sequences of 103\({}^{\circ}\) DAOV. The RMS reprojection errors slightly above 0.9 were resulted from the Gaussian noise of \(\sigma=\sqrt{2}\times 0.7=0.99\). The ROS calibrator based on OpenCV had slightly larger error dispersions, likely due to corners of large reprojection residuals. For a BM4218 sequence, the MATLAB calibrator converged to a focal length off by 37 px for no apparent reason. For cameras with a DAOV \(\geq 100^{\circ}\), the KB-8 model was solved for by using five tools except for the MATLAB calibrator which does not support KB-8. The parameter errors and RMS errors are shown in Fig. 9. 
The ROS calibrator results for the BM4218, BM4018, and MTV185 lenses, and the MATLAB results for the MTV185 lens, were excluded for consistent failures explained in Section V-B. Among these tools, we see that the Basalt and the OpenCV-based ROS calibrator sometimes converged to focal lengths of large errors >5 px. Other tools consistently estimated the focal lengths and principal points within (-2, 2) px as well as the distortion parameters. For sequences with lenses, BM4018, BT2120, and MTV185, three tools including Kalibr, Camodocal, and the ROS / OpenCV calibrator were used to solve for the Mei parameters. For comparison, we also solved for the EUCM model by BabelCalib and Basalt. The parameter errors and reprojection errors are shown in Fig. 10. where we used \((f_{x},f_{y})\) instead of \((\gamma_{x},\gamma_{y})\) as the latter has large variance caused by that of \(\xi\). For both the BM4018 and BT2120 sequences, the three methods with the Mei model gave similar results. Overall, Kalibr gave the best estimates, notably on the MTV185 sequences. The ROS calibrator tended to have larger variances in focal lengths and principal points but their errors were within (-2, 2) px. For the MTV185 sequences, the Camodocal and OpenCV results showed about 3 px errors in focal lengths and about 0.7 errors in \(\xi\), but with reasonable RMS errors. We attribute this to two reasons. First, the UCM model is numerical unstable. Second, \(k_{1}\) in the Mei model is redundant. As for the EUCM models, the BabelCalib achieved smaller dispersions in \(f_{x}\) and \(f_{y}\) than Basalt, although the data were simulated with the Mei model. ### _Real Data Results_ We processed the real data according to Table III and looked at the estimated parameters and RMS reprojection errors. For clarity, the nominal focal lengths from Table II and the principal point (800, 600) are subtracted from their estimates in plots. Fig. 8: Error distributions of pinhole radial tangential model parameters and the root mean square (RMS) reprojection errors, by five geometric camera calibration tools, BabelCalib, Camodocal, Kalibr / TartcanCalib, the MATLAB calibrator, and the ROS / OpenCV calibrator, on the simulated data for S04525, EIM3518, and BM4218 lenses, each of 9 sequences. Note that BabelCalib used a pinhole radial model, resulting in zero values in \(p_{1}\) and \(p_{2}\), which are shown relative to the true values. The sequences of S04525, E1M3518, and BM4218 lenses were processed by five tools except for Basalt. The calibration parameters and the RMS reprojection errors are shown in Fig. 11, where the failed cases are not included in the box plots. These five tools had fairly similar results. The difference in principal points for BabelCalib was caused by its model that ignored tangential distortion. The large dispersion in projection parameters of BM4218 sequences was likely because the pinhole radial tangential model was somewhat improper for the camera with this lens as implied in Fig. 12. For the KB-8 model, the sequences of BM4218, BM4018, BT2120, and MTV185 lenses were processed by five tools except the MATLAB calibrator. As shown in Fig. 12, these tools achieved very similar calibration results in general. Basalt failed frequently for the BM4218 and BT2120 sequences, leading to apparent small parameter dispersions. Comparing the dispersion of focal lengths for BM4218 in Fig. 11 and 12, we think that the KB-8 model is more suitable than the pinhole radial tangential model for the BM4218 sequences. 
The OpenCV-based ROS calibrator aborted on BM4218 and BM4018 sequences perhaps for failed initialization. Both Kalibr and the ROS calibrator did not handle MTV185 sequences of a \(194^{\circ}\) DAOV. For the BT2120 data, we see that the results by OpenCV were affected by outliers leading to large RMS reprojection errors and parameters slightly deviated from other methods. For the Mei model, we processed the BM4018, BT2120, and MTV185 sequences using tools including Kalibr, Camodocal, and the ROS calibrator. For comparison to the EUCM model, these sequences were also processed by BabelCalib and Basalt. The calibration parameters and the RMS reprojection errors are shown in Fig. 13. With real data, these tools obtained similar values and dispersions for focal lengths and principal points, more consistent than the simulation shown in 10 where the data were simulated with the Mei model. The distortion parameters of the Mei model had large variance despite reasonable RMS reprojection errors, due to its parameter redundancy. Otherwise, the EUCM model resulted in consistent values for Fig. 10: Error distributions of Mei parameters and the root mean square (RMS) reprojection errors, by geometric camera calibration tools, BabelCalib, Basalt, Camodocal, Kalibr, and the OpenCV-based ROS camera calibrator, on the simulated data for BM4018, BT2120, and MTV185 lenses, each of 9 sequences. BabalcCalib and Basalt used the extended unified camera model (EUCM), while others used the Mei model. Note that \(\beta\) of the EUCM is shown together with \(k_{1}\) of the Mei model relative to \(k_{1}\)’s true value. A parameter unavailable to the EUCM, e.g., \(k_{2}\), is zero by default, and shown relative to the parameter’s true value. Basalt failed all MTV185 sequences. Fig. 9: Error distributions of KB-8 parameters and the root mean square (RMS) reprojection errors, by five geometric camera calibration tools, BabelCalib, Basalt, Camodocal, Kalibr, and the OpenCV-based ROS calibrator, on the simulated data for BM4218, BM4018, BT2120, and MTV185 lenses, each of 9 sequences. Results of the ROS calibrator for BM4218, BT4018, and MTV185 lenses and the MATLAB calibrator for the MTV185 lens, were excluded for persistent failures. \([\alpha,\beta]\). ## VI Conclusions and Research Trends In view of the ever-evolving GCC, we survey the recent GCC tools from the perspectives of camera models, calibration targets, and algorithms, providing an overview of the benefits and limitations of these tools. We also evaluated six well-known calibration tools, including BabelCalib, Basalt, Camodocal, Kalibr, the MATLAB calibrator, and the OpenCV-based ROS calibrator, to study their consistency and repeatability on simulated and real data. From the review and experiments, we summarize several findings. (1) Outlier handling is crucial for optimization-based camera calibration tools. These outliers are usually detected corners a few pixels away from their actual image locations, and often occur in somewhat blurry images. Luckily, most GCC tools can deal with outliers. (2) The GCC tools, Camodocal, Kalibr, and the MATLAB calibrator, support well the pinhole radial tangential model. BabelCalib and Camodocal support well the KB-8 model, and TartanCalib supports well the KB-8 model for a camera with a DAOV \(<180^{\circ}\). Camodocal, TartanCalib, and OpenCV support well the Mei model, but the model suffers from parameter instability and redundancy. 
Moreover, the pinhole radial tangential model may become inadequate for cameras of a DAOV \(>100^{\circ}\). The KB-8 model is typically preferred for cameras of a large DAOV due to its wide support and good accuracy when a global camera model is to be obtained. (3) The various failure cases revealed in our tests imply the intricacy in camera model initialization and optimization of a classic GCC tool. Aside from these failures, these GCC tools in (2) agree well with each other on calibrating conventional, fisheye, and omnidirectional cameras with proper global camera models. Based on this study, we point out several future research directions. **Interactive Calibration** It is well known that quality data and informative data are essential for GCC. The opposite are two problems, image blur that may be caused by rapid motion or out of focus, and insufficient data. One way to ensure Fig. 11: Pinhole radial tangential model parameters and the root mean square (RMS) reprojection errors, by geometric camera calibration tools, BabelCalib, Camodocal, Kalibr, the MATLAB calibrator, and the ROS / OpenCV camera calibrator, on the real datasets captured with S04525, E1M3518, and BM4218 lenses, each of \(9\) sequences. Note that BabelCalib adopted a pinhole radial model. The ROS calibrator failed four out of \(9\) times for the BM4218 dataset. Fig. 12: KB-8 parameters and the root mean square (RMS) reprojection errors by geometric camera calibration tools, BabelCalib, Basalt, Camodocal, Kalibr, and the ROS / OpenCV calibrator, on the real datasets captured with BM4218, BM4018, BT2120, and MTV185 lenses, each of \(9\) sequences. Basalt failed \(7\) out of \(9\) times for both BM4218 and MTV185 datasets, leading to its small variances. data quality and information is interactive calibration which provides quality check, selects the quality data, and gives next-move suggestions in real time, whether for target-based or target-free calibration. AprilCal [104] is such a tool for target-based calibration. **Static Calibration** Target-based calibration often involves unrepeatable onerous movements which can be obviated in at least two ways, calibration with a programmed robot arm and static calibration. Robot arm-based calibration has been studied in [105]. Static calibration usually relies on active targets. Such methods have been developed in [106, 67] with application-specific setups. We think there is still much room in static calibration to explore. **Reconstruction with Calibration** The setup of the lab calibration is usually different from the in-situ setup, e.g., in focusing distance (depth of field), exposure, capture mode (snapshot or video), aperture, and size of the objects of interest. Some work has been done to mitigate the differences, e.g., out of focus, in [68, 69]. An ultimate solution would be self-calibration or calibration based on prior maps. These methods depend on a reconstruction engine that supports calibration. Such an engine based on traditional bundle adjustment is colmap [86]. New engines capable of calibration based on deep learning are on the surge, for instance, [78, 107].
2301.05689
Diagnostics of mixed-state topological order and breakdown of quantum memory
Topological quantum memory can protect information against local errors up to finite error thresholds. Such thresholds are usually determined based on the success of decoding algorithms rather than the intrinsic properties of the mixed states describing corrupted memories. Here we provide an intrinsic characterization of the breakdown of topological quantum memory, which both gives a bound on the performance of decoding algorithms and provides examples of topologically distinct mixed states. We employ three information-theoretical quantities that can be regarded as generalizations of the diagnostics of ground-state topological order, and serve as a definition for topological order in error-corrupted mixed states. We consider the topological contribution to entanglement negativity and two other metrics based on quantum relative entropy and coherent information. In the concrete example of the 2D Toric code with local bit-flip and phase errors, we map three quantities to observables in 2D classical spin models and analytically show they all undergo a transition at the same error threshold. This threshold is an upper bound on that achieved in any decoding algorithm and is indeed saturated by that in the optimal decoding algorithm for the Toric code.
Ruihua Fan, Yimu Bao, Ehud Altman, Ashvin Vishwanath
2023-01-13T18:21:23Z
http://arxiv.org/abs/2301.05689v2
# Diagnostics of mixed-state topological order and breakdown of quantum memory ###### Abstract Topological quantum memory can protect information against local errors up to finite error thresholds. Such thresholds are usually determined based on the success of decoding algorithms rather than the intrinsic properties of the mixed states describing corrupted memories. Here we provide an intrinsic characterization of the breakdown of topological quantum memory, which both gives a bound on the performance of decoding algorithms and provides examples of topologically distinct mixed states. We employ three information-theoretical quantities that can be regarded as generalizations of the diagnostics of ground-state topological order, and serve as a definition for topological order in error-corrupted mixed states. We consider the topological contribution to entanglement negativity and two other metrics based on quantum relative entropy and coherent information. In the concrete example of the 2D Toric code with local bit-flip and phase errors, we map three quantities to observables in 2D classical spin models and analytically show they all undergo a transition at the same error threshold. This threshold is an upper bound on that achieved in any decoding algorithm and is indeed saturated by that in the optimal decoding algorithm for the Toric code. + Footnote †: RF and YB contributed equally to this work. ## I Introduction The major roadblock to realizing quantum computers is the presence of errors and decoherence from the environment which can only be overcome by adopting quantum error correction (QEC) and fault tolerance [1]. A first step would be the realization of robust quantum memories [2; 3; 4]. Topologically ordered systems in two spatial dimensions, owing to their long-range entanglement and consequent degenerate ground states, serve as a promising candidate [5; 6; 7; 8]. A paradigmatic example is the surface code [9; 10], whose promise as a robust quantum memory has stimulated recent interest in its realization in near-term quantum simulators [11; 12; 13; 14; 15; 16; 17]. One of the central quest is to analyze the performance of topological quantum memory under _local_ decoherence. In the case of surface code with bit-flip and phase errors, it has been shown that the stored information can be decoded reliably up to a finite error threshold [10]. Namely, as the error rate increases, the success probability of the decoding algorithm drops to zero at a critical value that depends on the choice of the algorithm. These decoding transitions imply an error-induced singularity in the mixed state of the system. The algorithmic dependence of the error thresholds is a mere reflection of the suboptimality of specific algorithms. It is then natural to inquire how this transition can be probed through the behavior of intrinsic properties of the quantum state. Such a characterization has at least two important consequences. First, the critical error rate for the intrinsic transition should furnish an upper bound for decoding algorithms, saturating which implies that the optimal decoder has been found. Second, the correspondence between successful decoding and intrinsic properties of the quantum state acted upon by errors points to the existence of topologically distinct mixed states. In another word, answering this question amounts to relating the breakdown of topological quantum memory to a transition in the mixed-state topological order. 
Progress towards this goal lies in quantifying the residual long-range entanglement in the error-corrupted mixed state. We will consider quantities that are motivated from both perspectives and explore their unison. In this work, we investigate three information-theoretical diagnostics: (i) quantum relative entropy between the error-corrupted ground state and excited state; (ii) coherent information; (iii) topological entanglement negativity. The first two are natural from the perspective of quantum error correction (QEC). More specifically, the quantum relative entropy quantifies whether errors ruin orthogonality between states [18], and coherent information is known to give robust necessary and sufficient conditions on successful QEC [19; 20; 21]. The third one is a basis-independent characterization of long-range entanglement in mixed states and is more natural from the perspective of mixed-state topological order. This quantity has been proposed to diagnose topological orders in Gibbs states [22; 23], which changes discontinuously at the critical temperature. We borrow and apply this proposal to error-corrupted states. Our transition occurs in two spatial dimensions at a finite error rate, in contrast to the finite temperature transitions in four spatial dimensions. The presence of three seemingly different diagnostics raises the question of whether they all agree and share the same critical error rate. Satisfyingly, we indeed find this to be the case in a concrete example, surface code with bit-flip and phase errors. The \(n\)-th Renyi version of the three quantities can be formulated in a _classical_ two-dimensional statistical mechanical model of \((n-1)\)-flavor Ising spins, which exhibits a transition from a paramagnetic to a ferromagnetic phase as the error rate increases. The three quantities are mapped to different probes of the ferromagnetic order and must undergo the transition simultaneously, which establishes their consistency in this concrete example. Interestingly, the statistical mechanical model derived for the information-theoretic diagnostics is exactly dual to the random-bond Ising model (RBIM) that governs the decoding transition of the algorithm proposed in [10]. This duality implies that the error threshold of the algorithm in [10] saturates the upper bound. Therefore, it confirms that this decoding algorithm is optimal, and its threshold reflects the intrinsic properties of the corrupted state. We remark that mappings to statistical mechanical models have been tied to obtaining error thresholds of decoding algorithms [10; 24; 25; 26; 27; 28]. Here such mappings arise from characterizing intrinsic properties of the error corrupted mixed state. The rest of the paper is organized as follows. Sec. II gives a concrete definition of the error-corrupted states and introduces the three diagnostics. Sec. III studies the concrete example, the 2D Toric code subject to local bit-flip and phase errors. We close with discussions in Sec. IV. ## II Setup and Diagnostics In this section, we begin with introducing the error-corrupted mixed state. We show that any operator expectation value in a single-copy corrupted density matrix cannot probe the transition, and instead one needs to consider the non-linear functions of the density matrix. Next, we introduce three information-theoretic diagnostics of the transition: (i) quantum relative entropy; (ii) coherent information; (iii) topological entanglement negativity. 
These quantities generalize the diagnostics of ground-state topological order. ### Error-corrupted mixed state The type of mixed state we consider throughout the paper describes a topologically ordered ground state \(\rho_{0}=\left|\Psi_{0}\right\rangle\left\langle\Psi_{0}\right|\) subject to local errors \[\rho=\mathcal{N}[\rho_{0}]=\prod_{i}\mathcal{N}_{i}[\rho_{0}]\,, \tag{1}\] where the quantum channel \(\mathcal{N}_{i}\) models the local error at site \(i\) and is controlled by the error rate \(p\). We refer to \(\rho\) as the error-corrupted mixed state. The transition in the corrupted state, if exists, cannot be probed by the operator expectation value in a single-copy density matrix. To demonstrate it, we purify the corrupted state by introducing one ancilla qubit prepared in \(\left|0\right\rangle_{i}\) for each physical qubit at site \(i\). The physical and ancilla qubits are coupled locally via unitary \(U_{i}(p)\) such that tracing out the ancilla qubits reproduces the corrupted state \(\rho\). This leads to a purification \[\left|\Phi\right\rangle=\prod_{i}U_{i}(p)\left|\Psi_{0}\right\rangle\left( \otimes_{i}\left|0\right\rangle_{i}\right), \tag{2}\] which is related to the topologically ordered state by a depth-1 unitary circuit on the extended system [see Fig. 1]. It follows that the expectation value of _any_ operator supported on a large but finite region of the physical qubits, e.g., a Wilson loop operator, must be a smooth function of the error rate [see Fig. 1 for a schematics]. Thus, it is indispensable to consider the non-linear functions of the density matrix, e.g. quantum information quantities, to probe the transition in the corrupted state. This property holds when \(\rho\) describes a general mixed state in the ground-state subspace under local errors. We remark that the above argument does not prevent observables in a single-copy density matrix from detecting topological order in finite-temperature Gibbs states [29]. The key difference is the purifications of the Gibbs states at different temperatures are not necessarily related by finite-depth circuits. ### Quantum relative entropy Anyon excitations are crucial for storing and manipulating quantum information in a topologically ordered state. For example, to change the logical state of the code one creates a pair of anyons out of the vacuum and separates them to opposite boundaries of the system. The first diagnostic tests if the process of creating a pair of anyons and separating them by a large distance gives rise to a distinct state in the presence of decoherence. Specifically, we want to test if the corrupted state \(\rho=\mathcal{N}[\rho_{0}]\) is sharply distinct from \(\rho_{\alpha}=\mathcal{N}[w_{\alpha}(\mathcal{P})\rho_{0}w_{\alpha}(\mathcal{ P})^{\dagger}]\). In the second state, \(w_{\alpha}(\mathcal{P})\) is an open string operator that creates an anyon \(\alpha\) and its anti-particle \(\alpha^{\prime}\) at the opposite Figure 1: Physical observables verses information quantities in error corrupted states. Each error corrupted state can be obtained from applying local unitaries to the system (topological order) plus ancilla qubits (trivial product state). Thus, physical observables must be smooth functions of the error rate \(p\). In contrast, information quantities, e.g. the topological entanglement negativity \(\gamma_{N}\), can have discontinuities that identify the many-body singularities. ends of the path \(\mathcal{P}\). 
We use the _quantum relative entropy_ as a measure for the distinguishability of the two states \[D(\rho||\rho_{\alpha}):=\mathrm{tr}\rho\log\rho-\mathrm{tr}\rho\log\rho_{\alpha}\,. \tag{3}\] In absence of errors the relative entropy is infinite because the two states are orthogonal, and it decreases monotonically with the error rate [30; 31; 32]. Below the critical error rate, however, the states should remain perfectly distinguishable if the anyons are separated by a long distance. Therefore we expect the relative entropy to diverge as the distance between the anyons is taken to infinity. Above the critical error rate on the other hand we expect the relative entropy to saturate to a finite value reflecting the inability to perfectly distinguish between the two corrupted states. In this regard, the relative entropy describes whether anyon excitations remain well-defined and is a generalization of the Fredenhagen-Marcu order parameter for ground state topological order [33; 34; 35; 36]. To facilitate calculations, we consider a specific sequence of the Renyi relative entropies \[D^{(n)}(\rho||\rho_{\alpha}):=\frac{1}{1-n}\log\frac{\mathrm{tr}\rho\rho_{ \alpha}^{n-1}}{\mathrm{tr}\rho^{n}}, \tag{4}\] which recovers \(D(\rho||\rho_{\alpha})\) in the limit \(n\to 1\). In Sec. III we map the relative entropies \(D^{(n)}\) in the corrupted Toric code to order parameter correlation functions in an effective statistical mechanics model, which is shown to exhibit the expected behavior on two sides of the critical error rate. ### Coherent information The basis for protecting quantum information in topologically ordered states is encoding it in the degenerate ground state subspace. The second diagnostic we consider is designed to test the integrity of this protected quantum memory. We use the _coherent information_, as a standard metric for the amount of quantum information surviving in a channel [19; 20; 21]. In our case, the relevant quantum channel consists of the following ingredients illustrated below. (i) A unitary operator \(U\) that encodes the state of the logical qubits in the input \(R\) into the ground state subspace. (ii) A unitary coupling \(U_{QE}\) of the physical qubits \(Q\) to environment qubits \(E\), which models the decoherence. The coherent information in this setup is defined as \[I_{c}(R)Q):=S_{Q}-S_{QR}. \tag{5}\] Here \(S_{Q}\) and \(S_{RQ}\) are the von Neumann entropies of the systems \(Q\) and \(RQ\) respectively and we used the Choi map to treat the input \(R\) as a reference qubit in the output. It follows from subadditivity that the coherent information is bounded by the amount of encoded information in the degenerate ground state subspace, i.e. \(-S_{R}\leqslant I_{c}\leqslant S_{R}\). In the absence of errors \(I_{c}=S_{R}\), and we expect this value to persist as long as the error rate is below the critical value. Above the critical error rate, we expect \(I_{c}<S_{R}\), indicating the loss of encoded information. Physically the coherent information is closely related and expected to undergo a transition at the same point as the relative entropy discussed above. The quantum information is encoded by separating anyon pairs across the system. It stands to reason that if this state remains perfectly distinguishable from the original state, as quantified by the relative entropy, then the quantum information encoded in this process is preserved. 
The critical error rate for preserving the coherent information is an upper bound for the threshold of any QEC algorithms \[p_{c}\geqslant p_{c,\mathrm{algorithm}}\,. \tag{6}\] The key point is that coherent information is non-increasing upon quantum information processing and cannot be restored once it is lost. Thus, a successful QEC requires \(I_{c}=S_{R}\). Moreover, the QEC algorithm involves syndrome measurements that are non-unitary and generically do not access the full coherent information in the system giving rise to a lower error threshold. To facilitate calculations and mappings to a statistical mechanics model we will need the Renyi coherent information \[I_{c}^{(n)}:=S_{Q}^{(n)}-S_{RQ}^{(n)}=\frac{1}{n-1}\log\frac{\mathrm{tr}\rho_{ RQ}^{n}}{\mathrm{tr}\rho_{Q}^{n}}, \tag{7}\] which approaches \(I_{c}\) in the limit \(n\to 1\). In the example of Toric code with incoherent errors discussed in Sec. III, we show that \(I_{c}^{(n)}\) takes distinct values in different phases. ### Topological entanglement negativity The topological entanglement entropy provides an intrinsic bulk probe of ground state topological order and does not require a priori knowledge of the anyon excitations. The third diagnostic we consider generalizes this notion to the error-corrupted mixed state. A natural quantity often used to quantify entanglement in mixed states, is the logarithmic negativity of a sub-region \(A\)[37; 38; 39] \[\mathcal{E}_{A}(\rho):=\log||\rho^{T_{A}}||_{1}, \tag{8}\] where \(\rho^{T_{A}}\) is the partial transpose on the subsystem \(A\) and \(\|\cdot\|_{1}\) denotes the trace (\(L_{1}\)) norm. The logarithmic negativity coincides with the Renyi-1/2 entanglement entropy for the pure state and is non-increasing with the error rate of the channel, a requirement that any measure of entanglement must satisfy [40; 41]. The logarithmic negativity was previously used in the study of ground state topological phases [42; 43; 44] and more recently for detecting topological order in finite temperature Gibbs states [22; 23]. We expect that the universal topological contribution to the entanglement [45; 46] will survive in the corrupted mixed state below a critical error rate and be captured by the logarithmic negativity. Thus, the conjectured form of this quantity is \[\mathcal{E}_{A}=c|\partial A|-\gamma_{N}+\ldots, \tag{9}\] where \(|\partial A|\) is the circumference of the region \(A\), \(c\) is a non-universal coefficient, and ellipsis denotes terms that vanish in the limit \(|\partial A|\to\infty\). The constant term \(\gamma_{N}\) is the _topological entanglement negativity_ of a simply connected subregion \(A\), and is argued to originate from the long-range entanglement [22; 47]. One of the essential reasons for \(\gamma_{N}\) being topological is the conversion property \(\mathcal{E}_{A}=\mathcal{E}_{A}\), i.e. negativity of a subsystem is equal to that of the complement. In contrast, the von Neumann entropy of a subregion in the error-corrupted mixed state exhibits a volume-law scaling, and its constant piece is not topological because of \(S_{A}\neq S_{\bar{A}}\). To facilitate the calculation of the negativity, we consider the Renyi negativity of even order \[\mathcal{E}_{A}^{(2n)}(\rho):=\frac{1}{2-2n}\log\frac{\mathrm{tr}(\rho^{T_{A} })^{2n}}{\mathrm{tr}\rho^{2n}}\,. \tag{10}\] The logarithmic negativity is recovered in the limit \(2n\to 1\). Here, we choose a particular definition of the Renyi negativity such that it exhibits an area-law scaling in the corrupted state. In Sec. 
III, we show explicitly that in the Toric code the topological part \(\gamma_{N}^{(2n)}\) of the Renyi negativity takes a quantized value \(\log 2\) in the phase where the quantum memory is retained and vanishes otherwise. To summarize, we expect the topological negativity takes the same universal value as the topological entanglement entropy in the uncorrupted ground state and drops sharply to a lower value at a critical error rate. It is _a priori_ not clear, however, that the transition in the negativity must occur at the same threshold as that marks the transition of the other two diagnostics we discussed. In Sec. III we show, through mapping to a statistical mechanics model that, in the example of the Toric code, a single phase transition governs the behavior of all three diagnostics. ## III Example: Toric code under bit-flip and phase errors In this section, we use the three information-theoretical diagnostics to probe the distinct error-induced phases in the 2D Toric code under bit-flip and phase errors. In particular, we develop 2D classical statistical mechanical models to analytically study the Renyi-\(n\) version of the diagnostics in this example. The statistical mechanical models involve \((n-1)\)-flavor Ising spins and undergo ferromagnetic phase transitions as a function of error rates. We show that the three diagnostics map to distinct observables that all detect the ferromagnetic order and undergo the transition simultaneously. We remark that our results also apply to the planar code. In Sec. III.1, we introduce the Toric code and the error models. We derive the statistical mechanical models in Sec. III.2 and analyze the phase transition in Sec. III.3. Sec. III.4 discusses the three diagnostics and their corresponding observables in the statistical mechanical models. See Table 1 for a summary. We discuss the replica limit \(n\to 1\) in Sec. III.5. ### Toric code and error model We consider the 2D Toric code on an \(L\times L\) square lattice with periodic boundary conditions. This code involves \(N=2L^{2}\) physical qubits on the edges of the lattice, and its code space is given by the ground state subspace of the Hamiltonian \[H_{\mathrm{TC}}=-\sum_{s}A_{s}-\sum_{p}B_{p}\,, \tag{11}\] where \(A_{s}\) and \(B_{p}\) are mutually commuting operators associated with vertices and plaquettes \[A_{s}=\prod_{\ell\in\mathrm{star}(s)}X_{\ell}\,,\quad B_{p}=\prod_{\ell\in \mathrm{boundary}(p)}Z_{\ell}\,. \tag{12}\] Here, \(X_{\ell}\) and \(Z_{\ell}\) denote the Pauli-X and Z operators on edge \(\ell\), respectively. The ground state satisfying \(A_{s}\ket{\Psi}=B_{p}\ket{\Psi}=\ket{\Psi}\) is four-fold degenerate and can encode two logical qubits. We consider specific error channels describing uncorrelated single-qubit bit-flip and phase errors \[\begin{split}\mathcal{N}_{X,i}[\rho]&=(1-p_{x}) \rho+p_{x}X_{i}\rho X_{i}\,,\\ \mathcal{N}_{Z,i}[\rho]&=(1-p_{z})\rho+p_{z}Z_{i} \rho Z_{i}\,,\end{split} \tag{13}\] where the Pauli-\(X\) (\(Z\)) operator acting on the Toric code ground state creates a pair of \(m\) (\(e\)) anyons on the adjacent plaquettes (vertices), \(p_{x}\) and \(p_{z}\) are the corresponding error rates. The corrupted state reads \[\rho=\mathcal{N}_{X}\circ\mathcal{N}_{Z}[\rho_{0}]\,,\] where \(\mathcal{N}_{X(Z)}=\prod_{i}\mathcal{N}_{X(Z),i}\). We assume that the error rate is uniform throughout our discussion. We make a few remarks. First, the error channels in Eq. 
(13) do not create coherent superposition between states with different anyon configurations and are referred to as incoherent errors. Second, Pauli-Y errors create \(f\) anyons incoherently and can also be analyzed. It leads to a similar physics and will be not discussed in the work. ### Statistical mechanical models Here, we map the \(n\)-th moment of the corrupted density matrix \(\mathrm{tr}\rho^{n}\) to the partition function of the \((n-1)\)-flavor Ising model. In this statistical mechanical model, one can analyze the singularity in the Renyi version of the three diagnostics, which will be presented in Sec. III.4. To begin, we consider the maximally mixed state in the ground state subspace \[\rho_{0}=\frac{1}{4}\prod_{s}\frac{1+A_{s}}{2}\prod_{p}\frac{1+B_{p}}{2}\,. \tag{14}\] For our purpose here, it is convenient to write \(\rho_{0}\) in a loop picture \[\rho_{0}=\frac{1}{2^{N}}\sum_{g_{z}}g_{z}\sum_{g_{x}}g_{x}\,, \tag{15}\] where \(g_{z}\) and \(g_{x}\) are \(Z\) and \(X\) loops on the original and dual lattice given by the product of \(A_{s}\) and \(B_{p}\) operators, respectively. The summation runs over all possible loop configurations. In what follows, we will use \(g_{x(z)}\) to denote both the operators and the loop configurations. The meaning will be clear in the context. Two error channels act on the loop operators \(g_{x},g_{z}\) by only assigning a real positive weight: \[\mathcal{N}_{X,i}[g_{z}] =\left\{\begin{array}{rl}(1-2p_{x})g_{z}&Z_{i}\in g_{z}\\ g_{z}&Z_{i}\notin g_{z}\end{array}\right.,\] \[\mathcal{N}_{Z,i}[g_{x}] =\left\{\begin{array}{rl}(1-2p_{z})g_{x}&X_{i}\in g_{x}\\ g_{x}&X_{i}\notin g_{x}\end{array}\right..\] Thus, the corrupted state remains a superposition of loop operators \[\rho=\frac{1}{2^{N}}\sum_{g_{x},g_{z}}e^{-\mu_{x}|g_{x}|-\mu_{z}|g_{z}|}g_{x}g_ {z}, \tag{16}\] where \(|g_{x(z)}|\) denotes the length of the loop, and \(\mu_{x(z)}=-\log(1-2p_{z(x)})\) can be understood as the line tension. Using Eq. (16), it is straightforward to see that the expectation values of operators, such as the Wilson loop and open string, behave smoothly as the error rate increases, in consistence with the general argument in Sec. II.1. Using this loop picture Eq. (16), we can write the \(n\)-th moment as \[\begin{split}\mathrm{tr}\rho^{n}&=\frac{1}{2^{nN}} \sum_{\{g_{x}^{(s)},g_{z}^{(s)}\}}\mathrm{tr}\Big{(}\prod_{s=1}^{n}g_{x}^{(s)} g_{z}^{(s)}\Big{)}\\ & e^{\sum_{s}-\mu_{x}|g_{x}^{(s)}|-\mu_{z}|g_{z}^{(s)}|},\end{split} \tag{17}\] where \(g_{x(z)}^{(s)}\), \(s=1,2,\cdots,n\) is the \(X(Z)\) loop operator from the \(s\)-th copy of density matrix. The product of loop operators in Eq. (17) has a nonvanishing trace only if the products of \(X\) and \(Z\) loops are proportional to identity individually, which leads to two independent constraints \[g_{a}^{(n)}=\prod_{s=1}^{n-1}g_{a}^{(s)}\,,\quad a=x,z\,. \tag{18}\] The \(n\)-th moment factorizes into a product of two partition functions \[\mathrm{tr}\rho^{n}=\frac{1}{2^{(n-1)N}}\mathcal{Z}_{n,x}\mathcal{Z}_{n,z}\,, \tag{19}\] where \(\mathcal{Z}_{n,a}=\sum_{\{g_{a}^{(s)}\}}e^{-H_{n,a}}\) with \(a=x,z\) is a statistical mechanics model that describes fluctuating \(X(Z)\) loops with a line tension. The Hamiltonian takes the form \[H_{n,a}=\mu_{a}\Big{(}\sum_{s=1}^{n-1}\big{|}g_{a}^{(s)}\big{|}+\big{|}\prod_ {s=1}^{n-1}g_{a}^{(s)}\big{|}\Big{)}\,. \tag{20}\] Here, we have imposed the constraints (18), and the summation in each partition function runs over the loop configurations only in the first \(n-1\) copies. 
The loop model can be mapped to a statistical mechanical model of \(n-1\) flavors of Ising spins with nearest neighbor ferromagnetic interactions. The mapping is established by identifying the loop configuration \(g_{a}^{(s)}\) with \(s=1,2,\ldots,n-1\) with domain walls of Ising spins. Specifically, for a \(Z\) loop configuration on the original lattice, we associate a Ising spin configuration \(\sigma_{i}\) on the dual lattice such that \[\Big{|}g_{z,\ell}^{(s)}\Big{|}=\Big{(}1-\sigma_{i}^{(s)}\sigma_{j}^{(s)}\Big{)} \,/2\,,\] \begin{table} \begin{tabular}{|c|c|c|} \hline \hline Diagnostics & Observable & PM & FM \\ \hline \(D^{(n)}\) & Logarithm of & \(O(|i_{l}-i_{r}|)\) & \(O(1)\) \\ \hline \(I_{c}^{(n)}\) & Related to the excess free energy for & \multirow{2}{*}{\(2\log 2\)} & \multirow{2}{*}{\(0\)} \\ & domain walls along non-contractible loops & & \\ \hline \(\mathcal{E}_{A}^{(2n)}\) & Excess free energy for & \multirow{2}{*}{\(c|\partial A|/\xi-\log 2\)} & \multirow{2}{*}{\(c|\partial A|/\xi\)} \\ & aligning spins on the boundary of \(A\) & & \\ \hline \hline \end{tabular} \end{table} Table 1: Dictionary of the mapping. The Rényi-\(n\) version of the diagnostics of topological order in error corrupted states and their corresponding observables in \((n-1)\)-flavor Ising models are listed in the first and second columns, respectively. We consider 2D Toric code subject to one type of incoherent error (bit-flip or phase errors). The asymptotic behaviors of these diagnostics in the paramagnetic (PM) and ferromagnetic (FM) phases of the spin model are provided. where \(i,j\) are connected by the link dual to \(\ell\), and \(|g_{z,\ell}^{(s)}|\) is a binary function that counts the support of loop on link \(\ell\). The total length of the loop is given by \(|g_{z}^{(s)}|=\sum_{\ell}|g_{z,\ell}^{(s)}|\). Similarly, we can define the Ising spins on the original lattice that describe the \(X\) loop configuration on the dual lattice. In terms of the Ising spins, the effective Hamiltonian is given by \[H_{n,a}=-J_{a}\sum_{\langle i,j\rangle}\left(\sum_{s=1}^{n-1}\sigma_{i}^{(s)} \sigma_{j}^{(s)}+\prod_{s=1}^{n-1}\sigma_{i}^{(s)}\sigma_{j}^{(s)}\right) \tag{21}\] with a ferromagnetic coupling \(J_{x(z)}=-\log\sqrt{1-2p_{x(x)}}\,\). In what follows, we refer to this model as the \((n-1)\)_-flavor Ising model_. We remark that the model exhibits a global symmetry \(G^{(n)}=(\mathbb{Z}_{2}^{\otimes n}\rtimes\mathcal{S}_{n})/\mathbb{Z}_{2}\), where \(\mathcal{S}_{n}\) is the permutation symmetry over \(n\) elements. As is shown below, increasing the error rate the model undergoes a paramagnetic-to-ferromagnetic transition that completely breaks the \(G^{(n)}\) symmetry. ### Phase transitions Here, we study the ferromagnetic transition in the \((n-1)\)-flavor Ising model. The transition points depend on \(n\) and are determined using both analytical methods (e.g. Kramers-Wannier duality for \(n=2,3\)) and Monte-Carlo simulation (for \(n=4,5,6\), etc). The results are presented in Fig. 2. For \(n=2\), the statistical mechanical model is the standard square lattice Ising model: \[H_{2,a}=-2J_{a}\sum_{\langle i,j\rangle}\sigma_{i}\sigma_{j}\,. \tag{22}\] The critical point is determined analytically by the Kramers-Wannier duality [48; 49] \[p_{c}^{(2)}=\frac{1}{2}\Big{(}1-\sqrt{\sqrt{2}-1}\,\Big{)}\approx 0.178. \tag{23}\] For \(n=3\), the model becomes the Ashkin-Teller model on 2D square lattice along the \(\mathcal{S}_{4}\) symmetric line. 
The Hamiltonian is \[H_{3,a}=-J_{a}\sum_{\langle i,j\rangle}\sigma_{i}^{(1)}\sigma_{j}^{(1)}+\sigma _{i}^{(2)}\sigma_{j}^{(2)}+\sigma_{i}^{(1)}\sigma_{i}^{(2)}\sigma_{j}^{(1)} \sigma_{j}^{(2)}. \tag{24}\] The model is equivalent to the standard four-state Potts model [50] with a critical point determined by the Kramers-Wannier duality \[p_{c}^{(3)}=\frac{1}{2}\Big{(}1-\frac{1}{\sqrt{3}}\Big{)}\approx 0.211. \tag{25}\] For \(n\geqslant 4\), we are not aware of any exact solution and resort to the Monte-Carlo simulation. To locate the transition point \(p_{c}\), we consider the average magnetization per spin, \[m:=\frac{1}{(n-1)L^{2}}\sum_{s=1}^{n-1}\sum_{i}\sigma_{i}^{(s)}. \tag{26}\] We calculate the magnetization square \(\langle m^{2}\rangle\) and the Binder ratio \(B=\langle m^{4}\rangle/\langle m^{2}\rangle^{2}\) numerically and display the results in Fig. 3. Assuming a continuous transition, we determine \(p_{c}^{(n)}\) by the crossing point of \(B(p,L)\) for various system sizes \(L\) and extract the critical exponents using the scaling ansatz \(B(p,L)=\mathcal{F}_{b}((p-p_{c})L^{1/\nu})\) and \(\langle m^{2}\rangle(p,L)=L^{-2\beta/\nu}\mathcal{F}_{m}((p-p_{c})L^{1/\nu})\). The analysis yields \(p_{c}^{(4)}=0.231\) for \(n=4\). However, the sharp drop of magnetization and the non-monotonic behavior of \(B(p,L)\) near \(p_{c}^{(4)}\) hint at a possible first-order transition [51; 52]. The critical error threshold \(p_{c}\) increases monotonically with \(n\) and is exactly solvable in the limit \(n\to\infty\). In this case, the interaction among different flavors is negligible compared to the two-body Ising couplings. Thus, the critical point is asymptotically the same as that in the Ising model with coupling \(J_{a}\) and is given by \[p_{a,c}^{(\infty)}=\frac{1}{2}\big{(}2-\sqrt{2}\,\big{)}\approx 0.293. \tag{27}\] ### Three diagnostics The Renyi version of the three information theoretic diagnostics, quantum relative entropy, coherent information, and topological entanglement negativity, translate Figure 2: Critical error rates for various Rényi index \(n\). \(p_{c}^{(2)}\approx 0.178\) and \(p_{c}^{(3)}\approx 0.211\) are determined by the exact solution (blue diamonds). For \(n\geqslant 4\), \(p_{c}^{(n)}\) is determined by calculating the crossing of the Binder ratio for various system sizes via Monte-Carlo (red squares). \(p_{c}^{(n)}\) in the replica limit \(n\to 1\) (the yellow star) is given by the critical point of random-bond Ising model (RBIM) in 2D, \(p_{c}^{(1)}\approx 0.109\), as explained in Sec. III.5. In the limit \(n\to\infty\), the spin model is asymptotically decoupled Ising models with \(p_{c}^{(\infty)}\approx 0.293\) (the grey dashed line). into distinct physical quantities in the statistical mechanical model. We write these quantities explicitly below and show that all three detect the establishment of ferromagnetic order. Therefore the transition in all three quantities is governed by the same critical point, a fact that is not evident before mapping to statistical mechanical models. #### iv.1.1 Quantum relative entropy We start with the Renyi version of the quantum relative entropy given by Eq. (4). Let \(\rho\) be the corrupted ground state of the Toric code, and \(\rho_{m}=\mathcal{N}[\ket{\Psi_{m}}\bra{\Psi_{m}}]\) where \(\ket{\Psi_{m}}:=w_{m}(\mathcal{C})\ket{\Psi_{0}}\) has a pair of \(m\)-particles at the end of path \(\mathcal{C}\). The phase errors do not change the distinguishability between the two states and can be safely ignored here. 
Only the statistical mechanics model for the \(Z\) loops/spins is relevant. Let \(i_{\ell}\) and \(i_{r}\) denote the positions of two \(m\)-particles, we show in Appendix A.1 that the Renyi relative entropy is mapped to a two-point function of the Ising spins \[D^{(n)}(\rho||\rho_{\alpha})=\frac{1}{1-n}\log\langle\sigma_{i_{\ell}}^{(1)} \sigma_{i_{r}}^{(1)}\rangle\,, \tag{28}\] where \(\sigma_{j}^{(1)}\) is the first flavor of the Ising spin at site \(j\), and the subscription \(z\) is suppressed. When the error rate is small and the system is in the paramagnetic phase, the correlation function decays exponentially, and thus \(D^{(n)}=O(|i_{\ell}-i_{r}|)\) which grows linearly with the distance between \(i_{\ell}\) and \(i_{r}\). This indicates that the error-corrupted ground state and excited state remain distinguishable. When the error rate exceeds the critical value and the system enters the ferromagnetic phase, \(D^{(n)}\) is of \(O(1)\) due to the long-range order, which implies that the error-corrupted ground state and excited state are no longer distinguishable. #### iv.1.2 Coherent information Next consider the Renyi version of the coherent information \(I_{c}^{(n)}\) in Eq. (7). We let the two logical qubits in the system \(Q\) be maximally entangled with two reference qubits \(R\). As detailed in Appendix A.2, \(I_{c}^{(n)}\) can be mapped to the free energy cost of inserting domain walls along non-contractible loops that are related to the logical operators. More explicitly, let \(\mathbf{d}_{al}\) with \(a=x,z\) and \(l=l_{1},l_{2}\) be a \((n-1)\)-component binary vector. Each component of \(\mathbf{d}_{al}\) dictates the insertion of domain walls for \(a=x,z\) spins along the non-contractible loop \(l\), respectively, in \(n-1\) copies of the Ising spins. Here, along the domain walls, the couplings between nearest neighbor spins are flipped in sign and turned anti-ferromagnetic. Then, we have \[I_{c}^{(n)}=\frac{1}{n-1}\sum_{a=x,z}\log\Big{(}\sum_{\mathbf{d}_{a1}\mathbf{ d}_{a2}}e^{-\Delta F_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}}\Big{)}-2\log 2\,, \tag{29}\] where \(\Delta F_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}\) is the free energy cost associated with inserting domain walls labeled by binary vectors \(\mathbf{d}_{al}\), the sum runs over all possible \(\mathbf{d}_{al}\). When the error rate is small and the system is in the paramagnetic phase, the domain wall along a non-contractible loop costs nothing, i.e. \(\Delta F_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}=0\). It follows that the corrupted state retains the encoded information, i.e. \(I_{c}^{(n)}=2\log 2\). When the error rate exceeds the critical value and the system enters the ferromagnetic phase, inserting a domain wall will have a free energy cost that is proportional to its length. Namely, \(\Delta F_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}\) is proportional to the linear system size unless no defect is inserted. One can deduce \(I_{c}^{(n)}=0\) when the spin model for either \(Z\) or \(X\) loop undergoes a transition to the ferromagnetic phase, namely, the corrupted state corresponds to a classical memory. When both spin models are in the ferromagnetic phase, we have \(I_{c}^{(n)}=-2\log 2\), indicating that the system is a trivial memory. Figure 3: Phase transition in the statistical mechanical model for \(n=4\). Magnetization (a) and Binder ratio (b) as a function of error rate \(p\) for various system sizes up to \(L_{x}=L_{y}=L=64\). The crossing of \(B(p,L)\) yields \(p_{c}=0.231\). 
The exponents \(\nu=0.74\) and \(\beta=0.04\) are extracted from the finite-size scaling collapse in the insets. The results are averaged over \(10^{5}\) independent Monte-Carlo measurements for each of 48 initial configurations. #### iii.1.3 Topological entanglement negativity The Renyi negativities of even order are given in Eq. (10). Let us specialize here to the Toric code with only phase errors. As shown in Appendix A.3, the \(2n\)-th Renyi negativity of a region \(A\) is given by \[\mathcal{E}_{A}^{(2n)}=\Delta F_{A}\,, \tag{30}\] where \(\Delta F_{A}\) is the excess free energy associated with aligning a single flavor of Ising spins on the boundary \(\partial A\) in the same direction (illustrated in Fig. 4). The excess free energy \(\Delta F_{A}\), or more precisely, its subleading term can probe the ferromagnetic transition in the statistical-mechanical model. The excess free energy has two contributions. The energetic part is always proportional to \(|\partial A|\). The entropic part is attributed to the loss of degrees of freedom due to the constraint. In the paramagnetic phase, the Ising spins fluctuate freely above the scale of the finite correlation length \(\xi\). Hence, enforcing each constraint removes \(O(|\partial A|/\xi)\) degrees of freedom proportional to the circumference of \(A\), which yields the leading term (area law). Importantly, there is still one residual degree of freedom, namely, the aligned boundary spins can fluctuate together, which results in a subleading term \(\log 2\). Altogether, we have \(\mathcal{E}_{A}^{(2n)}=c|\partial A|/\xi-\log 2\). Here, it is an interesting question to verify whether the prefactor \(c\) is universal or not [53], and we leave it for future study [54]. In the ferromagnetic phase, the finite correlation length \(\xi\) sets the scale of the critical region, below which the spins can fluctuate. Thus, imposing each constraint removes \(O(|\partial A|/\xi)\) degrees of freedom. However, the aligned boundary spins should also align with the global magnetization resulting in a vanishing subleading term in the excess free energy. Hence, the negativity \(\mathcal{E}_{A}^{(2n)}\) exhibits a pure area law without any subleading term. To support our analytical argument, we also numerically calculate the Renyi-4 negativity (the Renyi-2 negativity is trivially zero) and show that the topological term \(\gamma_{N}^{(4)}\) indeed exhibits distinct behaviors across the transition. We adopt the Kitaev-Preskill prescription to extract \(\gamma_{N}\)[45]. More specifically, we consider the subsystems \(A\), \(B\), \(C\) depicted below, and \(\gamma_{N}\) is given by \[\begin{split}\includegraphics[width=142.26378pt]{Fig4} \end{split}-\gamma_{N}&:=\mathcal{E}_{A}+\mathcal{E}_{B}+ \mathcal{E}_{C}+\mathcal{E}_{ABC}\\ &\qquad-\mathcal{E}_{AB}-\mathcal{E}_{BC}-\mathcal{E}_{AC}\,. \end{split} \tag{31}\] Our choice of the subsystems further simplifies the above expression to \(-\gamma_{N}=2\mathcal{E}_{A}-2\mathcal{E}_{AC}+\mathcal{E}_{ABC}\)[55]. The result is presented in Fig. 5, where \(\gamma_{N}^{(4)}\) approaches \(\log 2\) and \(0\) for small and large \(p_{z}\), respectively. The curves become steeper as the system size increases, which is consistent with the predicted step function in the thermodynamic limit. One can also observe a dip of \(\gamma_{N}^{(4)}\) below zero. This phenomenon has also appeared in the numerical study of the topological entanglement entropy across transitions [13]. 
We believe that this dip is due to the finite-size effect, which might be more severe for information quantities with a large Renyi index \(n\)[56]. So far, we only considered a simply connected subregion. If \(A\) is not simply connected, that is, \(\partial A\) contains \(k\) disconnected curves (for example the boundary of an annular region that contains two disconnected curves), then the constraints only require the Ising spins to align with other spins on the same boundary curve. In this case the topological entanglement negativity is \(k\log 2\). This is the same dependence on the number of disconnected components as in the topological entanglement entropy of ground states [46]. Figure 4: Entanglement negativity between region \(A\) and its compliment \(\bar{A}\) corresponds to the excess free energy for aligning Ising spins on the boundary of \(A\) (pink plaquettes) pointing to the same direction. Figure 5: Topological negativity \(\gamma_{N}^{(4)}\) as a function of the phase error rate \(p_{z}\). We consider the subsystems \(A,B\), and \(C\) as in Eq. (31) and choose the side of the region \(ABC\) to be \(L/4\). \(\gamma_{N}^{(4)}\) approaches \(\log 2\) and zero at small and large \(p_{z}\), respectively. The curves become steeper as the system size \(L\) increases. The dashed line indicates the predicted behavior in the thermodynamic limit. The results are averaged over \(10^{7}\) independent Monte-Carlo measurements from each of \(48,96\) random initial configurations for \(L=8,12\), respectively. The errorbars for \(L=8\) are negligible and thus omitted. ### \(n\to 1\) limit, duality and connection to optimal decoding In this subsection, we determine \(p_{c}\) in the limit \(n\to 1\) via a duality between the statistical mechanical model established in Sec. III.2 and the 2D random bond Ising model (RBIM) along the Nishimori line. The RBIM is also known to govern the error threshold of the optimal decoding algorithm for the 2D Toric code with incoherent errors [10]. The duality shows that the decoding threshold indeed saturates the upper bound given by the threshold in our information theoretical diagnostics. This duality was derived before via a binary Fourier transformation [57; 58]. Here, it follows naturally from two distinct expansions of the error-corrupted state. The statistical mechanical model in Sec. III.2 is based on the loop picture (15). Here, we work in an alternative error configuration picture, writing the error corrupted state as \[\rho=\sum_{\mathcal{C}_{z},\mathcal{C}_{z}}P(\mathcal{C}_{x})P( \mathcal{C}_{z}) \tag{32}\] \[Z^{\mathcal{C}_{z}}X^{\mathcal{C}_{x}}\rho_{0}X^{\mathcal{C}_{x }}Z^{\mathcal{C}_{z}}\,,\] where \(\mathcal{C}_{z}\) (\(\mathcal{C}_{x}\)) denotes the error strings on the original (dual) lattice. The corresponding error syndromes are \(e\) and \(m\) anyons on the boundary \(\partial\mathcal{C}_{z}\) and \(\partial\mathcal{C}_{x}\), respectively. Let \(|\mathcal{C}_{a}|\) denote the total length of the error string, the probability for each string configuration is \[P(\mathcal{C}_{a})=p_{a}^{|\mathcal{C}_{a}|}(1-p_{a})^{N-|\mathcal{C}_{a}|}\,, \tag{33}\] where \(N\) is the total number of qubits. 
The expansion in error configurations allows writing the \(n\)-th moment as \[\begin{split}\mathrm{tr}\rho^{n}=&\sum_{(\mathcal{C} _{x}^{(s)},\,\mathcal{C}_{z}^{(s)})}\prod_{s=1}^{n}P\big{(}\mathcal{C}_{x}^{(s )}\big{)}P\big{(}\mathcal{C}_{z}^{(s)}\big{)}\\ &\quad\mathrm{tr}\Big{(}\prod_{s=1}^{n}Z^{\mathcal{C}_{z}^{(s)}}X ^{\mathcal{C}_{z}^{(s)}}\rho_{0}X^{\mathcal{C}_{z}^{(s)}}Z^{\mathcal{C}_{z}^ {(s)}}\Big{)}.\end{split} \tag{34}\] We choose \(\rho_{0}=|\Psi_{0}\rangle\,\langle\Psi_{0}|\) to be an eigenstate of the logical operators. Then, we can rewrite the trace as \[\prod_{s=1}^{n-1}\langle\Psi_{0}|\,X^{\mathcal{C}_{z}^{(s)}}Z^{\mathcal{C}_{z }^{(s)}}Z^{\mathcal{C}_{z}^{(s+1)}}X^{\mathcal{C}_{z}^{(s+1)}}\,|\Psi_{0} \rangle\,\] and see that the trace is non-vanishing only if error strings of different copies differ only by homologically trivial loops. Namely, the error strings in the \(2,\ldots,n\)-th copies are related to that in the first copy via \[\begin{split}\includegraphics[width=142.26378pt]{figs-1.pdf} \end{split} \tag{35}\] where \(v_{z(x)}^{(s)}\) is a set of plaquettes on the original (dual) lattice, and its boundary \(\partial v_{z(x)}^{(s)}\) only consists of homologically trivial loops. Noticing the decoupling between \(Z\) and \(X\), we have \[\begin{split}\mathrm{tr}\rho^{n}=\mathcal{Z}_{n,z}^{\prime} \mathcal{Z}_{n,x}^{\prime}\,,\\ \mathcal{Z}_{n,a}^{\prime}=&\sum_{\mathcal{C}_{a}^{( 1)}}P\big{(}\mathcal{C}_{a}^{(1)}\big{)}\sum_{\{v_{z}^{(s)}\}}\prod_{s=1}^{n-1 }P\big{(}\mathcal{C}_{a}^{(1)}+\partial v_{a}^{(s)}\big{)}\,.\end{split} \tag{36}\] By comparing the above expression with Eq. (19), we must have the following duality \[\mathcal{Z}_{n,x}=2^{\frac{(n-1)N}{2}}\mathcal{Z}_{n,z}^{\prime}\,,\quad \mathcal{Z}_{n,z}=2^{\frac{(n-1)N}{2}}\mathcal{Z}_{n,x}^{\prime}\,. \tag{37}\] In the following, we will focus on \(\mathcal{Z}_{n,z}^{\prime}\) and suppress the subscripts for the sake of clarity. The analysis of \(\mathcal{Z}_{n,x}^{\prime}\) is similar. We now interpret \(\mathcal{Z}_{n,z}^{\prime}\) as a partition function of Ising spins that is related to the replicated RBIM. Let us replace \(v^{(s)},s=1,\ldots,n-1\) by \(n-1\) flavors of Ising spins that live on the plaquettes such that each \(v^{(s)}\) is in one-to-one correspondence to a configuration of Ising spins, as is drawn below \[\begin{split}\includegraphics[width=142.26378pt]{figs-2.pdf} \end{split}\] Namely, the boundary \(\partial v^{(s)}\) is mapped to the domain walls of the \(s\)-th Ising spin. There are nearest neighbor antiferromagneticinteractions between spins of the same flavor on links that cross the path \(\mathcal{C}^{(1)}\) and ferromagnetic interaction across other links. More explicitly, one can verify \[\begin{split}\mathcal{Z}_{n}^{\prime}=((1-p)p)^{N/2}\sum_{\{J_{ ij}\}}P(\{\eta_{ij}\})\\ \sum_{\{\tau^{(s)}\}}\exp\Big{(}J\sum_{s=1}^{n-1}\sum_{\langle ij \rangle}\eta_{ij}\tau_{i}^{(s)}\tau_{j}^{(s)}\Big{)}\end{split} \tag{38}\] where \(e^{-2J}=p/(1-p)\) and \(\eta_{ij}\in\{-1,1\}\) is a binary random variable on links determined by \(\mathcal{C}^{(1)}\). Hence, we recognize \(\mathcal{Z}_{n}^{\prime}=\overline{\mathcal{Z}_{\mathrm{RBIM}}^{n-1}}\) as the disorder-averaged partition function of \(n-1\) copies of RBIM along the Nishimori line [59]. The replicated RBIM in the error configuration picture and the spin model in the loop picture are both derived from the \(n\)-th moment of the error corrupted state. 
Therefore, they must be dual to each other and share the same critical error rate for all replica indices. Note that the replicated RBIM exhibits two phases, a ferromagnetic and a paramagnetic phase at small and large error rates, respectively. This is exactly opposite to the phase diagrams of the spin model in the loop picture, which is a common feature in Kramers-Wannier dualities. In the replica limit \(n\to 1\), the replicated RBIM reduces to the RBIM derived for the optimal quantum error correction algorithm [10] and undergoes an ordering transition at \(p_{c}=0.109\)[60]. This implies that all three diagnostics should also undergo the transition at the same \(p_{c}\) in the replica limit and confirms that the optimal decoding threshold saturates the upper bound in Eq. (6). ## IV Discussion In this work, we introduced information theoretic diagnostics of error-corrupted mixed states \(\rho=\prod_{i}\mathcal{N}_{i}[\rho_{0}]\), which probe their intrinsic topological order and capacity for protecting quantum information. We focused on a concrete example, where \(\rho_{0}=|\psi_{\mathrm{TC}}\rangle\langle\psi_{\mathrm{TC}}|\) is in the ground state subspace of the Toric code and \(\mathcal{N}_{i}\) the bit-flip and phase errors. We noted that the \(n\)-th moment \(\mathrm{tr}\rho^{n}\) can be written as the partition function of a 2D classical spin model, that is dual to the (replicated) random-bond Ising model along the Nishimori line, which is used to establish the following results. We consider three complementary diagnostics, quantum relative entropy, coherent information, and topological entanglement negativity, which are mapped to different observables in the spin model and shown to undergo a transition at the same critical error rate. Generally speaking, this critical error rate is an upper bound for the error threshold that can be achieved by any decoding algorithm. The aforementioned duality implies that the critical error rate identified here is exactly saturated by the famous error threshold of the optimal decoding algorithm for the Toric code proposed by Dennis et al [10]. This result unveils a connection between the breakdown of topological quantum memory and a transition in the mixed-state topological order, and also provides physical interpretation for the decoding transition. We have focused on Toric code with incoherent errors. It will be interesting to generalize the discussion to coherent errors that create anyons with coherence, e.g., amplitude damping or unitary rotations [61; 62; 63; 64]. In these cases, one has to concatenate coherent errors and dephasing channels that mimic the syndrome measurement in order to make better contact to quantum error correction based on that syndrome measurement. It is also interesting to further consider non-Abelian quantum codes [65; 66; 67]. It might be surprising that the intrinsic properties of the 2D error corrupted quantum states are captured by 2D _classical_ statistical mechanical models. In Appendix B, we give a brief discussion on \(\mathbb{Z}_{N}\) Toric code with specific incoherent errors and show that this is also the case. A more general perspective is the so-called _errorfield double formalism_, which is proposed by the same authors. It follows from this general formalism that the intrinsic properties of the 2D error corrupted states can always be captured by a 1+1D quantum model. Details will be reported elsewhere [68]. 
For the 2D random-bond Ising model along the Nishimori line, physical quantities, such as the specific heat, change smoothly despite crossing the phase transition [59]. For quantum memories under local errors, we have argued in Sec. II.1 that any physical observables must also behave smoothly across the error-induced transition. This similarity between the two sides may be the deeper underlying reason why the corrupted quantum memories are mapped to the Nishimori line. It will be interesting to leverage this to identify more exotic Nishimori physics and also help develop a better understanding of quantum memory. As we have commented in Sec. II.1, the error-induced transition acquires a different nature from the thermal transition in finite-temperature topological order. This distinction suggests a hierarchy of topological transitions in general mixed states. For example, it suffices to use physical observables (linear in the density matrix) to detect the thermal transition, while it requires at least second Renyi quantities (quadratic in the density matrix) to detect the error-induced transition. It is interesting to explore more exotic topological transitions in mixed states that are detectable only by non-linear functions of the density matrix of even higher orders, such as the entanglement Hamiltonian. The above task is intimately related to the goal of classifying mixed-state topological order. A suitable definition of mixed state topological order should be both operationally meaningful and also identify computable topological invariants. Our discussion which focuses on the error-corrupted mixed states represents one particular aspect of this more general question. Here, the coherent information provides the operational definition, namely, a locally corrupted state is in a different phase if QEC is impossible, while the topological entanglement negativity is believed to provide a computable topological invariant that diagnoses the present transition. However, note that both the local error channel and QEC process are generally non-unitary, for which the Lieb-Robinson bound does not apply. Therefore, understanding the role of locality is key to obtaining a general notion of equivalence classes of mixed states. Similarly, a more general justification of topological negativity and its universality, in the sense of establishing its invariance under the application of local quantum channels at a certain place, is left for future work. The main difficulty comes from understanding how local perturbations affect the spectrum of a partially transposed density matrix, which is an interesting problem in its own right and is left to future work. ###### Acknowledgements. We thank Meng Cheng, Soonwon Choi, Mikhail Lukin, Nishad Maskara, Karthik Siva, Tomohiro Soejima for helpful discussions, and Tarun Grover for useful comments on the manuscript. AV was funded by the Si mons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, AV). AV and RF further acknowledge support from NSF-DMR 2220703. This material is based upon work supported in part by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator (EA). YB was supported in part by NSF QLCI program through grant number OMA-2016245. This work is funded in part by a QuantEmX grant from ICAM and the Gordon and Betty Moore Foundation through Grant GBMF9616 to Ruihua Fan and Yimu Bao. 
_Note added_: Upon completion of the present manuscript, we became aware of an independent work [69] which is broadly related and will appear on arXiv on the same day. We thank them for informing us their work in advance. ## Appendix A Details of the mapping In this section, we detail the mapping between the three diagnostics and observables in the statistical mechanical models. ### Quantum relative entropy We here explicitly show that the Renyi quantum relative entropy is related to the correlation function in the classical spin model. Specifically, we consider the relative entropy between the error corrupted ground state and an excited state \(\ket{\Psi_{m}}:=w_{m}(\mathcal{C})\ket{\Psi_{0}}\) with a pair of \(m\)-particles created at the end of path \(\mathcal{C}\). First, we write down the error corrupted state \(\rho_{m}\) in the loop representation \[\rho_{m}=\frac{1}{2^{N}}\sum_{g}\mathrm{sgn}\left(g_{z},X^{\mathcal{C}}\right) g_{z}g_{x}e^{-\mu_{x}\ket{g_{x}}-\mu_{z}\ket{g_{z}}}. \tag{10}\] where the commutation relation between the loop operator and the string operator is accounted by \(\mathrm{sgn}(g_{z},X^{\mathcal{C}})\); the sign function equals \(+1\) when \(g_{z}\) and \(X^{\mathcal{C}}\) commute and \(-1\) otherwise. The above expression allows one to write \(\mathrm{tr}\rho\rho_{m}^{n-1}\) as \[\mathrm{tr}\rho\rho_{m}^{n-1}=\frac{\mathcal{Z}_{n,x}}{2^{(n-1)N}}\sum_{\{g_{ z}^{(x)}\}}\mathcal{O}_{D}^{(n)}e^{-H_{n,z}}, \tag{11}\] where \(\mathcal{O}_{D}^{(n)}\) denotes the product of sign functions in \(n-1\) copies of \(\rho_{m}\) \[\mathcal{O}_{D}^{(n)}=\mathrm{sgn}\left(g_{z}^{(1)},X^{\mathcal{C}}\right). \tag{12}\] Here, we have used the constraint \(g_{z}^{(1)}=\prod_{s=2}^{n}g_{z}^{(s)}\) for nonvanishing trace in the loop representation. Using this expression, the \(n\)-th Renyi relative entropy takes the form \[D^{(n)}(\rho||\rho_{m})=\frac{1}{1-n}\log\langle\mathcal{O}_{D}^{(n)}\rangle\,. \tag{13}\] Our next step is to express the observable \(\langle\mathcal{O}_{D}^{(n)}\rangle\) in terms of the Ising spins. In the spin model, the closed loop \(g_{z}^{(1)}\) is identified with the domain wall of \(\sigma_{i}^{(1)}\), and the Ising spins on two sides of \(g_{z}^{(1)}\) anti-align. Thus, \(\sigma_{i_{l}}^{(1)}\) and \(\sigma_{i_{r}}^{(1)}\) on the two ends of the open string \(\mathcal{C}\) is aligned if \(g_{z}^{(1)}\) crosses \(\mathcal{C}\) for even number of times and is anti-aligned otherwise. The parity of the crossing is exactly measured by the sign function \(\mathrm{sgn}(g_{z}^{(1)},X^{\mathcal{C}})\). Hence, the observable \(\langle\mathcal{O}_{D}^{(n)}\rangle\) maps to the correlation function \[\langle\mathcal{O}_{D}^{(n)}\rangle=\langle\sigma_{i_{l}}^{(1)}\sigma_{i_{r}} ^{(1)}\rangle\,. \tag{14}\] ### Coherent information We now develop a spin model description for the Renyi coherent information \(I_{c}^{(n)}\) in Eq. (7). In the definition of coherent information, the system density matrix \(\rho_{Q}\) is the error corrupted state \(\rho\) in Sec. III.2, and its \(n\)-th moment is mapped to the partition function of the \((n-1)\)-flavor Ising model on the torus. Here, we show that the \(n\)-th moment of \(\rho_{RQ}\) maps to the partition function of the same model with defects (domain walls) inserted along large loops on the torus. First, we write down the initial state of the system \(Q\) and the reference \(R\). We consider two reference qubits and two logical qubits in the ground state subspace, and maximally entangle them in a Bell state. 
Let \(s_{l}^{a=x,z}\) be the Pauli operator of two reference qubits, and \(\bar{g}_{al}\) be the four logical operators \[\begin{split}\parbox{142.26378pt}{\includegraphics[width=142.26378pt]{ 142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt]{142.26378pt}{\includegraphics[width=142.26378pt ]{142.26378pt}{\includegraphics[width=142.26378pt ]{142. In the error corrupted state \(\rho_{RQ}\), the \(X\) and \(Z\) error channels act on \(\Gamma_{0,RQ}^{x}\) and \(\Gamma_{0,RQ}^{x}\), respectively, giving rise to \(\rho_{RQ}=\Gamma_{RQ}^{x}\Gamma_{RQ}^{z}/2^{N+2}\) with \[\Gamma_{RQ}^{a} =\sum_{g_{a}}\sum_{d_{al}=0,1}e^{-\mu_{a}\left|\prod_{l=i_{1},l_{2} }(\bar{g}_{al})^{d_{al}}g_{a}\right|}\] \[g_{a}\prod_{l=l_{1},l_{2}}(\bar{g}_{al}s_{l}^{a})^{d_{al}}, \tag{10}\] where \(d_{al}\) is a binary variable indicating whether the loop operator in the summation acts on the non-contractible loop \(l\) of the torus. Our next step is to write down the \(n\)-th moment of \(\rho_{RQ}\) in the loop picture \[\mathrm{tr}\rho_{RQ}^{n}=\frac{1}{2^{n(N+2)}}\mathrm{tr}\left(\left(\Gamma_{RQ }^{x}\right)^{n}\left(\Gamma_{RQ}^{z}\right)^{n}\right), \tag{11}\] where each \(\Gamma_{RQ}^{x(z)}\) is a sum over all possible \(X(Z)\) loop operators with positive weights. The product of loop operators from \(n\) copies has a non-vanishing trace only if the product is identity. 
This imposes the constraint on loop configurations and allows expressing the \(n\)-th moment as a sum of partition functions \[\mathrm{tr}\rho_{RQ}^{n}=\frac{1}{2^{(n-1)(N+2)}}\prod_{a=x,z}\sum_{\mathbf{d }_{al}\mathbf{d}_{a2}}\mathcal{Z}_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}, \tag{12}\] where \(\mathbf{d}_{al}\) with \(l=1,2\) is a \((n-1)\)-component binary vector, the sum runs over all possible \(\mathbf{d}_{al}\), and \(\mathcal{Z}_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}=\sum_{\{g_{a}^{(x)}\}}e ^{-H_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}}\) is the partition function with an effective Hamiltonian \[\begin{split} H_{n,a}^{(\mathbf{d}_{1a},\mathbf{d}_{2a})}& =\mu_{a}\sum_{s=1}^{n-1}\left|(\bar{g}_{a1}^{(s)})^{d_{al,s}}(\bar {g}_{a2}^{(s)})^{d_{a2,s}}g_{a}^{(s)}\right|\\ &\quad+\mu_{a}\left|\prod_{s=1}^{n-1}(\bar{g}_{a1}^{(s)})^{d_{al, s}}(\bar{g}_{a2}^{(s)})^{d_{a2,s}}g_{a}^{(s)}\right|.\end{split} \tag{13}\] Here, \(d_{al,s}\) denotes the \(s\)-th component of vector \(\mathbf{d}_{al}\). The loop model in Eq. (13) can be identified with a classical spin model similar to Eq. (21). However, there is an important difference due to the presence of the homologically nontrivial loop \(\bar{g}_{al}^{(s)}\). Here, we interpret the homologically trivial loop \(g_{a}^{(s)}\) as the Ising domain wall and \(\bar{g}_{al}^{(s)}\) as a defect along the non-contractible loop. The defect corresponds to flipping the sign of Ising coupling along a large loop. Specifically, for \(Z\) (\(X\)) loops on the original lattice, we introduce Ising spin on the plaquettes (vertices) such that \[\left|(\bar{g}_{a1})^{d_{a1,s}}_{\ell}(\bar{g}_{a2})^{d_{a2,s}}_{\ell}g_{a, \ell}^{(s)}\right|=\frac{1-(-1)^{\lambda_{\ell}^{(s)}}\sigma_{i}^{(s)}\sigma_{ j}^{(s)}}{2}, \tag{14}\] where \(i,j\) are connected by the link \(\ell\), and \(\lambda_{\ell}^{(s)}=[(\bar{g}_{a1})^{d_{a1,s}}_{\ell}(\bar{g}_{a2})^{d_{a2,s}} _{\ell}]\) is binary variable that denotes whether the defect goes through the link \(\ell\). This results in an effective Hamiltonian \[\begin{split} H_{n,a}^{(\mathbf{d}_{1a},\mathbf{d}_{2a})}& =-J_{a}\sum_{\langle i,j\rangle}\sum_{s=1}^{n-1}(-1)^{\lambda_{ \ell}^{(s)}}\sigma_{i}^{(s)}\sigma_{j}^{(s)}\\ &\quad+\prod_{s=1}^{n-1}(-1)^{\lambda_{\ell}^{(s)}}\sigma_{i}^{(s )}\sigma_{j}^{(s)}.\end{split} \tag{15}\] Hence, \(\mathcal{Z}_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}\) becomes the partition function of the classical spin model with defects inserting along the non-contractible loops labeled by binary vectors \(\mathbf{d}_{al}\). The mapping developed above allows a spin model description for the \(n\)-th Renyi coherent information \(I_{c}^{(n)}\). The \(n\)-th moment of \(\rho_{Q}\) is identified with the partition function with no defect, i.e. \(\mathrm{tr}\rho_{Q}^{n}=\mathcal{Z}_{n,x}^{(\mathbf{0},\mathbf{0})}\mathcal{Z}_ {n,z}^{(\mathbf{0},\mathbf{0})}/2^{(n-1)N}\). Therefore, we have \[I_{c}^{(n)}=\frac{1}{n-1}\sum_{a=x,z}\log\frac{\sum_{\mathbf{d}_{a1}\mathbf{d} _{a2}}\mathcal{Z}_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}}{2^{n-1}\mathcal{Z }_{n,a}^{(\mathbf{0},\mathbf{0})}}. \tag{16}\] Thus, the Renyi coherent information is associated with the excess free energy of inserting defects along non-contractible loops \[\Delta F_{n,a}^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}:=-\log\left(\mathcal{Z}_{n,a }^{(\mathbf{d}_{a1},\mathbf{d}_{a2})}/\mathcal{Z}_{n,a}^{(\mathbf{0},\mathbf{0 })}\right). 
\tag{17}\] ### Entanglement negativity Here, we show that the Renyi negativity in the error-corrupted state maps to the excess free energy for aligning spins in the statistical mechanical model. Specifically, we consider the case when only one type of error, e.g. bit-flip errors, is present. The first step is to write down the partially transposed density matrix \(\rho^{T_{A}}\). We again work in the loop representation, where the error corrupted state is expressed as a sum of Pauli strings \(g=g_{x}g_{z}\) in Eq. (16). The Pauli string \(g\) is invariant under the partial transpose up to a sign factor \(y_{A}(g)=(-1)^{N_{Y}}\) depending on the number \(N_{Y}\) of Pauli-Y operators inside the subsystem \(A\). Hence, \[\rho^{T_{A}}=\frac{1}{2^{N}}\sum_{g}y_{A}(g)e^{-\mu_{x}|g_{x}|-\mu_{z}|g_{z}|}g. \tag{18}\] Using the above expression, one can write down the \(n\)-th moment of \(\rho^{T_{A}}\) \[\mathrm{tr}\left(\rho^{T_{A}}\right)^{n}=\frac{1}{2^{(n-1)N}}\sum_{\{g^{(s)}\}} \mathcal{O}_{N}^{(n)}e^{-H_{n,x}-H_{n,z}}. \tag{19}\] Here, similar to \(\mathrm{tr}\rho^{n}\), the trace imposes a constraint on the loop operators \(g^{(s)}\), and the summation runs over \(g^{(s)}\) only in the first \(n-1\) copies. The sign factors collected from the partial transpose in each copy are combined in \(\mathcal{O}_{N}^{(n)}\), \[\mathcal{O}_{N}^{(n)}=\left[\prod_{s=1}^{n-1}y_{A}\Big{(}g^{(s)}\Big{)}\right]y_{ A}\Big{(}\prod_{s=1}^{n-1}g^{(s)}\Big{)}. \tag{101}\] Eq. (100) allows expressing the \(2n\)-th Renyi negativity in terms of the expectation value of \(\mathcal{O}_{2n}\): \[\mathcal{E}_{A}^{(2n)}=\frac{1}{2-2n}\log\Big{\langle}\mathcal{O}_{N}^{(2n)} \Big{\rangle}. \tag{102}\] Yet, analyzing the number of Pauli-Y operators in Eq. (101) is a formidable task. Moreover, the observable \(\mathcal{O}_{N}^{(n)}\) derived from the partial transpose should be a basis-independent quantity. Indeed, one can express \(O_{N}^{(n)}\) in terms of loop configurations \[\mathcal{O}_{N}^{(n)} =\prod_{r=1}^{n-2}\operatorname{sgn}_{A}\Big{(}\prod_{s=1}^{r}g^{ (s)},g^{(r+1)}\Big{)}\] \[=\prod_{r=2}^{n-1}\prod_{s=1}^{r-1}\operatorname{sgn}_{A}\big{(}g ^{(s)},g^{(r)}\big{)}. \tag{103}\] Here, we use the property \[y_{A}(g)y_{A}(h)=y_{A}(gh)\operatorname{sgn}_{A}(g,h), \tag{104}\] where the sign function \(\operatorname{sgn}_{A}(g,h)=\pm 1\) depending on the commutation relation between the support of Pauli string \(g\) and \(h\) on subsystem \(A\): \[\operatorname{sgn}_{A}(g,h)=\left\{\begin{array}{cc}1&[g_{A},\;h_{A}]=0\\ -1&\{g_{A},\;h_{A}\}=0\end{array}\right.. \tag{105}\] In the second equality of Eq. (103), we use the property of sign function \[\operatorname{sgn}_{A}(g_{1}g_{2},g_{3})=\operatorname{sgn}_{A}(g_{1},g_{3}) \operatorname{sgn}_{A}(g_{2},g_{3}). \tag{106}\] In the Toric code, the operator \(g\) further factorizes into \(g=g_{x}g_{z}\), where \(g_{x},g_{z}\) are closed loop operators of Pauli \(X\) and \(Z\), respectively. The sign function between two such loop operators \(g\) and \(h\) reduces to \[\operatorname{sgn}_{A}(g,h)=\operatorname{sgn}_{A}(g_{x},h_{z})\operatorname {sgn}_{A}(g_{z},h_{x}). \tag{107}\] We then arrive at \[\mathcal{O}_{N}^{(n)}=\prod_{s,r=1,s\neq r}^{n-1}\operatorname{sgn}_{A}\big{(} g_{x}^{(s)},g_{z}^{(r)}\big{)}. \tag{108}\] To develop an analytic understanding of the observable \(\mathcal{O}_{N}^{(n)}\) and how it detects the ferromagnetic transition, we first consider the situation when only \(X\) or \(Z\) error is present. 
In this case, we show that \(\log\langle\mathcal{O}_{N}^{(n)}\rangle\) exactly maps to the excess free energy of spin pinning and sharply distinguish the two phases. After that, we discuss the general situation when both types of error are present. We here consider the case when only \(X\) errors are present, namely \(p_{z}=0\) and \(\mu_{x}=0\). The vanishing \(X\)-loop tension indicates that \(H_{n,x}\) is in the paramagnetic phase, and the domain walls \(g_{x}\) of arbitrary sizes occur with the same probability. Thus, we can perform an exact summation over all possible \(g_{x}\) and obtain \[\operatorname{tr}(\rho^{T_{A}})^{n}=\frac{1}{2^{(n-1)N}}\sum_{\{g_{z}^{(s)}\} }\mathcal{O}_{N,z}^{(n)}e^{-\mu_{z}H_{n,z}}, \tag{109}\] where \(\mathcal{O}_{N,z}^{(n)}=\sum_{\{g_{z}^{(s)}\}}\mathcal{O}_{N}^{(n)}\). The summation in \(\mathcal{O}_{N,z}^{(n)}\) is non-vanishing only if the sign functions in Eq. (108) for different \(g_{x}^{(s)}\) interfere constructively. This yields a constraint on the \(g_{z}^{(s)}\) \[\mathcal{O}_{N,z}^{(n)}=\prod_{r=1}^{n-1}N_{g_{x}}\delta_{h^{(r)}(A)} \tag{110}\] where \(h^{(r)}=\prod_{s=1,s\neq r}^{n-1}g_{z}^{(s)}\), the Kronecker delta function \(\delta_{h^{(r)}(A)}\) takes the value unity only if the support of \(h^{(r)}\) on subsystem \(A\) is a closed loop and equals zero otherwise, and \(N_{g_{x}}\) is an unimportant prefactor that denotes the number of possible \(g_{x}\) in each copy. The \(n-1\) delta function constraints are independent for odd \(n\), whereas for even \(n\) they give rise to only \(n-2\) independent constraints as \(\prod_{r=1}^{n-1}h^{(r)}=I\). The constraint requires \(h^{(r)}\) not to go through the boundary of subsystem \(A\). In the statistical mechanical model of Ising spins, this corresponds to no domain wall going through the boundary of \(A\), namely forcing \(|\partial A|\) boundary spins aligning in the same direction (see Fig. 4). Thus, the negativity is associated with the excess free energy for aligning spins \[\mathcal{E}_{A}^{(2n)}=\frac{1}{2n-2}(F_{A}^{(2n)}-F_{0}^{(2n)}):=\frac{\Delta F _{A}^{(2n)}}{2n-2}, \tag{111}\] where \(F_{0}^{(2n)}:=-\log\mathcal{Z}_{2n,x}\mathcal{Z}_{2n,z}\) and \(F_{A}^{(2n)}\) are the free energy without and with constraints, respectively. Since we have in total \(2n-2\) constraints, \(\mathcal{E}_{A}^{(2n)}=\Delta F_{A}\) with \(\Delta F_{A}\) being the excess free energy for aligning one species of Ising spins. ## Appendix B \(\mathbb{Z}_{n}\) Toric code So far, we only focus on the \(\mathbb{Z}_{2}\) Toric code with incoherent errors. It is natural to inquire whether our methods are still applicable to \(\mathbb{Z}_{N}\) Toric code and whether the results change. We provide a brief discussion on the \(\mathbb{Z}_{3}\) Toric code in this subsection. We will use similar symbols to denote the basic operators and stabilizers, although their meanings are different from those in the \(\mathbb{Z}_{2}\) case. Let us first specify the Hamiltonian and the error models. Consider an \(L\times L\) square lattice with periodic boundary conditions. The physical qutrits live on the edges of the lattice. We introduce the clock and shift operators \[\begin{split} XZ=wZX\,,\quad w=e^{2\pi i/3}\,,\\ Z=\begin{pmatrix}1&\\ &w\\ &w^{2}\end{pmatrix}\,,\quad X=\begin{pmatrix}1\\ &1\\ 1\end{pmatrix}\,.\end{split} \tag{21}\] In and only in this subsection, \(X\) and \(Z\) refer to the clock and shift, respectively. 
The code subspace is given by the ground state subspace of the Hamiltonian \[H_{\mathbb{Z}_{3}}=-\sum_{s}A_{s}-\sum_{p}B_{p} \tag{22}\] where \(A_{s}\) and \(B_{p}\) are mutually commuting projectors associated with vertices and plaquettes, e.g., \[\begin{split} A_{s}=&\frac{1}{3}\sum_{n=0}^{3}\left(X_{4}X_{5}X _{1}^{-1}X_{6}^{-1}\right)^{n}\\ B_{p}=&\frac{1}{3}\sum_{n=0}^{3}\left(Z_{4}Z_{1}Z_{2}^{-1}Z_{3}^{-1} \right)^{n}\end{split} \tag{23}\] One can verify that \(A_{s}^{2}=A_{s}\), \(B_{p}^{2}=B_{p}\). The ground state \(\ket{\Psi}\) satisfies \(A_{s}\ket{\Psi}=B_{p}\ket{\Psi}=\ket{\Psi}\), and the violation of \(A_{s}\) and \(B_{p}\) will be refered to as \(e\) (and its anti-particle \(\bar{e}\)) and \(m\) (and its anti-particle \(\bar{m}\)) anyons, respectively. For simplicity, we only consider the following incoherent error \[\begin{split}\mathcal{N}_{X,i}[\rho]=(1-& p_{1}-p_{2})\rho\\ +& p_{1}Z_{i}\rho Z_{i}^{\dagger}+p_{2}Z_{i}^{2} \rho Z_{i}^{2,\dagger}\,,\end{split} \tag{24}\] which creates a pair of \(e\) anyons in two different ways with probabilities \(p_{1}\) and \(p_{2}\). In the following, we will first assume \(p_{1}=p_{2}=p\) and comment on what could change without this assumption. To compute the three diagnostics, one can still work in the loop picture and map the \(n\)-th momentum of the error-corrupted state to a partition function of a classical spin model that involves \(n\)-flavor 3-state Potts spins. As the error rate increases, the spin model undergoes a paramagnet-to-ferromagnet transition. The three diagnostics are mapped to the corresponding observables in a similar fashion as what we have shown in the \(\mathbb{Z}_{2}\) case. Therefore, they should undergo a transition simultaneously and yield a consistent characterization of the error-induced phase. When \(p_{1}\neq p_{2}\), the spin models obtained in the loop picture contain complex phases and do not admit a statistical mechanical interpretation. Technically, it brings sign problems to the Monte Carlo simulation. It is unclear whether the three diagnostics still exhibit transition simultaneously, which may be an interesting question for future study.
2307.01761
Démélange, déconvolution et débruitage conjoints d'un modèle convolutif parcimonieux avec dérive instrumentale, par pénalisation de rapports de normes ou quasi-normes lissées (PENDANTSS)
Denoising, detrending, deconvolution: usual restoration tasks, traditionally decoupled. Coupled formulations entail complex ill-posed inverse problems. We propose PENDANTSS for joint trend removal and blind deconvolution of sparse peak-like signals. It blends a parsimonious prior with the hypothesis that smooth trend and noise can somewhat be separated by low-pass filtering. We combine the generalized pseudo-norm ratio SOOT/SPOQ sparse penalties $\ell_p/\ell_q$ with the BEADS ternary assisted source separation algorithm. This results in a both convergent and efficient tool, with a novel Trust-Region block alternating variable metric forward-backward approach. It outperforms comparable methods, when applied to typically peaked analytical chemistry signals. Reproducible code is provided: https://github.com/paulzhengfr/PENDANTSS.
Paul Zheng, Emilie Chouzenoux, Laurent Duval
2023-07-04T15:04:08Z
http://arxiv.org/abs/2307.01761v1
Demelange, deconvolution et debruitage conjoints d'un modele convolutif parcimonieux avec derive instrumentale, par penalisation de rapports de normes ou quasi-normes lissees (PENDANTSS) ###### Abstract Denoising, detrending, deconvolution: usual restoration tasks, traditionally decoupled. Coupled formulations entail complex ill-posed inverse problems. We propose PENDANTSS for joint trend removal and blind deconvolution of sparse peak-like signals. It blends a parsimonious prior with the hypothesis that smooth trend and noise can somewhat be separated by low-pass filtering. We combine the generalized pseudo-norm ratio SOOT/SPOQ sparse penalties \(\ell_{p}/\ell_{q}\) with the BEADS ternary assisted source separation algorithm. This results in a both convergent and efficient tool, with a novel Trust-Region block alternating variable metric forward-backward approach. It outperforms comparable methods, when applied to typically peaked analytical chemistry signals. Reproducible code is provided: [https://github.com/paulzhengfr/PENDANTSS](https://github.com/paulzhengfr/PENDANTSS). ## 1 Contexte La base de ce travail prolonge des travaux anterieurs [9, 14]. Recommenden publiee dans une revue [20], elle est ici soumise pour la premiere fois a une conference. Nous considerons le modele discret de formation de signal suivant: \[\mathbf{y}=\overline{\mathbf{s}}*\overline{\mathbf{\pi}}+\overline{\mathbf{l}}+\mathbf{n}\,. \tag{1}\] Il vise a identifier trois composantes: 1) un train parcimonieux d'impulsions \(\overline{\mathbf{s}}\in\mathbb{R}^{N}\), 2) un noyau de convolution en forme de pic \(\overline{\mathbf{\pi}}\in\mathbb{R}^{L}\) et 3) une composante de tendance \(\overline{\mathbf{t}}\in\mathbb{R}^{N}\) a variations relativement lentes, a partir d'une unique observation \(\mathbf{y}\) bruite par \(\overline{\mathbf{n}}\in\mathbb{R}^{N}\). Ce modele concerne une classe courante de donnces potentiellement multidimensionnelles, dans leur domaine naturel ou apres application d'une transformation favorisant la parcimonie [15]. Ce travail se concentre sur le cas de signaux monodimensionnels. Il rappelle le probleme de soustraction spectrale (en analyse de la parole) visant a separer des composantes harmoniques (pics dans le domaine de Fourier [5]) d'un fond \(\overline{\mathbf{t}}+\mathbf{n}\). Il se retrouve egalement en analyse biomedicale (ECG, EEG, EMG) ou en astronomie, ou les signaux \(\overline{\mathbf{x}}=\overline{\mathbf{s}}*\overline{\mathbf{\pi}}\) peuvent etre nomnies lignes ou raies spectrales. La composante \(\overline{\mathbf{t}}\) concentre des fluctuations lentes modifiant le niveau de reference (_offset_) des mesures sur \(\overline{\mathbf{x}}\). Elle pour corresponde a des sissonalites, a des derives instrumentales liees au viellissement de capteurs [1], a des variations de calibration. Peu ou mal modelisees (notamment par des modeles parametriques), un filtrage automatise de ces tendances, sans alterer les pics, est souvent difficile. Ce modele est tres courant en analyse physio-chimique (chromatographie, spectrometrie, spectroscopic), ou \(\overline{\mathbf{\pi}}\) prend la forme d'un melange de pics positifs a support restreint (gaussianes, lorentziennes, pseudo-fonction de Voigt). La composante de tendance \(\overline{\mathbf{t}}\) peut s'appeler egalement ligne de base, continuum, excursion, fond. 
Debruitage et suppression de tendance conjointes appartiennent a une classe de pretraitements courants de series temporelles [2], utilisant filtrage, regression parametrique, remplissage ou desocclusion (_inpainting_). Pour l'analyse de donnees physico-chimiques, nous renvoyons aux methodes backcor [16] et BEADS [14, 17]. Pour le debruitage et la deconvolution combiniees, mentionnons notamment [7, 19] pour des approches pronuvant la parcimonie. Nous nous focalisonsici sur les methodes SOOT [18] et SPOQ [8, 9], emolyant des rapports de quasi-normes et de normes lissees, presentant une penalisation avec propriete approchee d'invariance d'echelle. Afin de resoudre le probleme (1), nous proposons une formulation conjointe non-convexe du probleme (section 2). Nous presentons un algorithme efficace de separation base sur des methodes _forward-backward_[6, 11], pourvu de preuves de convergence (section 3). Cet algorithme est evalue -- dans le contexte experimental decrit en section 4 -- de maniere comparative en combinant backcor [16] et SOOT/SPOQ [8, 18] pour differents niveaux de bruits et de promotion de parcimonie (section 5). Hypotheses pour la resolution du probleme conjoint de demelange L'equation (1) comprend plusieurs inconnues. Tenter de la resoudre impose des hypotheses supplementaires. Afin de couppler des differentes taches, nous associons a la perte quadratic traditionnelle une regularisation combinee, incorporant certaines hypotheses _a priori_. Le cadre traite par PENDanNTSS, pouvant ente plus generique, est focalise si cair une classe de signaux observes en analyse physico-chimique. Notons \(\iota_{A}\) la fonction indicatrice de l'ensemble convexe non vide \(A\), nulle si son argument appartient a \(A\), et identifiee a \(+\infty\) sinon. La positivite des pics et du noyau, ainsi que la normalisation de l'integrale de ce demier permet de definir dans un premier temps les ensembles \(C_{1}=[0,+\infty[^{N}\) et \(C_{2}=\mathcal{S}=\{\mathbf{\pi}=(\pi_{\ell})_{1\leq\ell\leq L}\in[0,+\infty[^{L}\) t.q. \(\sum_{\ell=1}^{L}\pi_{\ell}=1\}\), limitant (par leurs indicatrices) l'espace de recherche pour le signal partimonieux et le noyau. Nous supposons ensuite que la tendance possede des variations lentes en regards du bruit. Ainsi par un filtrage passe-bas, il devrait etre possible d'en obtenir une estimation correcte. En d'autres termes : en faisant l'hypotheses d'une estimation du signal de pics que l'on puisse soustraire, le bruit residuel a minimiser par l'attache quadratique aux donnees s'esprime par le biais d'un filtre passe-haut \(\mathbf{H}:\) \[(\forall\mathbf{s}\in\mathbb{R}^{N})(\forall\mathbf{\pi}\in\mathbb{R}^{L})\;\rho(\mathbf{s },\mathbf{\pi})=\frac{1}{2}\|\mathbf{H}(\mathbf{y}-\mathbf{\pi}\ast\mathbf{s})\|^{2}. \tag{2}\] Cette fonction est Lipschitz differentiable par rapport a \(\mathbf{s}\) (resp. \(\mathbf{\pi}\)) avec une constante note \(\Lambda_{1}(\mathbf{\pi})\) (resp. \(\Lambda_{2}(\mathbf{s})\)). 
La parcimie du signal est favorisse par la regularisation \(\Psi\) definie (pour \(\beta\in]0,+\infty[\)) par la fonction non-convexe : \[(\forall\mathbf{s}\in\mathbb{R}^{N})\quad\Psi(\mathbf{s})=\log\left(\frac{(\ell_{p, \alpha}^{p}(\mathbf{s})+\beta^{p})^{1/p}}{\ell_{q,\eta}(\mathbf{s})}\right), \tag{3}\] avec les deux approximations parametriques de normes ou quasi-normes (de parametres \((\alpha,\eta)\in]0,+\infty[\)) : \[\ell_{p,\alpha}(\mathbf{s})=\left(\sum_{n=1}^{N}\left((s_{n}^{2}+\alpha^{2})^{p/2} -\alpha^{p}\right)\right)^{1/p}, \tag{4}\] et \[\ell_{q,\eta}(\mathbf{s})=\left(\eta^{q}+\sum_{n=1}^{N}|s_{n}|^{q}\right)^{1/q}. \tag{5}\] Si \(q>2\), ou \(q=2\) et \(\eta^{2}\alpha^{p-2}>\beta^{p}\) (ce que nous supposerons), alors \(\Psi\) est Lipschitz differentiable sur \(\mathbb{R}^{N}\) et \(\mathbf{0}_{N}\) (i.e., vecteur de taille \(N\) identiendent nul) en est un minimiseur local. Le couple solution \((\widehat{\mathbf{\pi}},\widehat{\mathbf{s}})\) minimise : \[(\forall\mathbf{s}\in\mathbb{R}^{N})(\forall\mathbf{\pi}\in\mathbb{R}^{L})\quad\Omega( \mathbf{s},\mathbf{\pi})=f(\mathbf{s},\mathbf{\pi})+g(\mathbf{s},\mathbf{\pi}), \tag{6}\] ou l'on definit \[(\forall\mathbf{s}\in\mathbb{R}^{N})(\forall\mathbf{\pi}\in\mathbb{R}^{L})\begin{cases} g(\mathbf{s},\mathbf{\pi})=\iota_{C_{1}}(\mathbf{s})+\iota_{C_{2}}(\mathbf{\pi}),\\ f(\mathbf{s},\mathbf{\pi})=\rho(\mathbf{s},\mathbf{\pi})+\lambda\Psi(\mathbf{s}).\end{cases} \tag{7}\] La tendance est enfin estimee a partir de : \[\widehat{\mathbf{t}}=(\mathbf{Id}_{N}-\mathbf{H})(\mathbf{y}-\widehat{\mathbf{\pi}}\ast\widehat {\mathbf{s}})\,, \tag{8}\] avec \(\mathbf{Id}_{N}\) la matrice identite de \(\mathbb{R}^{N}\). ## 3 Algorithme PENDANTSS La structure de (6) suggere l'usage d'une methode alternee par blocs, mettant a jour sequentiellement la sequence d'implussions \(\mathbf{s}\) et le noyau \(\mathbf{\pi}\). PENDANTSS s'appuie sur l'algorithme TR-BC-VMFB (Alg. 1), qui generalise le BC-VMFB [11] employe dans [18] en deconvolution aveugle. Soient \((\mathbf{s}_{k},\mathbf{\pi}_{k})\) les estimees de l'iteration \(k\in\mathbb{N}\). Le calcul de \(\mathbf{s}_{k+1}\) s'obient par une etape de VMFB [10], acceleree par un schema de region de confiance. Nous introduisons tout d'abord la metrique MM (de majoration-minimisation) : \[\mathbf{A}_{1,\rho}(\mathbf{s}_{k},\mathbf{\pi}_{k})=(\Lambda_{1}(\mathbf{\pi}_{k})+ \lambda\chi_{q,\rho})\mathbf{Id}_{N}+\\ \frac{\lambda}{\ell_{p,\alpha}^{p}(\mathbf{s}_{k})+\beta^{p}}\text{Diag }((s_{n,k}^{2}+\alpha^{2})^{p/2-1})_{1\leq n\leq N}, \tag{9}\] avec la constante \(\chi_{q,\rho}=\frac{q-1}{(q+\rho)^{2/q}}\). On construit une majoration de (6) par rapport a la variable \(\mathbf{s}\) (voir [8, Prop. 2]) : \[(\forall\mathbf{s}\in\bar{\mathcal{B}}_{q,\rho}\cap C_{1})\quad\Omega( \mathbf{s},\mathbf{\pi}_{k})\leq f(\mathbf{s}_{k},\mathbf{\pi}_{k})\\ +(\mathbf{s}-\mathbf{s}_{k})^{\top}\nabla_{1}f(\mathbf{s}_{k},\mathbf{\pi}_{k})+ \frac{1}{2}\|\mathbf{s}-\mathbf{s}_{k}\|^{2}_{\mathbf{A}_{1,\rho}(\mathbf{s}_{k},\mathbf{\pi}_{k})}, \tag{10}\] avec, pour tout \(\mathbf{z}\in\mathbb{R}^{N}\), \(\|\mathbf{z}\|_{\mathbf{A}}=(\mathbf{z}^{\top}\mathbf{A}\mathbf{z})^{1/2}\). Le domaine de validite de (10) est limite par le complement de la boule \(\ell_{q}\), \[\bar{\mathcal{B}}_{q,\rho}=\{\mathbf{s}=(s_{n})_{1\leq n\leq N}\in\mathbb{R}^{N}| \sum_{n=1}^{N}|s_{n}|^{q}\geq\rho^{q}\}. 
\tag{11}\] Nous introduisons donc un schema de region de confiance (_Trust-Region_ ou TR [13]), permettant de controler le domaine des it\(\mathbf{\Sigma}>0\), un nombre maximal de tests de regions de confiance, et \((\rho_{k,i})_{1\leq i\leq\mathcal{I}}\) une luste de rayons testes : \[\rho_{k,i}=\begin{cases}\sum_{n=1}^{N}|s_{n,k}|^{q}&\text{ si }i=1\,,\\ \theta\rho_{k,i-1}&\text{ si }2\leq i\leq\mathcal{I}-1\,,\\ 0&\text{ si }i=\mathcal{I}\,.\end{cases} \tag{12}\] On calcule alors la matrice MM associee \(\mathbf{A}_{1,\rho_{k,i}}(\mathbf{s}_{k},\mathbf{\pi}_{k})\), et l'on definit \(\mathbf{s}_{k,i}\) comme minimiser du terme de droite de (10). La boucle TR s'internormt des que \(\mathbf{s}_{k,i}\in\bar{\mathcal{B}}_{q,\rho_{k,i}}\), et definit ainsi \(\mathbf{s}_{k+1}\). En general, la minimisation de la majorante (10) n'admet pas de solution explicite. Neanmoins, par notre choix de \(C_{1}\) et la structure diagonale de (9), la resolution est directe, d'apres [3, Prop. 24.11] et [4, Cor. 9], (\(\forall i\in\{1,\ldots,\mathcal{I}\}\)) \[\mathbf{s}_{k,i} =\text{Proj}_{C_{1}}(\mathbf{s}_{k}-\gamma_{\gamma,k}\mathbf{A}_{1,\rho_{k,i}}( \mathbf{s}_{k},\mathbf{\pi}_{k})^{-1}\nabla_{1}f(\mathbf{s}_{k},\mathbf{\pi}_{k}))\,. \tag{13}\] La mise a jour du noyau s'exprimee simplement, par une etape de descente de gradient projete : \[\mathbf{\pi}_{k+1}=\text{Proj}_{\bar{\mathcal{B}}}\left(\mathbf{\pi}_{k}-\gamma_{\pi,k} \Lambda_{2}(\mathbf{s}_{k+1})^{-1}\nabla_{2}f(\mathbf{s}_{k+1},\mathbf{\pi}_{k})\right), \tag{14}\] avec (Proj\({}_{\bar{\mathcal{\mathcal{B}}}}\)) la projection sur le simplexe unite, pour laquelle il existe des methodes de calcul rapides [12]. La methode (incluant les domaines de validite des pas \((\gamma_{s,k},\gamma_{\pi,k})_{k\in\mathbb{N}}\)), est ## 4 Contexte de validation experimentale Nous considerons les jeux de donnees normmes C et D. Les signaux parcimonieux originaux \(\overline{\mathbf{x}}\) et les observations \(\mathbf{y}\) sont representes dans la figure 1,de longeur \(N=200\). Les observations \(\mathbf{y}\) sont obtenues a partir de (1), avec un noyau \(\overline{\mathbf{\pi}}\) defini par une fonction gaussienne normalisee, d'ecart-type \(0,15\) et de support tronque de taille \(L=21\). Le bruit est blanc, gaussien, a moyenne nulle. Son niveau \(\sigma\) est fixe a un pourcentage variable de l'amplitude maximale \(x_{\max}\) de \(\overline{\mathbf{\pi}}=\overline{\mathbf{\pi}}\ast\overline{\mathbf{s}}\), convolution implemente par bourrage de zeros. Les parametres de l'algorithme PENDANTSS et ete chois s constants par soutois de simplicite : \(\gamma_{s,k}=1.9\) et \(\gamma_{\pi,k}\equiv 1.9\) satisfont la contrainte d'intervalle. Nous avons choisi \(\theta=0.5\) comme increment du rayon de confiance, ainsi qu'un maximum de \(\mathcal{I}=50\) tests. L'initialisation pour toutes les methodes suit celle proposee dans [18] : \(\mathbf{s}_{0}\in C_{1}\) est un signal constant, postif, \(\mathbf{\pi}_{0}\in C_{2}\) est un filtre gaussien de largeur 1. Les criteres d'arret sont : \(\varepsilon=10^{-6}\sqrt{N}\) et \(K_{\max}=3000\). Les hyperparametres -- regularisation pour backor [16] et ceux de SPOQ/SOT \((\lambda,\alpha,\beta,\eta)\) -- ont e\(\hat{\mathbf{c}}\) ajusts a partir d'une seule realisation de reference, en employant une memrique composite, combinant les trois composantes-cibles (train d'impulsions, noyau, tendance) : \(2\text{SNR}_{\mathbf{s}}+\text{SNR}_{\mathbf{\pi}}+\text{SNR}_{\mathbf{\hat{t}}}\). 
## 4 Experimental validation setting We consider the datasets named C and D. The original sparse signals \(\overline{\mathbf{x}}\) and the observations \(\mathbf{y}\), of length \(N=200\), are shown in Figure 1. The observations \(\mathbf{y}\) are generated according to (1), with a kernel \(\overline{\mathbf{\pi}}\) defined by a normalized Gaussian function with standard deviation 0.15 and truncated support of size \(L=21\). The noise is white, Gaussian, and zero-mean. Its level \(\sigma\) is set to a varying percentage of the maximum amplitude \(x_{\max}\) of \(\overline{\mathbf{x}}=\overline{\mathbf{\pi}}\ast\overline{\mathbf{s}}\), the convolution being implemented with zero-padding. The parameters of the PENDANTSS algorithm were chosen constant for the sake of simplicity: \(\gamma_{s,k}\equiv 1.9\) and \(\gamma_{\pi,k}\equiv 1.9\) satisfy the interval constraint. We chose \(\theta=0.5\) as the trust-region radius decrement, together with a maximum of \(\mathcal{I}=50\) tests. The initialization for all methods follows the one proposed in [18]: \(\mathbf{s}_{0}\in C_{1}\) is a constant, positive signal, and \(\mathbf{\pi}_{0}\in C_{2}\) is a Gaussian filter of width 1. The stopping criteria are \(\varepsilon=10^{-6}\sqrt{N}\) and \(K_{\max}=3000\). The hyperparameters (the regularization for backcor [16] and those of SPOQ/SOOT, \((\lambda,\alpha,\beta,\eta)\)) were tuned on a single reference realization, using a composite metric combining the three target components (spike train, kernel, trend): \(2\,\text{SNR}_{\mathbf{s}}+\text{SNR}_{\mathbf{\pi}}+\text{SNR}_{\widehat{\mathbf{t}}}\). Following the same composite criterion, the cutoff frequency \(f_{c}\) of the high-pass filter in (8) results from selecting the best point among the first ten peaks of the modulus of the frequency spectrum of the signal. Among the hyperparameters, \(\alpha\) can easily be kept constant (typically \(\alpha=7\times 10^{-7}\) for our data). Because of the classical position ambiguity in unmixing, we apply an integer spatial-shift post-processing to the estimated kernel, to make sure it is correctly centered. A grid search determines the number of loops maximizing the \(\text{SNR}_{\mathbf{s}}\) of the sparse spike train. ## 5 Numerical results We compare the performance of PENDANTSS, in an ablation fashion, against (i) the backcor baseline [16] for trend removal, with a grid search to optimize the polynomial order and the threshold, followed by the blind deconvolution proposed in [18] to estimate the signal \(\widehat{\mathbf{s}}\) and the kernel \(\widehat{\mathbf{\pi}}\); (ii) the complete PENDANTSS workflow with SPOQ parameters \((p,q)=(1,2)\) (i.e., SOOT) or \((p,q)=(0.75,10)\). The different estimates are evaluated in terms of signal-to-noise ratio (SNR): sparse signal (\(\text{SNR}_{\mathbf{s}}\)), kernel (\(\text{SNR}_{\mathbf{\pi}}\)), and trend (\(\text{SNR}_{\widehat{\mathbf{t}}}\)). In particular, \(\text{SNR}_{\mathbf{s}}=20\log_{10}(\|\overline{\mathbf{s}}\|_{2}/\|\overline{\mathbf{s}}-\widehat{\mathbf{s}}\|_{2})\). We additionally evaluate the TSNR, the same quantity computed only on the (assumed known) support of the original sparse signal. This support is not known in general; the measure is nevertheless useful to quantify performance on the estimation of ancillary quantities computed on the peaks (amplitude, width, area). Such measures, important in quantitative physico-chemical analysis, are sensitive to trend filtering and to deconvolution. The results are summarized in Table 1. We report the average quantities, and their standard deviation (after the "\(\pm\)" sign), over 30 realizations. The best and second-best results are marked with two (**) or one (*) stars. In most situations, PENDANTSS proves superior to the decoupled approaches. One should nevertheless remain cautious in view of the sometimes large standard deviations, which is not surprising when evaluating quadratic metrics on signals of a sparse nature. ## 6 Conclusion and perspectives We propose the PENDANTSS algorithm to solve the difficult problem of trend separation combined with sparse blind deconvolution. The method accounts for a smooth, "low-frequency" drift assumption by incorporating it into a blind deconvolution problem. The latter relies on the recent use of penalties based on ratios of quasi-norms or norms, of the SOOT/SPOQ type. The proposed validation indicates a quantitative gain over the reference methods for positive sparse signals, as encountered in analytical chemistry. It remains to extend the validation to broader classes of sparse signals, and to propose more intuitive estimates of the hyperparameters, as a function of the nature of the analyzed sparse signals, in particular with respect to separability criteria for peak signals.
Matlab code is made available at [https://github.com/paulzhengfr/PENDANTSS](https://github.com/paulzhengfr/PENDANTSS). ## Acknowledgment This work was supported by the ERC (_European Research Council, Starting Grant_) MAJORIS grant ERC-2019-STG-850925.
2303.13692
Predicting Physical Parameters of Cepheid and RR Lyrae variables in an Instant with Machine Learning
We present a machine learning method to estimate the physical parameters of classical pulsating stars such as RR Lyrae and Cepheid variables based on an automated comparison of their theoretical and observed light curve parameters at multiple wavelengths. We train artificial neural networks (ANNs) on theoretical pulsation models to predict the fundamental parameters (mass, radius, luminosity, and effective temperature) of Cepheid and RR Lyrae stars based on their period and light curve parameters. The fundamental parameters of these stars can be estimated up to 60 percent more accurately when the light curve parameters are taken into consideration. This method was applied to the observations of hundreds of Cepheids and thousands of RR Lyrae in the Magellanic Clouds to produce catalogs of estimated masses, radii, luminosities, and other parameters of these stars.
Anupam Bhardwaj, Earl P. Bellinger, Shashi M. Kanbur, Marcella Marconi
2023-03-23T22:05:58Z
http://arxiv.org/abs/2303.13692v1
Predicting Physical Parameters of Cepheid and RR Lyrae variables in an Instant with Machine Learning ###### Abstract We present a machine learning method to estimate the physical parameters of classical pulsating stars such as RR Lyrae and Cepheid variables based on an automated comparison of their theoretical and observed light curve parameters at multiple wavelengths. We train artificial neural networks (ANNs) on theoretical pulsation models to predict the fundamental parameters (mass, radius, luminosity, and effective temperature) of Cepheid and RR Lyrae stars based on their period and light curve parameters. The fundamental parameters of these stars can be estimated up to 60 percent more accurately when the light curve parameters are taken into consideration. This method was applied to the observations of hundreds of Cepheids and thousands of RR Lyrae in the Magellanic Clouds to produce catalogs of estimated masses, radii, luminosities, and other parameters of these stars. stars: variable, Cepheid, RR Lyrae - galaxies: Magellanic Clouds - cosmology: distance scale ## 1 Introduction Classical Cepheids and RR Lyrae variable stars are well-known distance indicators and are also excellent tracers of young and old age stellar populations, respectively (Subramanian et al., 2017; Beaton et al., 2018; Bhardwaj, 2020, 2022). Modern pulsation codes can reproduce observed pulsation periods, light and radial velocity curves, and peak-to-peak amplitude variations for these classical pulsating stars at all wavelengths (Bono, Marconi & Stellingwerf, 2000; Marconi et al., 2013, 2015). In the era of large variability surveys, a quantitative comparison of the predicted and observed pulsation properties of Cepheid and RR Lyrae variables provides stringent constraints on their intrinsic evolutionary parameters and, subsequently, provides new challenges for stellar evolution and pulsation theory (Bhardwaj et al., 2015, 2017; Das et al., 2018). We aim to employ modern automated methods to compare the observed and predicted light curve structure, defined by its amplitude and phase parameters, of Cepheid and RR Lyrae variables, and to generate catalogs of physical parameters of observed variables in the Galaxy and the Magellanic Clouds. ## 2 Analysis and results Our theoretical model grid includes 390 models with a composition (Y = 0.25 and Z = 0.008) representative of Cepheids in the Large Magellanic Cloud (LMC), computed by Marconi et al. (2013), and a total of 270 models representative of RR Lyrae, with metallicities ranging from Z = 0.0001 to Z = 0.02 and helium abundances ranging from Y = 0.245 to Y = 0.27 (Marconi et al., 2015). The pulsation models provide multiband theoretical light curves for a broad range of stellar masses, luminosity levels, and chemical compositions. The observational \(V\)- and \(I\)-band light curves were taken from the Optical Gravitational Lensing Experiment survey (Soszynski et al., 2018). We use ANNs, trained using _scikit-learn_ on the grid of theoretical models, to predict the physical parameters of observed stars (see Bellinger et al. 2020, for more details). The importance of the light curve structure was independently verified using random forests. We evaluate how well the ANN can predict physical parameters based on the theoretical models using: 1) the period alone (linear model); 2) the period and the light curve structure (machine learning method), a setup sketched below.
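As an illustration of this kind of ANN regression with _scikit-learn_, the following minimal sketch trains a multi-output network on stand-in data; the feature layout (period plus Fourier amplitude and phase parameters), the network size, and the random inputs are assumptions of the sketch, not the configuration of Bellinger et al. (2020).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for the theoretical model grid: each row holds log-period plus
# Fourier light-curve parameters (amplitude ratios and phase differences).
n_models, n_fourier = 390, 6
X = rng.normal(size=(n_models, 1 + 2 * n_fourier))  # [logP, R_1..R_6, phi_1..phi_6]
y = rng.normal(size=(n_models, 4))                  # [mass, radius, logL, logTeff]

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64, 64),
                                 max_iter=5000, random_state=0))
# Two-fold cross validation, scored with the coefficient of determination R^2.
print(cross_val_score(ann, X, y, cv=2, scoring="r2").mean())
```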
The two-fold cross-validation method was used for model assessment, and the improvement was quantified using the standard deviation of the errors and the coefficient of determination (\(R^{2}\)); a higher \(R^{2}\) is better. In Figure 1, a significant increase in \(R^{2}\) and a large reduction in the standard deviation of the residuals are clearly seen when using the machine learning method. Similar results were also obtained for masses, radii, temperatures, absolute magnitudes, colors, and Wesenheit magnitudes (Bellinger et al. 2020). ## 3 Conclusions While the period-mean density relation suggests that the observable period is the most important quantity in constraining the global stellar parameters of classical pulsating stars, we showed that the light curve structure plays a statistically significant part in determining these parameters. Once a smooth grid of pulsation models covering the entire parameter space is available, catalogs of physical parameters can be instantly generated for all observed Cepheid and RR Lyrae stars with unprecedented precision.
2304.01231
On a Super-Complete Mathematical Model of Ambipolar Processes of Cumulation and Dissipation in Self-Focusing Structures in Plasma of Planetary Atmospheres
4D mathematical models of structurally related (conjugated, entangled, dual) phenomena of dissipation and cumulation of electrical energy (from an external source in continuous media) are discussed, accompanied by the formation of cumulative-dissipative structures and their ordering into a regular system - a dynamic dissipative "crystal" with long-range dynamic order. The excitation of new degrees of freedom in such systems provides attractiveness, or geometric self-focusing, of energy-mass-momentum flows (EMMF) for the entire regular system. As a result of cumulation, EMMF structures acquire hyper-properties. The cumulation of EMMF in such structures is a common property of media activated to form 4D structures. The basis of such a dissipative structure is an attractor, whose end result is a cumulative jet from the attractor with hyper-properties. Therefore, these structures are cumulative-dissipative. We discuss a method for describing these structures and prove that cumulative processes in plasmoids exist and can be described theoretically, although not with the help of full-fledged mathematical 4D models. It has been theoretically and experimentally proven that the cumulation of the electric field due to the ambipolar drift of the plasma is an inherent property of a current-carrying gas-discharge plasma. The results obtained by modeling shock waves of the electric field (E/N) can be useful for explaining cumulative formations in the heliosphere, atmosphere, and ionosphere of the Earth, since the Earth has a negative charge of about 500,000 C, while the Sun is positively charged at a level of about 1400 C. Based on the mathematical approach, a classification of shock waves and types of cumulation in 4D space-time will be carried out.
Philipp I. Vysikaylo
2023-04-02T19:52:22Z
http://arxiv.org/abs/2304.01231v1
On a Super-Complete Mathematical Model of Ambipolar Processes of Cumulation and Dissipation in Self-Focusing Structures in Plasma of Planetary Atmospheres ###### Abstract 4D mathematical models of structurally related (conjugated, entangled, dual) phenomena of dissipation and cumulation of electrical energy (an external source in continuous media) are discussed, accompanied by the formation of cumulative-dissipative structures and their ordering into a regular system - a dynamic dissipative "crystal" with a long-range dynamic order. The excitation of new degrees of freedom in such systems provides attractiveness or geometric self-focusing of energy-mass-momentum flows (EMMF) for the entire regular system. As a result of cumulation, EMMF structures acquire hyper-properties. The cumulation of EMMF in rendered structures is a common property of media activated to form 4D structures. The basis of such a dissipative structure is an attractor, the end result of which is a cumulative jet from an attractor with hyper-properties. Therefore, these structures are cumulative-dissipative. We discuss a method for describing these structures and prove that cumulative processes in plasmoids exist and can be described theoretically, although not with the help of full-fledged mathematical 4D models. It has been theoretically and experimentally proven that the cumulation of the electric field due to the ambipolar drift of the plasma is an inherent property of the current carrying gas-discharge plasma. The results obtained by modeling shock waves of the electric field (\(E\)/\(N\)) can be useful to explain the cumulative formation in the heliosphere, atmosphere and ionosphere of the Earth, since the Earth has a negative charge of about 500,000 C, and the Sun positively charged at the level of 1400 C. Based on the mathematical approach, a classification of shock waves and types of cumulation in 4D space-time will be carried out. perturbation theory, classification of shock waves, types of ambipolar drift, classification of ambipolar diffusions, self-consistent electric fields, cumulation, dissipation, cumulative-dissipative structures. P.I. Vysikaylo is with the Plasma Chemistry Laboratory, Moscow Radiotechnical Institute RAS, 117519 Moscow, Russia (e-mail: [email protected]). ID: 0000-0001-9701-5222 ## I Introduction Electrons are more mobile than ions due to the smallness of their mass compared to ions. The electrons leave the plasma structures faster than positive ions and thereby charge them with a positive charge, the electric field of which returns a part of the low-energy electrons back to plasma structures. This is how dual (bound, entangled) flows of charged particles arise in the region of charged plasma structures. Reverse electron flows focus (cumulate) plasma dissipative structures. This is how cumulative-dissipative structures (CDS) with dynamic surface and volume tension appear and develop. There are structures with three types of cumulation of electrons (Fig.1a): with planar, spherical and cylindrical symmetries. When the activation energy of the medium is low, strata appear first. They focus weakly energetic electrons, some of which gain energy in the stratum and, further accelerated by the electric field, carry out an effective current transfer. As the energy increases, spherically symmetrical structures with dual cumulative jets (bicumulation of electrons and positive ions) are formed in the medium (Fig.1b, c). 
Then cylindrically symmetric electric arcs and lightning, with cumulative stings possessing super-properties, are formed on these jets. In an activated environment, CDS with different types of symmetry can co-organize (Fig. 1a). The presence of a positive charge in the plasma structure leads to the formation of a flow of positive ions from the structure. The discovery of positively charged CDS with dual electron flows and with different types of symmetry in plasma was made in [1]. The idea in [1] was the possibility of co-organizing accumulating and dissipating opposite or orthogonal flows of energy, momentum, and mass into a single self-organizing, in particular 8D, dual structure. The electric field strength acts in plasma as an additional and most important component that controls the behavior of all (except high-energy) charged plasma particles [1]. The electric field strength \(\mathbf{E}\) is always a vector, three-dimensional quantity that can change over time. Because of the internal electric fields, "a system of charged particles is essentially not a gas, but some completely unique system, pulled together by distant forces" [2]. Far-reaching Coulomb forces form: 1) potential mirrors focusing and reflecting charged particles; 2) a separation of the flows of charged particles into opposite ones (high-energy, and low-energy unable to leave the potential wells). **The aim of the work** is to carry out a more correct account of the electric field strength \(\mathbf{E}\). We will do this by describing the plasma with perturbation theory [3]. We will describe the phenomena of cumulative (mainly drift) and dissipative (mainly diffusion) transport controlled by internal electric fields. These phenomena lead to the formation of CDS [3] of the following types: running and standing strata (known to Faraday); the effect discovered by Pekarik, in which the group velocity of the strata is opposite to the phase velocity; cathode spots; electric field shock waves, discovered by Vysikaylo in gas-discharge plasma with current and visualized by him and his co-authors in a gas-discharge tube; plasma tails behind meteoroids; jets; sprites; elves; ordinary and beaded lightning (Fig. 2, 3); electric arcs; and other plasma CDS. Figure 1: Examples of CDS with different symmetry. (a) A possible arrangement of structured plasmoids with different symmetries (\(k=0\) corresponds to planar, 1 to cylindrical, and 2 to spherical symmetry). (b) Arrows indicate the directions of cumulation of electron flows and of the reduced electric field \(E/N\). Electrons appear in the bulk in the spot region, for example, due to UV preionization. (c) Corresponding diagram of cumulation of ion flows to the cathode spot [1]. The synergistic (joint, internal) field of uncompensated ions has a more significant effect on the behavior of electrons. It heats them, localizes weakly energetic electrons in positively charged potential wells, forms cumulation points L1 between positively charged regions (Fig. 2) [4], and forms cumulative jets of high-energy electrons from positively charged structures, such as positively charged cathode spots, into a positively charged plasma column [1,4], electric arcs, or various lightnings. Without the presence of a positively charged cathode spot, the discharge current is negligible [5,6]. The positively charged cathode spot has an elliptical shape [6].
The cathode spot, with its positive charge, cumulates weakly energetic electrons to its center and throws them in the form of a cumulative jet into the region of the Faraday dark space (Fig. 1b, c) [1,3]. Taking into account the positive charge of the cathode spot and the accumulation of low-energy electrons to its center explains the reverse movement of cathode spots in transverse magnetic fields [1], an effect experimentally established by Stark in 1903. Electrons, ionizing neutral gas particles, form ion concentration profiles; when atoms and molecules are excited, they form plasma glow profiles indicating possible profiles of the _E/N_ parameter, reaching breakdown values, etc. There are currently two main approaches to describing the mechanisms of CDS generation. **The first approach** is based on the study of the mechanisms of instabilities, i.e., an unlimited growth in time of the concentrations of discharge plasma particles. Within the framework of the first approach, it is believed that if a mechanism for an unlimited increase in plasma concentrations is proposed, then this will necessarily lead to radial (3D) pinching of a homogeneous discharge [7]. Here, the complex 4D Cauchy-Dirichlet problem is replaced by the 1D Cauchy problem in time, while the duality of electron flows into and out of the structure is not taken into account at all. This is a mistake! Asymmetric hydrodynamic elliptical and other structures, with an electric field pulsating in space and Vysikaylo-Euler libration points, shown in Fig. 2, are "mysterious" for supporters of the first concept. Theorists have been trying to describe plasma structuring (strata) without involving a space charge for many years [8]. However, they have not yet been able to achieve satisfactory agreement between the results of numerical calculations and experiments with sharp drops in luminosity between strata [8]. Strata exist in discharges at a pressure of 15 torr and a discharge time of \(\sim\)10 ns [9]. At these times, any processes of ambipolar diffusion are insignificant [3,4]. **The second approach**, developed by us [1,3,4], is based on the search for and study of dual (8D) processes of transfer and modification of internal electric fields in 4D space-time, leading to local cumulation of a previously homogeneous discharge. According to the second approach, the processes that form the CDS proceed simultaneously in opposite directions: from the CDS and to the CDS. Electron flows in CDS focus these structures and form dynamic surface tension in potential Coulomb wells. This leads to self-focusing of the volume positive charge in the CDS, i.e., to processes of 3D cumulation of positive volume charge and internal electric fields. The cumulation of flows of charged particles and of the electric field strength leads to increased luminescence of the surface of plasma CDS (Fig. 2 and 3). The focusing low-energy electrons exchange energy during Coulomb collisions, which leads to maxwellization of the electron distribution function and the constant formation of fluxes of high-energy electrons running away from the plasma CDS. It is usually believed that the electrons run away against the electric field, accelerating. This is how pulsed moving lightning and anode-directed electric arcs are formed, with electrons running away (falling out) of them. Observations of such lightning are described in [5], and the theory describing this phenomenon was formulated in [1,3,4]. But for high-energy electrons, internal electric fields no longer dictate the motion.
Some of them can move in any direction, even along the direction of the electric field, slowing down slightly. Two opposite streams of electrons are formed. One focuses (cumulates) its energy into the plasma CDS, and the other, having received additional energy, takes the energy out of the plasma CDS. The volume charge and electric field of the plasma CDS limit the dissipation of energy from it. The solution of even such a minimal 8D full-fledged model, which does not take into account the interaction of the dual opposite electron flows with each other in plasma CDS, is currently impossible. However, an explanation of a number of cumulative-dissipative phenomena can be obtained within the framework of inferior models using experimental observations of such plasma CDS. In [10], the electron temperature profile in the entire heliosphere was obtained analytically on the basis of the varieties of positive iron ions in the heliosphere experimentally established in [11]. In [10], on the basis of the facts experimentally established in [11], the role of runaway electrons in the effective charge of the CDS-Sun and in the EMF of the entire heliosphere was taken into account. Thus, electrons escaping from the Sun and the entire heliosphere are accounted for in the effective charge of the Sun. This allowed the 8D problem to be reduced to a 4D problem, and then to a 1D spherically symmetric quasi-stationary problem. Thus, for the first time in [10], the foundations of a unified plasma heliogeophysics of a quasi-permanent giant discharge between a positively charged Sun and a negatively charged Earth were laid. In [11], despite the general title of the monograph "Plasma Heliogeophysics", heliophysics and geophysics are presented as separate parts that are not connected to each other by a single giant current of charged particles in the heliosphere, which is proved in [10]. Figure 2: Beaded lightning as a regular cumulative-dissipative system with long-range dynamic order and hyper-properties. \(L_{1}\) are electron cumulation points theoretically described by Vysikaylo [5]. Fig. 3: Linear plasma cumulative-dissipative structures in air: cylindrical cumulation in ordinary lightning in Brazil with a characteristic radius of \(\sim\)1.2 m. [https://photo.bresticity.com/2023/02/nristo2.jpg](https://photo.bresticity.com/2023/02/nristo2.jpg) In this paper, understanding the complexity of 8D numerical and analytical modeling of specific flows in CDS in plasma, and in order to explain a number of "mysterious" phenomena in plasma caused by violation of plasma electrical neutrality, we propose a method for modeling such flows. To do this, we will use our experimental experience in observing the entire spectrum of gas-discharge phenomena in laboratories and reduce the complexity of the 8D problem to 4D or 2D (in space-time), setting the appropriate boundary conditions or the CDS electric charge. The first experimental studies of the phenomena of the transfer of charged particles in a weakly ionized gas, and the establishment of the main control parameters of the dynamic order in a gas-discharge plasma, were carried out at the end of the 19th century. Thus, Stoletov established that many phenomena in plasma are determined by the parameter of the electric field strength reduced to the neutral gas pressure (\(P\)), i.e., \(E/P\) [12].
This is a consequence of Paschen's integral law in differential form for local phenomena in gas-discharge plasma, discovered at the same time. The \(E/P\) parameter was still widely used in the USSR in the 1980s; this is the tradition of the Russian Stoletov school. (This parameter uniquely determines the ratio of the electric energy density, \(\sim E^{2}\), to the kinetic energy density of gas particles, \(\sim P\).) When the voltage reaches the critical value \(E_{\rm c}\approx 30\) kV/cm (or 3 MV/m), breakdown occurs in air. Later, referring to Stoletov, Townsend proved experimentally that for all dependences of the constants of processes in plasma (production, excitation, and attachment of electrons, drift of charged particles, and coefficients of various diffusions) it is more efficient to use the electric field strength reduced to the number density of neutral gas particles, \(N\). This is how the parameter \(E/N\), measured in Townsends (1 Td = 10\({}^{-17}\) V\(\cdot\)cm\({}^{2}\)), appeared in the physics of low-temperature plasma [13]. This parameter is more convenient for a theoretical description, since it does not include the temperature of the gas in which the plasma is formed. Breakdown of air occurs when \(E/N\approx 90\) Td is reached. This area continues to develop successfully because of the influence of charged particle transfer processes in the ionosphere and heliosphere on the functioning of systems such as GPS. Also important is the study of the influence of processes in the plasma of the heliosphere and ionosphere on the well-being of humans and all organisms of the Earth. However, in astrophysics and in the physics of the ionosphere the control parameter \(E/N\) is little used. It is believed that a rigorous description of the behavior not only of a weakly ionized gas-discharge plasma, but also of the plasma of the heliosphere and ionosphere, should be carried out using kinetic equations for electrons and ions [3,8]. This method is very complicated; the approaches developed within its framework, with two-particle collisions and a two-term approximation, were criticized by A.A. Vlasov, who called them inferior models [2]. This approach is not needed in many cases, and all the questions posed by A.A. Vlasov are easily removed, since all transfer coefficients can be taken from experiments (for example, for electron transfer processes from [13,14] and for ions from [15]) and approximated by simple dependences on the \(E/N\) parameter [3]. The system of kinetic equations can be replaced by a simpler system of transfer equations for the local macroscopic quantities that determine the behavior of electrons and ions, if three basic conditions are met [3]: 1) many collisions occur during the characteristic time of the process; 2) the path traveled by a particle between two collisions is much smaller than the distance over which the macroscopic quantities change significantly; 3) the violation of the electroneutrality of the plasma is small. (In [3], we modified this condition for the first time for a significant violation of the electroneutrality of a plasma with current.) These conditions allow us to construct a perturbation theory for describing shock waves of the electric field [3] in gas-discharge plasma and in semiconductors.
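As a quick numerical cross-check of the reduced-field units introduced above, the conversion from the critical field to Townsends is simple arithmetic; the snippet below assumes air at roughly 1 atm and 300 K, so the numbers are indicative only.

```python
# E/N corresponding to the quoted critical field E_c, assuming ~1 atm and 300 K.
k_B = 1.380649e-23                  # J/K
N = 101325 / (k_B * 300) * 1e-6     # number density in cm^-3 (~2.45e19)
E_c = 30e3                          # V/cm
print(E_c / N / 1e-17)              # ~120 Td, same order as the ~90 Td threshold
```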
Accounting for pair interactions and for the formation of electron velocity distribution functions different from the Maxwellian one led to the renormalization of the transport coefficients in an inhomogeneous plasma and to the appearance in the theory of a number of transfer processes different from the classical ones [3,8]. In reality, all these "classical" and "nonclassical" processes exist in nature, are observed in experiments (the modification of the effective coefficient of electron diffusion longitudinal to the electric field has been studied in detail), and can be taken into account in modified hydrodynamic models [3,8]. In this case, the coefficients of the various diffusions, drifts, and reactions (excitation, ionization, recombination, etc.) can be taken from the tables compiled by Townsend and his followers according to experimental observations [3,13-15]. The hydrodynamic description will then be the most complete and correspond to already observed and well-studied natural phenomena. All the remarks of A.A. Vlasov on the theory of pair collisions and on the artificial (insufficiently substantiated) cutoff of divergent integrals and effective collision cross sections become unjustified. The experimentally measured coefficients take into account all types of collisions (triple, quadruple, etc.), as well as all possible real impact distances (or effective reaction cross sections). These coefficients established in experiments can be used in electrohydrodynamic models, and based on the analysis of these models, new discoveries can be made, and pseudoscientific conjectures and 1D models unrelated to real phenomena in plasma can be refuted. Consequently, the application of the kinetic description does not at all mean an expansion of the scope of the equations of hydrodynamics, and perhaps, on the contrary, even narrows it, since it requires a completely specific procedure for solving the kinetic equation, which significantly limits the scope of application of the entire bulky model. Therefore, a simple system of electro-hydrodynamic equations, rigidly based on previously measured transfer and reaction coefficients, can be formulated in a more general case (in a wider framework) than a system with kinetic equations for the plasma components and a pre-formulated procedure for solving the kinetic equation (with a selected cutoff procedure, consideration of only paired collisions, etc.). The change in the internal long-range electric fields in plasma can be taken into account by supplementing the system of hydrodynamic equations with the Poisson equation. In this case, the system of charged particles becomes not just a multicomponent gas, but some peculiar system pulled together by distant Coulomb forces (or synergistic electric fields capable of self-organization and of forming cumulative jets of charged plasma particles from cumulative-dissipative structures [3]). In this work, we will focus on the analysis and methods for solving specific problems using such a simple and fairly general model, based on experimental data on the processes of transport and ionization of plasma particles.
**II. SYSTEM OF CHARGED PARTICLE TRANSPORT EQUATIONS IN NONEQUILIBRIUM PLASMA WITH ELECTRICAL NEUTRALITY VIOLATION** The model of the transport of charged plasma particles without a magnetic field includes the balance equations for the number of ions: \[\partial n_{a}/\partial t+{\rm div}(n_{a}\mathbf{V}_{a})=I_{a}-R_{a}, \tag{1}\] where \(n_{a}\) is the concentration of positive or negative ions; \(\mathbf{V}_{a}=\mu_{a}\mathbf{E}\) is the ion drift velocity, with the mobility \(\mu_{a}\) a function of the control parameter \(E/N\); and \(I_{a}\), \(R_{a}\) are the sources and sinks of ions of species \(a\). To equation (1) it is necessary to add the electrodynamic equations: \[{\rm rot}\,\mathbf{E}=0; \tag{2}\] \[{\rm div}\,\mathbf{E}=4\pi\rho, \tag{3}\] where \(\rho=e\big{(}\sum_{a=1}^{m}z_{a}n_{a}-n_{\rm e}\big{)}\); \(z_{a}\) is the ion charge and \(m\) is the number of ion species. Instead of the electron balance equation (of the same form as for ions), we will take into account the total current density. To do this, we add the balance equations for electrons and all kinds of ions (multiplying them by the corresponding charge) and, using (3), obtain: \[\nabla\cdot\mathbf{j}=0, \tag{4}\] where \(\mathbf{j}/e=(\partial\mathbf{E}/\partial t)/(4\pi e)-n_{\rm e}\mathbf{V}_{\rm e}+\sum_{a=1}^{m}z_{a}n_{a}\mathbf{V}_{a}+\mathbf{\nabla}(D_{\perp}n_{\rm e})+\ldots\) (the ellipsis is an allowance for ion diffusion). Since electrons and ions in the plasma are born and die together, in the current continuity equation the sources and sinks are mutually compensated (as are the fluxes due to the non-stationarity and inhomogeneity of the plasma concentration and electric field strength, as well as the non-stationarity and inhomogeneity of the electron velocity distribution function in the sources and sinks). Therefore, the continuity equation for the total current density has the form (4), where \(\mathbf{j}\) is the total current, taking into account the displacement current \((\partial\mathbf{E}/\partial t)/(4\pi)\) [3,6]. Equation (4) can be modified taking (3) into account. From (3), \(n_{\rm i}=n_{\rm e}+\mathbf{\nabla}\cdot\mathbf{E}/(4\pi e)-\sum_{a=1}^{m-1}z_{a}n_{a}\), which we substitute into (4). In this case, (4) takes the form [3]: \[\mathbf{j}/e=\frac{1}{4\pi e}\frac{\partial\mathbf{E}}{\partial t}-n_{\rm e}\mathbf{V}_{\rm e}+\sum_{a=1}^{m-1}z_{a}n_{a}\mathbf{V}_{a}+\mathbf{V}_{\rm i}\Big{(}n_{\rm e}+\frac{\mathbf{\nabla}\cdot\mathbf{E}}{4\pi e}-\sum_{a=1}^{m-1}z_{a}n_{a}\Big{)}+\mathbf{\nabla}(D_{\perp}n_{\rm e})+\ldots, \tag{5}\] where the term containing \(\mathbf{V}_{\rm i}\,\mathbf{\nabla}\cdot\mathbf{E}/(4\pi e)\) takes into account the influence of the violation of the electrical neutrality of the plasma on the modification of the internal electric field [3].
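As a simple illustration of how the balance equation (1) can be integrated numerically, the sketch below advances a 1D ion density with an explicit upwind drift step; the mobility, grid, field profile, and source terms are illustrative assumptions, not values from this paper.

```python
import numpy as np

# Explicit 1D upwind step for Eq. (1): dn/dt + d(n*V)/dx = I_a - R_a, V = mu*E.
def step_ion_balance(n, E, mu, dx, dt, source=0.0, sink=0.0):
    V = mu * E                      # drift velocity on the grid
    flux = n * V
    dn = np.zeros_like(n)
    # Upwind differencing: take the one-sided derivative from the inflow side.
    dn[1:-1] = np.where(V[1:-1] > 0,
                        flux[1:-1] - flux[:-2],
                        flux[2:] - flux[1:-1]) / dx
    return n - dt * dn + dt * (source - sink)

# Illustrative numbers (not from the paper): uniform field, Gaussian ion cloud.
x = np.linspace(0.0, 1.0, 201)
n = np.exp(-((x - 0.3) / 0.05) ** 2)
E = np.ones_like(x)
n = step_ion_balance(n, E, mu=1.0, dx=x[1] - x[0], dt=1e-3)
```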
**III. BASIC PARAMETERS OF PERTURBATION THEORY** The order of magnitude of the terms in (5) relative to the term with drift structure is determined by the following values: \(\Omega\tau_{\rm M}\), 1, \((\mu_{\rm i}/\mu_{\rm j})\,l_{E0}/L\), \(l_{\rm v}/L\), ... Usually one can neglect the diffusion of ions, which we will do. Here \(\Omega\) is the characteristic frequency of charge variation, \(\tau_{\rm M}=1/(4\pi e\mu_{\rm e}n_{\rm e})\) is the Maxwellian space-charge neutralization time, \(\mu_{\rm j}\) is the effective plasma mobility taking into account the mobilities of ions and electrons, and \(\mathbf{l}_{E0}=\mathbf{E}_{0}/(4\pi en_{a})\) is the vectorized characteristic length of the electric field strength variation. If the parameters \(\Omega\tau_{\rm M}\), \((\mu_{\rm i}/\mu_{\rm j})\,l_{E0}/L\), and \(l_{\rm v}/L\) are small, then the system of hydrodynamic equations and the Poisson equation can be solved using perturbation theory [3]. The smallness of the parameter \((\mu_{\rm i}/\mu_{\rm j})\,l_{E0}/L\ll 1\) can also be observed at \(l_{E0}/L\gg 10\), since \(\mu_{\rm i}/\mu_{\rm j}\approx\mu_{\rm i}/\mu_{\rm e}\ll 1\). Within the framework of our perturbation theory, it is possible to advance, already in the zeroth order, into the region with a significant violation of electroneutrality [3]. For example, starting from a positive column, it is possible, setting aside the problem of boundary conditions, to advance by numerical and analytical calculations into the near-electrode regions. This method is applicable at elevated gas pressures, significant interelectrode gaps, and away from near-electrode regions. The zeroth approximation branches into: 1) the drift, or quasi-neutral, approximation, when \(l_{E0}/L\ll 1\) (see [3]); and 2) the Vysikaylo-Poisson approximation, when \(l_{E0}/L\sim 1\) (or even \(l_{E0}/L\gg 10\), but \((\mu_{\rm i}/\mu_{\rm j})\,l_{E0}/L\ll 1\); the main current is carried by electrons, \(\mu_{\rm j}\approx\mu_{\rm e}\gg\mu_{\rm i}\), the ion mobility). **IV. THE ZERO APPROXIMATION OF OUR PERTURBATION THEORY** In the zeroth approximation of our perturbation theory, the drift velocities of electrons and ions are described by the relations \(\mathbf{V}_{\rm e0}=\mu_{\rm e0}\mathbf{E}_{0}\) and \(\mathbf{V}_{\rm i0}=\mu_{\rm i0}\mathbf{E}_{0}\), where \(\mu_{\rm e0}\) and \(\mu_{\rm i0}\) are the mobilities of electrons and ions, respectively. From (1), (3), and (5), in the zeroth approximation (\(\mu_{\rm i0}\,n_{\rm e}\mathbf{\nabla}\cdot\mathbf{E}=-\mathbf{E}_{0}\cdot\mathbf{\nabla}(\mu_{\rm e0}n_{\rm e})\)), we obtain for a simple plasma: \[\partial n_{\rm e}/\partial t-\partial[(\mathbf{l}_{E0}/\mu_{\rm i0})\cdot\mathbf{\nabla}(\mu_{\rm e0}n_{\rm e})]/\partial t+(\mathbf{j}/e)\cdot\mathbf{\nabla}(\mu_{\rm i0}/\mu_{\rm e0})-\mathbf{\nabla}\cdot[(\mu_{\rm i0}\mathbf{E}/\mu_{\rm e0})(\mathbf{l}_{E0}\cdot\mathbf{\nabla})(\mu_{\rm e0}n_{\rm e})]=I_{\rm e}-R_{\rm e}. \tag{6}\] The 4D equation (6) is derived from (1) by replacing the ion concentration \(n_{\rm i}\) with \(n_{\rm e}-(\mathbf{l}_{E0}\cdot\mathbf{\nabla})(n_{\rm e}\mu_{\rm e0})/\mu_{\rm i0}\). The terms with \(\mathbf{l}_{E0}\) in (6) arise from taking into account the violation of electroneutrality. The second term, with mixed derivatives with respect to time and the spatial coordinates, has no analogue in hydrodynamics, and the fourth term is analogous to diffusion. In hydrodynamics, the transition from convective to diffusive transfer is observed during the formation of the shock waves discovered by Mach. The presence in (6) of a term due to the violation of electrical neutrality allows us to assert that electric field shock waves should be expected in the plasma. Shock waves of the electric field in a gas discharge were discovered and visualized by Vysikaylo and co-authors in 1985-1987 [3].
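To give the perturbation parameters a numerical feel, one can estimate \(\tau_{\rm M}\) and \(l_{E0}\) for representative discharge values; all numbers below are assumed for illustration (Gaussian units, consistent with Eq. (3)), not taken from the paper.

```python
import math

# Rough CGS estimates of the perturbation-theory scales (assumed values).
e = 4.803e-10        # statC
n_e = 1.0e10         # cm^-3, plasma density
mu_e = 1.0e6         # cm^2/(statV*s), electron mobility
E0, L = 1.0, 1.0     # statV/cm field scale; cm gradient length

tau_M = 1.0 / (4 * math.pi * e * n_e * mu_e)   # Maxwellian relaxation time, s
l_E0 = E0 / (4 * math.pi * e * n_e)            # field length entering Eq. (3), cm
print(tau_M, l_E0 / L)   # both small here, so the zeroth-order picture applies
```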
The presence of the second and fourth terms in (6), with the mixed derivative, will allow us to describe stationary and traveling shock waves of the electric field, i.e., strata (in the parameter \(E/N\)), both in ordinary gas-discharge plasma and in the ionosphere and heliosphere, where global currents flow [10]. We will describe them in the next parts of this work. **V. CONCLUSIONS** The general knowledge gained in solving some equations should be used in solving completely different equations. C.W. Oseen found that in hydrodynamics arbitrarily small causes can produce finite effects. He proved in 1927 [16] that the presence of arbitrarily small higher-order terms (diffusion or viscosity) in a system of differential equations can completely change the nature of the solutions. Paradoxes due to this cause are called asymptotic paradoxes [17]. Often, when modeling a complex spatially distributed non-stationary system, based on a comparison of the terms that determine the Cauchy problem in time, with sources and sinks, against the terms responsible for solving the Dirichlet problem, some of the terms responsible for the transfer of particles, their momentum, and their energy are discarded. These errors are the essence of many asymptotic paradoxes observed when comparing experiments with the results of modeling, both in plasma, where asymptotic paradoxes are associated with the violation of electrical neutrality (see [3]), and in ordinary hydrodynamics, where the main asymptotic paradoxes are determined by the viscosity [17]. Accounting for the violation of electrical neutrality, and for diffusion and viscosity processes, leads to the appearance of the highest derivative with respect to the coordinates in the transport processes. The correct allowance for viscosity leads to the solution of the asymptotic paradoxes in hydrodynamics [17], and the correct allowance for the violation of electrical neutrality leads to the solution of a number of asymptotic paradoxes that take place in plasma [3,4]. The use of general methods of analytical and numerical modeling to describe various transport phenomena in gas dynamics and in plasma makes it possible to formulate the basics of the method of generalized mathematical transposition (MGMT) of models and their solutions from one area of the natural sciences to another [3,4]. In (5) we got rid of the processes of birth and death of plasma particles. This major achievement helped us to obtain equation (6), with the help of which we will explain a number of hitherto "mysterious" phenomena, namely asymptotic paradoxes caused by the violation of electroneutrality. In the following parts we will describe in detail the transfer processes caused by the violation of plasma electroneutrality. According to (5), in the following parts we will construct the perturbation theory in the first approximation. In Part 3 we will describe the different types of ambipolar diffusion and ambipolar drift in plasma depending on the main dynamic order parameter \(E/N\). Based on extensive experimental data obtained by the Voyagers and on images from the Hubble telescope, NASA scientists concluded in [18] that Saturn's rings act as a kind of heater, heating the upper part of Saturn's atmosphere. This can occur, for example, under the influence of the solar wind and of currents in Saturn's atmosphere. The 4D equation (6) obtained by us for a simple plasma describes the profiles of plasma parameters caused by the transfer processes of weakly energetic electrons.
To take into account the role of the high-energy electrons running away from plasma CDS (Fig. 2, 3), it is necessary to specify their full current or the charge of the CDS. We will consider how this is done for specific tasks in the following works.
2305.01118
CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations
Geo-tagged images are publicly available in large quantities, whereas labels such as object classes are rather scarce and expensive to collect. Meanwhile, contrastive learning has achieved tremendous success in various natural image and language tasks with limited labeled data. However, existing methods fail to fully leverage geospatial information, which can be paramount to distinguishing objects that are visually similar. To directly leverage the abundant geospatial information associated with images in pre-training, fine-tuning, and inference stages, we present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images. We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images, which can be transferred to downstream supervised tasks such as image classification. Experiments show that CSP can improve model performance on both iNat2018 and fMoW datasets. Especially, on iNat2018, CSP significantly boosts the model performance with 10-34% relative improvement with various labeled training data sampling ratios.
Gengchen Mai, Ni Lao, Yutong He, Jiaming Song, Stefano Ermon
2023-05-01T23:11:18Z
http://arxiv.org/abs/2305.01118v2
# CSP: Self-Supervised Contrastive Spatial Pre-Training ###### Abstract Geo-tagged images are publicly available in large quantities, whereas labels such as object classes are rather scarce and expensive to collect. Meanwhile, contrastive learning has achieved tremendous success in various natural image and language tasks with limited labeled data. However, existing methods fail to fully leverage geospatial information, which can be paramount to distinguishing objects that are visually similar. To directly leverage the abundant geospatial information associated with images in the pre-training, fine-tuning, and inference stages, we present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images. We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images, which can be transferred to downstream supervised tasks such as image classification. Experiments show that CSP can improve model performance on both the iNat2018 and fMoW datasets. In particular, on iNat2018, CSP significantly boosts model performance, with 10-34% relative improvement under various labeled training data sampling ratios1. Footnote 1: Code, data, and pre-trained models are available at [https://gengchennai.github.io/csp-website/](https://gengchennai.github.io/csp-website/). ## 1 Introduction Low-data or few-shot regimes (Zhai et al., 2021; Wang et al., 2020) are a prevalent challenge in the geospatial domain, where we usually have access to massive amounts of unlabeled data while only a limited amount of labeled data is available. For example, users on Flickr, Google Photos, and the iNaturalist App2 upload millions of geo-tagged images every day, and multiple satellites continuously capture remote sensing (RS) images with corresponding geo-coordinates all over the world. These geo-tagged data form large publicly available _unlabeled_ datasets that are inexpensive to obtain. In contrast, desired labels for many geospatial tasks (e.g., object class labels, object bounding boxes, and land use type labels) are rather scarce and expensive to collect. Moreover, even well-curated and widely used labeled geospatial datasets such as the UC Merced Land Use Dataset (Yang and Newsam, 2010) and BigEarthNet (Sumbul et al., 2019) have limited sizes, geographic coverage, and potentially oversimplified label distributions. This lack of labeled data coverage severely limits the ability of models trained on these labeled geospatial datasets to generalize, especially in a geographic sense (Goodchild and Li, 2021). Footnote 2: iNaturalist is one of the world's most popular nature apps to help users identify species given the uploaded images. Meanwhile, numerous previous studies have shown the great potential of leveraging geospatial information as complementary information for visual cues to help improve model performance on various computer vision tasks (Tang et al., 2015; Chu et al., 2019; Mac Aodha et al., 2019; Klocek et al., 2019; Mai et al., 2020, 2022d; Yang et al., 2022). Figure 1: The importance of geospatial information demonstrated by two visually similar species (a)(c), and their distinct patterns in image locations (b)(d).
Fortunately, these two species have distinct geospatial distribution patterns (shown in Figure 0(b), 0(d)), and it is very easy to tell them apart based on the geo-locations. Motivated by these observations, we ask whether we can **build a multi-modal self-supervised learning framework between geo-locations and images** that learns the alignments between geo-location and image representations using large unlabeled geo-tagged datasets. In this work, we propose CSP (Contrastive Spatial Pre-Training), a self-supervised learning framework, which pre-trains deep spatial representations from unlabeled geotagged images by predicting image features or image identities based on their geo-locations as shown in Figure 1(c). Given one location-image pair \((\mathbf{x}_{i},\mathbf{I}_{i})\), a dual-encoder separately encodes \(\mathbf{x}_{i}\) and \(\mathbf{I}_{i}\) into the embedding space with a location encoder \(e()\) and an image encoder \(f()\) and contrast against related locations and images to form a **contrastive learning objective** (the red box in Figure 1(c)). After the location encoder and image encoder pre-training stage, both \(e()\) and \(f()\) can be fine-tuned on a small amount of labeled data (the green and blue box in Figure 1(c)) separately and do inference jointly, which is compatible with prior works (Mac Aodha et al., 2019; Mai et al., 2020). To perform contrastive learning, we explore a combination of three different ways to form positive and negative pairs for the location encoder pre-training stage of CSP as shown in Figure 3: **(a) In-batch negative sampling**: given a mini-batch of unlabeled location-image pairs, create mismatching location-image pairs as negative samples; **(b) Random negative location sampling**: uniformly sample negative locations from the study area (e.g., the whole earth surface) to form negative pairs; **(c) SimCSE-based sampling**: create a positive pair by encoding the same location with two location encoders, which share all the parameters but use different dropout masks. We also compare several self-supervised learning objectives including **Mean Square Error** loss (\(MSE\)), **Noise Contrastive Estimation** loss (\(\mathrm{NCE}\)), and **Contrastive Multi-classification** loss (\(\mathrm{MC}\)). We conduct experiments on geo-aware image classification tasks including **fine-grained species recognition**(Chu et al., 2019; Mac Aodha et al., 2019; Mai et al., 2020; Yang et al., 2022), and **remote sensing (RS) image classification**(Christie et al., 2018; Ayush et al., 2021; Manas et al., 2021; Li et al., 2021). Results show that our CSP can boost the model performance on both datasets. Figure 2: Different training strategies for geo-aware image classification. Our proposed method CSP is presented in Figure 1(c). **In summary, the contributions of our work are:** * We propose an effective multi-modal self-supervised pre-training method CSP that leverages abundant unlabeled geo-tagged images to better learn location representations that can be transferred to few-shot learning tasks. * We explore three ways to construct positive and negative training examples for contrastive learning. We find that the combination of them achieves the best performance. * We explore three self-supervised losses including \(MSE\), \(\mathrm{NCE}\), and \(\mathrm{MC}\). We find out that using CSP with \(\mathrm{MC}\) usually yields the best result. 
* We apply CSP to fine-grained species recognition (iNat2018) and remote sensing image classification task (fMoW) in few-shot learning and fully supervised settings, and demonstrate advantages on both datasets. CSP can significantly boost model performances with 10-34% relative improvements on the iNat2018 dataset at few-shot settings by stratified sampling \(\{5\%,10\%,20\%\}\) of the training data. On both datasets, when training models on the whole training dataset in a fully supervised manner, we find that adding the CSP pre-training objective can still improve the model performance. ## 2 Related Work **Unsupervised/Self-Supervised Learning on Geotagged Images** Multiple unsupervised or self-supervised frameworks have been proposed to pre-train image encoder by utilizing geographic knowledge such as Tile2Vec (Jean et al., 2019), Geo-SSL (Ayush et al., 2021), SeCo (Manas et al., 2021), and GeoKR (Li et al., 2021). **Tile2Vec**(Jean et al., 2019) is an unsupervised learning framework to pre-train image encoder based on the spatial relations among RS images. Given an anchor RS image, location information is only used to obtain one nearby tile and a distant tile. An unsupervised triplet loss is formed to pre-train image encoder to make nearby tiles similar in the embedding space while distant tiles dissimilar. Geo-locations are not part of the model input and cannot be used during the model fine-tuning or inference stage. **Geo-SSL**(Ayush et al., 2021) is a self-supervised contrastive learning objective to pre-train an RS image encoder based on the MoCo-V2 (Chen et al., 2020) framework. Instead of using augmented images as positive pairs as MoCo-V2 does, they used co-located RS images at different times as positive pairs. This contrastive image loss is combined with a geo-location classification pre-text loss during pre-training, which uses the image encoder to predict which geo-location cluster the image might come from. Here, the spatiotemporal information is only used in the pre-training stage. During the fine-tuning and inference stage, the model prediction relies entirely on the pre-trained image encoder. **SeCo**(Manas et al., 2021) is a similar self-supervised contrastive learning framework for an RS image encoder \(f()\). It also uses MoCo-V2 as the backbone and uses spatially aligned RS images at different times as novel temporal augmented samples. The difference is that SeCo uses both the temporal augmented samples and synthetic samples based on artificial augmentations as either positive or negative samples so that the pre-trained \(f()\) can be either invariant or sensitive to the temporal or artificial augmentations. **GeoKR**(Li et al., 2021) is proposed as an unsupervised framework for an RS image encoder. GeoKR first obtains a spatially aligned land cover map \(\mathbf{M}\) based on an RS image. The image encoder is pre-trained in a teacher-student network to predict the distribution of land cover types in the current scene with a KL loss. Figure 2b illustrates the general idea of those four models while Figure 6 in Appendix A.1 provides a detailed comparison. None of them directly takes geo-locations as model input but use locations as auxiliary information to pre-train the image encoder. Moreover, after pre-training, location information is completely ignored during fine-tuning and inference stage which leads to significantly suboptimal results. 
In contrast, our CSP utilizes the location-image pairs in a direct and explicit manner by separately encoding them and contrasting them against each other. The pre-trained location encoder can be utilized in the model inference process jointly with the image encoder so that both the visual and the spatial clues can be used for prediction. **Location Representation Learning** Zhai et al. (2018) learned location representations from image-location pairs for image localization, so in this context locations are supervision signals. Instead of using the original geo-locations, they grouped locations (or times) into different bins and utilized them in the cross entropy loss. This practice cannot leverage the continuity of the approximated function. Most existing location encoding approaches (Tang et al., 2015; Christie et al., 2018; Chu et al., 2019; Mac Aodha et al., 2019; Mai et al., 2020, 2022; Yang et al., 2022) are developed and trained in a supervised learning framework, while massive unlabeled geographic data cannot be used. Figure 2a illustrates the dual-encoder supervised learning idea both Mac Aodha et al. (2019) and Mai et al. (2020) used for geo-aware image classification. In contrast, this work focuses on training location encoders in a self-supervised manner based on unlabeled geo-tagged images. The pre-trained location encoder can later be utilized jointly with the image encoder for model prediction (see Figure 2c). **Spatially Explicit Artificial Intelligence** Previous works showed that naively applying existing state-of-the-art AI models to geospatial tasks usually yielded suboptimal results (Mai et al., 2019; Chu et al., 2019; Mac Aodha et al., 2019; Ayush et al., 2021; Yan et al., 2018). _Spatially Explicit Artificial Intelligence_ aims at improving the performance of AI models on various geospatial tasks by incorporating spatial thinking, spatial principles, and spatial inductive biases into the AI model design (Janowicz et al., 2020; Liu and Biljecki, 2022; Zhu et al., 2022; Mai et al., 2022). Several important spatial principles have been considered by previous works, including spatial dependency (Mai et al., 2019; Yan et al., 2018, 2019; Li et al., 2021b; Huang et al., 2022, 2023), spatial heterogeneity (Chu et al., 2019; Mac Aodha et al., 2019; Mai et al., 2020, 2021; Xie et al., 2021; Goodchild and Li, 2021; Xie et al., 2023), temporal continuity (Cai et al., 2020; He et al., 2021; Cong et al., 2022), temporal periodicity (Cai et al., 2020; Rao et al., 2020), the spherical geometry of the earth (Cohen et al., 2018; Esteves et al., 2018; Jiang et al., 2019; Mai et al., 2022d), and so on. CSP contributes to the _Spatially Explicit Artificial Intelligence_ research by learning effective multi-scale location representations from unlabeled images. ## 3 Method ### 3.1 A Dual-Encoder for Geo-Tagged Images We define an unlabeled geo-tagged image dataset as \(\mathbb{X}=\{(\mathbf{x}_{i},\mathbf{I}_{i})|i=1,...,M\}\), where \(\mathbf{I}_{i}\) is an image and \(\mathbf{x}_{i}\) represents the location (longitude and latitude) and optionally the time the image was taken3. Inspired by recent image-text pre-training models (Zhang et al., 2020; Radford et al., 2021; Jia et al., 2021; Zhai et al., 2021), CSP uses a dual-encoder architecture - a location encoder \(e()\) and an image encoder \(f()\) - to handle location \(\mathbf{x}_{i}\) and image \(\mathbf{I}_{i}\) separately. Footnote 3: In this study we focus on the location information and leave the time aspect to the future work.
The location encoder \(e()\) is a function \(e_{\theta}(\mathbf{x}_{i}):\mathbb{S}^{2}\rightarrow\mathbb{R}^{d}\), which is parameterized by \(\theta\) and maps any coordinate \(\mathbf{x}_{i}=(\lambda_{i},\phi_{i})\) on a spherical surface \(\mathbb{S}^{2}\) to a vector representation of dimension \(d\). Here longitude \(\lambda_{i}\in[-\pi,\pi)\) and latitude \(\phi_{i}\in[-\pi/2,\pi/2]\). \(e()\) can be any existing 2D location encoder (Mai et al., 2022c) such as \(tile\) (Tang et al., 2015), \(wrap\) (Mac Aodha et al., 2019), Space2Vec's \(grid\) and \(theory\) (Mai et al., 2020b), or a spherical location encoder such as Sphere2Vec (Mai et al., 2022d). We assume that \(e()\) is inductive and no longer depends on the unlabeled dataset \(\mathbb{X}\) once it is pre-trained. The image encoder \(f()\) is a function \(f_{\psi}(\mathbf{I}_{i}):\mathbb{R}^{H\times W\times C}\rightarrow\mathbb{R}^{d}\), which is parameterized by \(\psi\) and maps any image with height \(H\), width \(W\), and channel \(C\) into an embedding of dimension \(d\). In this study we define \(f(\mathbf{I}_{i})=\mathbf{W}(\mathbb{F}(\mathbf{I}_{i}))\), where \(\mathbb{F}()\) is an off-the-shelf deep image neural network such as InceptionV3 (Szegedy et al., 2016) or a Geo-SSL (Ayush et al., 2021) pretrained ResNet50 (He et al., 2015), which encodes any image into a \(d^{(I)}\)-dimensional image feature vector. \(\mathbf{W}()\) is a projection layer (similar to that of SimCLR (Chen et al., 2020) and MoCo-V2 (Chen et al., 2020)), which projects the image feature \(\mathbb{F}(\mathbf{I}_{i})\in\mathbb{R}^{d^{(I)}}\) into \(d\) dimensions such that a contrastive learning objective can be formed between \(e(\mathbf{x}_{i})\) and \(f(\mathbf{I}_{i})\). Please refer to Appendix A.2.1 for a detailed description of \(f()\). In our work, \(d^{(I)}=2048\) and \(d=512\). This dual-encoder architecture is shown in Figure 2c as well as Figure 3. We simply denote the encoded representation of a location \(\mathbf{x}_{i}\) as \(e(\mathbf{x}_{i})\) and its associated image representation as \(f(\mathbf{I}_{i})\).

### Contrastive Spatial Pre-Training (CSP)

**Contrastive Learning Objectives** We consider different contrastive objectives. The first is the _noise contrastive estimation_ (NCE) (Gutmann and Hyvarinen, 2010) loss, which avoids calculation of the partition function and has been successfully used in word embeddings (Mikolov et al., 2013) and language modeling (Mnih and Teh, 2012): \[\begin{split} l_{\mathrm{NCE}}(\mathcal{P},\mathcal{N})=& -\mathbb{E}_{(\mathbf{a},\mathbf{b})\sim\mathcal{P}}\log\sigma(s( \mathbf{a},\mathbf{b}))\\ &-\mathbb{E}_{(\mathbf{a},\mathbf{b}^{-})\sim\mathcal{N}}\log(1- \sigma(s(\mathbf{a},\mathbf{b}^{-})))\end{split} \tag{1}\] Here \(\mathcal{P}=\{(\mathbf{a},\mathbf{b})\}\) is a set of positive pairs, and \(\mathcal{N}=\{(\mathbf{a},\mathbf{b}^{-})\}\) is a set of negative pairs. \(s(\cdot,\cdot)\) is a similarity function (such as \(cosine()\)), and \(\sigma(v)=e^{v}/(1+e^{v})\) is the sigmoid function.
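To make Equation 1 concrete, here is a minimal PyTorch-style sketch of \(l_{\mathrm{NCE}}\) for one mini-batch, taking the matched location-image pairs (the diagonal of the similarity matrix) as positives and all non-diagonal pairs as negatives; the sampling sets are formalized below. The cosine similarity and the averaging over pairs are illustrative implementation assumptions, not a description of our released code.

```python
import torch
import torch.nn.functional as F

def nce_loss(loc_emb, img_emb):
    """Binary NCE loss of Equation 1 with in-batch pairs.

    loc_emb, img_emb: (N, d) tensors from the location encoder e()
    and the projected image encoder f(); row i of both tensors comes
    from the same location-image pair (x_i, I_i).
    """
    loc = F.normalize(loc_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)
    sim = loc @ img.t()                    # s(a, b) as cosine similarity
    n = sim.size(0)
    pos = sim.diag()                       # positive pairs (diagonal)
    neg = sim[~torch.eye(n, dtype=torch.bool, device=sim.device)]
    # -log sigma(s) for positives and -log(1 - sigma(s)) for negatives,
    # using log(1 - sigma(v)) = log sigma(-v) for numerical stability.
    return -F.logsigmoid(pos).mean() - F.logsigmoid(-neg).mean()
```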
The second objective function is the multi-class classification loss with temperature, which takes the same form as the InfoNCE loss (Van den Oord et al., 2018) and has been successfully used in unsupervised learning for images (He et al., 2020) and text (Gao et al., 2021): \[l_{\mathrm{MC}}(\mathcal{P},\mathcal{N},\tau)=-\mathbb{E}_{(\mathbf{a},\mathbf{b})\sim\mathcal{P}}\log\frac{e^{s(\mathbf{a},\mathbf{b})/\tau}}{e^{s(\mathbf{a},\mathbf{b})/\tau}+\sum_{(\mathbf{a},\mathbf{b}^{-})\in\mathcal{N}_{\mathbf{a}}}e^{s(\mathbf{a},\mathbf{b}^{-})/\tau}} \tag{2}\] where \(\mathrm{MC}\) stands for "multi-class". \(\mathcal{N}_{\mathbf{a}}\) denotes the set of negative pairs whose first entry is \(\mathbf{a}\); \(\mathcal{P}\) and \(s(\cdot,\cdot)\) are defined as earlier. The temperature scaling parameter \(\tau\) determines how soft the softmax is (Hinton et al., 2015). In practice it helps with the trade-off between the top-ranked classes (precision) and the rest of the classes (recall). Third, we also experimented with a regression loss, but it does not work as well as the NCE and MC losses.

**Self-Supervised Training Pair Construction** In order to learn useful representations, we need to choose appropriate distributions of positive pairs \(\mathcal{P}\) and negative pairs \(\mathcal{N}\) for contrastive learning. In CSP, we use three sampling methods to obtain positive and negative pairs: in-batch negative sampling (indicated as \(B\)), random negative location sampling (indicated as \(L\)), and SimCSE-based sampling (indicated as \(D\)). Figure 3 illustrates how we use these three methods to do the positive and negative sampling. Each of them includes methods to sample both the positive and negative pairs, so that one contrastive loss component can be formed based on each of them. Some of them share the same positive sampling method, such as \(B\) and \(L\). So we summarize the positive and negative sampling methods below. Given an unlabeled location-image pair \((\mathbf{x}_{i},\mathbf{I}_{i})\) from a mini-batch \(\mathbb{X}_{(N)}=\{(\mathbf{x}_{1},\mathbf{I}_{1}),(\mathbf{x}_{2},\mathbf{I}_{2}),...,(\mathbf{x}_{N},\mathbf{I}_{N})\}\subseteq\mathbb{X}\), where \(\mathbb{X}\) is a geo-tagged but unlabeled image set, we use the following positive and negative instances:

* **Batch positives** \(\mathcal{P}^{X}=\{(e(\mathbf{x}_{i}),f(\mathbf{I}_{i}))|i\in\{1,2,...,N\}\}\), the embeddings of the true location-image pairs in the mini-batch; the red boxes in both Figure 3a and 3b.
* **In-batch negatives** \(\mathcal{N}^{B}=\{(e(\mathbf{x}_{i}),f(\mathbf{I}_{j}))|i\neq j\}\), which pair a location embedding with the image embedding of a different sample from the same mini-batch; all gray boxes (non-diagonal elements) in Figure 3a.
* **Sampled negative locations** \(\mathcal{N}^{L}=\bigcup_{i}\mathcal{N}^{L}_{i}\), where \(\mathcal{N}^{L}_{i}=\{(e(\mathbf{x}^{-}_{i,j}),\,f(\mathbf{I}_{i}))|j\in\{1,2,...,C\}\}\) indicates \(C\) negative pairs for \(\mathbf{I}_{i}\). Note that \(\mathbf{x}^{-}_{i,j}\) is sampled uniformly from the surface of the sphere at pre-training time, so the samples are different at each training epoch. \(\mathcal{N}^{L}\) corresponds to all gray boxes in Figure 3b. This is a common negative location sampling practice used by Mac Aodha et al. (2019); Mai et al. (2020).
* **Dropout positives** \(\mathcal{P}^{D}=\{(e(\mathbf{x}_{i}),e^{\prime}(\mathbf{x}_{i}))\}\), where, given two towers \(e()\) and \(e^{\prime}()\) of the same location encoder with two independently sampled dropout masks, we pass the same input \(\mathbf{x}_{i}\) to both and obtain two embeddings \((e(\mathbf{x}_{i}),e^{\prime}(\mathbf{x}_{i}))\) as a "positive pair". This is a data augmentation strategy (the so-called SimCSE), which has been very successful for sentence embeddings (Gao et al., 2021). This corresponds to the red boxes in Figure 3c.
* **Dropout negatives** \(\mathcal{N}^{D}=\bigcup_{i}\mathcal{N}^{D}_{i}\), where \(\mathcal{N}^{D}_{i}=\{(e(\mathbf{x}_{i}),\,e^{\prime}(\mathbf{x}_{j}))|j\in\{1,2,...,N\}\setminus\{i\}\}\). \(\mathcal{N}^{D}\) indicates the location embeddings from the two location encoder towers based on different locations from the same mini-batch. It corresponds to the gray boxes in Figure 3c.

As shown in Figure 3, those five positive/negative sampling sets amount to three different sampling methods:

* **In-batch negative sampling (\(B\))** (Zhang et al., 2020; Radford et al., 2021; Carlsson et al., 2021; Karpukhin et al., 2020) uses \(\mathcal{P}^{X},\mathcal{N}^{B}\) as positive and negative pairs.
* **Random negative location sampling (\(L\))** (Mac Aodha et al., 2019; Mai et al., 2020; Ma et al., 2020) uses \(\mathcal{P}^{X},\mathcal{N}^{L}\) as positive and negative pairs.
* **SimCSE-based sampling (\(D\))** (Gao et al., 2021) uses \(\mathcal{P}^{D},\mathcal{N}^{D}\) as positive and negative pairs.

Please refer to Appendix A.3 for a detailed description. Each corresponds to one loss component in our contrastive learning loss function, using either the \(\mathrm{NCE}\) or the \(\mathrm{MC}\) objective shown in Equations 1 and 2. So we define two versions of the contrastive loss, which both have three components. _The self-supervised binary (\(\mathrm{NCE}\)) loss_ \(l_{\mathrm{NCE}}\) is defined as \[\begin{split} l_{\mathrm{NCE}}(\mathbb{X})&=l^{B}_{ \mathrm{NCE}}(\mathbb{X})+\beta_{1}l^{L}_{\mathrm{NCE}}(\mathbb{X})+\beta_{2}l^{ D}_{\mathrm{NCE}}(\mathbb{X})\\ &=l_{\mathrm{NCE}}(\mathcal{P}^{X},\mathcal{N}^{B})+\beta_{1}l_{ \mathrm{NCE}}(\emptyset,\mathcal{N}^{L})\\ &+\beta_{2}l_{\mathrm{NCE}}(\mathcal{P}^{D},\mathcal{N}^{D})\end{split} \tag{3}\] where \(\beta_{1}\) and \(\beta_{2}\) control the contributions of the last two loss components. Note that here we use the empty set as the positive pairs in \(l^{L}_{\mathrm{NCE}}(\mathbb{X})\) since \(\mathcal{P}^{X}\) has already been considered in \(l^{B}_{\mathrm{NCE}}(\mathbb{X})\).

Figure 3: Three different ways to form positive and negative training pairs (red and gray boxes respectively).

_The self-supervised multi-class (\(\mathrm{MC}\)) loss_ \(l_{\mathrm{MC}}\) is defined as \[\begin{split} l_{\mathrm{MC}}(\mathbb{X})=& l_{\mathrm{MC}}^{B}(\mathbb{X})+\alpha_{1}l_{\mathrm{MC}}^{L}( \mathbb{X})+\alpha_{2}l_{\mathrm{MC}}^{D}(\mathbb{X})\\ =& l_{\mathrm{MC}}(\mathcal{P}^{X},\mathcal{N}^{B},\tau_{0})+\alpha_ {1}l_{\mathrm{MC}}(\mathcal{P}^{X},\mathcal{N}^{L},\tau_{1})\\ &+\alpha_{2}l_{\mathrm{MC}}(\mathcal{P}^{D},\mathcal{N}^{D},\tau_{ 2})\end{split} \tag{4}\] where \(\alpha_{1}\) and \(\alpha_{2}\) are hyper-parameters. Although \(l_{\mathrm{MC}}^{B}(\mathbb{X})\) and \(l_{\mathrm{MC}}^{L}(\mathbb{X})\) use the same positive pairs \(\mathcal{P}^{X}\), these pairs are embedded in the softmax function, so we need to use \(\mathcal{P}^{X}\) in both loss components.

A naive contrastive pre-training for this dual-encoder architecture would be to jointly train both encoders from scratch, as CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) do for the image and text encoders. However, from-scratch training is problematic in CSP. Unlike CLIP and ALIGN's dual-encoder frameworks, in which the text and image encoders have roughly the same number of trainable parameters, the image encoder \(f()\) here has about 100 times more trainable parameters than the location encoder \(e()\).
For example, the InceptionV3 image encoder we used for the iNat2018 dataset has 41.8 million trainable parameters, while the Space2Vec location encoder we used on both the iNat2018 and fMoW datasets has only 0.4 million trainable parameters. Jointly training both encoders from scratch would yield an overfitting issue for the location encoder and an underfitting issue for the image encoder. Moreover, in the text-image pre-training literature, LiT (Zhai et al., 2021) also reported that locking the image encoder during pre-training leads to a significant performance improvement. So we follow the practice of LiT (Zhai et al., 2021) and utilize a pre-trained image network \(\mathbb{F}^{*}()\), which we lock during Contrastive Spatial Pre-Training. The pre-trained image network \(\mathbb{F}^{*}()\) should not see the current image labels during the pre-training stage. In other words, we first do image encoder pre-training, as shown in the orange box of Figure 1(c). Then we lock \(f()\) and use it to pre-train \(e()\), as shown in the red box of Figure 1(c). During CSP, only the image projection layer \(\mathbf{W}()\) is trained in the image encoder part.

### Supervised Fine-Tuning

After Contrastive Spatial Pre-Training, we follow the practice of Chu et al. (2019); Mac Aodha et al. (2019); Mai et al. (2022) and fine-tune the image encoder \(f()\) and the location encoder \(e()\) separately on a small labeled dataset \(\overline{\mathbb{X}}=\{(\mathbf{x},\mathbf{I},y)\}\) to test performance in a few-shot learning setting. The supervised fine-tuning stage corresponds to the green and blue boxes in Figure 1(c). The predictions of the two encoders are combined at the inference stage, as Mac Aodha et al. (2019); Mai et al. (2022) did.

**Image Encoder Fine-Tuning** We drop the projection layer \(\mathbf{W}()\) and use a classification head \(g()\) to process the image feature vector \(\mathbb{F}(\mathbf{I})\) into logits over image labels, i.e., \(g(\mathbb{F}(\mathbf{I}))\in\mathbb{R}^{Q}\), where \(Q\) is the total number of classes. We fine-tune \(g()\) with a cross-entropy loss. This process corresponds to the green box in Figure 1(c). Please refer to Appendix A.2.1 for a detailed description of \(f()\) fine-tuning.

**Location Encoder Fine-Tuning** As shown in the blue box of Figure 1(c), we use the image labels in the training objective for location encoder fine-tuning. Following Mac Aodha et al. (2019), we use a _presence-absence loss_ function, which converts the multi-class labels into binary multi-labels. A class embedding matrix \(\mathbf{T}\in\mathbb{R}^{d\times Q}\) is used to train the location encoder in a supervised manner, where \(\mathbf{T}_{:,y}\in\mathbb{R}^{d}\) indicates the class embedding of the \(y\)th class. Given a set of training samples \(\overline{\mathbb{X}}=\{(\mathbf{x},\mathbf{I},y)\}\), where \(y\) indicates the class label, the loss function \(l^{sup}(\overline{\mathbb{X}})\) is defined as: \[l^{sup}(\overline{\mathbb{X}})=\beta l_{\mathrm{NCE}}(\mathcal{P}^{y},\emptyset )+l_{\mathrm{NCE}}(\emptyset,\mathcal{N}^{y}\cup\mathcal{N}^{R}) \tag{5}\] Here \(\beta\) is a hyperparameter for the weight of the positive samples. The following positive and negative samples are used:

* **Labeled positives** \(\mathcal{P}^{y}=\{(e(\mathbf{x}),\mathbf{T}_{:,y})|(\mathbf{x},y)\in\overline{ \mathbb{X}}\}\).
* **Labeled negatives** \(\mathcal{N}^{y}=\{(e(\mathbf{x}),\mathbf{T}_{:,y_{j}})|(\mathbf{x},y)\in \overline{\mathbb{X}},y_{j}\in\{1..Q\}\setminus\{y\}\}\).
* **Sampled negative locations** \(\mathcal{N}^{R}=\{(e(\mathbf{x}^{-}),\mathbf{T}_{:,y_{j}})|(\mathbf{x},y)\in \overline{\mathbb{X}},\;y_{j}\in\{1..Q\}\}\), where \(\mathbf{x}^{-}\) is a location sampled uniformly from the surface of the sphere for each example \(\mathbf{x}\).

### Model Inference

At inference time, we combine the predicted logits of the fine-tuned \(e()\) and \(f()\) to give the final prediction, as shown in the purple box of Figure 1(c). Given a location-image pair \((\mathbf{x},\mathbf{I})\), we estimate which category \(y\) it belongs to by \(P(y|\mathbf{I},\mathbf{x})\). Following Mac Aodha et al. (2019), if we assume \(\mathbf{I}\) and \(\mathbf{x}\) are conditionally independent given \(y\), then based on Bayes' theorem we have \(P(y|\mathbf{I},\mathbf{x})\propto P(y|\mathbf{x})P(y|\mathbf{I})\). Here, \(P(y|\mathbf{I})\) can be estimated from the logits of \(g(\mathbb{F}(\mathbf{I}))\) at the \(y\)th class. For \(P(y|\mathbf{x})\), we have \(P(y|\mathbf{x})\propto\sigma(e(\mathbf{x})\mathbf{T}_{:,y})\), where \(\sigma(\cdot)\) is a sigmoid activation function; a minimal sketch of this combination step is given at the end of this section.

## 4 Experiments

In this work, we study the effectiveness of CSP on two geo-aware image classification tasks - species fine-grained recognition and satellite image classification. We are particularly interested in how the dual-encoder architecture performs in various _few-shot learning_ settings after CSP. For each task, three datasets are used to pre-train, fine-tune, and evaluate our CSP models: \(\mathbb{X}_{train}\) is a set of unlabeled location-image pairs we use for pre-training; \(\overline{\mathbb{X}}_{train}\) is a set of labeled location-image-class tuples we use for fine-tuning, where the size of \(\mathbb{X}_{train}\) is much larger than that of \(\overline{\mathbb{X}}_{train}\), i.e., \(|\mathbb{X}_{train}|\gg|\overline{\mathbb{X}}_{train}|\); and \(\overline{\mathbb{X}}_{val}\) is a set of labeled location-image-class tuples we use for evaluation, which cannot be seen during fine-tuning.

### Models and Baselines

In this work, we consider the following baselines:

* **Img. Only** fine-tunes the image network \(g(\mathbb{F}())\) in a supervised manner on the fine-tuning dataset \(\overline{\mathbb{X}}_{train}\) (see Figure 2b). We use InceptionV3 (Szegedy et al., 2016) and ResNet50 (Ayush et al., 2021) as the image encoders on iNat2018 and fMoW respectively.
* **Sup. Only** uses the dual-encoder architecture but is trained only in a supervised manner on \(\overline{\mathbb{X}}_{train}\) (see Figure 2a). We consider using \(wrap\) (Mac Aodha et al., 2019) and \(grid\) (Mai et al., 2020) as the location encoder, which yields two models: **Sup. Only (wrap)** and **Sup. Only (grid)**.
* **MSE** follows the same setup as CSP (see Figure 2c) except that during location encoder pre-training, it directly feeds the location embedding \(e(\mathbf{x})\) into a linear layer to regress the image feature vector \(\mathbb{F}(\mathbf{I})\) with a Mean Square Error (MSE) loss. MSE uses \(grid\) as the location encoder.

We compare these baselines with different versions of CSP. All CSP models have the same training procedure and use \(grid\) as their location encoders. The only difference is the contrastive loss function they use:

* **CSP-NCE-BLD** uses the \(\mathrm{NCE}\) loss with all three loss components, as shown in Equation 3.
* **CSP-MC-BLD** uses the \(\mathrm{MC}\) loss with all three loss components, as shown in Equation 4.
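As promised above, the inference rule of Section "Model Inference" can be written in a few lines. The sketch below is an illustration under assumed array names (img_logits, loc_scores) and an explicit renormalization; it is not our exact implementation.

```python
import numpy as np

def combine_predictions(img_logits, loc_scores):
    """Sketch of P(y|I, x) ∝ P(y|x) P(y|I).

    img_logits: (Q,) logits of the image classification head g(F(I)).
    loc_scores: (Q,) scores e(x) @ T[:, y] for every class y.
    """
    p_img = np.exp(img_logits - img_logits.max())
    p_img /= p_img.sum()                       # P(y|I) via a softmax
    p_loc = 1.0 / (1.0 + np.exp(-loc_scores))  # P(y|x) ∝ sigmoid(e(x) T[:,y])
    joint = p_img * p_loc                      # unnormalized P(y|I, x)
    return joint / joint.sum()

# final prediction: int(np.argmax(combine_predictions(img_logits, loc_scores)))
```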
### Fine-Grained Species Recognition

We use the iNat2018 dataset4 (Van Horn et al., 2018) as a representative dataset to study the effectiveness of CSP on species fine-grained recognition. iNat2018 is a large-scale species classification dataset with 8142 species categories. There are 437,513 training images, of which 436,063 have geo-locations. On average, each class has 53.6 training samples. We use all location-image pairs \(\{(\mathbf{x}_{i},\mathbf{I}_{i})\}\) in the iNat2018 training set as the unlabeled geo-tagged dataset \(\mathbb{X}_{train}\) for our CSP. To create a few-shot learning task, we perform stratified sampling on the training dataset to select \(\lambda\%\) of the training samples, which constitute our few-shot supervised fine-tuning dataset \(\overline{\mathbb{X}}_{train}=\{(\mathbf{x},\mathbf{I},y)\}\). The iNat2018 validation dataset is used for model evaluation to make our results comparable with previous work (Mac Aodha et al., 2019; Mai et al., 2020; 2022). We use an InceptionV3 network pre-trained on ImageNet as the image feature extractor \(\mathbb{F}^{*}()\) for the iNat2018 dataset. Footnote 4: [https://github.com/visipedia/inat_comp/tree/master/2018](https://github.com/visipedia/inat_comp/tree/master/2018)

Table 1 compares the Top1 accuracy of different training strategies on the iNat2018 validation dataset with different \(\lambda\%\). From Table 1, we can see that:

* **Img. Only (ImageNet)** yields the lowest performance in all \(\lambda\%\) settings, which indicates that considering location information is beneficial in all settings.
* **Sup. Only (grid)** outperforms Sup. Only (wrap) across all settings, indicating that multi-scale location encoders (e.g., \(grid\)) are effective for spatial distribution modeling. This confirms the results of Mai et al. (2020).
* Comparing the last three models, we see the same general pattern in all \(\lambda\%\) settings: CSP-MC-BLD > CSP-NCE-BLD > MSE. Since these three models differ only in the location encoder pre-training strategy (the red box in Figure 2c), this indicates that CSP-MC-BLD is the best location encoder pre-training objective.
* When \(\lambda\%=5\%,10\%,20\%\), compared with Sup. Only, CSP-MC-BLD has relative performance improvements of 10.4%, 34.3%, and 16.6% respectively, which indicates the effectiveness of Contrastive Spatial Pre-Training.
* When \(\lambda\%=100\%\), CSP-MC-BLD still yields better results than Sup. Only (grid). This indicates that our CSP is beneficial even in a fully supervised setting.

To understand the effectiveness of each loss component in CSP (see Figure 3 and Equation 4), we conduct an ablation study on the iNat2018 dataset with different \(\lambda\) and report the results in Table 2. We can see that each component contributes to the final model performance; deleting any of them leads to a performance drop. To understand the effect of the location embedding dimension \(d\) on model performance, we conduct an additional ablation study of \(d\) on the iNat2018 dataset with different \(\lambda\) and report the results in Table 3. We can see that in the few-shot settings \(\lambda\%=5\%,10\%,20\%\), models with \(d=256\) achieve the best performance, while in the fully supervised setting the model with \(d=1024\) performs best. Last but not least, we also explore whether our CSP is effective with different image encoders. We conduct an ablation study of different \(\mathbb{F}()\) on the iNat2018 dataset with \(\lambda\%=5\%\). Table 4 summarizes the results.
We can see that no matter which \(\mathbb{F}()\) we use, InceptionV3 or ViT, our CSP-MC-BLD consistently yields the best results, and ViT improves the model performance substantially. To investigate how well CSP learns location representations, we sample a set of regular grid points over the whole world and compute their embeddings with the location encoder \(e()\). The resulting location embeddings are hierarchically clustered. The results are shown in Figure 4. Figures 4a and 4c show the clustering results after _CSP-MC-BLD_ or _CSP-NCE-BLD_ pre-training, while Figures 4b and 4d show the clustering results after supervised fine-tuning of the respective models. Some interesting clustering patterns emerge in Figures 4a and 4c. For example, the clustering patterns in Figure 4a show regional effects that are somewhat similar to the Köppen climate classification5. This makes sense, since pre-training with location-image pairs learns the spatial distribution of species and their environment, which is highly related to climate zones. The clusters in the US are smaller since the iNat2018 training dataset has much more data in the US (see Figure 8a in Appendix A.7). Footnote 5: [https://en.wikipedia.org/wiki/K%C3%B6ppen_climate_classification](https://en.wikipedia.org/wiki/K%C3%B6ppen_climate_classification)

### Satellite Image Classification

A similar procedure is carried out on the fMoW6 dataset (Christie et al., 2018), which has 62 different geospatial object classes and 363,570 location-image pairs. We use all location-image pairs as \(\mathbb{X}_{train}\), and use stratified sampling to select \(\lambda\%\) labeled location-image pairs from the training dataset as \(\overline{\mathbb{X}}_{train}\). We use the same training and evaluation protocol as in Section 4.2. The ResNet50 checkpoint after Geo-SSL's MoCo-V2+TP self-supervised pre-training on the unlabeled fMoW dataset (Ayush et al., 2021) is used as the pre-trained image feature extractor \(\mathbb{F}^{*}()\) for all models. Footnote 6: [https://github.com/fMoW/dataset](https://github.com/fMoW/dataset)

Table 5 compares the evaluation results (Top1 accuracy) among different models and training strategies on the fMoW validation dataset after fine-tuning on \(\lambda\%\) fMoW training samples, where \(\lambda\%\in\{5\%,10\%,20\%,100\%\}\). Table 5 shows similar patterns to those of Table 1:

* Img. Only (Geo-SSL) yields better results than Img. Only (Tile2Vec) across different \(\lambda\%\). But both Img. Only models still give lower performance than all other settings at all \(\lambda\%\). This confirms the importance of jointly learning location representations. However, Img. Only (Geo-SSL) gives a relatively good performance (65.22%) even when \(\lambda\%=5\%\). That is because we use the Geo-SSL MoCo-V2+TP checkpoint, which is directly pre-trained on the unlabeled fMoW training dataset. In contrast, in Table 1, Img. Only used an InceptionV3 model pre-trained on ImageNet, not on the iNat2018 training dataset.
* Similar to the results in Table 1, Sup. Only (grid) outperforms Sup. Only (wrap) in all settings, which shows the effectiveness of \(grid\) over \(wrap\).
* CSP-MC-BLD outperforms all other models and yields superior or comparable results to CSP-NCE-BLD. However, the margins are relatively small compared with those of Table 1. The performance improvements mainly come from the location encoder's ability to model spatial distributions.
Table 1: The Top1 accuracy of different models and training strategies on the iNat2018 validation dataset for the species fine-grained recognition task with different training data ratios, where \(\lambda\%=100\%\) indicates the fully supervised setting. We run each model 5 times and report the standard deviation in "()".

Table 2: Ablation studies on different CSP-MC-* pretraining objectives on the iNat2018 validation dataset with different \(\lambda\%\). Here, CSP-MC-BLD indicates CSP training on the \(\mathrm{MC}\) loss with all three components; CSP-MC-BL deletes the SimCSE \(l_{\mathrm{MC}}^{D}(\mathbb{X})\) component in Equation 4; the remaining models follow similar logic.

| Ratio \(\lambda\%\) | 5% | 10% | 20% | 100% |
| --- | --- | --- | --- | --- |
| CSP-MC-BLD | **9.01** | **19.68** | **29.61** | **73.79** |
| CSP-MC-BD | 8.63 | 19.60 | 29.52 | 73.15 |
| CSP-MC-BL | 8.40 | 17.17 | 26.63 | 73.36 |
| CSP-MC-B | 8.16 | 16.58 | 25.89 | 73.10 |

Table 3: Ablation studies on different location embedding dimensions \(d\) on the iNat2018 validation dataset with different \(\lambda\%\).

| Model | \(d\) | 5% | 10% | 20% | 100% |
| --- | --- | --- | --- | --- | --- |
| CSP-MC-BLD | 64 | 7.64 | 16.57 | 25.31 | 71.76 |
| CSP-MC-BLD | 128 | 8.5 | 19.35 | 29.11 | 72.89 |
| CSP-MC-BLD | 256 | **9.01** | **19.68** | **29.61** | 73.62 |
| CSP-MC-BLD | 512 | 8.97 | 18.8 | 27.96 | 73.67 |
| CSP-MC-BLD | 1024 | 8.78 | 17.94 | 26.65 | **73.79** |

Table 4: Ablation studies on different image neural networks \(\mathbb{F}()\) (InceptionV3 (Szegedy et al., 2016) and ViT (Dosovitskiy et al., 2021)) on the iNat2018 validation dataset with \(\lambda\%=5\%\).

Table 5: The Top1 accuracy of different models and training strategies on the fMoW validation dataset with different training data ratios; standard deviations are reported in "()".

Compared with species distributions, the geographic distributions of land use types are very complex and hard to differentiate from each other. For example, factories and multi-unit residential buildings are both man-made geographic entities. Both of their distributions are correlated with population distributions and are hard to differentiate. Moreover, they sometimes also show similar appearances in remote sensing images. So it is rather hard for a location encoder to differentiate one land use type from another based on its geographic distribution. We think a more powerful location encoding is needed to differentiate them, but this is beyond the scope of this paper. Similar to the iNat2018 dataset, for the fMoW dataset the embedding clustering results of the pre-trained and fine-tuned location encoders are visualized in Figure 5. We can see that more fine-grained clusters are generated in the US after _CSP-MC-BLD/CSP-NCE-BLD_ pre-training, while the representation is updated to be more detailed after location encoder fine-tuning. Compared with Figure 4, the regional effect is less clear, which also shows the difficulty of modeling the spatial distributions of land use types.

## 5 Conclusion and Discussion

In this work, we proposed Contrastive Spatial Pre-Training (CSP), a self-supervised framework to learn the alignment between locations and images based on large numbers of unlabeled geo-tagged images. Similar to recent popular image-text pre-training models such as CLIP and ALIGN, CSP utilizes a dual-encoder architecture to separately encode the location and the image. The resulting location and image representations are contrasted against each other to form a contrastive pre-training objective. To validate the effectiveness of CSP, we conduct experiments on two geo-aware image classification tasks: species fine-grained recognition on the iNat2018 dataset and satellite image classification on the fMoW dataset. Experimental results show that CSP can improve model performance on both datasets under different labeled training data sampling ratios. On the iNat2018 dataset, CSP can significantly boost model performance, with 10-34% relative improvement in several few-shot settings (\(\lambda\%=\{5\%,10\%,20\%\}\)), and it is still able to improve model performance when \(\lambda\%=100\%\). To the best of our knowledge, our work is the first to show the great potential of learning the geospatial-visual alignment for model pre-training.
Although we only investigate the effectiveness of our CSP framework on location-image pre-training in this work, CSP can easily be extended to learn the alignment between location (or time) and data in other modalities, such as text, for different downstream tasks such as geo-aware text classification. We leave this as future work. Moreover, in this work, we only use existing geo-tagged datasets (e.g., iNat2018 and fMoW) as a proxy for unlabeled location-image pairs. In the future, we would like to construct larger-scale unlabeled geo-tagged image datasets based on publicly available satellite images, with which we expect to see a larger performance improvement. In this work, we only use single geo-coordinates for geospatial-visual contrastive representation learning. In the future, we can explore more complex geometries such as polylines (Xu et al., 2018) and polygons (Mai et al., 2023b). The proposed CSP framework can be seen as a step towards multimodal foundation models for geospatial artificial intelligence (Mai et al., 2022a; 2023a).

Figure 4: Location embedding before and after supervised fine-tuning for iNat2018.

Figure 5: Location embedding before and after supervised fine-tuning for fMoW.

## 6 Ethics Statements

Our code and the datasets used are available from [https://gengchenmai.github.io/csp-website/](https://gengchenmai.github.io/csp-website/). We do not find any negative societal impact of our research.

## Acknowledgements

This research is supported in part by ODNI, IARPA (2021-2011000004), NSF (#1651565), CZ Biohub, and Stanford HAI. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
2306.11883
Invariant systems of weighted representatives
It is known that, if removing some $n$ edges from a graph $\Gamma$ destroys all subgraphs isomorphic to a given finite graph $K$, then all subgraphs isomorphic to $K$ can be destroyed by removing at most $|E(K)|\cdot n$ edges, which form a set invariant with respect to all automorphisms of $\Gamma$. We construct the first examples of (connected) graphs $K$ for which this estimate is not sharp. Our arguments are based on a ``weighted analogue'' of an earlier known estimate for the cost of symmetry.
Anton A. Klyachko, Mikhail S. Terekhov
2023-06-20T20:55:31Z
http://arxiv.org/abs/2306.11883v1
# UDC 519.157.1+519.175.1+519.176 ###### Abstract It is known that, if removing some \(n\) edges from a graph \(\Gamma\) destroys all subgraphs isomorphic to a given finite graph \(K\), then all subgraphs isomorphic to \(K\) can be destroyed by removing at most \(|E(K)|\cdot n\) edges, which form a set invariant with respect to all automorphisms of \(\Gamma\). We construct the first examples of (connected) graphs \(K\) for which this estimate is not sharp. Our arguments are based on a "weighted analogue" of an earlier known estimate for the cost of symmetry.

## 1 Introduction

Suppose that we can choose \(n\) vertices of a graph in such a way that each \(100\)-cycle contains a chosen vertex. How many vertices should be chosen if we want to make the choice _fair_, i.e., we want the set of chosen vertices to be invariant with respect to all automorphisms of the graph? It is known that at most \(100n\) vertices suffice (and this estimate is sharp). Moreover, the following general fact was obtained in [KL21]. **Theorem KL** [KL21].: Suppose that a group \(G\) acts on a set \(U\), \(\mathcal{F}\) is a \(G\)-invariant family of finite subsets of \(U\) of uniformly bounded cardinality, and \(X\subseteq U\) is a finite system of representatives for this family (i.e., \(X\cap F\neq\varnothing\) for any \(F\in\mathcal{F}\)). Then there exists a \(G\)-invariant system of representatives \(Y\) such that \(|Y|\leqslant|X|\cdot\max_{F\in\mathcal{F}}|F|\). This fact can be called a combinatorial analogue of the (algebraic) Khukhro-Makarenko theorem [KhM07a] (see also [KhM07b], [KIMe09], and [KIM15]). A survey of algebraic Khukhro-Makarenko type theorems can be found in [KL21]. First, we prove the following generalisation of Theorem KL (in Section 3). **Theorem on symmetrisation of systems of multiple representatives.** Suppose that a group \(G\) acts on a set \(U\), \(\mathcal{F}\) is a \(G\)-invariant family of finite subsets of \(U\) of uniformly bounded cardinality, and \(X\subseteq U\) is a finite system of \(k\)-multiple representatives for this family (i.e. \(|X\cap F|\geqslant k\) for any \(F\in\mathcal{F}\)). Then there exists a \(G\)-invariant system of \(k\)-multiple representatives \(Y\) such that \(k\cdot|Y|\leqslant|X|\cdot\max_{F\in\mathcal{F}}|F|\). Moreover, as \(Y\), we can take the union of all \(G\)-orbits \(G\circ u\) such that \(|(G\circ u)\cap X|\cdot\max_{F\in\mathcal{F}}|F|\geqslant|G\circ u|\cdot k\). In particular, \(k\cdot|Y|\leqslant|X\cap Y|\cdot\max_{F\in\mathcal{F}}|F|\). For example, if a graph \(\Gamma\) possesses a finite set of vertices \(X\) containing at least two vertices from each \(100\)-cycle, then we can choose an \((\operatorname{Aut}\Gamma)\)-invariant set of vertices \(Y\) containing at least two vertices from each \(100\)-cycle and such that \(|Y|\leqslant 50|X|\). In Section 4, we prove an even more general version of Theorem KL, which makes it possible, e.g., to solve the following "applied" problem: _to form a student council, we have to choose, from every group of ten students in which the first is respected by the nine others,_ _- either this respected student (the first one),_ _- or at least three of the respecting students (from the nine remaining ones)._ _We want to minimise the cardinality of the student council. If we also wish to choose fairly, then what is the cost of fairness?_ There arises a directed graph with weights.
In this case, this is the ten-vertex star; the central vertex has weight one, and the nine remaining vertices have weight \(\frac{1}{3}\). There is also a (large) directed graph \(\Gamma\) (with no weights) describing who respects whom. We have to choose some vertices of \(\Gamma\) in such a way that each ten-vertex star is represented with total weight at least one. If we do not care about fairness, then we can choose a student council consisting of, say, forty persons. If we also want to be fair (i.e., we want the student council to be invariant with respect to \(\operatorname{Aut}\Gamma\)), how many students must be chosen in the worst case? The answer is \(\leqslant 40\cdot\left(1+9\cdot\frac{1}{3}\right)=160\), because of the following natural generalisation of Theorem KL (which is proven in Section 4).
2306.00756
Kinetic Models of Wealth Distribution Having Extreme Inequality: Numerical Study of Their Stability Against Random Exchanges
In view of some persistent recent reports on a singular kind of growth of the world wealth inequality, where a finite (often handful) number of people tend to possess more than the combined wealth of 50\% of the planet's population, we explore here whether kinetic exchange models of the market can ever capture such features, where a significant fraction of wealth concentrates in the hands of a countable few as the market size $N$ tends to infinity. One already existing example of such a kinetic exchange model is the Chakraborti or Yard-Sale model, where (in the absence of tax redistribution etc.) the entire wealth condenses in the hands of one agent (for any value of $N$), and the market dynamics stops. With tax redistribution etc., its steady-state dynamics have been shown to have remarkable applicability in many cases of our extremely unequal world. We show here that another kinetic exchange model (called here the Banerjee model) has intriguing intrinsic dynamics, by which only ten rich traders or agents possess about 99.98\% of the total wealth in the steady state (without any external manipulation such as taxes) for any large value of $N$. We discuss in some detail the statistical features of this model using Monte Carlo simulations. We also show that, if the traders each have a non-vanishing probability $f$ of following random exchanges, these condensations of wealth (100\% in the hands of one agent in the Chakraborti model, or about 99.98\% in the hands of ten agents in the Banerjee model) disappear in the large $N$ limit. We also see that, due to the built-in possibility of random exchange dynamics in the earlier proposed Goswami-Sen model, where the exchange probability decreases with an inverse power of the wealth difference of the pair of traders, one does not see any wealth condensation phenomenon.
Asim Ghosh, Suchismita Banerjee, Sanchari Goswami, Manipushpak Mitra, Bikas K. Chakrabarti
2023-06-01T14:51:15Z
http://arxiv.org/abs/2306.00756v2
# Kinetic Models of Wealth Distribution Having Extreme Inequality: Numerical Study of Their Stability Against Random Exchanges ###### Abstract In view of some persistent recent reports on a singular kind of growth of the world wealth inequality, where a finite (often handful) number of people tend to possess more than the combined wealth of 50% of the planet's population, we explore here whether kinetic exchange models of the market can ever capture such features, where a significant fraction of wealth concentrates in the hands of a countable few as the market size \(N\) tends to infinity. One already existing example of such a kinetic exchange model is the Chakraborti or Yard-Sale model, where (in the absence of tax redistribution etc.) the entire wealth condenses in the hands of one agent (for any value of \(N\)), and the market dynamics stops. With tax redistribution etc., its steady-state dynamics have been shown to have remarkable applicability in many cases of our extremely unequal world. We show here that another kinetic exchange model (called here the Banerjee model) has intriguing intrinsic dynamics, by which only ten rich traders or agents possess about 99.98% of the total wealth in the steady state (without any external manipulation such as taxes) for any large value of \(N\). We discuss in some detail the statistical features of this model using Monte Carlo simulations. We also show that, if the traders each have a non-vanishing probability \(f\) of following random exchanges, these condensations of wealth (100% in the hands of one agent in the Chakraborti model, or about 99.98% in the hands of ten agents in the Banerjee model) disappear in the large \(N\) limit. We also see that, due to the built-in possibility of random exchange dynamics in the earlier proposed Goswami-Sen model, where the exchange probability decreases with an inverse power of the wealth difference of the pair of traders, one does not see any wealth condensation phenomenon. These aspects of the statistics of these intriguing models are discussed here.

## I Introduction

The first successful theory of classical many-body physics or classical condensed matter systems has been the roughly one-and-a-quarter-century-old kinetic theory of the (classical) ideal gas, composed of an Avogadro number (about \(10^{23}\)) of constituent atoms or molecules (each following Newtonian dynamics). It still remains a robust, versatile, and extremely successful foundation of classical many-body physics. Social systems, economic markets in particular, are intrinsically many-body dynamical systems composed of a smaller number of constituents (of order \(10^{2}\) for a village market up to order \(10^{10}\) for the global market). One Robinson Crusoe on an island cannot develop a market, or a society for that matter; markets are intrinsically many-body systems. It is no wonder, therefore, that kinetic exchange models of money or wealth were conjectured early on (e.g., by Saha and Srivastava [1] in 1931, Mandelbrot [2] in 1960) and resurrected recently (e.g., by Chakrabarti and Marjit [3] in 1995, Dragulescu and Yakovenko [4] in 2000, Chakraborti and Chakrabarti [5] in 2000, Chatterjee, Chakrabarti and Manna [6] in 2004). The kinetic exchange models of trades and their statistics have been quite successful in capturing several realistic features of wealth distributions among agents in societies (see e.g., [7; 8]). The beneficial effects of the agents' saving propensity in reducing social inequality have been studied extensively [5; 6; 8].
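Since DY-type random exchange serves as the reference dynamics throughout this paper, a minimal Monte Carlo sketch of one sweep of it is given below. Only the uniform random re-split of a randomly chosen pair's total money is the defining rule of the model [4]; the function and variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dy_sweep(m):
    """One Monte Carlo sweep (N pairwise trades) of the
    Dragulescu-Yakovenko model: a randomly chosen pair pools its
    money and splits it in a random fraction eps : (1 - eps)."""
    N = len(m)
    for _ in range(N):
        i, j = rng.choice(N, size=2, replace=False)
        eps = rng.random()
        total = m[i] + m[j]
        m[i], m[j] = eps * total, (1.0 - eps) * total

m = np.ones(100)          # M = N: every trader starts with unit money
for t in range(10000):
    dy_sweep(m)           # steady state: exponentially decaying P(m)
```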
The choice of the poorest trader as mandatory in each trade (the other trade partner being randomly chosen) leads to a remarkable self-organized poverty line, below which none remains in the steady state (see e.g., [9; 10; 11; 12]). This model was inspired by some crucial observations by economists (see e.g., [10]) and suggests built-in (self-organized) remedies for reducing social inequality. Though, it must be admitted, such intriguing self-organizing properties of the kinetic exchange models have not yet been investigated extensively. Contrarily, the recent focus has moved to the unusual rate of growth of social inequality in the post-World War II period (see e.g., [13; 14; 15; 16]), which in some countries seems to have crossed significantly above the 80-20 Pareto limit, reaching a steady state with 87% of the wealth accumulated in the hands of 13% of the people. This has indeed been argued, following an analogy with the inequality index values for the avalanche burst statistics in self-organized sand-pile models near their respective critical points, to be the natural limit in all social competitive situations where welfare mechanisms (helping those who fail to participate properly in such self-organizing dynamics) are either absent or removed (see e.g., a recent review [16]). Although the Pareto-like inequality mentioned above, where a small fraction of people (say 13%) possess a large fraction (say 87%) of the wealth, can already be devastating, an even more alarming kind of inequality has been reported recently. For example, the Oxfam Report [17] of January 2020 in Davos said "The world's 2,153 billionaires have more wealth than the 4.6 billion people who make up 60 percent of the planet's population." In other words, a handful (about \(10^{3}\)) of rich people possess more wealth than about 60% (of order \(10^{9}\)) of the planet's poorer population. This dangerous trend in the world as a whole is repeatedly mentioned in various recent reports. The Pareto-type inequalities mentioned above have long been investigated (see e.g., [6; 18]) using kinetic exchange models with non-uniform saving propensities of the traders (see e.g., [8], [19] for reviews). One may naturally ask: does kinetic exchange theory allow for models where only a handful of traders (say, about 10) possess a significant fraction (say, above 50%) of the total wealth in the model, even when its population \(N\) tends to infinity? The answer is yes. The Chakraborti model [20], widely known today as the Yard-Sale model starting with [21], has attracted a lot of attention (see e.g., [22; 23; 24]). In its barest form [20], in the Chakraborti model (called the C-model here), two randomly chosen traders at any instant participate in an exchange trade in which the richer one saves the excess over the poorer one's wealth, and the pair goes for a random exchange of the total available wealth (twice the poorer one's). The slow but inevitable attractor fixed point of the trade dynamics arrives when all the wealth ends up in the hands of just one trader, no matter how big the population (\(N\)) is. Because of the particular form of saving during any trade, whenever a trader becomes a pauper, nobody trades with them, and gradually the system condenses to the state where one trader acquires the entire wealth and the trade dynamics stop (see also [22]).
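The C-model exchange step just described can be sketched as follows; the richer partner sets aside the excess over the poorer partner's wealth, and twice the poorer partner's wealth is randomly re-split. This is an illustrative sketch of the rule in [20], not the code used for the figures below.

```python
import numpy as np

def c_model_sweep(m, rng):
    """One sweep of the Chakraborti (Yard-Sale) model: the pair
    randomly re-splits 2*min(m_i, m_j), while the richer trader
    keeps the excess |m_i - m_j| as savings."""
    N = len(m)
    for _ in range(N):
        i, j = rng.choice(N, size=2, replace=False)
        stake = 2.0 * min(m[i], m[j])    # wealth put on the table
        saved = m[i] + m[j] - stake      # excess kept by the richer one
        eps = rng.random()
        if m[i] >= m[j]:
            m[i], m[j] = saved + eps * stake, (1.0 - eps) * stake
        else:
            m[j], m[i] = saved + eps * stake, (1.0 - eps) * stake
```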
External perturbations like regular redistribution of tax collections by the central government (or any non-playing agent) can help relieve [23; 24] the condensation phenomenon, and this seems to fit well with many observed situations [23]. We will show here numerically that if each of the traders has a finite probability (\(f\)) of making Dragulescu and Yakovenko (DY) [4] type random exchanges, then for any \(f>0\) the condensation of wealth in the hands of one trader disappears and the steady-state distribution of wealth becomes exponentially decreasing, as in the DY model. In the Goswami-Sen (or GS) model [25], one considers a kinetic exchange mechanism where the interaction (trade) probability among the trade partners (\(i\) and \(j\)) decreases with their wealth difference (\(|m_{i}-m_{j}|\)) at that instant of trading (time), following a power law (\(|m_{i}-m_{j}|^{-\alpha}\)). Of course, for \(\alpha=0\), the model reduces to that of DY. Their numerical results showed that for \(\alpha\) values less than about 2.0, the steady-state wealth distribution among the traders is still DY-like (exponentially decaying with increasing wealth). For higher values (beyond 2.0) of \(\alpha\), power-law (Pareto-law) decays occur. No condensation of wealth in the hands of a finite number of traders or agents is observed, because of the inherent DY-like exchange probability in the dynamics of the model (checked by extrapolating with \(N\) the fraction of total wealth possessed in the steady state by the richest ten traders). We finally consider here a seemingly natural version of the kinetic exchange model, called here the Banerjee (B) model [26], where the intrinsic dynamics of the model lead to another extreme kind of inequality in the steady state, in the sense that precisely ten traders (out of the \(N\) traders in the market; \(N\rightarrow\infty\)) possess (99.98 \(\pm\) 0.01)% of the total wealth. These fortunate traders are not unique, and their fortune does not last long (the residence time is on average about 66 time units, with the most probable value around 25 time units, counted in units of \(N\) trades or exchanges, for any value of \(N\)); it also decreases continuously with an increasing fraction (\(f\)) of random trades or interactions. Unlike in the Chakraborti or Yard-Sale model [20; 21], where the dynamics stops after the entire wealth goes to one trader (unless perturbed externally), here the trade dynamics continue, with the total wealth circulating only among a handful (about ten) of traders in the steady state. In this model, after each trade, the traders are ordered from lowest wealth to highest, and each trader trades only with its nearest-in-wealth traders, the one richer or the one poorer than itself, with equal probability. Even if by chance the entire wealth goes to one trader, the dynamics of random exchanges does not stop in this model, as all the paupers become the only nearest neighbors (wealth-wise) of this trader and random exchanges among them occur! The process continues. Apart from the steady-state wealth distributions and the most probable wealth amounts of the top few rich traders, we will show that in this model condensation of almost the entire (99.98%) wealth occurs in the hands of 10 traders (no matter how big \(N\) is).
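A minimal sketch of the B-model dynamics just described, including the DY fraction \(f\) studied below, may be written as follows. Taking each accepted trade to be the usual random re-split of the pair's total money, and clamping the wealth rank at the two ends (the poorest and richest traders each have only one wealth-neighbor), are implementation assumptions.

```python
import numpy as np

def b_model_sweep(m, f, rng):
    """One sweep of the Banerjee model mixed with a fraction f of
    DY-type trades.  In a B-step the active trader exchanges with
    its nearest-in-wealth neighbor, richer or poorer with equal
    probability; in a DY-step the partner is chosen at random."""
    N = len(m)
    for _ in range(N):
        order = np.argsort(m)                 # ranks, poorest to richest
        k = rng.integers(N)                   # wealth rank of active trader
        if rng.random() < f:                  # DY step: random partner
            i = order[k]
            j = i
            while j == i:
                j = rng.integers(N)
        else:                                 # B step: wealth neighbor
            step = 1 if rng.random() < 0.5 else -1
            kk = min(max(k + step, 0), N - 1) # clamp at the two ends
            i, j = order[k], order[kk]
            if i == j:
                continue
        eps = rng.random()
        total = m[i] + m[j]
        m[i], m[j] = eps * total, (1.0 - eps) * total

rng = np.random.default_rng(1)
m = np.ones(100)                              # M = N
for t in range(20000):
    b_model_sweep(m, f=0.0, rng=rng)          # f = 0: pure B model
```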
We will show here again that this condensation disappears when, for a finite fraction \(f\) of the time, each of the traders goes for DY-type random exchanges, and eventually the DY-type exponentially decaying wealth distribution sets in, after a power-law region for low values of \(f\).

## II Models and Numerical Studies of their Statistics

We study here numerically the statistical features of the three kinetic exchange models introduced in the Introduction. We begin with the B (Banerjee [26]) model. Next, we consider the C (Chakraborti, or Yard-Sale) model [20; 21] and then the GS (Goswami-Sen) model [25]. In order to explore the stability of the wealth-condensation feature in these models, we study the steady-state wealth distributions \(P(m)\) in each of these models and the fraction of total wealth concentrated in the hands of a few (say ten) traders or agents (whenever meaningful), allowing each trader to have a nonvanishing probability \(f\) (the fraction of tradings or times) to go for DY (Dragulescu and Yakovenko [4]) type random exchanges. Most of the numerical (Monte Carlo) studies of the dynamics of these models are performed with four system sizes, \(N=100\), 200, 400, and 800, and at each time step \(t\) the dynamics runs over all \(N\) traders.

Figure 1: Distributions of the fraction of total wealth (\(M=N\)) ending up in the hands of the richest three traders. The error estimation is based on 10 runs. The typical errors in the distribution of wealth are seen to grow with \(N\) near the most-probable value of the wealth fraction and are indicated for \(N=800\) for all three traders. Far away from the most-probable values, the errors are less than the data point symbol sizes.

We take the total money \(M\) distributed among the agents to be equal to \(N\), and we denote the money of agent \(i\) at time \(t\) by \(m_{i}(t)\); as such, \(M=\sum_{i}m_{i}(t)=N\). When the steady state is reached after the respective relaxation times, i.e., when the average quantities no longer change with time (the relaxation time is typically much less than \(10^{5}\) trades/interactions for the \(N\) values considered here), the statistical quantities are evaluated from averages over about \(10^{5}\) post-relaxation time steps.

### Banerjee model results

In this B-model, when the DY fraction (\(f\)) is set equal to zero, no wealth distribution \(P(m)\) across the population is meaningful, because of the wealth condensation in the hands of a few. We first study the distributions (see Fig. 1) of the total wealth fraction in the hands of the richest three. Note, these three are not unique; once they become so rich, their residence time (in units of \(N\)) is finite (about 66), and in case these positions are lost, the return time is also finite. Though the distributions of the total wealth fraction in the hands of the few richest (shown in Fig. 1) are rather wide (each one spread over more than 30% of the total wealth, and they do not depend on \(N\)), the distribution of the total wealth fraction possessed by the ten richest (at any time in the steady state) is extremely narrow and spreads over 0.1% only (see Fig. 2). At any time in the steady state its value is much more robust in this B model (with \(f=0\)); its value is less than unity, but very close to 0.9998. Next, we consider the B model with a nonvanishing probability \(f\) of each trader to follow DY trades or exchanges.
We see immediately that the wealth condensation disappears and, with increasing values of \(f\), the wealth becomes Boltzmann (exponentially) distributed among all the agents (see Fig. 3), starting with a Pareto-like power-law distribution for lower values of \(f\) (see the inset of Fig. 3).

Figure 2: Distribution of the total wealth fraction possessed by the ten richest (at any time in the steady state and for different \(N\) values). The inset shows that the average of this total wealth fraction of the ten richest (for any time and any value of \(N\)) in the steady state is very close to 0.9998. It may be noted that, although the wealth share fractions of the richest ten traders have considerable fluctuations (see Fig. 1), the sum total of their wealth fractions has hardly any fluctuation (much less than the symbol size in the inset). The error estimation is based on 10 runs. The typical error in the distribution of the total wealth of the ten richest is seen to be more than the data point symbol size only near the most-probable value, where it is indicated.

Indeed, when we consider the limiting values (for large \(N\)) of the average fraction of total wealth (\(M=N\)) possessed by the ten richest traders in the steady state, they all seem to vanish (see Fig. 4) for any non-zero value of \(f\) (and remain at a constant 0.9998 for \(f=0\), the pure B model). For the wealth condensation in the B-model (with \(f=0\)), we show next, in Fig. 5(a), the distribution of the residence-time (in units of \(N\)) of the 10 fortunate traders and (in the inset) the variation of the most probable and average values of the residence-time (\(\tau\), in units of \(N\)). For the same model with \(f=0\), we show in Fig. 5(b) the distribution of the return-time to fortune (becoming one of the 10 richest, starting from the 20th rank) and (in the inset) the variation of the most probable and average values of the return-time with market size \(N\).

Figure 3: Wealth distribution \(P(m)\) among all the agents against the wealth \(m\) in the B model for different probabilities \(f\) of DY random exchanges. Note that the fluctuations appear to grow for the lower values of the distribution of wealth due to the log scale used on the y-axis.

### Chakraborti or Yard-Sale model results

The C-model or Yard-Sale model is well studied. However, in order to check the stability of the condensation of wealth (the entire money \(M=N\) going to the hands of one trader only), we added a nonvanishing probability \(f\) of each trader to follow DY trades or exchanges. We see immediately that the wealth condensation disappears for any \(f>0\) (see Fig. 6) and the wealth gets distributed in the Boltzmann form (exponentially decaying with increasing wealth) among all the agents. The inset shows that for any nonzero value of \(f\), the steady-state wealth distribution is exponentially decaying (and there is a power-law region) in this extended C model. Also, when we consider the limiting values (for large \(N\)) of the average fraction of total wealth (\(M=N\)) possessed by the ten richest traders in the steady state (see Fig. 7), they all seem to drop to zero from the unit value of the original C model (with \(f=0\)) for any non-zero value of \(f\).

### Goswami-Sen model results

Here the interaction (trade) probability among the trade partners (\(i\) and \(j\)) decreases with their wealth difference (\(|m_{i}-m_{j}|\)) at that instant of trading (time), following a power law (\(|m_{i}-m_{j}|^{-\alpha}\)); a minimal sketch of this rule is given below.
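In the following sketch, treating the power law as a trade-acceptance probability capped at unity for small wealth differences is an implementation choice, not a detail taken from [25].

```python
import numpy as np

def gs_sweep(m, alpha, rng):
    """One sweep of the Goswami-Sen model: a randomly chosen pair is
    allowed to trade with probability ~ |m_i - m_j|**(-alpha); an
    accepted trade is a DY-type random re-split of the pair's total."""
    N = len(m)
    for _ in range(N):
        i, j = rng.choice(N, size=2, replace=False)
        diff = abs(m[i] - m[j])
        accept = 1.0 if diff <= 1.0 else diff ** (-alpha)
        if rng.random() < accept:
            eps = rng.random()
            total = m[i] + m[j]
            m[i], m[j] = eps * total, (1.0 - eps) * total
```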
As such, in the GS model there is always a finite (but small) probability of random exchanges, and we do not need to consider an additional fraction of DY interactions in this model. Of course, for \(\alpha=0\), the model reduces to that of DY. Our numerical results confirm (see Fig. 8) that for \(\alpha\) values less than about 2.0, the steady-state wealth distribution among the traders is still DY-like (exponentially decaying). For higher values (beyond 2.0) of \(\alpha\), power-law (Pareto-like) decays occur (but no condensation of wealth). Though the model leads to extreme inequality, there is no condensation of wealth in the hands of a few traders for any (larger) value of \(\alpha\). In order to check that, we studied again the average fraction of total wealth (\(M=N\)) possessed by the ten richest traders in the steady state of the GS model with \(\alpha\). When we plot the fraction against \(1/N^{2}\) (see Fig. 9), the extrapolated values of the fraction all seem to approach zero for any of the \(\alpha\) values considered.

Figure 4: To get the limiting values (for large \(N\)) of the average fraction of total wealth (\(M=N\)) possessed by the ten richest traders in the steady state, we plot the fraction against \(1/N^{2}\) (as, with DY-type trades, each of the \(N\) traders interacts with the \(N-1\) other traders). The extrapolated values all seem to approach zero for any non-zero value of \(f\) (but remain at a constant 0.9998 for \(f=0\), as in the pure B model). The error estimation is based on 10 runs. Typical sizes of error bars are indicated.

Figure 5: (a) The distribution of the residence-time (in units of \(N\)) of the 10 fortunate traders and (in the inset) the variation of the most probable and average values of the residence time. (b) The distribution of the return-time to fortune (becoming one of the 10 richest, starting from the 20th rank) and (in the inset) the variation of the most probable and average values of the return-time (in units of \(N\)). The error estimation is based on 10 runs. The typical errors in the distributions of both the residence and return times are seen to grow with \(N\) near the most-probable values of the respective quantities and are indicated here for \(N=400\) when they are bigger than the symbol sizes.

Figure 6: Wealth distribution \(P(m)\) among all the agents against the wealth \(m\) in the C model for different probabilities \(f\) of DY random exchanges. Note that the fluctuations appear to grow for the lower values of the distribution due to the log scale used on the y-axis.

Figure 7: The limiting values (for large \(N\)) of the average fraction of total wealth (\(M=N\)) possessed by the ten richest traders in the steady state of the C model with an \(f\) fraction of DY-like trades. For \(f=0\), the entire money goes to one agent and the other nine agents contribute nothing. When we plot the fraction against \(1/N^{2}\) (as, with DY-type trades, each of the \(N\) traders interacts with the \(N-1\) other traders), the extrapolated values all seem to approach zero for any non-zero value of \(f\). The error estimation is based on 10 runs. Typical sizes of error bars are indicated.

Figure 8: Wealth distribution \(P(m)\) among all the agents against the wealth \(m\) in the GS model for different values of \(\alpha\). Note that the fluctuations appear to grow for the lower values of the distribution due to the log scale used on the y-axis.
Note that the fluctuations appear to grow for the lower values of the distribution due to the log scale used in the y-axis.

Figure 9: Plot of the fraction of the total wealth (\(M=N\)) against \(1/N^{2}\) for different values of \(\alpha\) in the GS model. The extrapolated (with \(N\)) values of the fraction all seem to approach zero for any non-zero value of \(\alpha\). The error estimation is based on 10 runs. Typical sizes of error bars are indicated.

## III Summary and discussion

In view of the observed extreme income or wealth inequalities in society, the suitability of the kinetic exchange models [8] to capture them, at least qualitatively, has been investigated here. We distinguish between two types of such extreme inequalities. One is the (Pareto) type [16], where a small fraction (typically 13%) of the population possesses about 87% of the total wealth of the respective country, following a power-law distribution. The other is the more recently observed (and reported by Oxfam [17]) truly extreme form of income and wealth inequality worldwide, where only a handful (say a few hundred to a few thousand) of super-rich people of the world accumulate more than the total wealth of the poorest 50 to 60 percent of the population. Several kinetic exchange models (see, e.g., [6; 8]) have been developed to analyze Pareto-type inequalities. We have investigated here the statistics of some kinetic exchange models where, even in the \(N\to\infty\) limit, only one person can grab the entire wealth (as in the Yard-Sale or Chakraborti or C model [20; 21]), or only 10 people can accumulate about 99.98% of the total wealth (as in the Banerjee or B model [26]; see Fig. 2). We investigate how these extreme inequalities in these kinetic models get softened to the Dragulescu-Yakovenko (DY) [4] type exponentially decaying wealth distributions among all the traders or agents, when each trader has a non-vanishing probability \(f\) of DY-type random exchanges. These condensations of wealth (100% in the hands of one agent in the C model [20], or about 99.98% in the hands of ten agents in the B model) then disappear in the large-\(N\) limit (clearly seen when extrapolated against \(1/N^{2}\), since in DY-type random exchanges each of the \(N\) agents interacts or exchanges with all the others; see Figs. 4 and 7). We also showed that, due to the built-in possibility of DY-type random exchange dynamics in the Goswami-Sen or GS model [25], where the exchange probability decreases with an inverse power of the wealth difference of the pair of traders, one does not see any wealth condensation phenomena. In both the GS and B models (with an \(f>0\) fraction of DY interactions or exchanges) no wealth condensation occurs, though strong Pareto-type power-law wealth distributions \(P(m)\), or inequalities, occur for large values of \(\alpha\) in the GS model and smaller values of \(f\) in the B model, respectively (see Figs. 3 and 8). For the wealth condensation in the B model with \(f=0\), we additionally find that the fortunate top ten traders are not unique and their fortune does not last long (the residence-time \(\tau\) in fortune is on average about 66 time units, with its most probable value around 25 time units, when counted in units of \(N\) trades or exchanges; see Fig. 5a). The most probable 'return-time' to such a fortune (for the 20th rank holder to come back within the group of the fortunate 10) is found to be about 20 (again in units of \(N\); see Fig. 5b).
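For concreteness, a minimal Monte Carlo sketch of the two exchange rules compared above is given below (Python, with illustrative parameters; it assumes the standard yard-sale rule in which a random fraction of the poorer partner's wealth is staked, and it does not reproduce the B- or GS-model rules):

```python
import numpy as np

def simulate(N=200, sweeps=2000, f=0.1, seed=0):
    """Mixed kinetic exchange: with probability f a DY random split,
    otherwise a yard-sale (C-model) trade staking a random fraction of
    the poorer partner's wealth.  Total money M = N is conserved."""
    rng = np.random.default_rng(seed)
    m = np.ones(N)                       # every trader starts with m_i = 1
    for _ in range(sweeps * N):          # one sweep ('time unit') = N trades
        i, j = rng.choice(N, size=2, replace=False)
        if rng.random() < f:             # DY: pool the two wealths, split randomly
            eps = rng.random()
            pool = m[i] + m[j]
            m[i], m[j] = eps * pool, (1.0 - eps) * pool
        else:                            # C: stake eps * min(m_i, m_j), random winner
            dm = rng.random() * min(m[i], m[j])
            if rng.random() < 0.5:
                m[i] += dm; m[j] -= dm
            else:
                m[i] -= dm; m[j] += dm
    return m

m = simulate()
top10 = np.sort(m)[-10:].sum() / m.sum()   # wealth share of the ten richest
print(f"top-10 wealth share at f=0.1: {top10:.4f}")
```

With \(f=0\) this sketch reproduces the condensation tendency of the C model, while any \(f>0\) drives the steady state toward the exponentially decaying (DY/Boltzmann) form discussed above.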
It may be noted that with \(f=0\), in the C-model, the residence-time \(\tau\) is infinite for the single fortunate trader accumulating the entire wealth in the system. Indeed, with increasing values of the DY fraction \(f\), the values of \(\tau\) in both cases decrease rapidly (see Fig. 10), following inverse power laws in \(f\). We further note that, for \(f=0\) in the B-model, near the most-probable values of the wealth fractions (Figs. 1 and 2) and of the residence or return times (Fig. 5), the fluctuations tend to grow with \(N\), indicating a possible divergence there in the macroscopic limit of \(N\). We plan to explore its significance later.

Figure 10: The DY fraction \(f\) dependence of the bare residence time (in units of interactions or exchanges) at different \(N\) values for the B-model (Fig. 10a) and for the C-model (Fig. 10b). Their power-law fits with \(f\) for \(N=400\) are shown (for other \(N\) values, the respective prefactors change linearly with \(N\)). The insets show the \(f\) dependence of the residence-time \(\tau\) (in units of \(N\)). Note that the limiting value of \(\tau\) at \(f=0\) is about 66 for the B-model, while it goes to infinity for the C-model.

Our studies of the B, C, and GS kinetic exchange models, using Monte Carlo techniques [27], suggest that the potential condensation-type extreme inequality can disappear in all of them if a non-vanishing probability of random exchanges is allowed; it converges to a Pareto-type power-law inequality (for the B and GS models), which in turn converges to a Gibbs-like (exponentially decaying) wealth distribution for larger values of \(f\) in the B model, smaller values of \(\alpha\) in the GS model, or any value of \(f>0\) in the C model. These observations may help to formulate public welfare policies.

## Acknowledgement

SB acknowledges the support from DST INSPIRE. BKC is grateful to the Indian National Science Academy for their Senior Scientist Research Grant support.
2306.07039
Anticyclotomic $p$-adic $L$-functions for Rankin--Selberg product
We construct $p$-adic $L$-functions for Rankin--Selberg products of automorphic forms of hermitian type in the anticyclotomic direction for both root numbers. When the root number is $+1$, the construction relies on global Bessel periods on definite unitary groups which, due to the recent advances on the global Gan--Gross--Prasad conjecture, interpolate classical central $L$-values. When the root number is $-1$, we construct an element in the Iwasawa Selmer group using the diagonal cycle on the product of unitary Shimura varieties, and conjecture that its $p$-adic height interpolates derivatives of cyclotomic $p$-adic $L$-functions. We also propose the nonvanishing conjecture and the main conjecture in both cases.
Yifeng Liu
2023-06-12T11:38:58Z
http://arxiv.org/abs/2306.07039v4
# Anticyclotomic \(p\)-adic \(L\)-functions for Rankin-Selberg Product

###### Abstract.

We construct \(p\)-adic \(L\)-functions for Rankin-Selberg products of automorphic forms of hermitian type in the anticyclotomic direction for both root numbers. When the root number is \(+1\), the construction relies on global Bessel periods on definite unitary groups which, due to the recent advances on the global Gan-Gross-Prasad conjecture, interpolate classical central \(L\)-values. When the root number is \(-1\), we construct an element in the Iwasawa Selmer group using the diagonal cycle on the product of unitary Shimura varieties, and conjecture that its \(p\)-adic height interpolates derivatives of cyclotomic \(p\)-adic \(L\)-functions. We also propose the nonvanishing conjecture and the main conjecture in both cases.

###### Contents

* 1 Introduction
* 2 Gan-Gross-Prasad conjecture
* 3 \(p\)-adic measure
* 4 Ordinary condition
* 5 Coherent anticyclotomic \(p\)-adic \(L\)-function
* 6 Iwasawa Selmer group and height pairing
* 7 Incoherent anticyclotomic \(p\)-adic \(L\)-function

## 1. Introduction

In their pioneering work [1], Bertolini and Darmon constructed what they called the Heegner distribution in both the definite and indefinite cases for a classical weight-two modular form and an imaginary quadratic field, opening the study of \(p\)-adic \(L\)-functions in the anticyclotomic direction. In the definite case, the product of Heegner distributions leads to a \(p\)-adic measure that interpolates central values of (classical) Rankin-Selberg \(L\)-functions, using the Waldspurger formula. In the indefinite case, the height of the Heegner distribution leads to a \(p\)-adic measure that interpolates central derivatives of Rankin-Selberg \(L\)-functions, using the Gross-Zagier formula and its generalization. Since then, there have been numerous works addressing related problems including, for example, [1] in the definite case and [2] in the incoherent case.

In this note, we make the first step toward generalizing these ideas, still along the anticyclotomic direction, to Rankin-Selberg products of higher ranks/dimensions, that is, beyond the \(\mathrm{GL}_{1}\times\mathrm{GL}_{2}\) case. We consider a CM extension \(E/F\), a nonempty set \(\mathsf{P}\) of \(p\)-adic places of \(F\), and a suitable admissible representation \(\Pi=\Pi_{n}\boxtimes\Pi_{n+1}\) of \(\mathrm{GL}_{n}(\mathbb{A}_{E}^{\infty})\times\mathrm{GL}_{n+1}(\mathbb{A}_{E}^{\infty})\) with coefficients in a \(p\)-adic field \(\mathbb{L}\) (see Section 2 for the precise setting; in particular, after base change to \(\mathbb{C}\), \(\Pi\) is the finite part of a cohomological isobaric automorphic representation of minimal weights and with certain conjugate self-duality). Denote by \(\Gamma^{-}_{E,\mathsf{P}}\) the idele class group giving anticyclotomic extensions of \(E\) unramified outside \(\mathsf{P}\), which is naturally a quotient of \(E^{\times}\backslash(\mathbb{A}^{\infty}_{E})^{\times}\). Our anticyclotomic theory requires that places in \(\mathsf{P}\) split in \(E\) and that \(\Pi\) is ordinary and semistable at every place in \(\mathsf{P}\) (Definition 4.2), which we now assume. In particular, associated with \(\Pi\) there is a Galois representation \(\mathbb{W}_{\Pi}\) of \(E\) with coefficients in \(\mathbb{L}\) of rank \(n(n+1)\) and weight \(-1\) (see Example 6.2).
We may attach to \(\Pi\) a "root number" \(\epsilon(\Pi)\in\{\pm 1\}\) and call it coherent/incoherent if \(\epsilon(\Pi)=+1/-1\), parallel to the definite/indefinite cases in [1]. When \(\Pi\) is coherent, we show in Theorem 5.2 that there exists a unique (bounded) measure \(\mathscr{L}^{0}_{\mathsf{P}}(\Pi)\in\mathbb{L}[[\Gamma^{-}_{E,\mathsf{P}}]]^{\circ}\coloneqq\mathbb{Z}_{p}[[\Gamma^{-}_{E,\mathsf{P}}]]\otimes_{\mathbb{Z}_{p}}\mathbb{L}\) such that for every finite character \(\chi\colon\Gamma^{-}_{E,\mathsf{P}}\to\overline{\mathbb{L}}^{\times}\) that is ramified at every place in \(\mathsf{P}\) and every embedding \(\iota\colon\overline{\mathbb{L}}\to\mathbb{C}\), we have
\[\iota\mathscr{L}^{0}_{\mathsf{P}}(\Pi)(\chi)=\iota\lambda_{\mathsf{P}}\cdot\frac{\Delta_{n+1}\cdot L(\frac{1}{2},\iota(\Pi_{n}\otimes\widetilde{\chi})\times\iota\Pi_{n+1})}{2^{d(\Pi_{n})+d(\Pi_{n+1})}\cdot L(1,\iota\Pi_{n},\operatorname{As}^{(-1)^{n}})L(1,\iota\Pi_{n+1},\operatorname{As}^{(-1)^{n+1}})}.\]
Here, \(\lambda_{\mathsf{P}}\in\mathbb{L}^{\times}\) is an elementary factor depending on \(\Pi_{v}\) and the conductor of \(\chi_{v}\) for \(v\in\mathsf{P}\); \(\Delta_{n+1}\) is a positive real constant in Definition 2.1(1); \(\widetilde{\chi}\) is just \(\chi\) but regarded as a character of \(E^{\times}\backslash(\mathbb{A}^{\infty}_{E})^{\times}\); \(d(\Pi_{n})\) and \(d(\Pi_{n+1})\) are certain positive integers introduced in Remark 2.2(4), the same as the ones appearing in the refined Gan-Gross-Prasad conjecture [11]. The proof relies on the recent advances on the global Gan-Gross-Prasad conjecture [1, 2] and a local Birch lemma [13, 14]. We then propose a conjecture predicting when \(\mathscr{L}^{0}_{\mathsf{P}}(\Pi)\) is nonvanishing (Conjecture 5.4) and an Iwasawa main conjecture for \(\mathscr{L}^{0}_{\mathsf{P}}(\Pi)\) (Conjecture 6.3). In our forthcoming work [11], we plan to show in many cases one side of the divisibility in this main conjecture, namely, that \(\mathscr{L}^{0}_{\mathsf{P}}(\Pi)\) belongs to the characteristic ideal of the corresponding Iwasawa Selmer group, generalizing [1] to higher ranks/dimensions.

When \(\Pi\) is incoherent, we construct a collection of elements \((\mathscr{Z}_{\varphi})_{\varphi}\) in the (compact) Iwasawa Selmer group \(\operatorname{H}^{1}_{\operatorname{fin}}(E,\mathbb{W}_{\Pi})^{\circ}_{\mathsf{P}}\) (Definition 6.1) of \(\mathbb{W}_{\Pi}\) along \(\Gamma^{-}_{E,\mathsf{P}}\), parameterized by ordinary vectors \(\varphi\in\pi\). Here, \(\pi\) is a certain "descent" of \(\Pi\) to a unitary product group \(G\) determined by the local Gan-Gross-Prasad conjecture (now a theorem). The element \(\mathscr{Z}_{\varphi}\) is constructed via variants of special cycles on the Shimura varieties associated with \(G\) given by the natural diagonal subgroup \(H\subseteq G\), generalizing the analogous construction in [1, 1] using Heegner points. We then define a \(p\)-adic measure \(\mathscr{L}^{1}_{\mathsf{P}}(\Pi)\in\Gamma_{F,\mathsf{P}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}[[\Gamma^{-}_{E,\mathsf{P}}]]^{\circ}\) to be the \(p\)-adic height pairing between \(\mathscr{Z}_{\varphi}\) and \(\mathscr{Z}_{\varphi^{\vee}}\), modified by some local terms so that the result is independent of the choices of test vectors.
Finally, we propose three conjectures concerning \(\mathscr{L}^{1}_{\mathsf{P}}(\Pi)\): the nonvanishing conjecture (Conjecture 7.7), the main conjecture (Conjecture 7.9), and the interpolation conjecture relating it to the derivative of the cyclotomic \(p\)-adic \(L\)-function (Conjecture 7.12).

### Notation and conventions

* Put \(\mathbb{N}\coloneqq\{0,1,2,\dots\}\).
* Denote by \(\mathsf{e}\) a vector of length \(1\).
* In this article, \(\mathbb{L}\) always denotes a field embeddable into \(\mathbb{C}\); and \(\mathbb{L}_{?}\) stands for an extension of \(\mathbb{L}\) that is again embeddable into \(\mathbb{C}\). We denote by \(\overline{\mathbb{Q}}\) the algebraic closure of \(\mathbb{Q}\) in \(\mathbb{C}\).
* For a number field \(K\) and a finite set \(\mathsf{S}\) of nonarchimedean places of \(K\), denote by \(\Gamma_{K,\mathsf{S}}\) the profinite completion of
\[K^{\times}\backslash(\mathbb{A}^{\infty}_{K})^{\times}\left/\prod_{v\notin\mathsf{S}}O^{\times}_{K_{v}}\right.,\]
where the product is taken over all nonarchimedean places of \(K\) not in \(\mathsf{S}\).

We fix a CM extension \(E/F\). Denote by
* \(\mathsf{c}\in\operatorname{Gal}(E/F)\) the Galois involution,
* \(\eta_{E/F}\colon F^{\times}\backslash(\mathbb{A}^{\infty}_{F})^{\times}\to\{\pm 1\}\) the quadratic character associated with \(E/F\).

## 2. Gan-Gross-Prasad conjecture

Fix a positive integer \(n\) throughout the article. Let \(\Pi_{n}\) and \(\Pi_{n+1}\) be two relevant \(\mathbb{L}\)-representations of \(\operatorname{GL}_{n}(\mathbb{A}_{E}^{\infty})\) and \(\operatorname{GL}_{n+1}(\mathbb{A}_{E}^{\infty})\), respectively.
Write \(\Pi\coloneqq\Pi_{n}\boxtimes\Pi_{n+1}\) as a representation of \(\operatorname{GL}_{n}(\mathbb{A}_{E}^{\infty})\times\operatorname{GL}_{n+1}(\mathbb{A}_{E}^{\infty})\). By the solution of the local Gan-Gross-Prasad conjecture [1] by [1, 1], we know that for every finite place \(v\) of \(F\), there exists a pair \((V_{n,v},\pi_{v})\), unique up to isomorphism, in which

* \(V_{n,v}\) is a hermitian space over \(E_{v}\) of rank \(n\);
* \(\pi_{v}\) is an irreducible admissible representation of \(G_{v}(F_{v})\) with coefficients in \(\mathbb{L}\) with \(\Pi_{v}\) as its base change and satisfying \(\operatorname{Hom}_{H_{v}(F_{v})}(\pi_{v},\mathbb{L})\neq 0\).3

Footnote 3: Indeed, one can find an \(\mathbb{L}\)-linear model of \(\pi_{v}\) in the space of \(\mathbb{L}\)-valued locally constant functions on \(H_{v}(F_{v})\backslash G_{v}(F_{v})\), using the multiplicity one property.

Here, we have put \(G_{v}\coloneqq\operatorname{U}(V_{n,v})\times\operatorname{U}(V_{n+1,v})\) with \(V_{n+1,v}\coloneqq V_{n,v}\oplus E_{v}\cdot\mathbf{e}\), and denoted by \(H_{v}\subseteq G_{v}\) the graph of the natural embedding \(\operatorname{U}(V_{n,v})\hookrightarrow\operatorname{U}(V_{n+1,v})\) obtained by realizing the source as the stabilizer of \(\mathbf{e}\) in the target.

**Definition 2.3**.: Put
\[\epsilon(\Pi)\coloneqq\prod_{v\in\mathbb{V}_{F}^{\mathrm{fin}}}\eta_{E/F}\left(\det V_{n,v}\right)\in\{\pm 1\}.\]
We say that \(\Pi\) is _coherent_ (resp. _incoherent_) if \(\epsilon(\Pi)\) equals \(1\) (resp. \(-1\)).

Now suppose that \(\chi\colon\Gamma_{E,\mathsf{P}}^{-}\to\mathbb{L}_{\chi}^{\times}\) is a finite character. Then we have a new pair of relevant \(\mathbb{L}_{\chi}\)-representations \(\Pi_{n}\otimes\widetilde{\chi}\) and \(\Pi_{n+1}\) of \(\operatorname{GL}_{n}(\mathbb{A}_{E}^{\infty})\) and \(\operatorname{GL}_{n+1}(\mathbb{A}_{E}^{\infty})\), respectively. Put
\[\Pi_{\chi}\coloneqq(\Pi_{n}\otimes\widetilde{\chi})\boxtimes\Pi_{n+1}=\Pi\otimes(\widetilde{\chi}\circ\det\boxtimes 1)\]
as a representation of \(\operatorname{GL}_{n}(\mathbb{A}_{E}^{\infty})\times\operatorname{GL}_{n+1}(\mathbb{A}_{E}^{\infty})\). Then for every finite place \(v\) of \(F\), the corresponding pair for \(\Pi_{\chi}\) is \((V_{n,v},(\pi_{v})_{\chi_{v}})\), where \((\pi_{v})_{\chi_{v}}\coloneqq\pi_{v}\otimes(\chi_{v}\circ\det\boxtimes 1)\). In particular, \(\epsilon(\Pi)=\epsilon(\Pi_{\chi})\).

In the rest of this section, we review some facts about local matrix coefficient integrals appearing in the refined Gan-Gross-Prasad conjecture. Take a finite place \(v\) of \(F\) and suppress it in the subscripts from the notation below. In particular, \(E\) is a quadratic etale \(F\)-algebra. The lemma below will be used later.

**Lemma 2.4**.: _Let \(r,r_{1},r_{2}\) be positive integers._

1. _Let_ \(\Pi_{1}\) _and_ \(\Pi_{2}\) _be tempered (complex) irreducible admissible representations of_ \(\operatorname{GL}_{r_{1}}(E)\) _and_ \(\operatorname{GL}_{r_{2}}(E)\)_, respectively. Then for every automorphism_ \(\sigma\) _of_ \(\mathbb{C}\)_,_
\[L(s,\sigma\Pi_{1}\times\sigma\Pi_{2})=\sigma L(s,\Pi_{1}\times\Pi_{2})\]
_holds for every_ \(s\in\mathbb{N}+1\) _(resp._ \(s\in\mathbb{N}+\frac{1}{2}\)_) when_ \(r_{1}\) _and_ \(r_{2}\) _have the same parity (resp. different parities)._

2. _Let_ \(\Pi\) _be a tempered (complex) irreducible admissible representation of_ \(\operatorname{GL}_{r}(E)\) _and_ \(\epsilon\in\{\pm 1\}\)_.
Then for every automorphism_ \(\sigma\) _of_ \(\mathbb{C}\)_,_
\[L(s,\sigma\Pi,\operatorname{As}^{\epsilon})=\sigma L(s,\Pi,\operatorname{As}^{\epsilon})\]
_holds for every_ \(s\in\mathbb{N}+1\)_._

Proof.: Denote by \(e_{r}\) the vector \((0,\dots,0,1)\) of length \(r\). For (1), without loss of generality, we may assume \(r_{1}\leqslant r_{2}\) and that \(E\) is a field. Regard \(\operatorname{GL}_{r_{1}}\) as a subgroup of \(\operatorname{GL}_{r_{2}}\) via the first \(r_{1}\) coordinates. Let \(U_{r_{1}}\) and \(U_{r_{2}}\) be the standard upper-triangular maximal unipotent subgroups of \(\operatorname{GL}_{r_{1}}\) and \(\operatorname{GL}_{r_{2}}\), respectively. Choose two generic characters \(\psi_{1}\colon U_{r_{1}}(E)\to\mathbb{C}^{\times}\) and \(\psi_{2}\colon U_{r_{2}}(E)\to\mathbb{C}^{\times}\) satisfying \(\psi_{1}(u)\psi_{2}(u)=1\) for \(u\in U_{r_{1}}(E)\). For \(\alpha=1,2\) denote by \(\mathcal{W}(\Pi_{\alpha})_{\psi_{\alpha}}\) the Whittaker model of \(\Pi_{\alpha}\) with respect to \((U_{r_{\alpha}},\psi_{\alpha})\). Fix a rational Haar measure on \(U_{r_{1}}(E)\backslash\operatorname{GL}_{r_{1}}(E)\). Denote by \(\mathcal{F}(E^{r_{1}})\) the space of Schwartz (resp. constant) functions on \(E^{r_{1}}\) when \(r_{1}=r_{2}\) (resp. \(r_{1}<r_{2}\)). Let \(q\) be the residue cardinality of \(E\). For \(W_{1}\in\mathcal{W}(\Pi_{1})_{\psi_{1}}\), \(W_{2}\in\mathcal{W}(\Pi_{2})_{\psi_{2}}\), and \(\Phi\in\mathcal{F}(E^{r_{1}})\), the integral
\[Z(s,W_{1},W_{2},\Phi)\coloneqq\int_{U_{r_{1}}(E)\backslash\operatorname{GL}_{r_{1}}(E)}W_{1}(g)W_{2}(g)\Phi(e_{r_{1}}g)|\det g|_{F}^{s-\frac{r_{2}-r_{1}}{2}}\,\mathrm{d}g\]
is absolutely convergent for \(\operatorname{Re}s>0\) and has a meromorphic extension to an element in \(\mathbb{C}(q^{-s})\). By the local Rankin-Selberg theory [10], there exists a unique element \(P_{\Pi_{1}\times\Pi_{2}}(X)\in\mathbb{C}[X]\) satisfying \(P_{\Pi_{1}\times\Pi_{2}}(0)=1\) such that \(\{Z(s,W_{1},W_{2},\Phi)\mid W_{1},W_{2},\Phi\}\) is a \(\mathbb{C}[q^{s},q^{-s}]\)-submodule of \(\mathbb{C}(q^{-s})\) generated by \(P_{\Pi_{1}\times\Pi_{2}}(q^{-s})^{-1}\), which is nothing but \(L(s,\Pi_{1}\times\Pi_{2})\). In particular, \(\{Z(s,W_{1},W_{2},\Phi)\mid W_{1},W_{2},\Phi\}\) is independent of the choices of \(\psi_{1}\) and \(\psi_{2}\). Now let \(\sigma\) be an automorphism of \(\mathbb{C}\). We have \(\sigma W_{\alpha}\in\mathcal{W}(\sigma\Pi_{\alpha})_{\sigma\psi_{\alpha}}\) for \(\alpha=1,2\), and moreover
\[Z(s,\sigma W_{1},\sigma W_{2},\sigma\Phi)=\sigma Z(s,W_{1},W_{2},\Phi)\]
as absolutely convergent integrals for \(s\in\mathbb{N}+1\) (resp. \(s\in\mathbb{N}+\frac{1}{2}\)) when \(r_{1}\) and \(r_{2}\) have the same parity (resp. different parities). Since elements in \(\mathbb{C}(q^{-s})\) are uniquely determined by their values at an arbitrary infinite set of \(s\), it follows easily that \(L(s,\sigma\Pi_{1}\times\sigma\Pi_{2})=\sigma L(s,\Pi_{1}\times\Pi_{2})\) for the same set of \(s\).

For (2), the argument is similar to (1), considering instead the integral
\[Z(s,W,\Phi)\coloneqq\int_{U_{r}(F)\backslash\operatorname{GL}_{r}(F)}W(g)\Phi(e_{r}g)\eta_{E/F}(\det g)^{\frac{1-\epsilon}{2}}|\det g|_{F}^{s}\,\mathrm{d}g\]
for \(W\in\mathcal{W}(\Pi)_{\psi}\) and \(\Phi\in\mathcal{S}(F^{r})\). The lemma is proved.

Choose a rational Haar measure \(\mathrm{d}h\) on \(H(F)\). Let \(\chi\colon E^{\times-}\to\mathbb{L}_{\chi}^{\times}\) be a finite character.

**Definition 2.5**.: Lemma 2.4 provides us with the following definitions.

1.
We define \(L(\frac{1}{2},(\Pi_{n}\otimes\widetilde{\chi})\times\Pi_{n+1})\in\mathbb{L}_{\chi}\) to be the unique element such that for every \(\iota\colon\mathbb{L}_{\chi}\to\mathbb{C}\), \(\iota L(\frac{1}{2},(\Pi_{n}\otimes\widetilde{\chi})\times\Pi_{n+1})=L(\frac{1}{2},\iota(\Pi_{n}\otimes\widetilde{\chi})\times\iota\Pi_{n+1})\).

2. For \(N=n,n+1\), we define \(L(1,\Pi_{N},\operatorname{As}^{(-1)^{N}})\in\mathbb{L}^{\times}\) to be the unique element such that for every \(\iota\colon\mathbb{L}\to\mathbb{C}\), \(\iota L(1,\Pi_{N},\operatorname{As}^{(-1)^{N}})=L(1,\iota\Pi_{N},\operatorname{As}^{(-1)^{N}})\).

Let \(\iota\colon\mathbb{L}_{\chi}\to\mathbb{C}\) be an embedding. For \(\varphi\in\pi\) and \(\varphi^{\vee}\in\pi^{\vee}\), the integral
\[\alpha_{\iota}^{\chi}(\varphi,\varphi^{\vee})\coloneqq\int_{H(F)}\iota\left(\langle\varphi^{\vee},\pi(h)\varphi\rangle_{\pi}\cdot\chi(\det h)\right)\mathrm{d}h\]
is absolutely convergent [13] since \(\iota\pi\) is tempered, hence defines an element
\[\alpha_{\iota}^{\chi}\in\operatorname{Hom}_{H(F)\times H(F)}\left(\pi_{\chi}\boxtimes(\pi_{\chi})^{\vee},\mathbb{C}\right),\]
where we regard \(\mathbb{C}\) as an \(\mathbb{L}_{\chi}\)-vector space via \(\iota\). We suppress \(\chi\) in the notation when it is the trivial character.

**Lemma 2.6**.: _Let \(\chi\colon E^{\times-}\to\mathbb{L}_{\chi}^{\times}\) be a finite character. For \(\varphi\in\pi\) and \(\varphi^{\vee}\in\pi^{\vee}\), there exists a unique element \(\alpha^{\chi}(\varphi,\varphi^{\vee})\in\mathbb{L}_{\chi}\) such that for every embedding \(\iota\colon\mathbb{L}_{\chi}\to\mathbb{C}\),_
\[\iota\alpha^{\chi}(\varphi,\varphi^{\vee})=\alpha_{\iota}^{\chi}(\varphi,\varphi^{\vee})\]
_holds._

Proof.: This follows from the observation that the absolutely convergent integral defining \(\alpha_{\iota}^{\chi}\) is Galois equivariant, namely, for every automorphism \(\sigma\) of \(\mathbb{C}\),
\[\int_{H(F)}\sigma\iota\left(\langle\varphi^{\vee},\pi(h)\varphi\rangle_{\pi}\cdot\chi(\det h)\right)\mathrm{d}h=\sigma\int_{H(F)}\iota\left(\langle\varphi^{\vee},\pi(h)\varphi\rangle_{\pi}\cdot\chi(\det h)\right)\mathrm{d}h\]
holds.

**Lemma 2.7**.: _Assume \(\chi\) trivial if \(E\) is a field. Then the \(\mathbb{L}_{\chi}\)-vector space_
\[\operatorname{Hom}_{H(F)\times H(F)}\left(\pi_{\chi}\boxtimes(\pi_{\chi})^{\vee},\mathbb{L}_{\chi}\right)\]
_is one-dimensional, of which \(\alpha^{\chi}\) is a basis._

Proof.: Without loss of generality, we may assume \(\mathbb{L}_{\chi}=\mathbb{C}\). Then this is proved in [1] or, in a more general setting, [12].

**Lemma 2.8**.: _Suppose that \(E/F\) is unramified; \(\chi\) is unramified; and \(V_{n}\) admits an integrally self-dual lattice \(L_{n}\) such that both \(\varphi\) and \(\varphi^{\vee}\) are fixed by \(K_{n}\times K_{n+1}\), where \(K_{n}\) and \(K_{n+1}\) are the stabilizers of \(L_{n}\) and \(L_{n}\oplus O_{E}\cdot\mathbf{e}\), respectively. Then_
\[\alpha^{\chi}(\varphi,\varphi^{\vee})=\langle\varphi^{\vee},\varphi\rangle_{\pi}\cdot\operatorname{vol}(K_{n},\mathrm{d}h)\cdot\frac{\Delta_{n+1}\cdot L(\tfrac{1}{2},(\Pi_{n}\otimes\widetilde{\chi})\times\Pi_{n+1})}{L(1,\Pi_{n},\operatorname{As}^{(-1)^{n}})L(1,\Pi_{n+1},\operatorname{As}^{(-1)^{n+1}})}.\]

Proof.: By Definition 2.5 and Lemma 2.6, we may assume \(\mathbb{L}_{\chi}=\mathbb{C}\). Then this is [11, Theorem 2.12].

Now suppose that \(E\simeq F\times F\).
Choose a Borel subgroup \(B\) of \(G\) such that \(B_{H}\coloneqq B\cap H\) is a Borel subgroup of \(H\), and a generic character \(\psi\) of \(U(F)\) that is trivial on \((U\cap H)(F)\), where \(U\) denotes the unipotent radical of \(B\). For every \(\iota\colon\mathbb{L}\to\mathbb{C}\), denote by \(\mathcal{W}(\iota\pi)_{\psi}\) and \(\mathcal{W}(\iota\pi^{\vee})_{\psi^{-1}}\) the Whittaker models of \(\iota\pi\) and \(\iota\pi^{\vee}\) with respect to \((U,\psi)\) and \((U,\psi^{-1})\), respectively. Define a pairing \(\vartheta\colon\mathcal{W}(\iota\pi^{\vee})_{\psi^{-1}}\times\mathcal{W}(\iota\pi)_{\psi}\to\mathbb{C}\) by the formula
\[\vartheta(W^{\vee},W)\coloneqq\int_{U(F)\backslash Q(F)}W^{\vee}(h)W(h)\,\mathrm{d}h,\]
where \(Q\) is a mirabolic subgroup of \(G\) containing \(U\).

**Lemma 2.9**.: _There exists a positive rational constant \(c\), depending only on the rational Haar measures on \(U(F)\backslash Q(F)\), \(H(F)\), and \((U\cap H)(F)\), such that for every finite character \(\chi\colon E^{\times-}\to\mathbb{L}_{\chi}^{\times}\), every \(\iota\colon\mathbb{L}_{\chi}\to\mathbb{C}\), and every \((W,W^{\vee})\in\mathcal{W}(\iota\pi)_{\psi}\times\mathcal{W}(\iota\pi^{\vee})_{\psi^{-1}}\),_
\[\int_{H(F)}\vartheta(W^{\vee},\pi(h)W)\cdot\iota\chi(\det h)\,\mathrm{d}h=c\left(\int_{(U\cap H)(F)\backslash H(F)}W(h)\cdot\iota\chi(\det h)\,\mathrm{d}h\right)\left(\int_{(U\cap H)(F)\backslash H(F)}W^{\vee}(h)\cdot\iota\chi(\det h)^{-1}\,\mathrm{d}h\right),\]
_in which both sides are absolutely convergent._

Proof.: This is [11, Proposition 4.10].

**Proposition 2.10**.: _Suppose that \(E\simeq F\times F\). There exist elements \(\varphi\in\pi\) and \(\varphi^{\vee}\in\pi^{\vee}\) such that for every finite unramified character \(\chi\colon E^{\times-}\to\mathbb{L}_{\chi}^{\times}\),_
\[\alpha^{\chi}(\varphi,\varphi^{\vee})=\frac{\Delta_{n+1}\cdot L(\tfrac{1}{2},(\Pi_{n}\otimes\widetilde{\chi})\times\Pi_{n+1})}{L(1,\Pi_{n},\operatorname{As}^{(-1)^{n}})L(1,\Pi_{n+1},\operatorname{As}^{(-1)^{n+1}})}\]
_holds._

Proof.: Take an embedding \(\iota_{0}\colon\mathbb{L}\to\mathbb{C}\). Choose \(G(F)\)-equivariant isomorphisms \(i_{1}\colon\pi\otimes_{\mathbb{L},\iota_{0}}\mathbb{C}\xrightarrow{\sim}\mathcal{W}(\iota_{0}\pi)_{\psi}\) and \(i_{2}\colon\pi^{\vee}\otimes_{\mathbb{L},\iota_{0}}\mathbb{C}\xrightarrow{\sim}\mathcal{W}(\iota_{0}\pi^{\vee})_{\psi^{-1}}\) such that \(\iota_{0}\langle\varphi^{\vee},\varphi\rangle_{\pi}=\vartheta(i_{2}\varphi^{\vee},i_{1}\varphi)\) holds for all \(\varphi\in\pi\) and \(\varphi^{\vee}\in\pi^{\vee}\). By the definition of Rankin-Selberg \(L\)-factors [13], there exist elements \(W\in i_{1}(\pi)\) and \(W^{\vee}\in i_{2}(\pi^{\vee})\) such that
\[\int_{(U\cap H)(F)\backslash H(F)}W(h)\cdot|\det h|_{F}^{s}\,\mathrm{d}h=c_{1}L(\tfrac{1}{2}+s,\iota_{0}\Pi_{n}\times\iota_{0}\Pi_{n+1}),\]
\[\int_{(U\cap H)(F)\backslash H(F)}W^{\vee}(h)\cdot|\det h|_{F}^{s}\,\mathrm{d}h=c_{2}L(\tfrac{1}{2}+s,\iota_{0}\Pi_{n}^{\vee}\times\iota_{0}\Pi_{n+1}^{\vee}),\]
hold as meromorphic functions on \(\mathbb{C}\) for some constants \(c_{1},c_{2}\in\mathbb{C}^{\times}\). Moreover, the integrals on the left-hand side are absolutely convergent when \(\operatorname{Re}s\geqslant 0\). Put \(\varphi=i_{1}^{-1}W\) and \(\varphi^{\vee}=i_{2}^{-1}W^{\vee}\).
By Lemma 2.9, we have
\[\alpha_{\iota_{0}}(\varphi,\varphi^{\vee})=cc_{1}c_{2}\cdot L(\tfrac{1}{2},\iota_{0}\Pi_{n}\times\iota_{0}\Pi_{n+1}).\]
Since \(\alpha_{\iota_{0}}(\varphi,\varphi^{\vee})\in\iota_{0}\mathbb{L}\) and \(L(\tfrac{1}{2},\iota_{0}\Pi_{n}\times\iota_{0}\Pi_{n+1})\in\iota_{0}\mathbb{L}^{\times}\), we have \(cc_{1}c_{2}\in\iota_{0}\mathbb{L}^{\times}\). By Lemma 2.4(2), after rescaling, we may assume
\[cc_{1}c_{2}=\frac{\Delta_{n+1}}{L(1,\iota_{0}\Pi_{n},\operatorname{As}^{(-1)^{n}})L(1,\iota_{0}\Pi_{n+1},\operatorname{As}^{(-1)^{n+1}})}.\]
Then, again by Lemma 2.9, we know that for every finite unramified character \(\chi\colon E^{\times-}\to\mathbb{L}^{\times}_{\chi}\) and every \(\iota\colon\mathbb{L}_{\chi}\to\mathbb{C}\) extending \(\iota_{0}\),
\[\alpha^{\chi}_{\iota}(\varphi,\varphi^{\vee})=\frac{\Delta_{n+1}\cdot L(\tfrac{1}{2},\iota(\Pi_{n}\otimes\widetilde{\chi})\times\iota\Pi_{n+1})}{L(1,\iota\Pi_{n},\operatorname{As}^{(-1)^{n}})L(1,\iota\Pi_{n+1},\operatorname{As}^{(-1)^{n+1}})}\]
holds. Finally, by Definition 2.5 and Lemma 2.6, the above identity holds for every embedding \(\iota\colon\mathbb{L}_{\chi}\to\mathbb{C}\). The proposition is proved.

## 3. \(p\)-adic measure

From now on, we assume that \(\mathbb{L}\) is a finite extension of \(\mathbb{Q}_{p}\), and denote by \(\mathbb{L}^{\circ}\) its ring of integers. Define the (bounded) Iwasawa algebra of \(\Gamma^{-}_{E,\mathsf{P}}\) with coefficients in \(\mathbb{L}\) to be
\[\mathbb{L}[[\Gamma^{-}_{E,\mathsf{P}}]]^{\circ}\coloneqq\mathbb{L}^{\circ}[[\Gamma^{-}_{E,\mathsf{P}}]]\otimes_{\mathbb{L}^{\circ}}\mathbb{L}.\]
The inversion map on \(\Gamma^{-}_{E,\mathsf{P}}\) induces an involution on \(\mathbb{L}[[\Gamma^{-}_{E,\mathsf{P}}]]^{\circ}\), which we call the _adjoint_ and denote by \(-^{\vee}\).

For \(v\in\mathsf{P}\) and a positive integer \(f_{v}\), we put
\[\mathfrak{U}_{f_{v}}\coloneqq\{(x,x^{-1})\mid x\in 1+\mathfrak{p}_{v}^{f_{v}}\}\subseteq E_{v}^{\times-}.\]
For a tuple \(f=(f_{v})_{v\in\mathsf{P}}\) of positive integers indexed by \(\mathsf{P}\), we put
\[\mathfrak{U}_{f}\coloneqq\prod_{v\in\mathsf{P}}\mathfrak{U}_{f_{v}}\subseteq\prod_{v\in\mathsf{P}}E_{v}^{\times-}.\]
In particular, we have a natural homomorphism
\[\mathfrak{U}_{f}\to\Gamma^{-}_{E,\mathsf{P}},\]
whose kernel is finite, and trivial when \(|f|\coloneqq\sum_{v}f_{v}\) is large enough. We define an \(f\)_-cell_ of \(\Gamma^{-}_{E,\mathsf{P}}\) to be a \(\mathfrak{U}_{f}\)-orbit in \(\Gamma^{-}_{E,\mathsf{P}}\). Then all \(f\)-cells give a disjoint cover of \(\Gamma^{-}_{E,\mathsf{P}}\) by open compact subsets.

In what follows, we will frequently choose a collection \(\varpi=(\varpi_{v})_{v\in\mathsf{P}}\) of uniformizers of \(F_{v}\) for \(v\in\mathsf{P}\). For \(f\) as above, we put
\[\varpi^{f}\coloneqq\prod_{v\in\mathsf{P}}\varpi_{v}^{f_{v}}\in\prod_{v\in\mathsf{P}}(O_{F_{v}}\cap F_{v}^{\times}).\]

In this article, we will encounter measures on \(\Gamma_{E,\mathsf{P}}^{-}\) valued in a finite-dimensional \(\mathbb{L}\)-vector space \(\mathbb{V}\), whose definition we now review. A _distribution_ on \(\Gamma_{E,\mathsf{P}}^{-}\) valued in \(\mathbb{V}\) is an additive assignment
\[\boldsymbol{\mu}\colon\{\text{open compact subsets of }\Gamma_{E,\mathsf{P}}^{-}\}\to\mathbb{V}.\]
It is clear that in the above definition, we may replace the source by the set of all \(f\)-cells of \(\Gamma_{E,\mathsf{P}}^{-}\) for all \(f\).
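For instance, here is how the cells refine (a small check, using only the definitions above together with the standard index \([1+\mathfrak{p}_{v}^{f_{v}}:1+\mathfrak{p}_{v}^{f_{v}+1}]=q_{v}\)): if \(|f|\) is large enough that \(\mathfrak{U}_{f}\to\Gamma^{-}_{E,\mathsf{P}}\) is injective, then increasing \(f_{v}\) by \(1\) at a single place \(v\in\mathsf{P}\) decomposes every \(f\)-cell \(\Omega\) into exactly \(q_{v}\) smaller cells, and the additivity of a distribution \(\boldsymbol{\mu}\) on such cells reads
\[\boldsymbol{\mu}(\Omega)=\sum_{i=1}^{q_{v}}\boldsymbol{\mu}(\Omega_{i}),\qquad\Omega=\coprod_{i=1}^{q_{v}}\Omega_{i}.\]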
We say that the distribution \(\boldsymbol{\mu}\) is a _measure_ if it is bounded, namely, there exists an \(\mathbb{L}^{\circ}\)-lattice \(\mathbb{V}^{\circ}\) of \(\mathbb{V}\) such that the range of \(\boldsymbol{\mu}\) is contained in \(\mathbb{V}^{\circ}\). For a measure \(\boldsymbol{\mu}\) valued in \(\mathbb{V}\), we may evaluate it on every continuous character \(\chi\colon\Gamma_{E,\mathsf{P}}^{-}\to\mathbb{L}_{\chi}^{\times}\) (for a \(p\)-adic field extension \(\mathbb{L}_{\chi}/\mathbb{L}\)) by the formula
\[\boldsymbol{\mu}(\chi)\coloneqq\int_{\Gamma_{E,\mathsf{P}}^{-}}\chi\,\mathrm{d}\boldsymbol{\mu}\in\mathbb{V}\otimes_{\mathbb{L}}\mathbb{L}_{\chi}\]
(as the limit of Riemann sums over \(f\)-cells for all \(f\)). By evaluating at all finite characters of \(\Gamma_{E,\mathsf{P}}^{-}\), we obtain a map
\[\{\text{measures on }\Gamma_{E,\mathsf{P}}^{-}\text{ valued in }\mathbb{V}\}\to\mathbb{L}[[\Gamma_{E,\mathsf{P}}^{-}]]^{\circ}\otimes_{\mathbb{L}}\mathbb{V},\]
which is well known to be a bijection. As consequences, we have

* A measure is determined by its values on all but finitely many finite characters of \(\Gamma_{E,\mathsf{P}}^{-}\).
* When \(\mathbb{V}\) is a (commutative) \(\mathbb{L}\)-algebra, the set of measures on \(\Gamma_{E,\mathsf{P}}^{-}\) valued in \(\mathbb{V}\) is naturally a (commutative) \(\mathbb{L}\)-algebra.

## 4. Ordinary condition

Take an element \(v\in\mathsf{P}\) and suppress it in the subscripts from the notation below. In particular, \(E\simeq F\times F\).

**Definition 4.1**.: An isomorphism \(G\simeq\operatorname{GL}_{n,F}\times\operatorname{GL}_{n+1,F}\) is called _standard_ if the subgroup \(H\) is identified with \(\{(h,\operatorname{diag}(h,1))\mid h\in\operatorname{GL}_{n,F}\}\).

We fix a standard isomorphism \(G\simeq\operatorname{GL}_{n,F}\times\operatorname{GL}_{n+1,F}\) (which exists). For \(N=n,n+1\), denote by \(B_{N}\) and \(U_{N}\) the upper-triangular Borel and unipotent subgroups of \(\operatorname{GL}_{N}\), respectively; for every positive integer \(r\), denote by \(I_{N}^{(r)}\) the inverse image of \(B_{N}(O_{F}/\mathfrak{p}^{r})\) under the reduction map \(\operatorname{GL}_{N}(O_{F})\to\operatorname{GL}_{N}(O_{F}/\mathfrak{p}^{r})\). Put \(B\coloneqq B_{n}\times B_{n+1}\), \(U\coloneqq U_{n}\times U_{n+1}\), and \(I^{(r)}\coloneqq I_{n}^{(r)}\times I_{n+1}^{(r)}\). We also write \(\pi=\pi_{n}\boxtimes\pi_{n+1}\), in which both \(\pi_{n}\) and \(\pi_{n+1}\) have coefficients in \(\mathbb{L}\) with base change \(\Pi_{n}\) and \(\Pi_{n+1}\), respectively.

For \(N=n,n+1\) and a tuple \(\mu=(\mu_{1},\ldots,\mu_{N})\) of admissible characters \(F^{\times}\to\mathbb{L}^{\times}\), we have an induced character \(\mu^{\natural}\) of \((F^{\times})^{N}\) given by
\[\mu^{\natural}(x_{1},\ldots,x_{N})=\prod_{i=1}^{N}\mu_{i}(x_{i})|x_{i}|_{F}^{N-i},\]
hence a character of \(B_{N}(F)\) by inflation, and the unnormalized principal series
\[\mathcal{I}(\mu)\coloneqq\left\{f\colon\operatorname{GL}_{N}(F)\to\mathbb{L}\text{ locally constant}\,\middle|\,f(bg)=\mu^{\natural}(b)f(g),\ \forall b\in B_{N}(F),\,g\in\operatorname{GL}_{N}(F)\right\}\]
as an admissible representation of \(\operatorname{GL}_{N}(F)\) via right translation.
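To fix ideas, here is the \(N=2\) instance of the above character (a direct specialization of the displayed formula, spelled out for convenience):
\[\mu^{\natural}(x_{1},x_{2})=\mu_{1}(x_{1})\,|x_{1}|_{F}\cdot\mu_{2}(x_{2}),\]
so the ordinary condition of Definition 4.2 below asks precisely that \(\mu_{1}(x)\) and \(|x|_{F}\,\mu_{2}(x)\) lie in \(\mathbb{L}^{\circ\times}\) for every \(x\in F^{\times}\).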
**Definition 4.2**.: For \(N=n,n+1\), we say that \(\Pi_{N}\) is \(v\)_-ordinary_ if there exists a (unique) tuple \(\mu=(\mu_{1},\ldots,\mu_{N})\) of admissible characters \(F^{\times}\to\mathbb{L}^{\times}\) satisfying \(|x|_{F}^{i-1}\mu_{i}(x)\in\mathbb{L}^{\circ\times}\) for \(1\leqslant i\leqslant N\) and every \(x\in F^{\times}\), such that \(\pi_{N}\) is a subrepresentation of \(\mathcal{I}(\mu)\); we say that \(\Pi_{N}\) is _semi-stably \(v\)-ordinary_ if furthermore the \(\mu_{i}\) are all unramified. We say that \(\Pi\) is (semi-stably) \(v\)-ordinary if both \(\Pi_{n}\) and \(\Pi_{n+1}\) are.

_Remark 4.3_.: Note that if \(\pi_{N}\) satisfies the property in Definition 4.2, then so does \(\pi_{N}^{\vee}\) with respect to the tuple \(\tilde{\mu}\coloneqq(|\ |_{F}^{1-N}\mu_{N}^{-1},\ldots,|\ |_{F}^{1-N}\mu_{1}^{-1})\).

For \(N=n,n+1\) and an element \(x\in O_{F}\cap F^{\times}\), we put \([x]_{N}\coloneqq\operatorname{diag}(x^{N-1},\ldots,x,1)\in\operatorname{GL}_{N}(F)\) and define an operator \(\Psi_{N}^{x}\) on \(\pi_{N}^{U_{N}(O_{F})}\) as
\[\Psi_{N}^{x}\coloneqq\sum_{u\in U_{N}(O_{F})/(U_{N}(O_{F})\cap[x]_{N}U_{N}(O_{F})[x]_{N}^{-1})}\pi_{N}(u[x]_{N}).\]

**Lemma 4.4**.: _Suppose that \(\Pi_{N}\) is \(v\)-ordinary for some \(N\in\{n,n+1\}\). There exists a unique up to scalar nonzero element \(\varphi_{N}\in\pi_{N}^{U_{N}(O_{F})}\) such that_
\[\Psi_{N}^{x}\varphi_{N}=\left(\prod_{m=1}^{N-1}\prod_{i=1}^{m}|x|_{F}^{i-1}\mu_{i}(x)\right)\varphi_{N}\]
_holds for every \(x\in O_{F}\cap F^{\times}\). Here, \(\mu\) is the tuple of characters in Definition 4.2._

Proof.: Since \(\mu_{1}\boxtimes\cdots\boxtimes\mu_{N}\) is a regular character of \(B_{N}(F)/U_{N}(F)\), the Jacquet module (with respect to \(B_{N}\)) \(\mathcal{I}(\mu)_{B_{N}}\) of \(\mathcal{I}(\mu)\) is a direct sum of \(N!\) distinct characters of \(B_{N}(F)/U_{N}(F)\). By [11, Corollary 5.5], it suffices to show that \((\pi_{N})_{B_{N}}\) contains the character \(\mu^{\natural}\), which follows from the Frobenius reciprocity law. The lemma is proved.

**Definition 4.5**.: We call the one-dimensional \(\mathbb{L}\)-subspace of \(\pi_{N}^{U_{N}(O_{F})}\) generated by \(\varphi_{N}\) in Lemma 4.4 the _ordinary line_ of \(\pi_{N}\), and a nonzero element of it an _ordinary vector_. When \(\Pi\) is \(v\)-ordinary, we have the obvious notion of ordinary line and ordinary vectors, which are contained in \(\pi^{U(O_{F})}=\pi_{n}^{U_{n}(O_{F})}\otimes_{\mathbb{L}}\pi_{n+1}^{U_{n+1}(O_{F})}\).

_Remark 4.6_.: The notion of being (semi-stably) \(v\)-ordinary and the characters in Definition 4.2 are intrinsic, namely, they do not depend on the choice of the standard isomorphism \(G\simeq\operatorname{GL}_{n,F}\times\operatorname{GL}_{n+1,F}\). However, the notion of ordinary line and vectors clearly depends on such a choice.

**Lemma 4.7**.: _Suppose that \(\Pi_{N}\) is \(v\)-ordinary for some \(N\in\{n,n+1\}\). For ordinary vectors \(\varphi_{N}\in\pi_{N}^{U_{N}(O_{F})}\) and \(\varphi_{N}^{\vee}\in(\pi_{N}^{\vee})^{U_{N}(O_{F})}\), we have \(\langle\varphi_{N}^{\vee},\varphi_{N}\rangle_{\pi_{N}}\neq 0\)._

Proof.: Fix a uniformizer \(\varpi\) of \(F\). Denote by \((\pi_{N})^{\rm ss}\) the invertible part of \(\pi_{N}^{U_{N}(O_{F})}\) with respect to the operator \(\Psi_{N}^{\varpi}\) (see [11, §5.2]). Then \(\langle\,,\,\rangle_{\pi_{N}}\) restricts to a perfect pairing on \((\pi_{N}^{\vee})^{\rm ss}\times(\pi_{N})^{\rm ss}\).
Moreover, by [11, Proposition 5.4 & Corollary 5.5], there exists a group \(\mathfrak{S}\) consisting of permutations on \(\{1,\ldots,N\}\) such that \((\pi_{N})^{\rm ss}\) and \((\pi_{N}^{\vee})^{\rm ss}\) are spanned by \(\{\varphi_{N,\sigma}\mid\sigma\in\mathfrak{S}\}\) and \(\{\varphi_{N,\sigma}^{\vee}\mid\sigma\in\mathfrak{S}\}\), in which \(\varphi_{N,\sigma}\) and \(\varphi_{N,\sigma}^{\vee}\) are nonzero eigenvectors of \(\Psi_{N}^{\varpi}\) with eigenvalues
\[\prod_{m=1}^{N-1}\prod_{i=1}^{m}|\varpi|_{F}^{i-1}\mu_{\sigma(i)}(\varpi),\quad\prod_{m=1}^{N-1}\prod_{i=1}^{m}|\varpi|_{F}^{i-1}\tilde{\mu}_{\sigma(i)}(\varpi),\]
respectively. Now since \(\langle\Psi_{N}^{\varpi}\,\cdot\,,\,\cdot\,\rangle_{\pi_{N}}=\langle\,\cdot\,,\varpi^{1-N}\Psi_{N}^{\varpi}\,\cdot\,\rangle_{\pi_{N}}\), it follows that \(\langle\varphi_{N,\sigma_{2}}^{\vee},\varphi_{N,\sigma_{1}}\rangle_{\pi_{N}}=0\) if \(\sigma_{1}\neq\sigma_{2}\). In particular, \(\langle\varphi_{N,1}^{\vee},\varphi_{N,1}\rangle_{\pi_{N}}\neq 0\), which proves the lemma.

**Lemma 4.8**.: _Suppose that \(\Pi_{N}\) is \(v\)-ordinary for some \(N\in\{n,n+1\}\). The following are equivalent:_

1. \(\Pi_{N}\) _is semi-stably_ \(v\)_-ordinary._
2. _The ordinary line is contained in_ \(\pi_{N}^{I_{N}^{(r)}}\) _for some positive integer_ \(r\)_._
3. _The ordinary line is contained in_ \(\pi_{N}^{I_{N}^{(1)}}\)_._

Proof.: The implication (3) \(\Rightarrow\) (2) is trivial. For (2) \(\Rightarrow\) (1), let \(\varphi_{N}\) be an ordinary vector fixed by \(I_{N}^{(r)}\). Then it is also an eigenvector of the operators \(\mathbb{V}_{N,m}^{\varpi}\) in [10, Lemma 3.2] for \(0\leqslant m\leqslant N\) (with respect to a uniformizer \(\varpi\)) with the eigenvalue \(\prod_{i=1}^{m}|\varpi|_{F}^{i-1}\mu_{i}(\varpi)\). Now since \(\varphi_{N}\) is fixed by \(I_{N}^{(r)}\), the eigenvalues are independent of the choice of \(\varpi\), which implies that \(\mu_{1},\mu_{1}\mu_{2},\ldots,\mu_{1}\cdots\mu_{N}\) are all unramified characters, that is, \(\Pi_{N}\) is semi-stably \(v\)-ordinary. For (1) \(\Rightarrow\) (3), note that the above discussion also shows (1) \(\Rightarrow\) (2), namely, the ordinary line is contained in \(\pi_{N}^{I_{N}^{(r)}}\) for some \(r\geqslant 1\). Then the statement follows from [10, Proposition 3.4] by noting that elements in \(\pi_{N}^{I_{N}^{(r)}}\) that are annihilated by the Hecke polynomial of \(\pi_{N}\) are already contained in \(\pi_{N}^{I_{N}^{(1)}}\).

**Notation 4.9**.: Suppose that \(\Pi\) is \(v\)-ordinary. Let \(\mu\) and \(\nu\) be the two tuples of characters for \(\pi_{n}\) and \(\pi_{n+1}\) in Definition 4.2, respectively. For every \(x\in O_{F}\cap F^{\times}\), put
\[\lambda^{x}(\pi)\coloneqq\left(\prod_{m=1}^{n}\prod_{i=1}^{m}|x|_{F}^{i-1}\mu_{i}(x)\right)\left(\prod_{m=1}^{n}\prod_{i=1}^{m}|x|_{F}^{i-1}\nu_{i}(x)\right)\in\mathbb{L}^{\circ\times},\]
and similarly for \(\lambda^{x}(\pi^{\vee})\) (see Remark 4.3). When \(\Pi\) is semi-stably \(v\)-ordinary, we simply put \(\lambda(\pi)\coloneqq\lambda^{\varpi}(\pi)\) and \(\lambda(\pi^{\vee})\coloneqq\lambda^{\varpi}(\pi^{\vee})\) for some hence every uniformizer \(\varpi\) of \(F\), and put \(\lambda(\Pi)\coloneqq\lambda(\pi)\lambda(\pi^{\vee})\).

**Notation 4.10**.: We introduce two more notations.

1. Put
\[\xi\coloneqq\left(\begin{array}{cccc}&&1&1\\ &\iddots&&\vdots\\ 1&&&1\\ 0&\dots&0&1\end{array}\right)\in\operatorname{GL}_{n+1}(\mathbb{Z})\]
as the element introduced at the top of [10, Page 23].

2.
For \(x\in O_{F}\cap F^{\times}\), put \([x]\coloneqq x[x]_{n}\in\operatorname{GL}_{n}(F)\), regarded as an element of \(H(F)\) (in particular, as an element of \(G(F)\), \([x]=(x[x]_{n},[x]_{n+1})\)).

**Lemma 4.11**.: _For every positive integer \(f\), put \(K_{H}^{(f)}\coloneqq((1_{n},\xi)\cdot[\varpi^{f}])I^{(1)}((1_{n},\xi)\cdot[\varpi^{f}])^{-1}\cap H(F)\). Then_

1. \(\det(K_{H}^{(f)})=1+\mathfrak{p}^{f}\)_;_
2. \(K_{H}^{(f+1)}\) _is contained in_ \(K_{H}^{(f)}\) _of index_ \(q^{\frac{n(n+1)(2n+1)}{6}}\)_._

Proof.: It is clear that \(K_{H}^{(f)}\) does not depend on the choice of \(\varpi\). Let \(h=(h_{ij})_{1\leqslant i,j\leqslant n}\in H(F)\) be an element written in the matrix form. Then a straightforward computation shows that the element \(((1_{n},\xi)\cdot[\varpi^{f}])^{-1}h((1_{n},\xi)\cdot[\varpi^{f}])\) belongs to \(I^{(1)}\) if and only if the following are satisfied:

* \(h_{ij}\in\mathfrak{p}^{|i-j|f}\) for \(i\neq j\);
* \(h_{ii}+\dots+h_{in}-1\in\mathfrak{p}^{if}\) for \(1\leqslant i\leqslant n\).

Then (1) follows directly. For (2), the inclusion is clear; while for the index, the two off-diagonal families of congruences and the row-sum congruences contribute the three factors in
\[[K_{H}^{(f)}:K_{H}^{(f+1)}]=q^{\sum_{i=1}^{n-1}i(n-i)}\cdot q^{\sum_{i=1}^{n-1}i(n-i)}\cdot q^{\sum_{i=1}^{n}i}=q^{\frac{n(n+1)(2n+1)}{6}}.\]
The lemma is proved.

**Proposition 4.12**.: _Suppose that \(\Pi\) is semi-stably \(v\)-ordinary. There exists a unique element \(\gamma(\Pi_{v})\in\mathbb{L}^{\times}\) depending only on \(\Pi_{v}\) such that for every pair of ordinary vectors \(\varphi\in\pi,\varphi^{\vee}\in\pi^{\vee}\), every positive integer \(f\), and every finite character \(\chi\colon E^{\times-}\to\mathbb{L}_{\chi}^{\times}\) of conductor \(\mathfrak{p}^{f}\),4_

Footnote 4: We adopt the convention that when \(q=2\), there is no character of conductor \(\mathfrak{p}\). In particular, \(\chi\) must be ramified here.

\[\frac{\gamma(\Pi_{v})}{\langle\varphi^{\vee},\varphi\rangle_{\pi}}\cdot\alpha^{\chi}\left(\pi((1_{n},\xi)\cdot[\varpi^{f}])\varphi,\pi^{\vee}((1_{n},\xi)\cdot[\varpi^{f}])\varphi^{\vee}\right)=\left(q^{-\frac{n(n+1)(2n+1)}{6}}\right)^{f}\frac{\Delta_{n+1}}{L(1,\Pi_{n},\operatorname{As}^{(-1)^{n}})L(1,\Pi_{n+1},\operatorname{As}^{(-1)^{n+1}})},\]
_where we adopt the Haar measure on \(H(F)\) that gives a hyperspecial maximal subgroup volume \(1\)._

Proof.: Choose an embedding \(\iota\colon\mathbb{L}\to\mathbb{C}\) and an additive character \(\psi_{F}\colon F\to\mathbb{C}^{\times}\) of conductor \(O_{F}\). Let \(\psi\colon U(F)\to\mathbb{C}^{\times}\) be the character sending \(((u_{ij})_{1\leqslant i,j\leqslant n},(v_{ij})_{1\leqslant i,j\leqslant n+1})\in U_{n}(F)\times U_{n+1}(F)=U(F)\) to \(\psi_{F}(u_{12}+\cdots+u_{n-1\,n}-v_{12}-\cdots-v_{n\,n+1})\), which is a generic character trivial on \((U\cap H)(F)\). Choose \(G(F)\)-equivariant isomorphisms \(i_{1}\colon\pi\otimes_{\mathbb{L},\iota}\mathbb{C}\xrightarrow{\sim}\mathcal{W}(\iota\pi)_{\psi}\) and \(i_{2}\colon\pi^{\vee}\otimes_{\mathbb{L},\iota}\mathbb{C}\xrightarrow{\sim}\mathcal{W}(\iota\pi^{\vee})_{\psi^{-1}}\) such that \(\iota\langle\varphi^{\vee},\varphi\rangle_{\pi}=\vartheta(i_{2}\varphi^{\vee},i_{1}\varphi)\) holds for all \(\varphi\in\pi\) and \(\varphi^{\vee}\in\pi^{\vee}\).
By Lemma 2.9, there exists a constant \(c\in\mathbb{Q}^{\times}\), depending only on certain rational Haar measures, such that for every finite character \(\chi\colon E^{\times-}\to\mathbb{C}^{\times}\),
\[\alpha^{\chi}(\varphi,\varphi^{\vee})=c\left(\int_{(U\cap H)(F)\backslash H(F)}(i_{1}\varphi)(h)\cdot\chi(\det h)\,\mathrm{d}h\right)\left(\int_{(U\cap H)(F)\backslash H(F)}(i_{2}\varphi^{\vee})(h)\cdot\chi(\det h)^{-1}\,\mathrm{d}h\right)\]
for every \(\varphi\in\pi\) and \(\varphi^{\vee}\in\pi^{\vee}\). Apply this formula to \(\pi((1_{n},\xi)\cdot[\varpi^{f}])\varphi\) and \(\pi^{\vee}((1_{n},\xi)\cdot[\varpi^{f}])\varphi^{\vee}\) with \(\varphi\) and \(\varphi^{\vee}\) ordinary vectors, when \(\chi\) has conductor \(\mathfrak{p}^{f}\). By [1, Corollary 2.8], we obtain
\[\alpha^{\chi}\left(\pi((1_{n},\xi)\cdot[\varpi^{f}])\varphi,\pi^{\vee}((1_{n},\xi)\cdot[\varpi^{f}])\varphi^{\vee}\right)=c\prod_{i=1}^{n}(1-q^{-i})^{-2}\cdot\left(q^{-\frac{n(n+1)(n+2)}{3}}\right)^{f}\cdot G_{\psi_{F}}(\chi)^{\frac{n(n+1)}{2}}G_{\psi_{F}^{-1}}(\chi^{-1})^{\frac{n(n+1)}{2}}\cdot(i_{1}\varphi)(1)\cdot(i_{2}\varphi^{\vee})(1), \tag{4.1}\]
where \(G_{\psi_{F}^{\pm 1}}(\chi^{\pm 1})\) denotes the Gauss sum (the one in [1, §2]) of \(\chi^{\pm 1}\) with respect to \(\psi_{F}^{\pm 1}\). As it is well known that \(G_{\psi_{F}}(\chi)G_{\psi_{F}^{-1}}(\chi^{-1})=q^{f}\), we have
\[(4.1)=c\prod_{i=1}^{n}(1-q^{-i})^{-2}\cdot\left(q^{-\frac{n(n+1)(2n+1)}{6}}\right)^{f}\cdot(i_{1}\varphi)(1)\cdot(i_{2}\varphi^{\vee})(1).\]
By [1, Proposition 4.12], we have \((i_{1}\varphi)(1)\cdot(i_{2}\varphi^{\vee})(1)\neq 0\). Put
\[c^{\prime}\coloneqq\frac{\prod_{i=1}^{n}(1-q^{-i})^{2}\cdot\langle\varphi^{\vee},\varphi\rangle_{\pi}}{c\cdot(i_{1}\varphi)(1)\cdot(i_{2}\varphi^{\vee})(1)},\]
which belongs to \(\mathbb{C}^{\times}\) by Lemma 4.7, depending only on \(\Pi_{v}\) and certain Haar measures. Then we have
\[\frac{c^{\prime}}{\langle\varphi^{\vee},\varphi\rangle_{\pi}}\cdot\alpha^{\chi}\left(\pi((1_{n},\xi)\cdot[\varpi^{f}])\varphi,\pi^{\vee}((1_{n},\xi)\cdot[\varpi^{f}])\varphi^{\vee}\right)=\left(q^{-\frac{n(n+1)(2n+1)}{6}}\right)^{f}\]
for every \(f\geqslant 1\) and every finite character \(\chi\colon E^{\times-}\to\mathbb{C}^{\times}\) of conductor \(\mathfrak{p}^{f}\). Now it is clear that \(c^{\prime}\) depends only on \(\Pi_{v}\). Finally, we may choose a ramified character \(\chi\) that takes values in \(\iota\mathbb{L}^{\times}\), which implies that \(c^{\prime}\in\iota\mathbb{L}^{\times}\). The proposition follows by taking
\[\gamma(\Pi_{v})\coloneqq(\iota^{-1}c^{\prime})\frac{\Delta_{n+1}}{L(1,\Pi_{n},\operatorname{As}^{(-1)^{n}})L(1,\Pi_{n+1},\operatorname{As}^{(-1)^{n+1}})}\in\mathbb{L}^{\times}.\]

## 5. Coherent anticyclotomic \(p\)-adic \(L\)-function

In this section, we study the case where \(\Pi=\Pi_{n}\boxtimes\Pi_{n+1}\) is coherent. In this case, there exists a totally positive definite hermitian space \(V_{n}\) over \(E\) of rank \(n\), unique up to isomorphism, such that \(V_{n,v}\) is the prescribed hermitian space from Section 2 for every \(v\in\mathbb{V}_{F}^{\mathrm{fin}}\). Put \(V_{n+1}\coloneqq V_{n}\oplus E\cdot\mathbf{e}\). Put
\[G\coloneqq\operatorname{U}(V_{n})\times\operatorname{U}(V_{n+1}),\quad\pi\coloneqq\otimes_{v\in\mathbb{V}_{F}^{\mathrm{fin}}}\pi_{v},\]
which is an irreducible admissible representation of \(G(\mathbb{A}_{F}^{\infty})\) with coefficients in \(\mathbb{L}\).
Put
\[\mathbb{V}_{\Pi}\coloneqq\operatorname{Hom}_{G(\mathbb{A}_{F}^{\infty})}\left(\pi,\mathcal{S}\left(G(F)\backslash G(\mathbb{A}_{F}^{\infty}),\mathbb{L}\right)\right),\quad\mathbb{V}_{\Pi^{\vee}}\coloneqq\operatorname{Hom}_{G(\mathbb{A}_{F}^{\infty})}\left(\pi^{\vee},\mathcal{S}\left(G(F)\backslash G(\mathbb{A}_{F}^{\infty}),\mathbb{L}\right)\right).\]
Put \(\mathscr{L}_{\mathsf{P}}^{0}(\Pi)\coloneqq 0\) when \(\mathbb{V}_{\Pi}=0\). Now we assume \(\mathbb{V}_{\Pi}\neq 0\), hence \(\mathbb{V}_{\Pi^{\vee}}\neq 0\). Then by Arthur's multiplicity formula [16, 17], we have \(\dim_{\mathbb{L}}\mathbb{V}_{\Pi}=\dim_{\mathbb{L}}\mathbb{V}_{\Pi^{\vee}}=1\). Denote by \(\mathcal{V},\mathcal{V}^{\vee}\subseteq\mathcal{S}\left(G(F)\backslash G(\mathbb{A}_{F}^{\infty}),\mathbb{L}\right)\) the unique irreducible \(\mathbb{L}[G(\mathbb{A}_{F}^{\infty})]\)-submodules that are isomorphic to \(\pi\) and \(\pi^{\vee}\), respectively. Using the Petersson inner product with respect to the Tamagawa measure of \(G(\mathbb{A}_{F})\), we have a canonical isomorphism \(\pi\otimes_{\mathbb{L}}\pi^{\vee}\simeq\mathcal{V}\otimes_{\mathbb{L}}\mathcal{V}^{\vee}\).

In what follows, we assume that \(\Pi\) is semi-stably \(v\)-ordinary for every \(v\in\mathsf{P}\). Denote by \(H\subseteq G\) the graph of the natural embedding \(\operatorname{U}(V_{n})\hookrightarrow\operatorname{U}(V_{n+1})\), and fix a decomposition \(\mathrm{d}h=\mathrm{d}h_{\infty}\cdot\mathrm{d}h_{\mathsf{P}}\cdot\mathrm{d}h^{\infty,\mathsf{P}}\) of the Tamagawa measure on \(H(\mathbb{A}_{F})\) such that the volume of \(H(F_{\infty})\) under \(\mathrm{d}h_{\infty}\) is \(1\) and the volume of every hyperspecial maximal subgroup of \(H(F_{\mathsf{P}})\) under \(\mathrm{d}h_{\mathsf{P}}\) is \(1\) (so that the measure \(\mathrm{d}h^{\infty,\mathsf{P}}\) is rational). For every \(v\in\mathsf{P}\), fix a standard isomorphism \(G_{v}\simeq\operatorname{GL}_{n,F_{v}}\times\operatorname{GL}_{n+1,F_{v}}\) (Definition 4.1).

For every element \(\varphi=\otimes_{v\in\mathbb{V}_{F}^{\mathrm{fin}}}\varphi_{v}\in\mathcal{V}\) with \(\varphi_{v}\in\pi_{v}\) a (nonzero) ordinary vector for \(v\in\mathsf{P}\), we define a measure \(\mathscr{P}_{\varphi}\in\mathbb{L}[[\Gamma_{E,\mathsf{P}}^{-}]]^{\circ}\) as follows. For every subset \(\Omega\) of \(\Gamma_{E,\mathsf{P}}^{-}\), we denote by \(H_{\Omega}\) its inverse image under the determinant map \(H(F)\backslash H(\mathbb{A}_{F}^{\infty})\to\Gamma_{E,\mathsf{P}}^{-}\). For an \(f\)-cell \(\Omega\) of \(\Gamma_{E,\mathsf{P}}^{-}\) for some tuple \(f=(f_{v})_{v\in\mathsf{P}}\) of positive integers, we put
\[\mathscr{P}_{\varphi}(\Omega)\coloneqq\prod_{v\in\mathsf{P}}\left(\frac{q_{v}^{\frac{n(n+1)(2n+1)}{6}}}{\lambda(\pi_{v})}\right)^{f_{v}}\cdot\int_{H_{\Omega}}\varphi\left(h^{\infty}\cdot(1_{n},\xi)\cdot[\varpi^{f}]\right)\mathrm{d}h^{\infty},\]
in which \((1_{n},\xi)\in\operatorname{GL}_{n}(F_{\mathsf{P}})\times\operatorname{GL}_{n+1}(F_{\mathsf{P}})=G(F_{\mathsf{P}})\) and \([\varpi^{f}]\in\operatorname{GL}_{n}(F_{\mathsf{P}})=H(F_{\mathsf{P}})\). It is clear that the above integral is a finite sum of elements in \(\mathbb{L}\) and is independent of the choice of \(\varpi\) by Lemma 4.8.
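The shape of the normalizing factor can be read off from the results of Section 4 (a heuristic consistency check only; the actual additivity argument is the one cited in the proof of Proposition 5.1 below): passing from \(f_{v}\) to \(f_{v}+1\) shrinks the relevant open compact subgroup of \(H(F_{v})\) by index \(q_{v}^{\frac{n(n+1)(2n+1)}{6}}\) (Lemma 4.11(2)), while summing the corresponding translates of the ordinary vector reproduces the eigenvalue \(\lambda(\pi_{v})\) (Lemma 4.4), so the two effects cancel against one extra power of the prefactor:
\[\left(\frac{q_{v}^{\frac{n(n+1)(2n+1)}{6}}}{\lambda(\pi_{v})}\right)^{f_{v}+1}\cdot\lambda(\pi_{v})\cdot q_{v}^{-\frac{n(n+1)(2n+1)}{6}}=\left(\frac{q_{v}^{\frac{n(n+1)(2n+1)}{6}}}{\lambda(\pi_{v})}\right)^{f_{v}}.\]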
**Proposition 5.1**.: _The assignment \(\Omega\mapsto\mathscr{P}_{\varphi}(\Omega)\) defines a measure \(\mathscr{P}_{\varphi}\) on \(\Gamma_{E,\mathsf{P}}^{-}\) valued in \(\mathbb{L}\); in other words, \(\mathscr{P}_{\varphi}\) is an element of \(\mathbb{L}[[\Gamma_{E,\mathsf{P}}^{-}]]^{\circ}\)._

Proof.: The additivity of \(\mathscr{P}_{\varphi}\) follows from the same argument as in the proof of [12, Theorem 4.4] (with a translation by \([\varpi^{f}]\)). The boundedness of \(\mathscr{P}_{\varphi}\) follows from Lemma 4.11(2) and the fact that \(\varphi\) is bounded.

**Theorem 5.2**.: _There exists a unique measure \(\mathscr{L}^{0}_{\mathsf{P}}(\Pi)\in\mathbb{L}[[\Gamma^{-}_{E,\mathsf{P}}]]^{\circ}\) such that for every finite character \(\chi\colon\Gamma^{-}_{E,\mathsf{P}}\to\mathbb{L}^{\times}_{\chi}\) of conductor \(\prod_{v}\mathfrak{p}_{v}^{f_{v}}\) for a tuple \(f=(f_{v})_{v\in\mathsf{P}}\) of positive integers indexed by \(\mathsf{P}\) and every embedding \(\iota\colon\mathbb{L}_{\chi}\to\mathbb{C}\), we have_
\[\iota\mathscr{L}^{0}_{\mathsf{P}}(\Pi)(\chi)=\prod_{v\in\mathsf{P}}\left(\frac{q_{v}^{\frac{n(n+1)(2n+1)}{6}}}{\iota\lambda(\Pi_{v})}\right)^{f_{v}}\cdot\frac{\Delta_{n+1}\cdot L(\frac{1}{2},\iota(\Pi_{n}\otimes\widetilde{\chi})\times\iota\Pi_{n+1})}{2^{d(\Pi_{n})+d(\Pi_{n+1})}\cdot L(1,\iota\Pi_{n},\operatorname{As}^{(-1)^{n}})L(1,\iota\Pi_{n+1},\operatorname{As}^{(-1)^{n+1}})}.\]
_Here, \(d(\Pi_{N})\) for \(N=n,n+1\) is introduced in Remark 2.2(4).5_

Footnote 5: Binyong Sun has informed the author that he has an approach to evaluate \(\mathscr{L}^{0}_{\mathsf{P}}(\Pi)\) at the remaining finite order characters.

The theorem holds even when \(\mathbb{V}_{\Pi}=0\), since in this case \(L(\frac{1}{2},\iota(\Pi_{n}\otimes\widetilde{\chi})\times\iota\Pi_{n+1})\) always vanishes; see Remark 5.5.

Proof.: The uniqueness is clear. Now we show the existence. Choose a pair \(\varphi=\otimes_{v\in\mathbb{V}_{F}^{\mathrm{fin}}}\varphi_{v}\in\mathcal{V}\) and \(\varphi^{\vee}=\otimes_{v\in\mathbb{V}_{F}^{\mathrm{fin}}}\varphi_{v}^{\vee}\in\mathcal{V}^{\vee}\) with \(\varphi_{v}\in\pi_{v}\) and \(\varphi_{v}^{\vee}\in\pi_{v}^{\vee}\) satisfying

(T1) for \(v\in\mathsf{P}\), both \(\varphi_{v}\) and \(\varphi_{v}^{\vee}\) are (nonzero) ordinary vectors;
(T2) for \(v\in\mathbb{V}_{F}^{\mathrm{spl}}\setminus\mathsf{P}\), the pair \((\varphi_{v},\varphi_{v}^{\vee})\) satisfies the conclusion of Proposition 2.10;
(T3) for \(v\in\mathbb{V}_{F}^{\mathrm{fin}}\setminus\mathbb{V}_{F}^{\mathrm{spl}}\), \(\alpha(\varphi_{v},\varphi_{v}^{\vee})\neq 0\).

By Proposition 2.10 and Lemma 2.8, such a choice is possible. Put
\[\mathscr{L}^{0}_{\mathsf{P}}(\Pi)\coloneqq\prod_{v\in\mathsf{P}}\frac{\gamma(\Pi_{v})}{\langle\varphi_{v}^{\vee},\varphi_{v}\rangle_{\pi_{v}}}\cdot\prod_{v\in\mathbb{V}_{F}^{\mathrm{fin}}\setminus\mathbb{V}_{F}^{\mathrm{spl}}}\frac{\alpha(\varphi_{v},\varphi_{v}^{\vee})^{-1}\cdot\Delta_{n+1,v}\cdot L(\frac{1}{2},\Pi_{n,v}\times\Pi_{n+1,v})}{L(1,\Pi_{n,v},\operatorname{As}^{(-1)^{n}})L(1,\Pi_{n+1,v},\operatorname{As}^{(-1)^{n+1}})}\times\mathscr{P}_{\varphi}\mathscr{P}_{\varphi^{\vee}}^{\vee}, \tag{5.1}\]
where \(\gamma(\Pi_{v})\in\mathbb{L}^{\times}\) is the constant in Proposition 4.12. Note that by Lemma 2.8, Lemma 4.7, and (T3), the above expression makes sense. To show the interpolation property, we suppress \(\iota\) from the notation. In particular, all representations have coefficients in \(\mathbb{C}\).
By (5.1), for \(\chi\) as in the theorem, we have

\[\mathcal{L}^{0}_{\mathrm{P}}(\Pi)(\chi)=\prod_{v\in\mathrm{P}}\frac{\gamma(\Pi_{v})}{\langle\varphi_{v}^{\vee},\varphi_{v}\rangle_{\pi_{v}}}\left(\frac{q_{v}^{\frac{n(n+1)(2n+1)}{3}}}{\lambda(\Pi_{v})}\right)^{f_{v}}\cdot\prod_{v\in\mathbb{V}_{F}^{\mathrm{fin}}\setminus\mathbb{V}_{F}^{\mathrm{spl}}}\frac{\alpha(\varphi_{v},\varphi_{v}^{\vee})^{-1}\cdot\Delta_{n+1,v}\cdot L(\frac{1}{2},\Pi_{n,v}\times\Pi_{n+1,v})}{L(1,\Pi_{n,v},\mathrm{As}^{(-1)^{n}})L(1,\Pi_{n+1,v},\mathrm{As}^{(-1)^{n+1}})}\] \[\times\int_{H(F)\backslash H(\mathbb{A}_{F}^{\infty})}\varphi\left(h^{\infty}\cdot(1_{n},\xi)\cdot[\varpi^{f}]\right)\chi(\det h^{\infty})\,\mathrm{d}h^{\infty}\int_{H(F)\backslash H(\mathbb{A}_{F}^{\infty})}\varphi^{\vee}\left(h^{\infty}\cdot(1_{n},\xi)\cdot[\varpi^{f}]\right)\chi(\det h^{\infty})^{-1}\,\mathrm{d}h^{\infty}.\]

Now we apply the refined Gan-Gross-Prasad conjecture (the Ichino-Ikeda conjecture), which has been fully proved for \(G\) in [1, 10, 11]. Let \(\mathrm{S}\) be a subset of \(\mathbb{V}_{F}^{\mathrm{fin}}\) containing \(\mathrm{P}\) such that for \(v\in\mathbb{V}_{F}^{\mathrm{fin}}\setminus\mathrm{S}\), we have \[\alpha^{\chi_{v}}(\varphi_{v},\varphi_{v}^{\vee})=\frac{\Delta_{n+1,v}\cdot L(\frac{1}{2},(\Pi_{n,v}\otimes\widetilde{\chi_{v}})\times\Pi_{n+1,v})}{L(1,\Pi_{n,v},\mathrm{As}^{(-1)^{n}})L(1,\Pi_{n+1,v},\mathrm{As}^{(-1)^{n+1}})}.\] By the refined formula, we have \[\int_{H(F)\backslash H(\mathbb{A}^{\infty}_{F})}\varphi\left(h^{\infty}\cdot(1_{n},\xi)\cdot[\varpi^{f}]\right)\chi(\det h^{\infty})\,\mathrm{d}h^{\infty}\int_{H(F)\backslash H(\mathbb{A}^{\infty}_{F})}\varphi^{\vee}\left(h^{\infty}\cdot(1_{n},\xi)\cdot[\varpi^{f}]\right)\chi(\det h^{\infty})^{-1}\,\mathrm{d}h^{\infty}\] \[=\frac{1}{2^{d(\Pi_{n})+d(\Pi_{n+1})}}\frac{\Delta^{\mathrm{S}}_{n+1}\cdot L(\frac{1}{2},(\Pi_{n}\otimes\widetilde{\chi})^{\mathrm{S}}\times\Pi_{n+1}^{\mathrm{S}})}{L(1,\Pi_{n}^{\mathrm{S}},\mathrm{As}^{(-1)^{n}})L(1,\Pi_{n+1}^{\mathrm{S}},\mathrm{As}^{(-1)^{n+1}})}\prod_{v\in\mathrm{S}}\alpha^{\chi_{v}}(\varphi_{v},\varphi_{v}^{\vee}).\]

Plugging in and by (T2), we have \[\mathcal{L}^{0}_{\mathrm{P}}(\Pi)(\chi)=\frac{1}{2^{d(\Pi_{n})+d(\Pi_{n+1})}}\frac{\Delta^{\mathrm{P}}_{n+1}\cdot L(\frac{1}{2},(\Pi_{n}\otimes\widetilde{\chi})^{\mathrm{P}}\times\Pi_{n+1}^{\mathrm{P}})}{L(1,\Pi_{n}^{\mathrm{P}},\mathrm{As}^{(-1)^{n}})L(1,\Pi_{n+1}^{\mathrm{P}},\mathrm{As}^{(-1)^{n+1}})}\prod_{v\in\mathrm{P}}\frac{\gamma(\Pi_{v})}{\langle\varphi_{v}^{\vee},\varphi_{v}\rangle_{\pi_{v}}}\left(\frac{q_{v}^{\frac{n(n+1)(2n+1)}{3}}}{\lambda(\Pi_{v})}\right)^{f_{v}}\alpha^{\chi_{v}}(\varphi_{v},\varphi_{v}^{\vee}).\]

Finally, by (T1) and Proposition 4.12, we have \[\mathcal{L}^{0}_{\mathrm{P}}(\Pi)(\chi)=\frac{1}{2^{d(\Pi_{n})+d(\Pi_{n+1})}}\frac{\Delta_{n+1}\cdot L(\frac{1}{2},(\Pi_{n}\otimes\widetilde{\chi})^{\mathrm{P}}\times\Pi_{n+1}^{\mathrm{P}})}{L(1,\Pi_{n},\mathrm{As}^{(-1)^{n}})L(1,\Pi_{n+1},\mathrm{As}^{(-1)^{n+1}})}\prod_{v\in\mathrm{P}}\left(\frac{q_{v}^{\frac{n(n+1)(2n+1)}{6}}}{\lambda(\Pi_{v})}\right)^{f_{v}}.\] The interpolation formula follows as \(L(\frac{1}{2},(\Pi_{n,v}\otimes\widetilde{\chi_{v}})\times\Pi_{n+1,v})=1\) for every \(v\in\mathrm{P}\).

We have the following corollary on the functional equation satisfied by \(\mathcal{L}^{0}_{\mathrm{P}}(\Pi)\).
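In the corollary, \((-)^{\vee}\) on measures denotes what is presumably the involution of \(\mathbb{L}[[\Gamma^{-}_{E,\mathrm{P}}]]^{\circ}\) induced by inversion on \(\Gamma^{-}_{E,\mathrm{P}}\), namely \[\mu^{\vee}(\Omega)\coloneqq\mu(\Omega^{-1}),\qquad\text{so that}\qquad\mu^{\vee}(\chi)=\mu(\chi^{-1})\quad\text{for every finite character }\chi\text{ of }\Gamma^{-}_{E,\mathrm{P}};\] this is the same convention under which the factor \(\mathscr{P}^{\vee}_{\varphi^{\vee}}\) appears in (5.1).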
**Corollary 5.3**.: _We have_ \[\mathcal{L}^{0}_{\mathrm{P}}(\Pi)^{\vee}=\mathcal{L}^{0}_{\mathrm{P}}(\Pi^{\vee}).\]

Proof.: It suffices to show that for every finite character \(\chi\colon\Gamma_{E,\mathrm{P}}^{-}\to\mathbb{L}_{\chi}^{\times}\) of conductor \(\prod_{v\in\mathrm{P}}\mathfrak{p}_{v}^{f_{v}}\) for a tuple \(f=(f_{v})_{v\in\mathrm{P}}\) of positive integers indexed by \(\mathrm{P}\) and every embedding \(\iota\colon\mathbb{L}_{\chi}\to\mathbb{C}\), we have \(\iota\mathcal{L}^{0}_{\mathrm{P}}(\Pi)(\chi)=\iota\mathcal{L}^{0}_{\mathrm{P}}(\Pi^{\vee})(\chi^{-1})\). By Theorem 5.2, this follows from the facts that \(\lambda(\Pi_{v})=\lambda(\Pi_{v}^{\vee})\) for \(v\in\mathrm{P}\), that \(d(\Pi_{N})=d(\Pi_{N}^{\vee})\) for \(N=n,n+1\), that \(L(1,\iota\Pi_{N},\mathrm{As}^{(-1)^{N}})=L(1,\iota\Pi_{N}^{\vee},\mathrm{As}^{(-1)^{N}})\) for \(N=n,n+1\), and the classical functional equation \[L(\tfrac{1}{2},\iota(\Pi_{n}\otimes\widetilde{\chi})\times\iota\Pi_{n+1})=L(\tfrac{1}{2},\iota(\Pi_{n}^{\vee}\otimes\widetilde{\chi}^{-1})\times\iota\Pi_{n+1}^{\vee}).\]

At the end of this section, we propose the following nonvanishing conjecture.

**Conjecture 5.4**.: _The measure \(\mathcal{L}^{0}_{\mathrm{P}}(\Pi)\) is nonzero as long as \(\mathbb{V}_{\Pi}\neq 0\). In particular, it is nonzero when \(d(\Pi_{n})=d(\Pi_{n+1})=1\) by the remark below._

_Remark 5.5_.: Take an embedding \(\iota\colon\mathbb{L}\to\mathbb{C}\). Then \(\epsilon(\frac{1}{2},\Pi_{n}^{(\iota)}\times\Pi_{n+1}^{(\iota)})\) equals \(\epsilon(\Pi)\), which we have assumed to be \(1\). Now \(\epsilon(\frac{1}{2},\Pi_{n}^{(\iota)}\times\Pi_{n+1}^{(\iota)})\) decomposes as the product of \(d(\Pi_{n})d(\Pi_{n+1})\) root numbers (valued in \(\{\pm 1\}\)) for their isobaric factors. Then by the discussion in [1, §26], \(\mathbb{V}_{\Pi}\neq 0\) if and only if all those root numbers are equal to \(1\).

## 6. Iwasawa Selmer group and height pairing

Choose an algebraic closure \(E^{\mathrm{ac}}\) of \(E\). Denote by \(E^{\mathrm{P}}\subseteq E^{\mathrm{ac}}\) the maximal abelian extension of \(E\) in \(E^{\mathrm{ac}}\) unramified outside \(\mathrm{P}_{E}\), so that \(\mathrm{Gal}(E^{\mathrm{P}}/E)=\Gamma_{E,\mathrm{P}_{E}}\) by global class field theory. For every tuple \(f=(f_{v})_{v\in\mathrm{P}}\) of positive integers indexed by \(\mathrm{P}\), we denote by \(E^{(f)}\) the subfield of \(E^{\mathrm{P}}\) fixed by the kernel of the composite homomorphism \[\Gamma_{E,\mathrm{P}_{E}}\xrightarrow{\operatorname{Nm}^{-}_{E/F}}\Gamma_{E,\mathrm{P}}^{-}\to\Gamma_{E,\mathrm{P}}^{-}/\mathfrak{U}_{f}.\] Let \(\mathbb{W}\) be a geometric Galois representation of \(E\) with coefficients in \(\mathbb{L}\).6 For every tuple \(f\) as above, we have the restricted Galois cohomology \(\operatorname{H}^{1}_{\Box}(E^{(f)},\mathbb{W})\) (for \(\Box\in\{\varnothing,\operatorname{fin},\operatorname{st},\dots\}\) applied at all \(p\)-adic places). For two tuples \(f=(f_{v})_{v\in\mathrm{P}}\) and \(f^{\prime}=(f_{v}^{\prime})_{v\in\mathrm{P}}\) satisfying \(f^{\prime}\geqslant f\) (that is, \(f_{v}^{\prime}\geqslant f_{v}\) for every \(v\in\mathrm{P}\)), we have the corestriction map

Footnote 6: Recall that this means the representation is continuous, unramified outside a finite set of nonarchimedean places of \(E\), and is de Rham at every \(p\)-adic place of \(E\).
\[\operatorname{Cor}_{f}^{f^{\prime}}\colon\operatorname{H}^{1}_{\Box}(E^{(f^{\prime})},\mathbb{W})\to\operatorname{H}^{1}_{\Box}(E^{(f)},\mathbb{W}).\] Define the Iwasawa \(\Box\)-Selmer group to be \[\operatorname{H}^{1}_{\Box}(E,\mathbb{W})_{\mathrm{P}}\coloneqq\varprojlim_{f}\operatorname{H}^{1}_{\Box}(E^{(f)},\mathbb{W}),\] where the transition maps are the corestriction maps; it is naturally an \(\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ}\)-module. Now for every \(\mathbb{L}^{\circ}\)-lattice \(\mathbb{W}^{\circ}\) of \(\mathbb{W}\) stable under \(\operatorname{Gal}(E^{\operatorname{ac}}/E)\), we define \(\operatorname{H}^{1}_{\Box}(E^{(f)},\mathbb{W}^{\circ})\) to be the inverse image of \(\operatorname{H}^{1}_{\Box}(E^{(f)},\mathbb{W})\) under the natural map \(\operatorname{H}^{1}(E^{(f)},\mathbb{W}^{\circ})\to\operatorname{H}^{1}(E^{(f)},\mathbb{W})\). Put \[\operatorname{H}^{1}_{\Box}(E,\mathbb{W}^{\circ})_{\mathrm{P}}\coloneqq\varprojlim_{f}\operatorname{H}^{1}_{\Box}(E^{(f)},\mathbb{W}^{\circ}),\] which maps naturally to \(\operatorname{H}^{1}_{\Box}(E,\mathbb{W})_{\mathrm{P}}\).

**Definition 6.1**.: The _bounded Iwasawa \(\Box\)-Selmer group_ \(\operatorname{H}^{1}_{\Box}(E,\mathbb{W})_{\mathrm{P}}^{\circ}\) is defined to be the union of the images of \(\operatorname{H}^{1}_{\Box}(E,\mathbb{W}^{\circ})_{\mathrm{P}}\) in \(\operatorname{H}^{1}_{\Box}(E,\mathbb{W})_{\mathrm{P}}\) for all possible \(\mathbb{W}^{\circ}\); it is an \(\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ}\)-submodule of \(\operatorname{H}^{1}_{\Box}(E,\mathbb{W})_{\mathrm{P}}\). When \(\Box=\operatorname{fin}\), we simply call the \(\Box\)-Selmer group the Selmer group.

It is clear that \(\operatorname{H}^{1}_{\Box}(E,\mathbb{W})_{\mathrm{P}}^{\circ}\) is a compact \(\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ}\)-module, hence we can talk about its characteristic ideal. We say that \(\mathbb{W}\) is _pure_ if it satisfies conditions (B,D) in [20, (2.1.2)]. It is clear that if \(\mathbb{W}\) is pure, then so is \(\mathbb{W}^{\vee}(1)\). Moreover, if \(\mathbb{W}\) is pure, then \(\operatorname{H}^{1}_{\operatorname{fin}}(E,\mathbb{W})_{\mathrm{P}}^{\circ}=\operatorname{H}^{1}_{\operatorname{st}}(E,\mathbb{W})_{\mathrm{P}}^{\circ}\) by [20, Proposition 1.24(2)].

_Example 6.2_.: For \(N=n,n+1\), we denote by \(\rho_{\Pi_{N}}\colon\operatorname{Gal}(E^{\operatorname{ac}}/E)\to\operatorname{GL}_{N}(\mathbb{L})\) the Galois representation associated with \(\Pi_{N}\) satisfying \(\rho_{\Pi_{N}}^{\vee}\simeq\rho_{\Pi_{N}}^{\mathrm{c}}(N-1)\).7 Let \(\mathbb{W}_{\Pi}\) be the \(\mathbb{L}[\operatorname{Gal}(E^{\operatorname{ac}}/E)]\)-module corresponding to the representation \(\rho_{\Pi_{n}}\otimes\rho_{\Pi_{n+1}}(n)\). We claim that \(\mathbb{W}_{\Pi}\) is pure. Indeed, by [1, Theorem 1.1 & Theorem 1.2], [1, Theorem 1.1], and [19, Lemma 1.4], we know that \(\rho_{\Pi_{n}}\otimes\rho_{\Pi_{n+1}}(n)\) is pure of weight \(-1\) (in the sense of [19]) at every \(u\in\mathbb{V}_{E}^{\operatorname{fin}}\), which implies that \(\mathbb{W}_{\Pi}\) is pure.

Footnote 7: Strictly speaking, \(\rho_{\Pi_{N}}\) can a priori only be defined over a finite extension \(\mathbb{L}^{\prime}\) of \(\mathbb{L}\) (although it has traces in \(\mathbb{L}\)).
However, in our later applications, \(\Pi_{N}\) will be \(v\)-ordinary for some \(v\in\mathrm{P}\), which implies that there exists an element in the inertia group at \(v\) whose characteristic polynomial under \(\rho_{\Pi_{N}}\) has distinct roots in \(\mathbb{L}\); it follows that \(\rho_{\Pi_{N}}\) can be defined over \(\mathbb{L}\) (by an argument used in the proof of [1, Proposition 3.2.5]).

**Conjecture 6.3** (Iwasawa's main conjecture in the coherent case).: _Suppose that \(\Pi\) is coherent and semi-stably \(v\)-ordinary for every \(v\in\mathrm{P}\). Then the characteristic ideal of \(\operatorname{H}^{1}_{\operatorname{fin}}(E,\mathbb{W}_{\Pi})_{\mathrm{P}}^{\circ}\) is generated by \(\mathcal{L}^{0}_{\mathrm{P}}(\Pi^{\vee})\). In particular, in view of Conjecture 5.4, \(\operatorname{H}^{1}_{\operatorname{fin}}(E,\mathbb{W}_{\Pi})_{\mathrm{P}}^{\circ}\) is a torsion \(\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ}\)-module if and only if \(\mathbb{V}_{\Pi}\neq 0\)._

At the end of this section, we use Nekovář's \(p\)-adic height pairing to produce an \(\mathbb{L}[[\Gamma^{-}_{E,\mathrm{P}}]]^{\circ}\)-sesquilinear (that is, linear in the first variable and adjoint linear in the second variable) pairing \[\mathbf{h}_{\mathrm{P}}^{\mathbb{W}}\colon\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{W})_{\mathrm{P}}^{\circ}\times\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{W}^{\vee}(1))_{\mathrm{P}}^{\circ}\to\Gamma_{E,\mathrm{P}_{E}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}[[\Gamma^{-}_{E,\mathrm{P}}]]^{\circ} \tag{6.1}\] when \(\mathbb{W}\) is pure. The construction requires a choice of splittings of the Hodge filtrations of \(\mathbb{W}\) at every place in \(\mathrm{P}_{E}\) (see [23, Theorem 2.2] for the notion of such a splitting). Since \(\mathbb{W}\) is geometric, we may choose a finite extension \(E^{\dagger}/E\) contained in \(E^{\mathrm{ac}}\) such that \(\mathbb{W}\) is semistable at every \(p\)-adic place of \(E^{\dagger}\). For every tuple \(f=(f_{v})_{v\in\mathrm{P}}\) of positive integers indexed by \(\mathrm{P}\), put \(E^{(f)\dagger}\coloneqq E^{(f)}\otimes_{E}E^{\dagger}\) and denote by \(\mathrm{P}^{(f)\dagger}\) the inverse image of \(\mathrm{P}\) with respect to the finite étale extension \(E^{(f)\dagger}/F\).8 Clearly, \(\mathbb{W}\) as an \(\mathbb{L}[\mathrm{Gal}(E^{\mathrm{ac}}/E^{(f)\dagger})]\)-module remains pure and is semistable at every \(p\)-adic place of \(E^{(f)\dagger}\); and the chosen splittings of the Hodge filtrations induce ones at every place in \(\mathrm{P}^{(f)\dagger}\). For \(\mathrm{Gal}(E^{\mathrm{ac}}/E)\)-stable \(\mathbb{L}^{\circ}\)-lattices \(\mathbb{W}_{1}\) and \(\mathbb{W}_{2}\) in \(\mathbb{W}\) and \(\mathbb{W}^{\vee}(1)\), respectively, we then have the \(p\)-adic height pairing

Footnote 8: We regard \(E^{(f)\dagger}\) as a finite product of finite extensions \(E^{\dagger}_{i}/E^{\dagger}\) contained in \(E^{\mathrm{ac}}\). Accordingly, in what follows, we understand \(\mathrm{Gal}(E^{\mathrm{ac}}/E^{(f)\dagger})\) as \(\prod_{i}\mathrm{Gal}(E^{\mathrm{ac}}/E^{\dagger}_{i})\), \(\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)\dagger},-)\) as \(\bigoplus_{i}\mathrm{H}^{1}_{\mathrm{fin}}(E^{\dagger}_{i},-)\), \(\Gamma_{E^{(f)\dagger},\mathrm{P}^{(f)\dagger}}\) as \(\bigoplus_{i}\Gamma_{E^{\dagger}_{i},\mathrm{P}^{(f)\dagger}_{i}}\), etc.

\[\mathrm{h}_{E^{(f)\dagger}}\colon\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)\dagger},\mathbb{W}_{1})\times\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)\dagger},\mathbb{W}_{2})\to\Gamma_{E^{(f)\dagger},\mathrm{P}^{(f)\dagger}}\otimes_{\mathbb{Z}_{p}}p^{\delta}\mathbb{L}^{\circ}\] for every tuple \(f\).
Define \[\mathrm{h}^{(f)}\colon\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)},\mathbb{W}_{1})\times\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)},\mathbb{W}_{2})\to\Gamma_{E,\mathrm{P}_{E}}\otimes_{\mathbb{Z}_{p}}p^{\delta}\mathbb{L}^{\circ}\] to be the composition of the restriction maps from \(\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)},-)\) to \(\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)\dagger},-)\), the pairing \(\mathrm{h}_{E^{(f)\dagger}}\), and the norm map \(\mathrm{Nm}_{E^{(f)\dagger}/E}\). We define \[\mathbf{h}^{(f)}\colon\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)},\mathbb{W}_{1})\times\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)},\mathbb{W}_{2})\to\Gamma_{E,\mathrm{P}_{E}}\otimes_{\mathbb{Z}_{p}}p^{\delta}\mathbb{L}^{\circ}[\Gamma^{-}_{E,\mathrm{P}}/\mathfrak{U}_{f}]\] to be the \(\mathbb{L}^{\circ}[\Gamma^{-}_{E,\mathrm{P}}/\mathfrak{U}_{f}]\)-sesquilinearization of \(\mathrm{h}^{(f)}\), namely, \[\mathbf{h}^{(f)}(x,y)=\sum_{\varsigma\in\mathrm{Gal}(E^{(f)}/E)}\mathrm{h}^{(f)}(\varsigma x,y)[\varsigma^{-1}]\] for every \((x,y)\in\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)},\mathbb{W}_{1})\times\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)},\mathbb{W}_{2})\). By the lemma below, we obtain the desired pairing (6.1) by considering all lattices \(\mathbb{W}_{1},\mathbb{W}_{2}\).

**Lemma 6.4**.: _The collection of pairings \((\mathbf{h}^{(f)})_{f}\) is compatible under corestriction maps on the source and natural projection maps on the target, hence defines an \(\mathbb{L}^{\circ}[[\Gamma^{-}_{E,\mathrm{P}}]]\)-sesquilinear pairing_ \[\mathbf{h}_{\mathrm{P}}\coloneqq\varprojlim_{f}\mathbf{h}^{(f)}\colon\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{W}_{1})_{\mathrm{P}}^{\circ}\times\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{W}_{2})_{\mathrm{P}}^{\circ}\to\Gamma_{E,\mathrm{P}_{E}}\otimes_{\mathbb{Z}_{p}}p^{\delta}\mathbb{L}^{\circ}[[\Gamma^{-}_{E,\mathrm{P}}]].\]

Proof.: It suffices to show that for another tuple \(f^{\prime}\geqslant f\), \[\sum_{\varsigma\in\mathrm{Gal}(E^{(f^{\prime})}/E^{(f)})}\mathrm{h}^{(f^{\prime})}(\varsigma x,y)=\mathrm{h}^{(f)}(\mathrm{Cor}_{f}^{f^{\prime}}x,\mathrm{Cor}_{f}^{f^{\prime}}y) \tag{6.2}\] holds for every \((x,y)\in\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f^{\prime})},\mathbb{W}_{1})\times\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f^{\prime})},\mathbb{W}_{2})\). Put \(x^{\prime}\coloneqq\sum_{\varsigma\in\mathrm{Gal}(E^{(f^{\prime})}/E^{(f)})}\varsigma x\), which is simply the restriction of \(\mathrm{Cor}^{f^{\prime}}_{f}x\in\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f)},\mathbb{W}_{1})\) to \(\mathrm{H}^{1}_{\mathrm{fin}}(E^{(f^{\prime})},\mathbb{W}_{1})\). By definition, \[\sum_{\varsigma\in\mathrm{Gal}(E^{(f^{\prime})}/E^{(f)})}\mathrm{h}^{(f^{\prime})}(\varsigma x,y)=\mathrm{Nm}_{E^{(f^{\prime})\dagger}/E}\,\mathrm{h}_{E^{(f^{\prime})\dagger}}(x^{\prime},y)=\mathrm{Nm}_{E^{(f)\dagger}/E}\left(\mathrm{Nm}_{E^{(f^{\prime})\dagger}/E^{(f)\dagger}}\,\mathrm{h}_{E^{(f^{\prime})\dagger}}(x^{\prime},y)\right).\] Thus, (6.2) follows as \(\mathrm{Nm}_{E^{(f^{\prime})\dagger}/E^{(f)\dagger}}\,\mathrm{h}_{E^{(f^{\prime})\dagger}}(x^{\prime},y)=\mathrm{h}_{E^{(f)\dagger}}(\mathrm{Cor}^{f^{\prime}}_{f}x,\mathrm{Cor}^{f^{\prime}}_{f}y)\). The lemma follows.

## 7. Incoherent anticyclotomic \(p\)-adic \(L\)-function

In this section, we study the case where \(\Pi=\Pi_{n}\boxtimes\Pi_{n+1}\) is incoherent.
We fix a place \(\boldsymbol{u}\in\mathbb{V}^{(\infty)}_{E}\) with \(\boldsymbol{v}\in\mathbb{V}^{(\infty)}_{F}\) its underlying place, and regard \(E\) as a subfield of \(\mathbb{C}\) via \(\boldsymbol{u}\) (and will take \(E^{\mathrm{ac}}\) to be \(\overline{\mathbb{Q}}\)). In this case, there exists a hermitian space \(V_{n}\) over \(E\) that has signature \((n-1,1)\) at \(\boldsymbol{v}\) and \((n,0)\) at every other \(v\in\mathbb{V}^{(\infty)}_{F}\), unique up to isomorphism, such that \(V_{n,v}\) is the prescribed hermitian space from Section 2 for every \(v\in\mathbb{V}^{\mathrm{fin}}_{F}\). Put \(V_{n+1}\coloneqq V_{n}\oplus E\cdot\mathbf{e}\). Put \[G\coloneqq\mathrm{U}(V_{n})\times\mathrm{U}(V_{n+1}),\quad\pi\coloneqq\otimes_{v\in\mathbb{V}^{\mathrm{fin}}_{F}}\pi_{v},\] which is an irreducible admissible representation of \(G(\mathbb{A}^{\infty}_{F})\) with coefficients in \(\mathbb{L}\). We have a system of Shimura varieties \(\{X_{K}\}_{K}\) associated with \(\mathrm{Res}_{F/\mathbb{Q}}G\) indexed by neat open compact subgroups \(K\) of \(G(\mathbb{A}^{\infty}_{F})\), which are quasi-projective smooth schemes over \(E\) of dimension \(2n-1\) (see for example [12, §3.2] for more details). When \(F\neq\mathbb{Q}\), \(X_{K}\) is projective. When \(F=\mathbb{Q}\), \(X_{K}\) has a canonical smooth toroidal compactification, which we still denote by \(X_{K}\) by abuse of notation. Put \[\mathrm{H}^{i}(\overline{X},\mathbb{L}(j))\coloneqq\varinjlim_{K}\mathrm{H}^{i}(\overline{X}_{K},\mathbb{L}(j))\] for \(i,j\in\mathbb{Z}\), where \(\overline{X}_{K}\coloneqq X_{K}\otimes_{E}\overline{\mathbb{Q}}\). Put \[\mathbb{V}_{\Pi}\coloneqq\mathrm{Hom}_{G(\mathbb{A}^{\infty}_{F})}\left(\pi^{\vee},\mathrm{H}^{2n-1}(\overline{X},\mathbb{L}(n))\right),\quad\mathbb{V}_{\Pi^{\vee}}\coloneqq\mathrm{Hom}_{G(\mathbb{A}^{\infty}_{F})}\left(\pi,\mathrm{H}^{2n-1}(\overline{X},\mathbb{L}(n))\right).\] Put \(\mathcal{L}^{1}_{\mathrm{P}}(\Pi)\coloneqq 0\) when \(\mathbb{V}_{\Pi}=0\). Now we assume \(\mathbb{V}_{\Pi}\neq 0\), hence \(\mathbb{V}_{\Pi^{\vee}}\simeq(\mathbb{V}_{\Pi})^{\vee}(1)\neq 0\). We fix a \(\mathrm{Gal}(\overline{\mathbb{Q}}/E)\)-equivariant pairing \[\langle\!\langle\,,\ \rangle\!\rangle_{\Pi}\colon\mathbb{V}_{\Pi^{\vee}}\times\mathbb{V}_{\Pi}\to\mathbb{L}(1).\] By definition, we have maps \[\pi\otimes_{\mathbb{L}}\mathbb{V}_{\Pi^{\vee}}\to\mathrm{H}^{2n-1}(\overline{X},\mathbb{L}(n)),\quad\pi^{\vee}\otimes_{\mathbb{L}}\mathbb{V}_{\Pi}\to\mathrm{H}^{2n-1}(\overline{X},\mathbb{L}(n))\] of \(\mathbb{L}[G(\mathbb{A}^{\infty}_{F})]\)-modules. Using the Poincaré duality pairing on \(\overline{X}_{K}\) and \(\langle\!\langle\,,\ \rangle\!\rangle_{\Pi}\), we have induced maps \[-_{\star}\colon\pi^{?}\to\mathrm{Hom}_{\mathbb{L}}\left(\varinjlim_{K}\mathrm{H}^{2n-1}(\overline{X}_{K},\mathbb{L}(n)),\mathbb{V}_{\Pi^{?}}\right)\] for \(?\in\{\varnothing,\vee\}\), so that the image of \((\pi^{?})^{K}\) is contained in \(\mathrm{Hom}_{\mathbb{L}}\left(\mathrm{H}^{2n-1}(\overline{X}_{K},\mathbb{L}(n)),\mathbb{V}_{\Pi^{?}}\right)\). In what follows, we assume that \(\Pi\) is semi-stably \(v\)-ordinary for every \(v\in\mathrm{P}\). Denote by \(H\subseteq G\) the graph of the natural embedding \(\mathrm{U}(V_{n})\hookrightarrow\mathrm{U}(V_{n+1})\), and fix a rational Haar measure on \(H(\mathbb{A}^{\infty,\mathrm{P}}_{F})\). For every \(v\in\mathrm{P}\), fix a standard isomorphism \(G_{v}\simeq\mathrm{GL}_{n,F_{v}}\times\mathrm{GL}_{n+1,F_{v}}\) (Definition 4.1).
For every element \(\varphi=\otimes_{v\in\mathbb{V}^{\mathrm{fin}}_{F}}\varphi_{v}\in\pi\) with \(\varphi_{v}\in\pi_{v}\) that is a (nonzero) ordinary vector for \(v\in\mathrm{P}\), we define an element \(\mathcal{Z}_{\varphi}\in\mathrm{H}^{1}_{\mathrm{st}}(E,\mathbb{V}_{\Pi})_{\mathrm{P}}\) as follows: Choose a neat open compact subgroup \(K^{\mathrm{P}}\) of \(G(\mathbb{A}_{F}^{\infty,\mathrm{P}})\) that fixes \(\varphi^{\mathrm{P}}\), and put \(K_{H}^{\mathrm{P}}\coloneqq K^{\mathrm{P}}\cap H(\mathbb{A}_{F}^{\infty,\mathrm{P}})\). Put \(I_{\mathrm{P}}^{(1)}\coloneqq\prod_{v\in\mathrm{P}}I_{v}^{(1)}\), which fixes \(\varphi_{\mathrm{P}}\). Take a tuple \(f=(f_{v})_{v\in\mathrm{P}}\) of positive integers indexed by \(\mathrm{P}\). Put \[\varphi^{(f)}\coloneqq\pi((1_{n},\xi)\cdot[\varpi^{f}])\varphi,\quad K_{\mathrm{P}}^{(f)}\coloneqq((1_{n},\xi)\cdot[\varpi^{f}])I_{\mathrm{P}}^{(1)}((1_{n},\xi)\cdot[\varpi^{f}])^{-1},\quad K_{H,\mathrm{P}}^{(f)}\coloneqq K_{\mathrm{P}}^{(f)}\cap H(F_{\mathrm{P}}).\] Then we have the map \[\varphi_{\star}^{(f)}\colon\mathrm{H}^{2n-1}(\overline{X}_{K^{\mathrm{P}}K_{\mathrm{P}}^{(f)}},\mathbb{L}(n))\to\mathbb{V}_{\Pi}.\] By definition, we have a special morphism \[Y_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}\to X_{K^{\mathrm{P}}K_{\mathrm{P}}^{(f)}},\] which is finite and unramified, where \(Y_{\star}\) denotes the system of (compactified) Shimura varieties associated with \(H\). By Lemma 4.11(1), the structure morphism \(Y_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}\to\operatorname{Spec}E\) factors through \(\operatorname{Spec}E^{(f)}\), which gives rise to a decomposition \[Y_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}\otimes_{E}E^{(f)}=\coprod_{\varsigma\in\operatorname{Gal}(E^{(f)}/E)}{}^{\varsigma}Z_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}\] into a disjoint union of open and closed subschemes indexed by \(\operatorname{Gal}(E^{(f)}/E)\) that is compatible with the action of \(\operatorname{Gal}(E^{(f)}/E)\), normalized in the way that \({}^{1}Z_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}\otimes_{E^{(f)}}\mathbb{C}\) contains the identity double coset in the complex uniformization of \(Y_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}\otimes_{E}\mathbb{C}\). We have the induced cycle class \[[^{1}Z_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}]\in\operatorname{CH}^{n}(X_{K^{\mathrm{P}}K_{\mathrm{P}}^{(f)}}\otimes_{E}E^{(f)}).\] By [11, Proposition 5.9(1)], we may choose a Hecke operator \(\mathfrak{t}\in\mathbb{L}[K^{\mathrm{P}}\backslash G(\mathbb{A}_{F}^{\infty,\mathrm{P}})/K^{\mathrm{P}}]\) that annihilates \(\mathrm{H}^{2n}(\overline{X}_{K^{\mathrm{P}}K_{\mathrm{P}}^{(f)}},\mathbb{L}(n))\) and such that its action on \(\pi^{K^{\mathrm{P}}}\) is given by multiplication by a constant \(\lambda_{\mathfrak{t}}\in\mathbb{L}^{\times}\).9 Then \(\mathfrak{t}[^{1}Z_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}]\) is (geometrically) cohomologically trivial, hence induces an element

Footnote 9: It is assumed in [11, Proposition 5.9(1)] that \(F\neq\mathbb{Q}\). However, when \(F=\mathbb{Q}\), hence the original Shimura variety \(X_{K}\) is not necessarily compact, the statement also holds. As we already know that \(\pi\) does not occur in the degree-\(2n\) intersection cohomology of the minimal compactification of \(X_{K}\) since it is tempered, it suffices to show that it does not occur in the total cohomology of the boundary (which is a disjoint union of extensions of abelian varieties by finite groups).
Indeed, if \(\pi\) does, then it must be a CAP representation, hence cannot be tempered.

\[\alpha(\mathfrak{t}[^{1}Z_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}])\in\mathrm{H}^{1}_{\mathrm{st}}(E^{(f)},\mathrm{H}^{2n-1}(\overline{X}_{K^{\mathrm{P}}K_{\mathrm{P}}^{(f)}},\mathbb{L}(n)))\] by the Abel-Jacobi map and the discussion in [20, (3.9)]. Now put \[\mathcal{Z}_{\varphi}^{(f)}\coloneqq\prod_{v\in\mathrm{P}}\left(\frac{1}{\lambda(\pi_{v})}\right)^{f_{v}}\cdot\frac{\mathrm{vol}(K_{H}^{\mathrm{P}})}{\lambda_{\mathfrak{t}}}\cdot\varphi_{\star}^{(f)}\alpha(\mathfrak{t}[^{1}Z_{K_{H}^{\mathrm{P}}K_{H,\mathrm{P}}^{(f)}}])\in\mathrm{H}^{1}_{\mathrm{st}}(E^{(f)},\mathbb{V}_{\Pi}),\] which is independent of the choices of \(K^{\mathrm{P}}\) and \(\mathfrak{t}\).

**Proposition 7.1**.: _We have_

1. _The collection_ \((\mathcal{Z}_{\varphi}^{(f)})_{f}\) _is compatible under corestriction maps, hence gives an element_ \(\mathcal{Z}_{\varphi}\in\mathrm{H}^{1}_{\mathrm{st}}(E,\mathbb{V}_{\Pi})_{\mathrm{P}}\)_._
2. _The element_ \(\mathcal{Z}_{\varphi}\) _is bounded, that is, it belongs to_ \(\mathrm{H}^{1}_{\mathrm{st}}(E,\mathbb{V}_{\Pi})^{\circ}_{\mathrm{P}}\)_._

Proof.: For (1), it suffices to show the compatibility under \(\operatorname{Cor}_{f}^{f^{\prime}}\) for \(f^{\prime}\) satisfying \(f^{\prime}_{v}=f_{v}+1\) for exactly one \(v\in\mathrm{P}\) and \(f^{\prime}_{v}=f_{v}\) for the others. Denote by \(\operatorname{H}^{2n}(X_{K^{\mathrm{P}}K^{(f^{\prime})}_{\mathrm{P}}}\otimes_{E}E^{(-)},\mathbb{L}(n))^{0}\) the kernel of the restriction map \[\operatorname{H}^{2n}(X_{K^{\mathrm{P}}K^{(f^{\prime})}_{\mathrm{P}}}\otimes_{E}E^{(-)},\mathbb{L}(n))\to\operatorname{H}^{2n}(\overline{X}_{K^{\mathrm{P}}K^{(f^{\prime})}_{\mathrm{P}}},\mathbb{L}(n)).\] Then we have the commutative diagram \[\operatorname{H}^{2n}(X_{K^{\mathrm{P}}K^{(f^{\prime})}_{\mathrm{P}}}\otimes_{E}E^{(f^{\prime})},\mathbb{L}(n))^{0}\xrightarrow{\alpha}\operatorname{H}^{1}(E^{(f^{\prime})},\operatorname{H}^{2n-1}(\overline{X}_{K^{\mathrm{P}}K^{(f^{\prime})}_{\mathrm{P}}},\mathbb{L}(n))),\] in which \(\operatorname{Tr}_{f}^{f^{\prime}}\) denotes the trace map along the extension \(E^{(f^{\prime})}/E^{(f)}\). Thus, it suffices to show that \[\varphi_{\star}^{(f^{\prime})}\alpha(\mathfrak{t}[\coprod_{\varsigma\in\operatorname{Gal}(E^{(f^{\prime})}/E^{(f)})}{}^{\varsigma}Z_{K^{\mathrm{P}}_{H}K^{(f^{\prime})}_{H,\mathrm{P}}}])=\lambda(\pi_{v})\varphi_{\star}^{(f)}\alpha(\mathfrak{t}[^{1}Z_{K^{\mathrm{P}}_{H}K^{(f)}_{H,\mathrm{P}}}]), \tag{7.1}\] in which \(\coprod_{\varsigma\in\operatorname{Gal}(E^{(f^{\prime})}/E^{(f)})}{}^{\varsigma}Z_{K^{\mathrm{P}}_{H}K^{(f^{\prime})}_{H,\mathrm{P}}}\) is regarded as an open and closed subscheme of \(Y_{K^{\mathrm{P}}_{H}K^{(f^{\prime})}_{H,\mathrm{P}}}\otimes_{E}E^{(f)}\). By Definition 4.5, we have \[\lambda(\pi_{v})\varphi^{(f)}=\sum_{u\in U(O_{F_{v}})/(U(O_{F_{v}})\cap[\varpi_{v}]U(O_{F_{v}})[\varpi_{v}]^{-1})}\pi((1_{n},\xi)\cdot[\varpi^{f}]\cdot u\cdot[\varpi_{v}])\varphi. \tag{7.2}\] By [1, Lemma 6.5], we have for every \(u\in U(O_{F_{v}})/(U(O_{F_{v}})\cap[\varpi_{v}]U(O_{F_{v}})[\varpi_{v}]^{-1})\), \[(1_{n},\xi)\cdot[\varpi^{f}]\cdot u\cdot[\varpi_{v}]\in K^{(f^{\prime})}_{H,\mathrm{P}}\cdot(1_{n},\xi)\cdot[\varpi^{f^{\prime}}]\cdot I^{(1)}_{v}.\] By Lemma 4.11(2), we have the following commutative diagram, in which \(\kappa\) is finite of degree \(q_{v}^{\frac{n(n+1)(2n+1)}{6}}\), which is the same as the cardinality of \(U(O_{F_{v}})/(U(O_{F_{v}})\cap[\varpi_{v}]U(O_{F_{v}})[\varpi_{v}]^{-1})\).
Since \(I^{(1)}_{v}\) fixes \(\varphi\) and translations by elements in \(K^{(f^{\prime})}_{H,\mathrm{P}}\) fix \(\kappa^{-1}\,{}^{1}Z_{K^{\mathrm{P}}_{H}K^{(f)}_{H,\mathrm{P}}}\), (7.2) implies that \[\lambda(\pi_{v})\varphi_{\star}^{(f)}\alpha(\mathfrak{t}[^{1}Z_{K^{\mathrm{P}}_{H}K^{(f)}_{H,\mathrm{P}}}])=\varphi_{\star}^{(f^{\prime})}\alpha(\mathfrak{t}[\kappa^{-1}\,{}^{1}Z_{K^{\mathrm{P}}_{H}K^{(f)}_{H,\mathrm{P}}}]).\] Finally, since \[\kappa^{-1}\,{}^{1}Z_{K^{\mathrm{P}}_{H}K^{(f)}_{H,\mathrm{P}}}=\coprod_{\varsigma\in\operatorname{Gal}(E^{(f^{\prime})}/E^{(f)})}{}^{\varsigma}Z_{K^{\mathrm{P}}_{H}K^{(f^{\prime})}_{H,\mathrm{P}}},\] (7.1) and hence (1) follow.

For (2), the right translation by \((1_{n},\xi)\cdot[\varpi^{f}]\) induces an isomorphism \[\tau^{(f)}\colon X_{K^{\mathrm{P}}K^{(f)}_{\mathrm{P}}}\xrightarrow{\sim}X_{K^{\mathrm{P}}I^{(1)}_{\mathrm{P}}}\] between Shimura varieties over \(E\). Moreover, we have \[\varphi_{\star}^{(f)}=\varphi_{\star}\circ\tau_{!}^{(f)}\colon\operatorname{H}^{2n-1}(\overline{X}_{K^{\mathrm{P}}K^{(f)}_{\mathrm{P}}},\mathbb{L}(n))\to\mathbb{V}_{\Pi}. \tag{7.3}\] We may choose a Hecke operator \(\mathfrak{t}\in\mathbb{L}^{\circ}[K^{\mathrm{P}}\backslash G(\mathbb{A}_{F}^{\infty,\mathrm{P}})/K^{\mathrm{P}}]\) that annihilates \(\mathrm{H}^{2n}(\overline{X}_{K^{\mathrm{P}}K_{\mathrm{P}}^{(f)}},\mathbb{L}^{\circ}(n))\) and such that its action on \(\pi^{K^{\mathrm{P}}}\) is given by multiplication by a constant \(\lambda_{\mathfrak{t}}\in\mathbb{L}^{\times}\). It follows that \[\alpha(\mathfrak{t}[^{1}Z_{K^{\mathrm{P}}_{H}K^{(f)}_{H,\mathrm{P}}}])\in\mathrm{H}^{1}_{\mathrm{st}}(E^{(f)},\mathrm{H}^{2n-1}(\overline{X}_{K^{\mathrm{P}}K_{\mathrm{P}}^{(f)}},\mathbb{L}^{\circ}(n))).\] We may choose a \(\mathrm{Gal}(\overline{\mathbb{Q}}/E)\)-stable \(\mathbb{L}^{\circ}\)-lattice \(\mathbb{V}_{\Pi}^{\circ}\) in \(\mathbb{V}_{\Pi}\) such that the image of \(\mathrm{H}^{2n-1}(\overline{X}_{K^{\mathrm{P}}I_{\mathrm{P}}^{(1)}},\mathbb{L}^{\circ}(n))\) under \(\varphi_{\star}\) is contained in \(\mathrm{vol}(K^{\mathrm{P}}_{H})^{-1}\lambda_{\mathfrak{t}}\mathbb{V}_{\Pi}^{\circ}\). Then by (7.3), the image of \(\mathrm{H}^{2n-1}(\overline{X}_{K^{\mathrm{P}}K_{\mathrm{P}}^{(f)}},\mathbb{L}^{\circ}(n))\) under \(\varphi_{\star}^{(f)}\) is also contained in \(\mathrm{vol}(K^{\mathrm{P}}_{H})^{-1}\lambda_{\mathfrak{t}}\mathbb{V}_{\Pi}^{\circ}\). Since \(\lambda(\pi_{v})\in\mathbb{L}^{\circ\times}\) for every \(v\in\mathrm{P}\), it follows that \(\mathcal{Z}_{\varphi}^{(f)}\in\mathrm{H}^{1}_{\mathrm{st}}(E^{(f)},\mathbb{V}_{\Pi}^{\circ})\) for every \(f\). The proof of (1) indeed shows that \((\mathcal{Z}_{\varphi}^{(f)})_{f}\) is compatible under corestriction maps for \(\mathbb{V}_{\Pi}^{\circ}\), which implies that \(\mathcal{Z}_{\varphi}\) is bounded.

**Notation 7.2**.: We denote by \(\mathrm{Z}_{\Pi}\) the \(\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ}\)-submodule of \(\mathrm{H}^{1}_{\mathrm{st}}(E,\mathbb{V}_{\Pi})_{\mathrm{P}}^{\circ}\) generated by \(\mathcal{Z}_{\varphi}\) for \(\varphi\) as above. It is easy to see that \(\mathrm{Z}_{\Pi}\) depends only on \(\Pi\), not on the choice of the standard isomorphisms \(G_{v}\simeq\mathrm{GL}_{n,F_{v}}\times\mathrm{GL}_{n+1,F_{v}}\) for \(v\in\mathrm{P}\). We also put \(\mathrm{Z}_{\Pi}\coloneqq 0\) when \(\mathbb{V}_{\Pi}=0\).

**Hypothesis 7.3**.: As an \(\mathbb{L}[\mathrm{Gal}(\overline{\mathbb{Q}}/E)]\)-module, \(\mathbb{V}_{\Pi}\) is canonically a direct summand of \(\mathbb{W}_{\Pi}\) (Example 6.2).
In particular, \(\mathrm{H}^{1}_{\mathrm{st}}(E,\mathbb{V}_{\Pi})_{\mathrm{P}}^{\circ}=\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{V}_{\Pi})_{\mathrm{P}}^{\circ}\subseteq\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{W}_{\Pi})_{\mathrm{P}}^{\circ}\).

_Remark 7.4_.: The above hypothesis is known when \(n\leqslant 2\). When \(n>2\) and \(F\neq\mathbb{Q}\), it would follow from an ongoing work of Kisin-Shin-Zhu. In fact, there is a precise version of the above hypothesis via Arthur's multiplicity formula; see [11, Hypothesis 6.6].

Assume Hypothesis 7.3. We have a pairing \[\mathbf{h}_{\mathrm{P}}^{\mathbb{V}_{\Pi}}\colon\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{V}_{\Pi})_{\mathrm{P}}^{\circ}\times\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{V}_{\Pi^{\vee}})_{\mathrm{P}}^{\circ}\to\Gamma_{E,\mathrm{P}_{E}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ}\] from the discussion in the previous section, after we identify \(\mathbb{V}_{\Pi^{\vee}}\) with \(\mathbb{V}_{\Pi}^{\vee}(1)\) using \(\langle\!\langle\,,\ \rangle\!\rangle_{\Pi}\). Note that by Hypothesis 7.3 and the assumption that \(\Pi\) is semi-stably \(v\)-ordinary at every \(v\in\mathrm{P}\), \(\mathbb{V}_{\Pi}\) satisfies the Panchishkin condition, hence admits a canonical splitting of the Hodge filtration at every \(v\in\mathrm{P}\), which is the one we use to define the above pairing.

Choose a pair \(\varphi=\otimes_{v\in\mathbb{V}_{F}^{\mathrm{fin}}}\varphi_{v}\in\pi\) and \(\varphi^{\vee}=\otimes_{v\in\mathbb{V}_{F}^{\mathrm{fin}}}\varphi_{v}^{\vee}\in\pi^{\vee}\) with \(\varphi_{v}\in\pi_{v}\) and \(\varphi_{v}^{\vee}\in\pi_{v}^{\vee}\) satisfying

1. for \(v\in\mathrm{P}\), both \(\varphi_{v}\) and \(\varphi_{v}^{\vee}\) are (nonzero) ordinary vectors;
2. for \(v\in\mathbb{V}_{F}^{\mathrm{spl}}\setminus\mathrm{P}\), the pair \((\varphi_{v},\varphi_{v}^{\vee})\) satisfies the conclusion of Proposition 2.10;
3. for \(v\in\mathbb{V}_{F}^{\mathrm{fin}}\setminus\mathbb{V}_{F}^{\mathrm{spl}}\), \(\alpha(\varphi_{v},\varphi_{v}^{\vee})\neq 0\).

By Proposition 2.10 and Lemma 2.8, such a choice is possible. Put \[\mathcal{L}^{1}_{\mathrm{P}}(\Pi)\coloneqq\prod_{v\in\mathrm{P}}\frac{\gamma(\Pi_{v})}{\langle\varphi_{v}^{\vee},\varphi_{v}\rangle_{\pi_{v}}}\cdot\prod_{v\in\mathbb{V}_{F}^{\mathrm{fin}}\setminus\mathbb{V}_{F}^{\mathrm{spl}}}\frac{\alpha(\varphi_{v},\varphi_{v}^{\vee})^{-1}\cdot\Delta_{n+1,v}\cdot L(\frac{1}{2},\Pi_{n,v}\times\Pi_{n+1,v})}{L(1,\Pi_{n,v},\mathrm{As}^{(-1)^{n}})L(1,\Pi_{n+1,v},\mathrm{As}^{(-1)^{n+1}})}\times\mathrm{Nm}_{E/F}\,\mathbf{h}_{\mathrm{P}}^{\mathbb{V}_{\Pi}}\left(\mathcal{Z}_{\varphi},\mathcal{Z}_{\varphi^{\vee}}\right)\in\Gamma_{F,\mathrm{P}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ},\] where \(\gamma(\Pi_{v})\in\mathbb{L}^{\times}\) is the constant in Proposition 4.12. Note that by Lemma 2.8, Lemma 4.7, and (T3), the above expression makes sense.

**Lemma 7.5**.: _The element \(\mathcal{L}^{1}_{\mathrm{P}}(\Pi)\) does not depend on the choice of \((\varphi,\varphi^{\vee})\)._

Proof.: It suffices to show that for every finite character \(\chi\colon\Gamma_{E,\mathrm{P}}^{-}\to\mathbb{L}_{\chi}^{\times}\), the element \[\mathbf{h}_{\mathrm{P}}^{\mathbb{V}_{\Pi}}\left(\mathcal{Z}_{\varphi},\mathcal{Z}_{\varphi^{\vee}}\right)(\chi)\in\Gamma_{F,\mathrm{P}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}_{\chi}\] is independent of \((\varphi,\varphi^{\vee})\).
Without loss of generality, we may assume \(\langle\varphi_{v}^{\vee},\varphi_{v}\rangle_{\pi_{v}}=1\) for every \(v\in\mathrm{P}\). The assignment \[(\varphi^{\mathrm{P}},\varphi^{\vee\mathrm{P}})\in\pi^{\mathrm{P}}\times\pi^{\vee\mathrm{P}}\mapsto\mathbf{h}_{\mathrm{P}}^{\mathbb{V}_{\Pi}}\left(\mathcal{Z}_{\varphi},\mathcal{Z}_{\varphi^{\vee}}\right)(\chi)\] defines an element in \[\mathrm{Hom}_{H(\mathbb{A}_{F}^{\infty,\mathrm{P}})\times H(\mathbb{A}_{F}^{\infty,\mathrm{P}})}\left((\pi_{\chi})^{\mathrm{P}}\boxtimes(\pi_{\chi}^{\vee})^{\mathrm{P}},\Gamma_{F,\mathrm{P}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}_{\chi}\right).\] Thus, by Lemma 2.7, there exists a constant \(c\in\Gamma_{F,\mathrm{P}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}_{\chi}\), independent of the pair \((\varphi^{\mathrm{P}},\varphi^{\vee\mathrm{P}})\), such that \[\mathbf{h}_{\mathrm{P}}^{\mathbb{V}_{\Pi}}\left(\mathcal{Z}_{\varphi},\mathcal{Z}_{\varphi^{\vee}}\right)(\chi)=c\prod_{v\in\mathbb{V}_{F}^{\mathrm{fin}}\setminus\mathrm{P}}\left(\frac{\Delta_{n+1,v}L(\frac{1}{2},\Pi_{n,v}\times\Pi_{n+1,v})}{L(1,\Pi_{n,v},\mathrm{As}^{(-1)^{n}})L(1,\Pi_{n+1,v},\mathrm{As}^{(-1)^{n+1}})}\right)^{-1}\alpha(\varphi_{v},\varphi_{v}^{\vee}).\] The lemma then follows.

_Remark 7.6_.: It is easy to see that \(\mathcal{L}^{1}_{\mathrm{P}}(\Pi)\) also does not depend on the choice of the standard isomorphisms \(G_{v}\simeq\mathrm{GL}_{n,F_{v}}\times\mathrm{GL}_{n+1,F_{v}}\) for \(v\in\mathrm{P}\). However, \(\mathcal{L}^{1}_{\mathrm{P}}(\Pi)\) does depend on the choices of \(\langle\!\langle\,,\ \rangle\!\rangle_{\Pi}\) and a rational Haar measure on \(H(\mathbb{A}_{F}^{\infty,\mathrm{P}})\).10

Footnote 10: Choosing \(\langle\!\langle\,,\ \rangle\!\rangle_{\Pi}\) is equivalent to choosing an \(\mathbb{L}\)-valued Haar measure on \(G(\mathbb{A}_{F}^{\infty})\).

**Conjecture 7.7**.: _Assume Hypothesis 7.3. The (\(\Gamma_{F,\mathrm{P}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}\)-valued) measure \(\mathcal{L}^{1}_{\mathrm{P}}(\Pi)\) is nonzero as long as \(\mathbb{V}_{\Pi}\neq 0\). In particular, it is nonzero when \(d(\Pi_{n})=d(\Pi_{n+1})=1\) by the remark below._

_Remark 7.8_.: Take an embedding \(\iota\colon\mathbb{L}\to\mathbb{C}\). Then \(\epsilon(\frac{1}{2},\Pi_{n}^{(\iota)}\times\Pi_{n+1}^{(\iota)})\) equals \(\epsilon(\Pi)\), which we have assumed to be \(-1\). Now \(\epsilon(\frac{1}{2},\Pi_{n}^{(\iota)}\times\Pi_{n+1}^{(\iota)})\) decomposes as the product of \(d(\Pi_{n})d(\Pi_{n+1})\) root numbers (valued in \(\{\pm 1\}\)) for their isobaric factors. Then by a discussion similar to that in [1, §26], using [1, Lemma 3.15], \(\mathbb{V}_{\Pi}\neq 0\) if and only if those root numbers contain \(-1\) exactly once.

The conjecture below can be regarded as a higher-dimensional analogue of Perrin-Riou's Heegner point main conjecture [20].

**Conjecture 7.9** (Iwasawa's main conjecture in the incoherent case).: _Assume Hypothesis 7.3. Then the characteristic ideal of \(\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{W}_{\Pi})_{\mathrm{P}}^{\circ}/\mathrm{Z}_{\Pi}\) (Notation 7.2) is generated by the elements \(\ell\mathcal{L}^{1}_{\mathrm{P}}(\Pi^{\vee})\) for all \(\mathbb{Z}_{p}\)-linear maps \(\ell\colon\Gamma_{F,\mathrm{P}}\to\mathbb{Z}_{p}\)._
_In particular, in view of Conjecture 7.7, \(\mathrm{H}^{1}_{\mathrm{fin}}(E,\mathbb{W}_{\Pi})_{\mathrm{P}}^{\circ}/\mathrm{Z}_{\Pi}\) is a torsion \(\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ}\)-module if and only if \(\mathbb{V}_{\Pi}\neq 0\)._

At last, we propose a conjecture relating \(\mathcal{L}^{1}_{\mathrm{P}}(\Pi)\) to the derivative of the hypothetical (full) \(\mathrm{P}_{E}\)-adic \(L\)-function of \(\Pi\) along the anticyclotomic direction.

**Hypothesis 7.10**.: In this hypothesis, we temporarily allow \(\Pi\) to be either coherent or incoherent. Suppose that \(\Pi\) is semi-stably \(v\)-ordinary for every \(v\in\mathrm{P}\). There exists a measure \(\mathcal{L}_{\mathrm{P}_{E}}(\Pi)\) on \(\Gamma_{E,\mathrm{P}_{E}}\) valued in \(\mathbb{L}\), unique up to a scalar in \(\mathbb{L}^{\times}\), such that for every finite character \(\Xi\colon\Gamma_{E,\mathrm{P}_{E}}\to\mathbb{L}_{\Xi}^{\times}\) of conductor \(\prod_{u\in\mathrm{P}_{E}}\mathfrak{p}_{u}^{f_{u}}\) for a tuple \(f=(f_{u})_{u\in\mathrm{P}_{E}}\) of positive integers indexed by \(\mathrm{P}_{E}\) and every embedding \(\iota\colon\mathbb{L}_{\Xi}\to\mathbb{C}\), we have \[\iota\mathcal{L}_{\mathrm{P}_{E}}(\Pi)(\Xi)=G(\iota\Xi)^{\frac{n(n+1)}{2}}\prod_{u\in\mathrm{P}_{E}}\left(\frac{q_{u}^{\frac{n(n+1)(2n+1)}{6}}}{\iota\lambda(\Pi_{u})}\right)^{f_{u}}\cdot\frac{\Delta_{n+1}\cdot L(\frac{1}{2},\iota(\Pi_{n}\otimes\widetilde{\Xi})\times\iota\Pi_{n+1})}{2^{d(\Pi_{n})+d(\Pi_{n+1})}\cdot L(1,\iota\Pi_{n},\mathrm{As}^{(-1)^{n}})L(1,\iota\Pi_{n+1},\mathrm{As}^{(-1)^{n+1}})},\] where \(G(\iota\Xi)\) is the global Gauss sum defined on [1, Page 460] (which depends only on \(\iota\Xi\)).

_Remark 7.11_.: When \(d(\Pi_{n})=d(\Pi_{n+1})=1\), we know, for every fixed embedding \(\iota_{0}\colon\mathbb{L}\to\mathbb{C}\), the existence of the measure with the interpolation properties in the above hypothesis for \(\iota\) extending \(\iota_{0}\). This is due to a series of works (mainly) [13, 14, 15, 16].

Now when \(\Pi\) is incoherent, \(\mathcal{L}_{\mathrm{P}_{E}}(\Pi)\) vanishes along the homomorphism \(\operatorname{Nm}_{E/F}^{-}\colon\Gamma_{E,\mathrm{P}_{E}}\to\Gamma_{E,\mathrm{P}}^{-}\). Since the conormal bundle of the rigid analytic spectrum of \(\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ}\) in that of \(\mathbb{L}[[\Gamma_{E,\mathrm{P}_{E}}]]^{\circ}\) is canonically the constant bundle \(\Gamma_{F,\mathrm{P}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}\), we obtain an element \[\operatorname{d}\!\mathcal{L}_{\mathrm{P}_{E}}(\Pi)\in\Gamma_{F,\mathrm{P}}\otimes_{\mathbb{Z}_{p}}\mathbb{L}[[\Gamma_{E,\mathrm{P}}^{-}]]^{\circ},\] well-defined up to a constant in \(\mathbb{L}^{\times}\).

**Conjecture 7.12**.: _Assume Hypothesis 7.3 and Hypothesis 7.10. Then_ \[\mathcal{L}^{1}_{\mathrm{P}}(\Pi)=C\cdot\operatorname{d}\!\mathcal{L}_{\mathrm{P}_{E}}(\Pi),\] _where \(C\) is a constant in \(\mathbb{L}^{\times}\) depending on the choices of \(\langle\!\langle\,,\ \rangle\!\rangle_{\Pi}\) and a rational Haar measure on \(H(\mathbb{A}_{F}^{\infty,\mathrm{P}})\)._

When \(n=1\), it should be possible to deduce the above conjecture from [11].
2305.06677
INGENIOUS: Using Informative Data Subsets for Efficient Pre-Training of Language Models
A salient characteristic of pre-trained language models (PTLMs) is a remarkable improvement in their generalization capability and emergence of new capabilities with increasing model capacity and pre-training dataset size. Consequently, we are witnessing the development of enormous models pushing the state-of-the-art. It is, however, imperative to realize that this inevitably leads to prohibitively long training times, extortionate computing costs, and a detrimental environmental impact. Significant efforts are underway to make PTLM training more efficient through innovations in model architectures, training pipelines, and loss function design, with scant attention being paid to optimizing the utility of training data. The key question that we ask is whether it is possible to train PTLMs by employing only highly informative subsets of the training data while maintaining downstream performance? Building upon the recent progress in informative data subset selection, we show how we can employ submodular optimization to select highly representative subsets of the training corpora and demonstrate that the proposed framework can be applied to efficiently train multiple PTLMs (BERT, BioBERT, GPT-2) using only a fraction of data. Further, we perform a rigorous empirical evaluation to show that the resulting models achieve up to $\sim99\%$ of the performance of the fully-trained models. We made our framework publicly available at https://github.com/Efficient-AI/ingenious.
H S V N S Kowndinya Renduchintala, Krishnateja Killamsetty, Sumit Bhatia, Milan Aggarwal, Ganesh Ramakrishnan, Rishabh Iyer, Balaji Krishnamurthy
2023-05-11T09:24:41Z
http://arxiv.org/abs/2305.06677v2
# Ingenious: Using Informative Data Subsets for Efficient Pre-Training of Large Language Models

###### Abstract

A salient characteristic of large pre-trained language models (PTLMs) is a remarkable improvement in their generalization capability and emergence of new capabilities with increasing model capacity and pre-training dataset size. Consequently, we are witnessing the development of enormous models pushing the state-of-the-art. It is, however, imperative to realize that this inevitably leads to prohibitively long training times, extortionate computing costs, and a detrimental environmental impact. Significant efforts are underway to make PTLM training more efficient through innovations in model architectures, training pipelines, and loss function design, with scant attention being paid to optimizing the utility of training data. The key question that we ask is whether it is possible to train PTLMs by employing only highly informative subsets of the training data while maintaining downstream performance. Building upon the recent progress in informative data subset selection, we show how we can employ submodular optimization to select highly representative subsets of the training corpora. Our results demonstrate that the proposed framework can be applied to efficiently train multiple PTLMs (BERT, BioBERT, GPT-2) using only a fraction of data while retaining up to \(\sim 99\%\) of the performance of the fully-trained models.

## 1 Introduction

Large pre-trained language models (PTLMs) Devlin et al. (2019); Radford et al. (2019); Yang et al. (2020); Brown et al. (2020); Raffel et al. (2020) have revolutionized the field of natural language processing (NLP), becoming the default choice for a wide array of NLP tasks. The versatility of PTLMs, however, is accompanied by significant costs. For instance, it costs an estimated $12 million to train GPT-3 Brown et al. (2020), with roughly 1.2 million pounds of CO\({}_{2}\) emissions Kahn (2021). Megatron-Turing NLG Smith et al. (2022) is a 530 billion parameter PTLM, which is thrice the size of GPT-3, is trained on 4480 NVIDIA 80-GB A100 GPUs, and yields close to 1% performance improvement over GPT-3. By continually increasing the size of PTLMs and pre-training corpora to improve generalization ability, significant additional resources and energy are consumed, resulting in dire environmental consequences Sharir et al. (2020). Further, such large-scale resource utilization and the costs associated with PTLMs create an uneven playing field for small organizations and universities, which operate with significant resource constraints. Hence, a crucial step towards developing responsible, fair, and GreenAI Schwartz et al. (2020) involves minimizing inefficiencies and costs in training large language models (LMs). Significant efforts toward improving the efficiency of PTLMs have ventured in directions such as optimizing the model architecture Chen et al. (2020); Gordon et al. (2020); Zafrir et al. (2021), modifications to the training pipeline Izsak et al. (2021); Shen et al. (2022) and task Schick and Schütze (2021), sample-efficient masking techniques for improved convergence Bitton et al. (2021), and leveraging contextual knowledge to reduce model size Kaur et al. (2022). In this work, driven by the observation that the scale of the pre-training corpus contributes significantly to the training costs of PTLMs, we explore the feasibility of training PTLMs using highly informative subsets of the corpus.
Recent studies have demonstrated the feasibility of informative data subset selection for efficient deep model training for images Mirzasoleiman et al. (2020); Killamsetty et al. (2021); Zhang et al. (2021); Zhang et al. (2022); Pooladzandi et al. (2022) in both supervised and semi-supervised settings. In light of this, the key question we attempt to answer is: _Can we efficiently pre-train large language models using highly informative subsets of the training corpus without compromising performance?_

The first step in answering the above question is identifying informative (or representative) subsets of the underlying training corpus such that they maximize the representation of the remaining samples in the corpus. Intuitively, given a set of sentences, the subsequent addition of sentences similar to existing sentences in the set yields _diminishing returns_. More information gains can be achieved by adding diverse, dissimilar sentences. While the classical subset selection problem is NP-hard, we can leverage the _diminishing gains_ property of submodular functions (Fujishige, 2005) and frame subset selection as a submodular maximization problem. Several recent works (Wei et al., 2015; Mirzasoleiman et al., 2020; Kothawade et al., 2021; Karanam et al., 2022) have formulated the subset selection problem as that of maximizing a submodular objective. However, applying existing subset selection frameworks to PTLMs is non-trivial given the scale of corpora typically used for pre-training (_e.g._, Wikipedia and Common Crawl, consisting of hundreds of millions of sequences and billions of tokens). Most of the existing methods rely on per-sample gradients, which are expensive to compute, and to the best of our knowledge, none of the previous works have considered subset selection for such large datasets.

**Our contributions:** We propose the informative data subset selection task for efficient pre-training of PTLMs and present Ingenious, a framework for subset selection using submodular optimization (Section 3). We show how to overcome the scalability challenge for typical large-scale pre-training corpora and employ scalable sentence feature encoders to obtain individual data sample features relevant for subset selection. We also employ various engineering techniques to scalably select subsets from large-scale datasets (Section 3). We use Ingenious to pre-train BERT and GPT-2 and evaluate the performance of the resulting models on the tasks of the GLUE benchmark (Section 4), similar to Devlin et al. (2019). For GPT-2, we also explore a generative task. We show that the models pre-trained with Ingenious retain up to \(\approx 99\)% of the performance of the models pre-trained using the full dataset. We also present thorough ablation studies revealing the impact of the various design choices and parameters involved. We show how Ingenious can be used to accelerate pre-training of domain-specific language models such as BioBERT (Section 4.6). Finally, we discuss the inferences that could be drawn from our work and the limitations of our proposed framework, and lay out directions for further improvement (Section 5).

## 2 Related Work

_Knowledge distillation and pruning based methods_ (Sanh et al., 2019; Jiao et al., 2020; Muhamed et al., 2021) pre-train a smaller variant of PTLMs (such as BERT) with lesser capacity using the full model as the teacher network.
Even though lighter versions such as DistilBERT (Sanh et al., 2019) retain \(\approx 97\)% of the performance with up to 60% faster inference, the PTLM still needs to be _completely_ pre-trained initially to be able to distill the lighter version. Thus, the efficiency gains are restricted only to fine-tuning and inference. Other methods prune the architecture by forcing weights with smaller magnitudes to zero during pre-training (Chen et al., 2020; Gordon et al., 2020) as well as during fine-tuning (Zafrir et al., 2021).

_Model architecture and training task optimizations:_ Schick and Schütze (2021) have shown that smaller PTLMs can achieve better performance by formulating the task input in cloze style. Izsak et al. (2021) proposed to optimize BERT pre-training through multiple optimizations related to data, model size, and optimizer choice. Shen et al. (2022) proposed a staged training mechanism where they start with training a relatively smaller model, which is then used for initializing the full-capacity model at a later stage. Yao et al. (2022) identify relevant samples from the pre-training corpus based on their similarity with the task-specific dataset to train a task-specific PTLM followed by fine-tuning, thus inherently suffering from the limitation of pre-training separate models for every downstream task.

Figure 1: Cost-savings vs. performance tradeoff achieved by Ingenious for BERT pre-training: we contrast the accuracy degradation with the cost savings compared to BERT pre-training on the entire dataset. We observe cost savings of \(4.35\times\) with a \(2.13\%\) accuracy drop and cost savings of \(2.33\times\) with a \(0.71\%\) accuracy drop.

_Curriculum learning based methods_ employ the sequence length of training samples as a proxy for hardness. Typically, shorter (easier) sequences are presented in the initial stages of pre-training, followed by longer (harder) sequences at later stages Nagatsuka et al. (2021); Li et al. (2022). However, such methods have been shown to perform well only in limited configurations with respect to the choice of language models, stage of pre-training, _etc._ Please refer to Appendix 3 for related work on hardware optimizations. Unlike the aforementioned works (while also being complementary to them), we explore improving the pre-training efficiency of PTLMs by training only on a representative subset of the entire corpus at any given point.

## 3 The Ingenious Framework

We now present Ingenious, an informative data subset selection framework for pre-training language models. We summarize the training pipeline in Figure 2. We first describe the notation used to formulate the problem, followed by details of the different steps involved in the framework.

### Notation

We denote the unlabeled dataset for pre-training by \(\mathcal{U}=\{x_{j}\}_{j=1}^{n}\), consisting of \(n\) data points, each corresponding to a varying-length sequence of symbols \(\{s_{i}\}_{i=1}^{m}\) (these symbols could be words or character sequences such as sub-words). Let \(\mathcal{S}\subseteq\mathcal{U}\) be the subset of the unlabeled dataset on which the language model is trained. Let the language model be parameterized by \(\mathbf{\theta}\). We subscript the changing variables, such as model parameters \(\mathbf{\theta}\) and subset \(\mathcal{S}\), with the timestep \(t\) to denote their specific values at that timestep.
### Problem Formulation

In its most general form, subset selection is defined as \[\mathcal{S}_{t}=\operatorname*{arg\,max}_{\mathcal{S}\subseteq\mathcal{U}}f(\mathcal{S}) \tag{1}\] where the subset \(\mathcal{S}_{t}\subseteq\mathcal{U}\) at step \(t\) is selected such that it maximizes the function \(f\). While the general subset selection problem is NP-hard, the problem becomes tractable in case the function \(f\) is submodular in nature Fujishige (2005). A set function \(f:2^{\mathcal{U}}\rightarrow\mathbb{R}\) is **submodular** if for \(x\in\mathcal{U}\), \(f(\mathcal{A}\cup x)-f(\mathcal{A})\geq f(\mathcal{B}\cup x)-f(\mathcal{B})\), \(\forall\mathcal{A}\subseteq\mathcal{B}\subseteq\mathcal{U}\) and \(x\notin\mathcal{B}\). We pose the data subset selection problem as a submodular maximization problem since it allows for easier optimization by employing different approximations Nemhauser et al. (1978). In order to choose a suitable submodular function, one must understand the characteristics of the subsets that are crucial for the end goal, _efficient learning in our case_. Previous works in computer vision have demonstrated that commonly used vision datasets contain many redundancies, and eliminating such redundant data samples does not affect the model's performance Birodkar et al. (2019); Toneva et al. (2019); Paul et al. (2021); Sorscher et al. (2022). Further, one can achieve faster model training by using highly informative and representative data subsets Kaushal et al. (2019); Mirzasoleiman et al. (2020); Sorscher et al. (2022). Please refer to Appendix 3 for more related work on submodularity based subset selection. Building upon the learnings from computer vision research, our primary requirement for the selected subset is that it should faithfully represent the training data and have minimal redundancy within itself.

### Overview of Approach

In order to select a representative subset as discussed above, we use **Facility Location** Salhi (1991); Krause and Golovin (2014), a commonly-used submodular function closely related to \(k\)-medoid clustering, which is defined as \[f_{FL}(\mathcal{A})=\sum_{i\in\mathcal{U}}\max_{j\in\mathcal{A}}\mathbf{K}_{ij},\] where \(\mathcal{A}\) is the subset being evaluated, \(\mathbf{K}\) is a pairwise similarity kernel matrix, and \(\mathbf{K}_{ij}\) is the similarity between the \(i^{th}\) and \(j^{th}\) samples. Thus, our subset selection problem can be represented as: \[\mathcal{S}_{t}=\operatorname*{arg\,max}_{\mathcal{S}\subseteq\mathcal{U}:|\mathcal{S}|=k}f_{FL}(\mathcal{S}) \tag{2}\] Here, \(k\) represents the size of the subset \(\mathcal{S}\). We would like to clarify that Equation (2) enables us to choose diverse samples such that each represents other samples in the corpus, instead of selecting similar samples. The optimization problem in Equation (2) is an instance of cardinality-constrained monotone submodular maximization, where an approximate solution can be obtained by expanding the subset through a lazy greedy algorithm (Minoux, 1978) with memoization (Iyer and Bilmes, 2019). Hence, we expand the subset incrementally by probabilistically sampling the data point (steps \(B\) and \(C\) in Figure 2) that leads to a maximum increase in the value of \(f_{FL}\). The facility location function utilizes a pairwise similarity kernel \(\mathbf{K}\) (of size \(|\mathcal{U}|\times|\mathcal{U}|\)) between the data samples in \(\mathcal{U}\) to select representative subsets.
To estimate the kernel values, we compute the cosine similarity between the feature representations of data samples obtained using the LM itself. To ensure that the extracted representations are useful during the initial phase, the LM is pre-trained on the entire corpus for 2 epochs during the warm start phase, as suggested by Killamsetty et al. (2021a, 2021b) (step \(A\) in Figure 2). Further, to ensure that the LM sees diverse data samples, we update the subset after every \(R^{th}\) iteration (step \(D\) in Figure 2) through adaptive subset selection (Mirzasoleiman et al., 2020; Killamsetty et al., 2021a, 2021b) and train the model on the chosen subset for the following \(R\) steps. This process is repeated until the pre-determined number of steps is reached. Algorithm 1 in Appendix 3 summarises the steps involved. We now describe the details of each step.

Figure 2: Ingenious framework for informative data subset selection to pre-train language models. We warm-start pre-training on the full dataset for \(W\) steps to enable the model to learn useful representations (step \(A\)). Owing to the size of the pre-training data, we divide the total number of samples (n) into P partitions (step \(B_{1}\)), followed by selecting instances according to submodular gains (step \(B_{2}\)) through probabilistic sampling (step \(C\)). We obtain a subset (of total size k) of representative samples from each partition such that the subset is updated periodically (step \(D\)) after R steps of training on the selected subset.

### Methodology Details

**Feature Encoders for Similarity Computation:** The selection of optimal representative subsets requires a similarity kernel that captures the intrinsic relationships between data samples. We explore dense and sparse feature encoders for obtaining the feature representation of text samples in \(\mathcal{U}\). As a dense feature encoder for text samples, we use the intermediate representations obtained from the LM that is currently being trained. We compute the representation of an input sequence by averaging the output embeddings of the constituent tokens. A question then arises as to which layer of the underlying model should be used for obtaining this representation, since different layers encode different types of information (Rogers et al., 2020). Another possibility is to use sparse representations such as TF-IDF (Aizawa, 2003), owing to its success at capturing statistically important lexical features (Robertson et al., 2009). We study the effect of using sparse feature representations (_i.e._, TF-IDF) and dense feature representations obtained from different layers of the LM in Section 4.3. Our experiments revealed that dense feature encoders yield the best results.

**Submodular Greedy Ordering based Data Selection:** After deciding on the choice of similarity kernel, we now describe how to select the subsets (steps \(B\) and \(C\) in Figure 2) as defined by Equation (2). Given the size of a typical pre-training corpus, it is infeasible to do an exhaustive search to select the data points to be added to the subset. Hence, we use an approximate lazy greedy algorithm (Nemhauser et al., 1978) to select the data points. We store the submodular gain (step \(B_{2}\) in Figure 2) of each data sample at the time of its addition when scanning the entire dataset using the lazy greedy algorithm. If \(\mathcal{S}\) represents the subset selected so far, and \(e\) represents the next locally optimal data sample to be added, the submodular gain value of \(e\) is \(f(\mathcal{S}\cup e)-f(\mathcal{S})\).
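The following Python sketch illustrates the lazy greedy facility location selection with memoization; it is our own illustrative rendering (all function and variable names are ours), not code from the released Ingenious repository, and it assumes a precomputed nonnegative similarity kernel that fits in memory.

```python
import heapq
import numpy as np

def lazy_greedy_facility_location(K: np.ndarray, k: int):
    """Greedily pick k indices approximately maximizing
    f_FL(A) = sum_i max_{j in A} K[i, j], recording the submodular
    gain of each index at the moment it is added.

    Assumes K is a nonnegative (n x n) similarity kernel,
    e.g. cosine similarities clipped to [0, 1].
    """
    n = K.shape[0]
    best = np.zeros(n)  # memoized max similarity of each point to the subset
    # max-heap (via negation) of possibly stale upper bounds on marginal gains
    heap = [(-K[:, j].sum(), j) for j in range(n)]
    heapq.heapify(heap)
    selected, gains = [], []
    while len(selected) < k and heap:
        neg_bound, j = heapq.heappop(heap)
        gain = np.maximum(K[:, j] - best, 0.0).sum()  # true marginal gain
        if heap and gain < -heap[0][0]:
            heapq.heappush(heap, (-gain, j))  # bound was stale; re-queue
            continue
        selected.append(j)
        gains.append(gain)
        best = np.maximum(best, K[:, j])  # update memoized coverage
    return selected, gains
```

By submodularity, a recomputed gain is always a valid new upper bound, so re-queuing stale heap entries preserves correctness while avoiding a full re-scan of the dataset in every iteration.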
**Submodular Greedy Ordering based Data Selection:** After deciding on the choice of similarity kernel, we now describe how to select the subsets (steps \(B\) and \(C\) in Figure 2) as defined by Equation (2). Given the size of a typical pre-training corpus, it is infeasible to do an exhaustive search to select the data points to be added to the subset. Hence, we use an approximate lazy greedy algorithm (Nemhauser et al., 1978) to select the data points. We store the submodular gain (step \(B_{2}\) in Figure 2) of each data sample at the time of its addition when scanning the entire dataset using the lazy greedy algorithm. If \(\mathcal{S}\) represents the subset selected so far, and \(e\) represents the next locally optimal data sample to be added, the submodular gain value of \(e\) is \(f(\mathcal{S}\cup e)-f(\mathcal{S})\). Recall that facility location is a submodular function; therefore, elements added in earlier iterations yield greater submodular gains (indicating the representativeness of the data samples) than those selected in later iterations. The key idea here is to use the submodular gain associated with each data sample as an importance score and convert the gains to a probability distribution by using the second-order Taylor-softmax operation de Brebisson and Vincent (2016) (step \(C\) in Figure 2). Given a gains vector \(\{g_{1},g_{2},\cdots,g_{m}\}\), the Taylor-softmax operation converting it to a probability distribution \(P\) can be specified as \(P=\left\{\frac{1+g_{i}+0.5g_{i}^{2}}{\sum_{j=1}^{m}\left(1+g_{j}+0.5g_{j}^{2}\right)}\right\}_{i=1}^{m}\). Using the probability distribution \(P\) for sampling ensures that more representative samples are selected with greater probability. However, it also allows the LM to explore less representative samples during training, preventing overfitting on the representative samples. We reuse this probability distribution to sample new subsets of size \(k\) every \(R\) steps by sampling \(k\) points without replacement (step \(D\) in Figure 2). Recall that we require a similarity kernel of size \(|\mathcal{U}|\times|\mathcal{U}|\); hence, the memory required for storing the similarity kernel is practically infeasible. We now describe how we scale Ingenious to handle the size of the pre-training datasets used for LMs.

**Partitioning based Efficient Subset Selection:** To minimize the memory consumption, instead of constructing a probability distribution over the entire unlabeled set directly, we first partition (step \(B_{1}\) in Figure 2) the unlabeled set into \(N_{P}\) random blocks of equal size (_i.e._, the partition size is \(\frac{|\mathcal{U}|}{N_{P}}\)) and construct a probability distribution \(P_{i}\) over each data block \(\mathcal{U}_{i}^{p}:|\mathcal{U}_{i}^{p}|=\frac{|\mathcal{U}|}{N_{P}}\). We then use the constructed probability distribution \(P_{i}\) over each data block \(\mathcal{U}_{i}^{p}\) to sample a subset of size \(k/N_{P}\) from the data block without replacement. We compute the final subset using the subsets from each partition as follows: \[\mathcal{S}_{t}=\bigcup_{i=1}^{N_{P}}\mathrm{sample}\left(\mathcal{U}_{i}^{p},P_{i},\frac{k}{N_{P}}\right) \tag{3}\] The partitioning of the unlabeled set means that we only need to construct similarity kernels of size \(\frac{|\mathcal{U}|}{N_{P}}\times\frac{|\mathcal{U}|}{N_{P}}\), thereby reducing the similarity-kernel memory usage by around \({N_{P}}^{2}\) times. We discuss the effect of the partition size in Section 4.5. In order to maximize the utilization of available resources, we can construct the probability distributions over the blocks of the data partition in parallel. As in recent work Mittal et al. (2022), partitioned facility location can be shown to be a lower bound of the original objective function, i.e., the facility location function being maximized. It should be noted that memory utilization also increases with the number of parallel processes. For example, when \(N_{PP}\) subsets are selected from partitions in parallel, the memory usage due to similarity kernels is of the order \(\mathcal{O}(N_{PP}\frac{|\mathcal{U}|^{2}}{N_{P}^{2}})\). In our experiments, we set \(N_{PP}=100\) processes.
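Putting these pieces together, a compact sketch of the gain-based sampling and partitioning steps follows; it reuses the hypothetical helper `lazy_greedy_fl` and the unit-normalized features from the earlier snippets, and is an illustration rather than the reference implementation.

```python
import numpy as np

def taylor_softmax(gains):
    """Second-order Taylor-softmax: p_i proportional to 1 + g_i + 0.5 * g_i**2
    (strictly positive for any real gain, unlike a raw normalization)."""
    g = np.asarray(gains, dtype=float)
    w = 1.0 + g + 0.5 * g ** 2
    return w / w.sum()

def select_subset(features, k, n_partitions, seed=0):
    """Partition the corpus, rank each block with greedy facility location,
    and sample k / n_partitions points per block without replacement,
    mirroring Equation (3)."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(features.shape[0]), n_partitions)
    chosen = []
    for block in blocks:
        K = features[block] @ features[block].T       # per-block kernel only
        order, gains = lazy_greedy_fl(K, len(block))  # gains logged at insertion
        p = taylor_softmax(gains)
        picks = rng.choice(len(block), size=k // n_partitions,
                           replace=False, p=p)
        chosen.extend(int(block[order[j]]) for j in picks)
    return chosen
```

The strictly positive Taylor weights keep even low-gain samples reachable, which is what provides the exploration behaviour described above.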
## 4 Experiments and Results

We use BERT Devlin et al. (2019), GPT-2 Radford et al. (2019) and a domain-specific version of BERT - BioBERT Lee et al. (2020) - as the underlying LMs. Specifically, we use BERT-Base (110M) and GPT2-Small (124M). For BERT, we use English Wikipedia in conjunction with BooksCorpus as the pre-training corpora and employ the MLM and NSP tasks for pre-training, following the details in the work of Devlin et al. (2019). We perform pre-training using a batch size of 1024 for 1,000,000 steps in the case of vanilla BERT. We perform ablations over data subset sizes and the number of pre-training steps for Ingenious-enabled pre-training and find a subset size of \(25\%\) (Section 4.4) with 250,000 pre-training steps (25%) to be an optimal choice. We set the value of R to 25,000 steps. We refer the reader to the Appendix for further implementation details. For Ingenious-enabled pre-training of BioBERT and GPT-2, we discuss the implementation details and experimental results in Sections 4.6 and 4.7, respectively.

### Subset selection for BERT efficiency

We consider two leagues of pre-trained models, _viz._, (i) BERT pre-trained on subsets selected through Ingenious and (ii) vanilla BERT pre-trained fully up to 1 million steps. We contrast these by fine-tuning each on the commonly used GLUE benchmark Wang et al. (2019) and report the performance of each. In Table 1, we report the accuracy averaged across all GLUE tasks over 20 runs on the dev sets, obtained after 250K pre-training steps. Further, we compare Ingenious against three baselines - **B1) Early Stopping:** BERT pre-training stopped at 250K steps as the checkpoint for evaluation; **B2) Random Selection:** which is obtained by pre-training BERT on a randomly sampled subset of the same size as that selected by Ingenious; **B3) Loss-based Sampling**[10]: which is obtained by pre-training BERT on a subset of the same size as that selected by Ingenious, sampled from a probability distribution constructed by ranking the losses in descending order and allocating high-rank (high-loss) samples greater probability than low-rank (low-loss) samples. Further, we would like to emphasise that we choose the baselines B2 and B3 owing to their relevance to making LM pre-training efficient through data optimization. We observe that despite using only a subset of the training data and being trained only for 250K steps, Ingenious achieves \(98.6\%\) of the performance of the vanilla fully pre-trained BERT. In comparison with the baselines (B1, B2, and B3), Ingenious yields a better Avg. GLUE score. Ingenious also outperforms baseline B3, which prioritizes training the BERT model on high-loss samples. Prioritizing high-loss samples is likely to result in overfitting, which may explain the poor fine-tuning performance of baseline B3 on GLUE tasks compared to baseline B1. Therefore, Ingenious selects informative subsets that not only help improve BERT pre-training convergence but also help retain its generalization capabilities. Further, we observe that extended training of Ingenious till 400K steps yields \(99.1\%\) of the performance of vanilla BERT. We would like to highlight that most of the downstream task performance achieved by an LM is due to the initial stages of pre-training, with most of the later pre-training resulting in up to \(\sim 1\%\) improvement [14]. In this context, Ingenious _helps in achieving later-stage performance gains relatively earlier_. Finally, we would like to highlight that Ingenious performs significantly better than the baselines on the CoLA task (Table 1), which is deemed the most difficult [13] in the GLUE benchmark.
This implies that the subsets selected by Ingenious are able to capture the important and highly informative signals from the underlying data, resulting in robust performance on challenging tasks as well. Further, to compare the different methods at different stages of pre-training, we obtain the corresponding checkpoints and fine-tune them on the GLUE tasks. For this particular setting, we present a comparison of vanilla BERT pre-training against Ingenious in Figure 3.

| | **Avg. GLUE Score** | **CoLA Score** |
| --- | --- | --- |
| **Vanilla (1M steps)** | 82.76 | 55.98 |
| **Early Stopping (B1) (250K steps)** | 81.27 (-1.49%) | 50.93 (-5.05%) |
| **Random-Selection (B2) (250K steps)** | 80.64 (-2.12%) | 51.2 (-4.78%) |
| **Loss-based Sampling (B3) (250K steps)** | 81.05 (-1.71%) | 51.68 (-4.3%) |
| **Ingenious (250K steps)** | 81.60 (-1.16%) | 54.61 (-1.37%) |
| **Ingenious (400K steps)** | 82.05 (-0.71%) | 55.48 (-0.5%) |

Table 1: Comparison of Ingenious with vanilla pre-training (full 1M steps) and other baselines for BERT. We report fine-tuning performance on the GLUE benchmark and the CoLA task in GLUE, averaged over 20 runs. Numbers in brackets denote the difference relative to the vanilla variant. We report metrics for Ingenious and the baselines after 250K pre-training steps. Please refer to Appendix 2 for task-wise scores.

Figure 3: Comparison of Ingenious with vanilla BERT on GLUE performance _vs._ pre-training steps (top) and cost (bottom), using checkpoints obtained at intermediate pre-training stages.

We plot the performance for all methods after the warm-up phase, once useful feature embeddings are learnt. It can be seen that Ingenious shows better performance than all the baselines at 250K steps of pre-training, and beyond 250K steps the trend continues consistently (Figure 3 - top). Also, pre-training through informative subsets enables BERT to reach at 250K steps a performance level similar to what vanilla pre-training achieves only after over 350K iterations. Similarly, for any given pre-training cost, Ingenious yields a better GLUE score than the baselines (Figure 3 - bottom).

**Effectiveness of Importance Sampling:** We also evaluated a variant where the samples are selected greedily based on submodular ranking instead of importance sampling over submodular gains. In contrast to the 81.6 achieved by Ingenious, it achieved an Avg. GLUE score of 80.5 after 250K pre-training steps, highlighting the effectiveness of importance sampling.

### Knowledge Retention with Ingenious

Large PTLMs, when trained on a sufficiently large corpus, store various types of knowledge implicitly in their parameters (Alkhamissi et al., 2022). Since Ingenious uses only a subset of the whole data for pre-training, it is natural for it to encode less knowledge in its parameters; but how does it compare with vanilla BERT pre-training and the other baselines when it comes to knowledge retention? To answer this question, we use the LAMA benchmark (Petroni et al., 2019), a probe designed to analyze the factual knowledge present in PTLMs. LAMA is derived from four distinct types of knowledge sources - Google-RE, T-REx, ConceptNet, and SQuAD - from which cloze sentences are created using facts contained in the respective knowledge sources. The PTLM has to predict the fact tokens in place of the mask tokens in the cloze sentences. In Table 3, we summarize the results.
We note that Ingenious suffers minimal loss in knowledge retention with respect to fully pre-trained vanilla BERT on all tasks. Further, the decrease in performance is smaller than for the baselines, which (for most tasks) suffer a more severe decrease in performance. Intuitively, we attribute this to the ability of Ingenious to select highly informative subsets from the corpus while excluding redundant information.

### Feature embeddings and subset selection

Different BERT layers have been shown to capture different information - lower layers capture word order (Rogers et al., 2020), middle layers capture syntactic information (Hewitt and Manning, 2019; Jawahar et al., 2019), and the later layers capture task-specific information (Kovaleva et al., 2019; Hao et al., 2019). We vary the layer (3, 6, 9, and 12) used to obtain features for subset selection and report the performance on GLUE in Table 2(a). We observe that layer-9 features yield the best results. Further, in Table 2(a), we compare the effect of using TF-IDF as sample representations and contrast it against dense features (BERT Layer-9). We observe that dense embeddings perform better than shallow TF-IDF features.

Table 2: Ablation study by varying - 1) the embedding representation for selecting subsets, 2) the size of the subset, and 3) the number of partitions used during pre-training. We report the mean GLUE score to compare Ingenious variants.

Table 3: Knowledge retention of different models as measured by the LAMA probe. We report P@1 scores for all four subtasks in LAMA.

### Subset size for efficiency gains

We study the effect of the size of the subset selected through Ingenious that is used for pre-training BERT. In Table 2(b), we analyse the following subset sizes, _viz._, 10%, 15%, 20%, 25% and 30%, and evaluate the fine-tuning performance on GLUE. While lower subset sizes (10-20%) result in inferior performance, owing to the fact that the LM is shown less information, optimal performance is observed when 25% of the pre-training corpus is used; hence, we report the corresponding results in Table 1.

### Partitions for efficient subset selection

As discussed in the approach, we divide the pre-training dataset into partitions. In Table 2(c), we analyse the impact on GLUE performance as the number of partitions is varied. Using the fewest partitions (1500) is found to yield optimal performance. This aligns with the intuition that fewer partitions enable better subset selection, since more samples are present in a single partition, allowing more representative samples to be selected overall.

### Improving Pre-training Efficiency of domain-specific LM - BioBERT

We evaluate the performance of BioBERT (Lee et al., 2020) pre-trained on subsets selected through Ingenious and compare it with vanilla BioBERT by fine-tuning on biomedical datasets for the Named Entity Recognition (NER) and Relation Extraction (RE) tasks. For vanilla BioBERT, we start with a pre-trained BERT model and further pre-train it on the PubMed abstracts dataset for 200,000 steps. Please refer to the Appendix for further implementation details. We present the performance convergence plots of vanilla BioBERT vs. training time using Ingenious with a subset size of 25% in Figure 4.
It shows that during the initial stages of pre-training, Ingenious performs similarly to vanilla pre-training, since the LM is still learning representations; however, once better representations for subset selection are learned, Ingenious achieves faster convergence than vanilla _w.r.t._ pre-training time and reaches the best accuracy around 1.4x faster.

### Analysing Pre-training Efficiency gains of GPT-2 through Ingenious

We also pre-train GPT-2 (Radford et al., 2019) using Ingenious. We estimate the mean accuracy for GLUE fine-tuning (averaged over 20 runs) and the zero-shot accuracy on the BBQ Lite generative task. Please refer to the Appendix for implementation details. We plot the performance (see Figure 5) obtained for the above benchmarks against checkpoints at different pre-training stages (steps). Figure 5 (left and right) shows that Ingenious performs consistently better than vanilla GPT-2 pre-training on GLUE and BBQ Lite, respectively, at different stages of pre-training, indicating better convergence.

Figure 4: Plots (a) and (b) are the convergence results comparing Avg. F1 score (over three runs) with the wall-clock time for vanilla BioBERT and BioBERT using Ingenious with a 25% subset. We observe that Ingenious achieves much faster convergence than vanilla BioBERT (i.e., full training).

Figure 5: Comparison of Ingenious with vanilla GPT-2 pre-training at different pre-training stages. Pre-training on Ingenious subsets enables GPT-2 to achieve a better GLUE score consistently.

## 5 Conclusion

We attempt to address inefficiencies in the pre-training of large LMs via data subset selection and propose Ingenious, a framework that selects informative data subsets representative of the entire corpus. The LM is trained on only one subset at a given time, and the subset is updated periodically. We show extensive ablations justifying our design choices, and present experiments using multiple LLMs such as BERT & GPT-2. We show that up to \(\sim 99\%\) of the performance of fully-trained models can be achieved with significant cost savings.

## 6 Limitations

In terms of limitations, the submodular maximization based on the estimation of pairwise sample similarity can potentially be constrained by memory limitations and might require high CPU RAM capacity. Further, we acknowledge that our experiments are performed on relatively smaller PTLMs compared to GPT-3, OPT or PaLM, owing to resource limitations. We have tried our best to perform extensive experiments and ablation studies to inform our design choices within our resource constraints.

## 7 Ethical Considerations

We believe that Ingenious has a significant positive impact on society since it makes pre-training of LMs compute-efficient, thereby reducing CO2 emissions and energy costs. Nonetheless, the Ingenious framework is susceptible to biases and toxic words within the pre-training corpora, as it relies on standard pre-training datasets. An exciting future direction of this research is to investigate whether we could use targeted subset selection to filter out toxic words, as well as phrases that promote cultural stereotypes and biases, from the pre-training corpora before LM pre-training.
2301.05851
The Weyl law of transmission eigenvalues and the completeness of generalized transmission eigenfunctions without complementing conditions
The transmission eigenvalue problem is a system of two second-order elliptic equations of two unknowns equipped with the Cauchy data on the boundary. In this work, we establish the Weyl law for the eigenvalues and the completeness of the generalized eigenfunctions for a system without complementing conditions, i.e., the two equations of the system have the same coefficients for the second order terms, and thus being degenerate. These coefficients are allowed to be anisotropic and are assumed to be of class $C^2$. One of the keys of the analysis is to establish the well-posedness and the regularity in $L^p$-scale for such a system. As a result, we largely extend and rediscover known results for which the coefficients for the second order terms are required to be isotropic and of class $C^\infty$ using a new approach.
Jean Fornerod, Hoai-Minh Nguyen
2023-01-14T08:11:10Z
http://arxiv.org/abs/2301.05851v1
The Weyl law of transmission eigenvalues and the completeness of generalized transmission eigenfunctions without complementing conditions

###### Abstract.

The transmission eigenvalue problem is a system of two second-order elliptic equations of two unknowns equipped with the Cauchy data on the boundary. In this work, we establish the Weyl law for the eigenvalues and the completeness of the generalized eigenfunctions for a system without complementing conditions, i.e., the two equations of the system have the same coefficients for the second order terms, and thus being degenerate. These coefficients are allowed to be anisotropic and are assumed to be of class \(C^{2}\). One of the keys of the analysis is to establish the well-posedness and the regularity in \(L^{p}\)-scale for such a system. As a result, we largely extend and rediscover known results for which the coefficients for the second order terms are required to be isotropic and of class \(C^{\infty}\) using a new approach.

**MSC**: 47A10, 47A40, 35A01, 35A15, 78A25.

**Keywords**: transmission eigenvalue problem, inverse scattering, Weyl law, counting function, generalized eigenfunctions, completeness, Cauchy's problems, regularity theory, Hilbert-Schmidt operators.

###### Contents

* 1 Introduction
* 2 Notations
* 3 Well-posedness and regularity theory for the transmission eigenvalue problems
  * 3.1 Half space analysis
  * 3.2 Proof of Theorem 3.1
* 4 The Weyl law for the transmission eigenvalues
  * 4.1 The operator \(T_{\lambda}\) and its adjoint \(T_{\lambda}^{*}\)
  * 4.2 Hilbert-Schmidt operators
  * 4.3 The operators \(\mathbf{T}_{\theta,t}\) and their properties
  * 4.4 The approximation of the trace of a kernel
  * 4.5 A connection of the counting function and the trace of \(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\) for large \(t\)
  * 4.6 Proof of Theorem 1.1
* 5 Proof of Theorem 1.2

## 1. Introduction

The transmission eigenvalue problem plays a role in the inverse scattering theory for inhomogeneous media. This eigenvalue problem is connected to the injectivity of the corresponding scattering operator [12], [20]. Transmission eigenvalues are related to interrogating frequencies for which there is an incident field that is not scattered by the medium. In the acoustic setting, the transmission eigenvalue problem is a system of two second-order elliptic equations of two unknowns equipped with the Cauchy data on the boundary. After four decades of extensive study, the spectral properties are known to depend on the type of contrast of the media near the boundary. Natural and interesting questions on the interior transmission eigenvalue problem include: the _discreteness_ of the spectrum (see e.g. [6, 4, 40, 21, 33, 11]), the _location_ of transmission eigenvalues (see [8, 24, 41, 42], and also [9] for the application in the time domain), the _Weyl law_ of transmission eigenvalues and the _completeness_ of the generalized eigenfunctions (see e.g. [21, 22, 23, 39]). We refer the reader to [7] for a recent and self-contained introduction to the transmission eigenvalue problem and its applications. Let us describe its mathematical formulation. Let \(\Omega\) be a bounded, simply connected, open subset of \(\mathbb{R}^{d}\) of class \(C^{3}\) with \(d\geq 2\). Let \(A_{1},A_{2}\) be two real, symmetric matrix-valued functions, and let \(\Sigma_{1},\Sigma_{2}\) be two bounded positive functions that are all defined in \(\Omega\).
Assume that \(A_{1}\) and \(A_{2}\) are uniformly elliptic, and \(\Sigma_{1}\) and \(\Sigma_{2}\) are bounded below by a positive constant in \(\Omega\), i.e., for some constant \(\Lambda\geq 1\), one has, for \(\ell=1,2\), \[\Lambda^{-1}|\xi|^{2}\leq\langle A_{\ell}(x)\xi,\xi\rangle\leq\Lambda|\xi|^{2}\quad\text{ for all }\xi\in\mathbb{R}^{d},\text{ for a.e. }x\in\Omega, \tag{1.1}\] and \[\Lambda^{-1}\leq\Sigma_{\ell}(x)\leq\Lambda\text{ for a.e. }x\in\Omega. \tag{1.2}\] Here and in what follows, \(\langle\cdot,\cdot\rangle\) denotes the Euclidean scalar product in \(\mathbb{C}^{d}\) and \(|\cdot|\) is the corresponding norm. A complex number \(\lambda\) is called an eigenvalue of the transmission eigenvalue problem associated with the pairs \((A_{1},\Sigma_{1})\) and \((A_{2},\Sigma_{2})\) in \(\Omega\) if there is a non-zero pair of functions \((u_{1},u_{2})\in[H^{1}(\Omega)]^{2}\) that satisfies the system \[\left\{\begin{array}{cl}\text{div}(A_{1}\nabla u_{1})-\lambda\Sigma_{1}u_{1}=0&\text{ in }\Omega,\\ \text{div}(A_{2}\nabla u_{2})-\lambda\Sigma_{2}u_{2}=0&\text{ in }\Omega,\\ u_{1}=u_{2},\quad A_{1}\nabla u_{1}\cdot\nu=A_{2}\nabla u_{2}\cdot\nu&\text{ on }\Gamma.\end{array}\right. \tag{1.3}\] Here and in what follows, \(\Gamma\) denotes \(\partial\Omega\), and \(\nu\) denotes the outward normal unit vector on \(\Gamma\). Such a pair \((u_{1},u_{2})\) is then called an eigenfunction pair. Assume that \(A_{1}\), \(A_{2}\), \(\Sigma_{1}\), \(\Sigma_{2}\) are continuous in \(\bar{\Omega}\), and the following conditions on the boundary \(\Gamma\) hold, with \(\nu=\nu(x)\) : \[\langle A_{2}(x)\nu,\nu\rangle\langle A_{2}(x)\xi,\xi\rangle-\langle A_{2}(x)\nu,\xi\rangle^{2}\neq\langle A_{1}(x)\nu,\nu\rangle\langle A_{1}(x)\xi,\xi\rangle-\langle A_{1}(x)\nu,\xi\rangle^{2}, \tag{1.4}\] for all \(x\in\Gamma\) and for all \(\xi\in\mathbb{R}^{d}\setminus\{0\}\) with \(\langle\xi,\nu\rangle=0\), and \[\big{\langle}A_{2}(x)\nu,\nu\big{\rangle}\Sigma_{2}(x)\neq\big{\langle}A_{1}(x)\nu,\nu\big{\rangle}\Sigma_{1}(x),\ \forall\,x\in\Gamma. \tag{1.5}\] (Q. H.) Nguyen and the second author [34] established the Weyl law of eigenvalues and the completeness of the generalized eigenfunctions for the transmission eigenvalue problem under conditions (1.4) and (1.5) via Fourier analysis, assuming that \(A_{1}\), \(A_{2}\), \(\Sigma_{1}\), \(\Sigma_{2}\) are continuous in \(\bar{\Omega}\). Condition (1.4) is equivalent to the celebrated complementing conditions due to Agmon, Douglis, and Nirenberg [3] (see also [2]). The explicit formula given here was derived in [29] in the context of the study of negative-index materials. Conditions (1.4) and (1.5) were derived by (Q. H.) Nguyen and the second author in [33] in their study of the discreteness of the eigenvalues for the transmission eigenvalue problem. In the case \[A_{1}=A_{2}=A\ \text{in}\ \Omega, \tag{1.6}\] it was also shown by (Q. H.) Nguyen and the second author [33] (see also [40]) using the multiplier technique that the discreteness holds if \[\Sigma_{1}\neq\Sigma_{2}\ \text{on}\ \Gamma. \tag{1.7}\] The goal of this paper is to study the Weyl law of the eigenvalues and the completeness of the generalized eigenfunctions under conditions (1.6) and (1.7). It is worth noting that results in this direction have been obtained previously under stronger constraints on the coefficients than (1.6) and (1.7).
Robbiano [39] (see also [38]) gives the sharp order of the counting function when \(A=I\) in \(\Omega\), \(\Sigma_{1}=1\), \(\Sigma_{2}\neq\Sigma_{1}\) near the boundary, and \(\Sigma_{2}\) is smooth. The analysis is based on both microlocal analysis (see, e.g., [16, 46]) and the regularity theory for the transmission eigenvalue problem. In the isotropic case, the Weyl law was established by Petkov and Vodev [37] and Vodev [42, 43, 44] for \(C^{\infty}\) coefficients. Their analysis is heavily based on microlocal analysis, and the smoothness condition is essential there. In addition, their work involved a delicate analysis of the Dirichlet-to-Neumann maps using a non-standard parametrix construction initiated by Vodev [41], which is of independent interest. It is not clear how one can improve the \(C^{\infty}\) condition and extend their results to the anisotropic setting using their approach. Concerning the completeness of the generalized eigenfunctions, we want to mention the work of Robbiano [38] where the case \(A=I\) and \(\Sigma_{1}\neq\Sigma_{2}\) in \(\bar{\Omega}\) was considered. We are ready to state the main results of this paper. From now on, we will assume in addition that \[\|(A_{1},A_{2})\|_{C^{2}(\bar{\Omega})}+\|(\Sigma_{1},\Sigma_{2})\|_{C^{1}(\bar{\Omega})}\leq\Lambda. \tag{1.8}\] We denote by \((\lambda_{j})_{j}\) the set of transmission eigenvalues associated with the transmission eigenvalue problem (1.3). Concerning the Weyl law, we have **Theorem 1.1**.: _Assume (1.1)-(1.2) and (1.6)-(1.8). Let \(\mathcal{N}(t)\) denote the counting function, i.e._ \[\mathcal{N}(t)=\#\{j\in\mathbb{N}:|\lambda_{j}|\leq t\}.\] _Then_ \[\mathcal{N}(t)=\mathbf{c}t^{\frac{d}{2}}+o(t^{\frac{d}{2}})\ \text{as}\ t\to+\infty,\] _where_ \[\mathbf{c}:=\frac{1}{(2\pi)^{d}}\sum_{\ell=1}^{2}\int_{\Omega}\Big{|}\Big{\{}\xi\in\mathbb{R}^{d};\ \langle A_{\ell}(x)\xi,\xi\rangle<\Sigma_{\ell}(x)\Big{\}}\Big{|}\,dx.\] For a measurable subset \(D\) of \(\mathbb{R}^{d}\), we denote by \(|D|\) its (Lebesgue) measure. Concerning the completeness, we obtain **Theorem 1.2**.: _Assume (1.1)-(1.2) and (1.6)-(1.8). The set of generalized eigenfunction pairs of (1.3) is complete in \(L^{2}(\Omega)\times L^{2}(\Omega)\)._ **Remark 1.1**.: As a direct consequence of either Theorem 1.1 or Theorem 1.2, the number of eigenvalues of the transmission eigenvalue problem is infinite. As far as we know, this fact is new under the assumption that \(A\) is allowed to be anisotropic and the regularity of the coefficients is only required up to order \(2\). The analysis used in the proof of Theorem 1.1 and/or Theorem 1.2 also allows us to obtain the following result on the transmission eigenvalue free region of the complex plane \(\mathbb{C}\). **Proposition 1.1**.: _Assume (1.1)-(1.2) and (1.6)-(1.8). For \(\gamma>0\), there exists \(\lambda_{0}>0\) such that if \(\lambda\in\mathbb{C}\) with \(|\Im(\lambda)|\geq\gamma|\lambda|\) and \(|\lambda|\geq\lambda_{0}\), then \(\lambda\) is not a transmission eigenvalue._ Here and in what follows, for \(z\in\mathbb{C}\), let \(\Im(z)\) denote the imaginary part of \(z\). A more general version of Proposition 1.1 is given in Proposition 3.1. **Remark 1.2**.: Since \(\gamma>0\) can be chosen arbitrarily small, combining the discreteness result in [33] mentioned above and Proposition 1.1, one derives that all the transmission eigenvalues, but finitely many, lie in a wedge of arbitrarily small angle. Some comments on Theorem 1.1 and Theorem 1.2 are in order.
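As an elementary illustration of the constant \(\mathbf{c}\) (a computation we add here for the reader's convenience; it is not part of the statement above), note that the set appearing in its definition is an ellipsoid: the change of variables \(\xi=A_{\ell}(x)^{-1/2}\eta\) gives \[\Big{|}\Big{\{}\xi\in\mathbb{R}^{d};\ \langle A_{\ell}(x)\xi,\xi\rangle<\Sigma_{\ell}(x)\Big{\}}\Big{|}=\frac{\omega_{d}\,\Sigma_{\ell}(x)^{d/2}}{\sqrt{\det A_{\ell}(x)}},\qquad\omega_{d}:=\frac{\pi^{d/2}}{\Gamma(d/2+1)},\] so that, under (1.6), \[\mathbf{c}=\frac{\omega_{d}}{(2\pi)^{d}}\int_{\Omega}\frac{\Sigma_{1}(x)^{d/2}+\Sigma_{2}(x)^{d/2}}{\sqrt{\det A(x)}}\,dx.\] In particular, for \(A=I\) and \(\Sigma_{\ell}=n_{\ell}^{2}\), one recovers the familiar constant \(\frac{\omega_{d}}{(2\pi)^{d}}\int_{\Omega}\big{(}n_{1}^{d}+n_{2}^{d}\big{)}\,dx\) of the classical Weyl asymptotics.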
In the conclusion of Theorem 1.1, the multiplicity of eigenvalues is taken into account and the multiplicity is associated with some operator \(T_{\lambda^{*}}\), which is introduced in Section 4 (see (4.5) and (4.32)). Concerning \(T_{\lambda^{*}}\), the following facts hold (see Remark 4.3 and Remark 4.5 for more information): if \(\mu\) is a characteristic value of the operator \(T_{\lambda^{*}}\) associated with an eigenfunction \((u,v)\) and \(\lambda^{*}+\mu\neq 0\), then \(\lambda^{*}+\mu\) is a transmission eigenvalue of (1.3) with an eigenfunction pair \((u_{1},u_{2})\) given by \[u_{1}=(\lambda^{*}+\mu)u+v\qquad\text{ and }\qquad u_{2}=v.\] Moreover, if \(\lambda_{j}\) is a transmission eigenvalue, then \(\lambda_{j}\neq\lambda^{*}\) and \(\lambda_{j}-\lambda^{*}\) is a characteristic value of \(T_{\lambda^{*}}\). In Theorem 1.2, the generalized eigenfunctions are also associated with such an operator \(T_{\lambda^{*}}\). We recall that the generalized eigenfunctions are complete in \([L^{2}(\Omega)]^{2}\) if the subspace spanned by them is dense in \([L^{2}(\Omega)]^{2}\). Theorem 1.1 and Theorem 1.2 provide the Weyl laws and the completeness under the assumptions (1.6) and (1.7), together with the regularity conditions in (1.8). Our results hold for \(A_{1}=A_{2}=A\) being anisotropic, in contrast to the isotropic setting considered previously. Moreover, the regularity assumption (1.8) on the coefficients was out of reach previously. Our approach is in the spirit of [34] and is hence different from the ones used to study these problems in the previous works mentioned above. The key idea is to establish the _regularity theory_ for the transmission eigenvalue problem under the stated assumptions (see Theorem 3.1). Nevertheless, several new ingredients and observations are required for the regularity theory due to the fact that (1.6), which is degenerate, is considered instead of (1.4). One of the key steps to capture the phenomena is to derive appropriate estimates in a half-space setting. It is important to note that since \(A_{1}=A_{2}=A\), the setting is non-standard, and the classical arguments pioneered in [2, 3] cannot be applied since the roles of \(\Sigma_{1}\) and \(\Sigma_{2}\) are ignored there. To this end, our arguments for the Cauchy problems not only require information on the first derivatives of the data and their structure, but also involve information on the second derivatives and their structure (see, e.g., Lemma 3.2). This is quite distinct from the complementing case, where the arguments for the Cauchy problems only require information on the first derivatives, and no structure of the data is needed [34] (see, e.g., [34, Lemma 2 and Corollary 2]). One might note that the arguments used to derive the discreteness in [33] require fewer assumptions on the regularity of the coefficients but only give information for one direction of \(\lambda\) (\(\arg\lambda=\pi/2\)) for large \(\lambda\). This is not sufficient to apply the theory of Hilbert-Schmidt operators. We have so far discussed the transmission eigenvalue problem in the acoustic setting. Known results for the transmission eigenvalue problem in the electromagnetic setting are much scarcer.
In this direction, we mention the work of Cakoni and Nguyen [10] on the state of the art on the discreteness of the eigenvalues, the work of Fornerod and Nguyen [13] on the completeness of generalized eigenfunctions and the upper bound of the eigenvalues for the setting considered in [10], and the work of Vodev [45] on the free region of eigenvalues for a setting considered in [10], and the references therein. The Cauchy problem also naturally appears in the context of negative-index materials after using reflections, as initiated in [25] (see also [31]). The well-posedness and the limiting absorption principle for the Helmholtz equation with sign-changing coefficients were developed by the second author [29] using the Fourier and multiplier approach (see also [35]). The work [29] deals with the stability question of negative-index materials, and is the starting point for the analysis of the transmission eigenvalue problems in [33, 34] (see also [10]). Other aspects and applications of negative-index materials, as well as the stability and instability of the Cauchy problem, are discussed in [27, 28, 26, 30] and the references therein. A survey is given in [32]. The paper is organized as follows. Section 2 is devoted to defining some notations used throughout the paper. In Section 3, we establish the well-posedness and the regularity theory for the Cauchy systems associated with the transmission eigenvalue problems. The analysis is then developed in such a way that the theory of Hilbert-Schmidt operators can be used. This is given in Section 4, where the Weyl laws are established. The completeness is considered in Section 5.

## 2. Notations

Here are some useful notations used throughout this paper. We denote, for \(\tau>0\), \[\Omega_{\tau}=\Big{\{}x\in\Omega:\operatorname{dist}(x,\Gamma)<\tau\Big{\}}. \tag{2.1}\] For \(d\geq 2\), set \[\mathbb{R}^{d}_{+}=\Big{\{}x\in\mathbb{R}^{d};x_{d}>0\Big{\}}\quad\text{ and}\quad\mathbb{R}^{d}_{0}=\Big{\{}x\in\mathbb{R}^{d};x_{d}=0\Big{\}}.\] We will identify \(\mathbb{R}^{d}_{0}\) with \(\mathbb{R}^{d-1}\) in several places. For \(s>0\), we denote \[B_{s}=\{x\in\mathbb{R}^{d}:|x|<s\}.\] For \(m\geq 1\), \(p\geq 1\), \(\lambda\in\mathbb{C}^{*}\), and \(u\in W^{m,p}(\Omega)\), we define \[\|u\|_{W^{m,p}_{\lambda}(\Omega)}=\left(\sum_{j=0}^{m}\||\lambda|^{\frac{m-j}{2}}\nabla^{j}u\|^{p}_{L^{p}(\Omega)}\right)^{1/p}. \tag{2.2}\]

## 3. Well-posedness and regularity theory for the transmission eigenvalue problems

In this section, we study the well-posedness and the regularity theory of the Cauchy problem \[\left\{\begin{array}{rll}\operatorname{div}(A_{1}\nabla u_{1})-\lambda\Sigma_{1}u_{1}=f_{1}&\text{ in }\Omega,\\ \operatorname{div}(A_{2}\nabla u_{2})-\lambda\Sigma_{2}u_{2}=f_{2}&\text{ in }\Omega,\\ u_{1}-u_{2}=0,\quad(A_{1}\nabla u_{1}-A_{2}\nabla u_{2})\cdot\nu=0&\text{ on }\Gamma,\end{array}\right. \tag{3.1}\] under the assumptions (1.1)-(1.2), and (1.7)-(1.8), and \[A_{1}=A_{2}=A\text{ in }\Omega_{\tau}, \tag{3.2}\] for some \(\tau>0\), instead of (1.6), for appropriate \(\lambda\in\mathbb{C}\) and \((f_{1},f_{2})\) in \(L^{p}\)-scale. Here is the main result of this section. **Theorem 3.1**.: _Assume (1.1)-(1.2), (1.7)-(1.8), and (3.2). Let \(1<p<+\infty\) and \(\gamma\in(0,1)\)._
There exist constants \(\lambda_{0}>0\) and \(C>0\) depending on \(\Omega\), \(\Lambda\), \(\tau\), \(p\), and \(\gamma\) such that for \(\lambda\in\mathbb{C}\) with \(|\lambda|>\lambda_{0}\) and \(|\Im(\lambda)|\geq\gamma|\lambda|\) and for \((f_{1},f_{2})\in[L^{p}(\Omega)]^{2}\), there is a unique solution \((u_{1},u_{2})\in[L^{p}(\Omega)]^{2}\) with \(u_{1}-u_{2}\in W^{2,p}(\Omega)\) of the Cauchy problem (3.1). Moreover,_ \[|\lambda|\|(u_{1},u_{2})\|_{L^{p}(\Omega)}+\|u_{1}-u_{2}\|_{W^{2,p}_{\lambda}(\Omega)}\leq C\|(f_{1},f_{2})\|_{L^{p}(\Omega)}. \tag{3.3}\] _Assume in addition that \(f_{1}-f_{2}\in W^{1,p}(\Omega)\). Then \((u_{1},u_{2})\in[W^{1,p}(\Omega)]^{2}\), \(u_{1}-u_{2}\in W^{3,p}(\Omega_{\tau/2})\), and_ \[|\lambda|\|(u_{1},u_{2})\|_{W^{1,p}_{\lambda}(\Omega)}+\|u_{1}-u_{2}\|_{W^{3,p}_{\lambda}(\Omega_{\tau/2})}\leq C\left(|\lambda|^{1/2}\|(f_{1},f_{2})\|_{L^{p}(\Omega)}+\|f_{1}-f_{2}\|_{W^{1,p}_{\lambda}(\Omega)}\right). \tag{3.4}\] **Remark 3.1**.: The boundary conditions must be understood as \[u_{1}-u_{2}=0\text{ on }\Gamma\quad\text{ and }\quad A\nabla(u_{1}-u_{2})\cdot\nu=0\text{ on }\Gamma,\] which make sense since \(u_{1}-u_{2}\in W^{2,p}(\Omega)\). **Remark 3.2**.: In (3.4), we only estimate \(\|u_{1}-u_{2}\|_{W^{3,p}_{\lambda}(\Omega_{\tau/2})}\), not \(\|u_{1}-u_{2}\|_{W^{3,p}_{\lambda}(\Omega)}\), since \(f_{1}\) and \(f_{2}\) are not assumed to be in \(W^{1,p}(\Omega)\). Nevertheless, when \(A_{1}=A_{2}\) in \(\Omega\), the estimate is also valid for \(\|u_{1}-u_{2}\|_{W^{3,p}_{\lambda}(\Omega)}\). **Remark 3.3**.: As a consequence of (3.3) and the theory of regularity of elliptic equations, one derives that \((u_{1},u_{2})\in[W^{2,p}_{loc}(\Omega)]^{2}\) and for \(\Omega^{\prime}\Subset\Omega\)1, it holds Footnote 1: Recall that \(\Omega^{\prime}\Subset\Omega\) means \(\overline{\Omega^{\prime}}\subset\Omega\). \[\|(u_{1},u_{2})\|_{W^{2,p}_{\lambda}(\Omega^{\prime})}\leq C\|(f_{1},f_{2})\|_{L^{p}(\Omega)},\] where \(C\) depends also on \(\Omega^{\prime}\) (see, e.g., [18, Lemma 17.1.5] and [15, Theorem 9.11]). As a consequence of Theorem 3.1, we obtain the following result on the free region of the eigenvalues. **Proposition 3.1**.: _Assume (1.1)-(1.2), (1.7)-(1.8), and (3.2). For \(\gamma>0\), there exists \(\lambda_{0}>0\) such that if \(\lambda\in\mathbb{C}\) with \(|\Im(\lambda)|\geq\gamma|\lambda|\) and \(|\lambda|\geq\lambda_{0}\), then \(\lambda\) is not a transmission eigenvalue._ The rest of this section, containing two subsections, is devoted to the proof of Theorem 3.1. The first one is on the analysis in the half space. The proof of Theorem 3.1 is then given in the second subsection. ### Half space analysis Let \(1<p<+\infty\).
For \(j=1,2,\cdots\), and \(\lambda\in\mathbb{C}\setminus\{0\}\), we denote \[\|\psi\|_{W^{j-1/p,p}_{\lambda}(\mathbb{R}^{d}_{0})}=|\lambda|^{1/2-1/(2p)}\| \psi\|_{W^{j-1,p}_{\lambda}(\mathbb{R}^{d}_{0})}+|\nabla^{j-1}\psi|_{W^{1-1/p,p}(\mathbb{R}^{d}_{0})},\] where \(\|\psi\|_{W^{j-1,p}_{\lambda}(\mathbb{R}^{d}_{0})}\) is defined as in (2.2) with \(\Omega=\mathbb{R}^{d}_{0}\), and \[|\psi|^{p}_{W^{1-1/p,p}(\mathbb{R}^{d}_{0})}=\int_{\mathbb{R}^{d-1}}\int_{ \mathbb{R}^{d-1}}\frac{|\psi(x^{\prime})-\psi(y^{\prime})|^{p}}{|x^{\prime}-y^ {\prime}|^{d+p-2}}\,dx^{\prime}\,dy^{\prime}.\] By the trace theory, there exists a positive constant \(C\) depending only on \(p\) and \(j\) such that \[\|u\|_{W^{j-1/p,p}_{\lambda}(\mathbb{R}^{d}_{0})}\leq C\|u\|_{W^{j,p}_{\lambda} (\mathbb{R}^{d}_{+})}\text{ for }u\in W^{j,p}(\mathbb{R}^{d}_{+}).\] In fact, this inequality holds for \(\lambda\in\mathbb{C}\) with \(|\lambda|=1\); the general case follows by scaling. The starting point and the key ingredient of our analysis is Lemma 3.2. Lemma 3.1 below is a special case of Lemma 3.2 and is later used to derive Lemma 3.2. **Lemma 3.1**.: _Let \(A\in\mathbb{R}^{d\times d}\) be a constant symmetric matrix and let \(\Sigma_{1},\Sigma_{2}\) be two positive constants such that_ \[\Lambda^{-1}|\xi|^{2}\leq\langle A\xi,\xi\rangle\leq\Lambda|\xi|^{2}\text{ for all }\xi\in\mathbb{R}^{d},\] \[\Lambda^{-1}\leq\Sigma_{1},\Sigma_{2}\leq\Lambda,\quad\text{ and }\quad|\Sigma_{1}- \Sigma_{2}|\geq\Lambda^{-1},\] _for some \(\Lambda\geq 1\). Let \(\gamma\in(0,1)\), \(1<p<+\infty\), and let \(\varphi\in W^{2-1/p,p}(\mathbb{R}^{d}_{0})\). Given \(\lambda\in\mathbb{C}\) with \(|\lambda|\geq 1\) and \(|\Im(\lambda)|\geq\gamma|\lambda|\), there exists a unique solution \((u_{1},u_{2})\in[L^{p}(\mathbb{R}^{d}_{+})]^{2}\) with \(u_{1}-u_{2}\in W^{2,p}(\mathbb{R}^{d}_{+})\) of the following Cauchy problem_ \[\left\{\begin{array}{cl}\operatorname{div}(A\nabla u_{1})-\lambda\Sigma_{1} u_{1}=0&\text{ in }\mathbb{R}^{d}_{+},\\ \operatorname{div}(A\nabla u_{2})-\lambda\Sigma_{2}u_{2}=0&\text{ in }\mathbb{R}^{d}_{+},\\ u_{1}-u_{2}=\varphi,\quad A\nabla(u_{1}-u_{2})\cdot e_{d}=0&\text{ on }\mathbb{R}^{d}_{0}.\end{array}\right.\] _Moreover,_ \[|\lambda|\|(u_{1},u_{2})\|_{L^{p}(\mathbb{R}^{d}_{+})}+\|u_{1}-u_{2}\|_{W^{2,p }_{\lambda}(\mathbb{R}^{d}_{+})}\leq C\|\varphi\|_{W^{2-1/p,p}_{\lambda}( \mathbb{R}^{d}_{0})}. \tag{3.5}\] _Assume in addition that \(\varphi\in W^{3-1/p,p}(\mathbb{R}^{d}_{0})\). Then \((u_{1},u_{2})\in[W^{1,p}(\mathbb{R}^{d}_{+})]^{2}\) with \(u_{1}-u_{2}\in W^{3,p}(\mathbb{R}^{d}_{+})\), and_ \[|\lambda|\|(u_{1},u_{2})\|_{W^{1,p}_{\lambda}(\mathbb{R}^{d}_{+})}+\|u_{1}-u_{ 2}\|_{W^{3,p}_{\lambda}(\mathbb{R}^{d}_{+})}\leq C\|\varphi\|_{W^{3-1/p,p}_{ \lambda}(\mathbb{R}^{d}_{0})}. \tag{3.6}\] _Here \(C\) is a positive constant depending only on \(\Lambda\), \(\gamma\), \(p\), and \(d\)._ Proof.: For a function \(u:\mathbb{R}^{d}\to\mathbb{C}\) (resp. \(\varphi:\mathbb{R}^{d-1}\to\mathbb{C}\)) we denote by \(\hat{u}\) the Fourier transform of \(u\) with respect to the first \((d-1)\) variables (resp. 
by \(\hat{\varphi}\) the Fourier transform of \(\varphi\)), i.e., for \((\xi^{\prime},x_{d})\in\mathbb{R}^{d-1}\times(0,\infty)\), \[\hat{u}(\xi^{\prime},x_{d})=\int_{\mathbb{R}^{d-1}}u(x^{\prime},x_{d})e^{-ix^ {\prime}\cdot\xi^{\prime}}\,dx^{\prime}\quad\text{ and }\quad\hat{\varphi}(\xi^{\prime})=\int_{ \mathbb{R}^{d-1}}\varphi(x^{\prime})e^{-ix^{\prime}\cdot\xi^{\prime}}\,dx^{ \prime}.\] Since, for \(\ell=1,2\), \[\operatorname{div}(A\nabla u_{\ell})-\lambda\Sigma_{\ell}u_{\ell}=0\text{ in }\mathbb{R}^{d}_{+},\] it follows that \[a\hat{u}^{\prime\prime}_{\ell}(\xi^{\prime},t)+2ib(\xi^{\prime})\hat{u}^{ \prime}_{\ell}(\xi^{\prime},t)-(c(\xi^{\prime})+\lambda\Sigma_{\ell})\hat{u}_{ \ell}(\xi^{\prime},t)=0\text{ for }t>0,\] where \[a=\langle Ae_{d},e_{d}\rangle,\ \ b(\xi^{\prime})=\sum_{j=1}^{d-1}A_{jd}\xi^{ \prime}_{j},\ \ c(\xi^{\prime})=\sum_{i,j=1}^{d-1}A_{ij}\xi^{\prime}_{i}\xi^{\prime}_{j}, \quad\text{and}\quad ac(\xi^{\prime})-b(\xi^{\prime})^{2}>0, \tag{3.7}\] since \(A\) is symmetric and positive. One then obtains, see, e.g., [34, proof of Lemma 2] for the details, \[\hat{u}_{\ell}(\xi^{\prime},t)=\alpha_{\ell}(\xi^{\prime})e^{\eta_{\ell}(\xi^{ \prime})t} \tag{3.8}\] where \[\eta_{\ell}(\xi^{\prime})=\frac{1}{a}\big{(}-ib(\xi^{\prime})-\sqrt{\Delta_{ \ell}(\xi^{\prime})}\big{)} \tag{3.9}\] and \[\alpha_{\ell}(\xi^{\prime})=\frac{\hat{\varphi}(\xi^{\prime})\sqrt{\Delta_{ \ell+1}(\xi^{\prime})}}{\sqrt{\Delta_{2}(\xi^{\prime})}-\sqrt{\Delta_{1}(\xi^{ \prime})}}\quad\text{ with }\quad\Delta_{\ell}(\xi^{\prime})=-b^{2}(\xi^{\prime})+a\big{(}c(\xi^{ \prime})+\lambda\Sigma_{\ell}\big{)}. \tag{3.10}\] Here we use the convention \(\Delta_{2+\ell}=\Delta_{\ell}\), and \(\sqrt{\Delta_{\ell}}\) denotes the square root of \(\Delta_{\ell}\) with the positive real part. Let \(v_{\ell}\in W^{1,p}(\mathbb{R}^{d}_{+})\) for \(\ell=1\), \(2\) be the unique solution of the system \[\left\{\begin{array}{cc}\mathrm{div}(A\nabla v_{\ell})-\lambda\Sigma_{\ell}v _{\ell}=0&\text{ in }\mathbb{R}^{d}_{+},\\ v_{\ell}=\varphi&\text{ on }\mathbb{R}^{d}_{0}.\end{array}\right.\] We have2, for \(\ell=1\), \(2\), Footnote 2: The results hold for \(|\lambda|=1\), see, e.g. [2, Theorem 14.1], the general case follows by scaling. \[\|v_{\ell}\|_{W^{j,p}_{\lambda}(\mathbb{R}^{d}_{+})}\leq C\|\varphi\|_{W^{j-1 /p,p}_{\lambda}(\mathbb{R}^{d}_{0})}\text{ for }j=2,3, \tag{3.11}\] and \[\hat{v}_{\ell}(\xi^{\prime},t)=\hat{\varphi}(\xi^{\prime})e^{\eta_{\ell}(\xi ^{\prime})t}. \tag{3.12}\] Extend \(u_{\ell}(x^{\prime},t)\) and \(\partial^{2}_{tt}v_{\ell}(x^{\prime},t)\) by \(0\) for \(t<0\) for \(\ell=1\), \(2\) and _still_ denote these extensions by \(u_{\ell}(x^{\prime},t)\) and \(\partial^{2}_{tt}v_{\ell}(x^{\prime},t)\). Let \(\mathcal{F}\) denote the Fourier transform in \(\mathbb{R}^{d}\). 
We then obtain from (3.8) and (3.12) that, with \(\xi=(\xi^{\prime},\xi_{d})\in\mathbb{R}^{d}\), \[\mathcal{F}u_{\ell}(\xi)=-\frac{\hat{\varphi}(\xi^{\prime})}{\eta_{\ell}(\xi ^{\prime})-i\xi_{d}}\frac{\sqrt{\Delta_{\ell+1}(\xi^{\prime})}}{\sqrt{\Delta_{ 2}(\xi^{\prime})}-\sqrt{\Delta_{1}(\xi^{\prime})}}\quad\text{ and }\quad \mathcal{F}\partial^{2}_{tt}v_{\ell}(\xi)=-\frac{\hat{\varphi}(\xi^{\prime}) \eta^{2}_{\ell}(\xi^{\prime})}{\eta_{\ell}(\xi^{\prime})-i\xi_{d}}.\] It follows that \[\mathcal{F}u_{\ell}(\xi)=m_{\ell,\lambda}(\xi)\mathcal{F}\partial^{2}_{tt}v_{ \ell}(\xi),\] where \[m_{\ell,\lambda}(\xi)=\frac{\sqrt{\Delta_{\ell+1}(\xi^{\prime})}}{\eta^{2}_{ \ell}(\xi^{\prime})(\sqrt{\Delta_{2}(\xi^{\prime})}-\sqrt{\Delta_{1}(\xi^{ \prime})})}. \tag{3.13}\] Note that \[\Delta_{2}(\xi^{\prime})-\Delta_{1}(\xi^{\prime})=a\lambda(\Sigma_{2}-\Sigma_ {1})\neq 0\] and \[\frac{1}{\eta_{\ell}(\xi^{\prime})}\stackrel{{\eqref{eq:a}}}{{= }}\frac{a}{-ib(\xi^{\prime})-\sqrt{\Delta_{\ell}(\xi^{\prime})}}=\frac{a\big{(} -ib(\xi^{\prime})+\sqrt{\Delta_{\ell}(\xi^{\prime})}\big{)}}{-b(\xi^{\prime}) ^{2}-\Delta_{\ell}(\xi^{\prime})}\\ \stackrel{{\eqref{eq:a}}}{{=}}\frac{a\big{(}-ib(\xi^{ \prime})+\sqrt{\Delta_{\ell}(\xi^{\prime})}\big{)}}{-a\big{(}c(\xi^{\prime})+ \lambda\Sigma_{\ell}\big{)}}=\frac{ib(\xi^{\prime})-\sqrt{\Delta_{\ell}(\xi^ {\prime})}}{c(\xi^{\prime})+\lambda\Sigma_{\ell}}.\] We derive from (3.13) that \[m_{\ell,\lambda}(\xi)=\frac{\sqrt{\Delta_{\ell+1}(\xi^{\prime})}(\sqrt{\Delta _{1}(\xi^{\prime})}+\sqrt{\Delta_{2}(\xi^{\prime})})(ib(\xi^{\prime})-\sqrt{ \Delta_{\ell}(\xi^{\prime})})^{2}}{a\lambda(\Sigma_{2}-\Sigma_{1})(c(\xi^{ \prime})+\lambda\Sigma_{\ell})^{2}}. \tag{3.14}\] We have, by (3.7) and (3.10),3 Footnote 3: Given two functions, \(p_{1}(\xi^{\prime},\lambda)\) and \(p_{2}(\xi^{\prime},\lambda)\) the notation \(p_{1}(\xi,\lambda)\sim p_{2}(\xi^{\prime},\lambda)\) means that there exists a constant \(C\geq 1\) independent of \(\xi^{\prime}\) and \(\lambda\) such that \(C^{-1}|p_{1}(\xi^{\prime},\lambda)|\leq|p_{2}(\xi^{\prime},\lambda)|\leq C|p_{ 1}(\xi^{\prime},\lambda)|\). \[|\Delta_{\ell}(\xi^{\prime})|\sim(|\xi^{\prime}|^{2}+|\lambda|),\quad|b(\xi^{ \prime})|\leq C|\xi^{\prime}|,\quad\text{ and }\quad|c(\xi^{\prime})+\lambda\Sigma_{\ell}|\sim|\xi^{\prime}|^{2}+| \lambda|.\] We then derive from (3.14) that \[|\xi|^{j}|\nabla^{j}m_{\ell,\lambda}(\xi)|\leq C_{j}|\lambda|^{-1}\text{ for }j\in\mathbb{N}. \tag{3.15}\] It follows from Mikhlin-Hormander's multiplier theorem, see, e.g., [19, Theorem 7.9.5], that \[|\lambda|\|u_{\ell}\|_{L^{p}(\mathbb{R}^{d})}\leq C\|\partial_{tt}^{2}v_{\ell} \|_{L^{p}(\mathbb{R}^{d})}, \tag{3.16}\] which implies \[|\lambda|\|u_{\ell}\|_{L^{p}(\mathbb{R}^{d})}\stackrel{{\eqref{ eq:2011}}}{{\leq}}C\|\varphi\|_{W^{2-1/p,p}_{\lambda}(\mathbb{R}^{d})}. \tag{3.17}\] On the other hand, one has \[\left\{\begin{array}{c}\operatorname{div}\big{(}A\nabla(u_{1}-u_{2})\big{)} -\lambda\Sigma_{1}(u_{1}-u_{2})=\lambda(\Sigma_{1}-\Sigma_{2})u_{2}\text{ in }\mathbb{R}^{d}_{+},\\ u_{1}-u_{2}=0\text{ on }\mathbb{R}^{d}_{0}.\end{array}\right.\] This yields \[\|u_{1}-u_{2}\|_{W^{2,p}_{\lambda}(\mathbb{R}^{d}_{+})}\leq C\|\lambda(\Sigma _{1}-\Sigma_{2})u_{2}\|_{L^{p}(\mathbb{R}^{d}_{+})}\stackrel{{ \eqref{eq:2011}}}{{\leq}}C\|\varphi\|_{W^{2-1/p,p}_{\lambda}( \mathbb{R}^{d}_{+})}.\] We next deal with (3.6). 
By taking the derivative of the system with respect to \(x_{j}\) for \(1\leq j\leq d-1\) and applying (3.5), we have, for \(1\leq j\leq d-1\), \[|\lambda|\|(\partial_{x_{j}}u_{1},\partial_{x_{j}}u_{2})\|_{L^{p}(\mathbb{R}^ {d}_{+})}+\|(\partial_{x_{j}}u_{1}-\partial_{x_{j}}u_{2})\|_{W^{2,p}_{\lambda} (\mathbb{R}^{d}_{+})}\leq C\|\partial_{x_{j}}\varphi\|_{W^{2-1/p,p}_{\lambda}( \mathbb{R}^{d}_{0})}. \tag{3.18}\] Extend \(\partial_{t}u_{\ell}(x^{\prime},t)\) and \(\partial^{3}_{ttt}v_{\ell}(x^{\prime},t)\) by \(0\) for \(t<0\) for \(\ell=1\), \(2\) and _still_ denote these extensions by \(\partial_{t}u_{\ell}(x^{\prime},t)\) and \(\partial^{3}_{ttt}v_{\ell}(x^{\prime},t)\). We then obtain from (3.8) and (3.12) that, with \(\xi=(\xi^{\prime},\xi_{d})\in\mathbb{R}^{d}\), \[\mathcal{F}\partial_{t}u_{\ell}(\xi)=-\frac{\hat{\varphi}(\xi^{\prime})\eta_{ \ell}(\xi^{\prime})}{\eta_{\ell}(\xi^{\prime})-i\xi_{d}}\frac{\sqrt{\Delta_{ \ell+1}(\xi^{\prime})}}{\sqrt{\Delta_{2}(\xi^{\prime})}-\sqrt{\Delta_{1}(\xi^ {\prime})}}\quad\text{ and }\quad\mathcal{F}\partial^{3}_{ttt}v_{\ell}(\xi)=-\frac{\hat{ \varphi}(\xi^{\prime})\eta^{3}_{\ell}(\xi^{\prime})}{\eta_{\ell}(\xi^{\prime}) -i\xi_{d}}.\] This yields \[\mathcal{F}\partial_{t}u_{\ell}(\xi)=m_{\ell,\lambda}(\xi)\mathcal{F}\partial^ {3}_{ttt}v_{\ell}(\xi).\] As in the proof of (3.16), we obtain \[|\lambda|\|\partial_{t}u_{\ell}\|_{L^{p}(\mathbb{R}^{d})}\leq C\|\partial^{3} _{ttt}v_{\ell}\|_{L^{p}(\mathbb{R}^{d})},\] which implies \[|\lambda|\|\partial_{t}u_{\ell}\|_{L^{p}(\mathbb{R}^{d})}\stackrel{{ \eqref{eq:2011}}}{{\leq}}C\|\varphi\|_{W^{3-1/p,p}_{\lambda}( \mathbb{R}^{d})}. \tag{3.19}\] Combining (3.18) and (3.19), we derive that \[|\lambda|\|(\nabla u_{1},\nabla u_{2})\|_{L^{p}(\mathbb{R}^{d}_{+})}+\|\nabla( u_{1}-u_{2})\|_{W^{2,p}_{\lambda}(\mathbb{R}^{d}_{+})}\leq C\|\varphi\|_{W^{3-1/p,p} _{\lambda}(\mathbb{R}^{d}_{0})}. \tag{3.20}\] Assertion (3.6) now follows from (3.20) and (3.5). The proof is complete. We now state and prove a more general version of Lemma 3.1, which is the main ingredient of the proof of Theorem 3.1. **Lemma 3.2**.: _Let \(A\in\mathbb{R}^{d\times d}\) be a constant symmetric matrix and let \(\Sigma_{1},\Sigma_{2}\) be two positive constants such that_ \[\Lambda^{-1}|\xi|^{2}\leq\langle A\xi,\xi\rangle\leq\Lambda|\xi|^{2}\text{ for all }\xi\in\mathbb{R}^{d},\] _and_ \[\Lambda^{-1}\leq\Sigma_{1},\Sigma_{2}\leq\Lambda,\quad\text{ and }\quad|\Sigma_{1}- \Sigma_{2}|\geq\Lambda^{-1},\] _for some \(\Lambda\geq 1\). Let \(\gamma\in(0,1)\), \(1<p<+\infty\), and let \(f_{1},f_{2}\in L^{p}(\mathbb{R}^{d}_{+})\), \(G_{1},G_{2}\in[L^{p}(\mathbb{R}^{d}_{+})]^{d}\) with \(G_{1}-G_{2}\in[W^{1,p}(\mathbb{R}^{d}_{+})]^{d}\), \(\varphi\in W^{2-1/p,p}(\mathbb{R}^{d}_{0})\), \(\psi\in W^{1-1/p,p}(\mathbb{R}^{d}_{0})\), and let \(r_{1}^{(ij)},r_{2}^{(ij)}\in L^{p}(\mathbb{R}^{d}_{+})\) with \(r_{1}^{(ij)}-r_{2}^{(ij)}\in W^{2,p}(\mathbb{R}_{+}^{d})\) for \(1\leq i,j\leq d\). 
Given \(\lambda\in\mathbb{C}\) with \(|\lambda|\geq 1\) and \(|\Im(\lambda)|\geq\gamma|\lambda|\), there exists a unique solution \((u_{1},u_{2})\in[L^{p}(\mathbb{R}_{+}^{d})]^{2}\) with \(u_{1}-u_{2}\in W^{2,p}(\mathbb{R}_{+}^{d})\) of the following Cauchy problem_ \[\left\{\begin{array}{cl}\mathrm{div}(A\nabla u_{1})-\lambda\Sigma_{1}u_{1}=f_{1}+\mathrm{div}(G_{1})+\sum_{i,j=1}^{d}\partial_{ij}^{2}r_{1}^{(ij)}&\text{ in }\mathbb{R}_{+}^{d},\\ \mathrm{div}(A\nabla u_{2})-\lambda\Sigma_{2}u_{2}=f_{2}+\mathrm{div}(G_{2})+\sum_{i,j=1}^{d}\partial_{ij}^{2}r_{2}^{(ij)}&\text{ in }\mathbb{R}_{+}^{d},\\ u_{1}-u_{2}=\varphi,\quad A\nabla(u_{1}-u_{2})\cdot e_{d}=\psi&\text{ on }\mathbb{R}_{0}^{d}.\end{array}\right. \tag{3.21}\] _Moreover,_ \[C\left(|\lambda|\|(u_{1},u_{2})\|_{L^{p}(\mathbb{R}_{+}^{d})}+\|u_{1}-u_{2}\|_{W^{2,p}_{\lambda}(\mathbb{R}_{+}^{d})}\right)\\ \leq\|(f_{1},f_{2})\|_{L^{p}(\mathbb{R}_{+}^{d})}+|\lambda|^{1/2}\|(G_{1},G_{2})\|_{L^{p}(\mathbb{R}_{+}^{d})}+\sum_{i,j=1}^{d}|\lambda|\|(r_{1}^{(ij)},r_{2}^{(ij)})\|_{L^{p}(\mathbb{R}_{+}^{d})}\\ +\|\varphi\|_{W^{2-1/p,p}_{\lambda}(\mathbb{R}_{0}^{d})}+\|\psi\|_{W^{1-1/p,p}_{\lambda}(\mathbb{R}_{0}^{d})}+\|G_{1}-G_{2}\|_{W^{1,p}_{\lambda}(\mathbb{R}_{+}^{d})}+\sum_{i,j=1}^{d}\|r_{1}^{(ij)}-r_{2}^{(ij)}\|_{W^{2,p}_{\lambda}(\mathbb{R}_{+}^{d})}. \tag{3.22}\] _Assume in addition that \(f_{1}-f_{2}\in W^{1,p}(\mathbb{R}_{+}^{d})\), \(G_{1}-G_{2}\in W^{2,p}(\mathbb{R}_{+}^{d})\), \(\varphi\in W^{3-1/p,p}(\mathbb{R}_{0}^{d})\), \(\psi\in W^{2-1/p,p}(\mathbb{R}_{0}^{d})\), and \(r_{1}^{(ij)}=r_{2}^{(ij)}=0\) for all \(1\leq i,j\leq d\). Then \((u_{1},u_{2})\in W^{1,p}(\mathbb{R}_{+}^{d})\) with \(u_{1}-u_{2}\in W^{3,p}(\mathbb{R}_{+}^{d})\), and it holds_ \[C\left(|\lambda|\|(u_{1},u_{2})\|_{W^{1,p}_{\lambda}(\mathbb{R}_{+}^{d})}+\|u_{1}-u_{2}\|_{W^{3,p}_{\lambda}(\mathbb{R}_{+}^{d})}\right)\\ \leq|\lambda|^{1/2}\|(f_{1},f_{2})\|_{L^{p}(\mathbb{R}_{+}^{d})}+|\lambda|\|(G_{1},G_{2})\|_{L^{p}(\mathbb{R}_{+}^{d})}+\|\varphi\|_{W^{3-1/p,p}_{\lambda}(\mathbb{R}_{0}^{d})}\\ +\|\psi\|_{W^{2-1/p,p}_{\lambda}(\mathbb{R}_{0}^{d})}+\|f_{1}-f_{2}\|_{W^{1,p}_{\lambda}(\mathbb{R}_{+}^{d})}+\|G_{1}-G_{2}\|_{W^{2,p}_{\lambda}(\mathbb{R}_{+}^{d})}. \tag{3.23}\] _Here \(C\) denotes a positive constant depending only on \(\Lambda\), \(\gamma\), \(d\), and \(p\)._ **Remark 3.4**.: Concerning (3.23), the assumption \(r_{1}^{(ij)}=r_{2}^{(ij)}=0\) for all \(1\leq i,j\leq d\) is just to avoid redundancy; the same estimate holds under the appropriate assumptions on \(r_{\ell}^{(ij)}\), but these can be incorporated into the conditions on \(f_{\ell}\) and \(G_{\ell}\) instead. Proof.: Since the problem is linear, (3.22) and (3.23) follow from the corresponding estimates in the following two cases: \(\bullet\) Case 1: \(f_{1}=f_{2}=0\), \(G_{1}=G_{2}=0\), and \(r_{1}^{(ij)}=r_{2}^{(ij)}=0\) for all \(1\leq i,j\leq d\). \(\bullet\) Case 2: \(\varphi=0\) and \(\psi=0\). We now proceed with the proof in these cases. _Case 1_: \(f_{1}=f_{2}=0\), \(G_{1}=G_{2}=0\), and \(r_{1}^{(ij)}=r_{2}^{(ij)}=0\) for all \(1\leq i,j\leq d\).
We have \[\left\{\begin{array}{cl}\mathrm{div}(A\nabla u_{1})-\lambda\Sigma_{1}u_{1}=0& \text{ in }\mathbb{R}_{+}^{d},\\ \mathrm{div}(A\nabla u_{2})-\lambda\Sigma_{2}u_{2}=0&\text{ in }\mathbb{R}_{+}^{d}, \\ u_{1}-u_{2}=\varphi,\quad A\nabla(u_{1}-u_{2})\cdot e_{d}=\psi&\text{ on }\mathbb{R}_{0}^{d}.\end{array}\right.\] Let \(v\in W^{1,p}(\mathbb{R}^{d}_{+})\) be the unique solution of \[\left\{\begin{array}{rl}\operatorname{div}(A\nabla v)-\lambda\Sigma_{1}v=0& \text{ in }\mathbb{R}^{d}_{+},\\ A\nabla v\cdot e_{d}=\psi&\text{ on }\mathbb{R}^{d}_{0}.\end{array}\right.\] As a consequence of [17, Theorem 2.3.2.7] and a scaling argument, we have \[\|v\|_{W^{2,p}_{\lambda}(\mathbb{R}^{d}_{+})}\leq C\|\psi\|_{W^{1-1/p,p}_{ \lambda}(\mathbb{R}^{d}_{0})}\quad\text{ and }\quad\|v\|_{W^{3,p}_{\lambda}(\mathbb{R}^{d}_{+})}\leq C\|\psi\|_{W^{2-1/p,p }_{\lambda}(\mathbb{R}^{d}_{0})}. \tag{3.24}\] By the trace theory, it follows that \[\|v\|_{W^{2-1/p,p}_{\lambda}(\mathbb{R}^{d}_{0})}\leq C\|\psi\|_{W^{1-1/p,p}_{ \lambda}(\mathbb{R}^{d}_{0})}\quad\text{ and }\quad\|v\|_{W^{3-1/p,p}_{\lambda}(\mathbb{R}^{d}_{0})}\leq C\|\psi\|_{W^{2-1/ p,p}_{\lambda}(\mathbb{R}^{d}_{0})}. \tag{3.25}\] Considering the system of \((u_{1}-v,u_{2})\) and using (3.24), and (3.25), the conclusion of this case follows from Lemma 3.1. _Case 2:_\(\varphi=0\), \(\psi=0\). In this case, we have \[\left\{\begin{array}{rl}\operatorname{div}(A\nabla u_{1})-\lambda\Sigma_{1}u _{1}=f_{1}+\operatorname{div}(G_{1})+\sum_{i,j=1}^{d}\partial_{ij}^{2}r_{1}^{( ij)}&\text{ in }\mathbb{R}^{d}_{+},\\ \operatorname{div}(A\nabla u_{2})-\lambda\Sigma_{2}u_{2}=f_{2}+\operatorname{ div}(G_{2})+\sum_{i,j=1}^{d}\partial_{ij}^{2}r_{2}^{(ij)}&\text{ in }\mathbb{R}^{d}_{+},\\ u_{1}-u_{2}=0,\quad A\nabla(u_{1}-u_{2})\cdot e_{d}=0&\text{ on }\mathbb{R}^{d}_{0}.\end{array}\right.\] For \(\ell=1,2\), consider the following systems \[\left\{\begin{array}{rl}\operatorname{div}(A\nabla v_{\ell}^{(0)})-\lambda \Sigma_{\ell}v_{\ell}^{(0)}&=f_{\ell}\quad\text{ in }\mathbb{R}^{d}_{+},\\ A\nabla v_{\ell}^{(0)}\cdot e_{d}&=0&\text{ on }\mathbb{R}^{d}_{0},\end{array}\right.\] \[\left\{\begin{array}{rl}\operatorname{div}(A\nabla v_{\ell}^{(j)})-\lambda \Sigma_{\ell}v_{\ell}^{(j)}&=(G_{\ell})_{j}\quad\text{ in }\mathbb{R}^{d}_{+},\\ A\nabla v_{\ell}^{(j)}\cdot e_{d}&=0&\text{ on }\mathbb{R}^{d}_{0}\end{array}\right.\qquad(1 \leq j\leq d),\] where \((G_{\ell})_{j}\) denotes the \(j\)-th component of \(G_{\ell}\), and \[\left\{\begin{array}{rl}\operatorname{div}(A\nabla v_{\ell}^{(ij)})-\lambda \Sigma_{\ell}v_{\ell}^{(ij)}&=r_{\ell}^{(ij)}\quad\text{ in }\mathbb{R}^{d}_{+},\\ A\nabla v_{\ell}^{(ij)}\cdot e_{d}&=0&\text{ on }\mathbb{R}^{d}_{0}\end{array}\right. \qquad(1\leq i,j\leq d).\] We have, see, e.g., [2, Theorem 14.1], for \(1\leq i,j\leq d\), \[\left\{\begin{array}{rl}\|v_{\ell}^{(0)}\|_{W^{2,p}_{\lambda}(\mathbb{R}^{d }_{+})}\leq C\|f_{\ell}\|_{L^{p}(\mathbb{R}^{d}_{+})},\\ \|v_{\ell}^{(j)}\|_{W^{2,p}_{\lambda}(\mathbb{R}^{d}_{+})}\leq C\|G_{\ell}\|_{L ^{p}(\mathbb{R}^{d}_{+})},\\ \|v_{\ell}^{(ij)}\|_{W^{2,p}_{\lambda}(\mathbb{R}^{d}_{+})}\leq C\|r_{\ell}^{( ij)}\|_{L^{p}(\mathbb{R}^{d}_{+})}.\end{array}\right. 
\tag{3.26}\] Since we have \[\left\{\begin{array}{rl}\operatorname{div}(A\nabla(v_{1}^{(0)}-v_{2}^{(0)}))-\lambda\Sigma_{1}(v_{1}^{(0)}-v_{2}^{(0)})&=f_{1}-f_{2}+\lambda(\Sigma_{1}-\Sigma_{2})v_{2}^{(0)}&\text{ in }\mathbb{R}^{d}_{+},\\ A\nabla(v_{1}^{(0)}-v_{2}^{(0)})\cdot e_{d}&=0&\text{ on }\mathbb{R}^{d}_{0},\end{array}\right.\] and the equations for \(v_{1}^{(j)}-v_{2}^{(j)}\) and \(v_{1}^{(ij)}-v_{2}^{(ij)}\) are similar, we also get, for \(1\leq i,j\leq d\), by using (3.26), \[\left\{\begin{array}{c}C\|v_{1}^{(0)}-v_{2}^{(0)}\|_{W^{2,p}_{\lambda}({\mathbb{R}}^{d}_{+})}\leq\|(f_{1},f_{2})\|_{L^{p}({\mathbb{R}}^{d}_{+})},\\ C\|v_{1}^{(j)}-v_{2}^{(j)}\|_{W^{3,p}_{\lambda}({\mathbb{R}}^{d}_{+})}\leq\|G_{1}-G_{2}\|_{W^{1,p}_{\lambda}({\mathbb{R}}^{d}_{+})}+|\lambda|^{1/2}\|G_{2}\|_{L^{p}({\mathbb{R}}^{d}_{+})},\\ C\|v_{1}^{(ij)}-v_{2}^{(ij)}\|_{W^{4,p}_{\lambda}({\mathbb{R}}^{d}_{+})}\leq\|r_{1}^{(ij)}-r_{2}^{(ij)}\|_{W^{2,p}_{\lambda}({\mathbb{R}}^{d}_{+})}+|\lambda|\|r_{2}^{(ij)}\|_{L^{p}({\mathbb{R}}^{d}_{+})},\end{array}\right. \tag{3.27}\] and \[\begin{split} C\|v_{1}^{(0)}-v_{2}^{(0)}\|_{W^{3,p}_{\lambda}({\mathbb{R}}^{d}_{+})}&\leq\|f_{1}-f_{2}\|_{W^{1,p}_{\lambda}({\mathbb{R}}^{d}_{+})}+|\lambda|^{1/2}\|f_{2}\|_{L^{p}({\mathbb{R}}^{d}_{+})},\\ C\|v_{1}^{(j)}-v_{2}^{(j)}\|_{W^{4,p}_{\lambda}({\mathbb{R}}^{d}_{+})}&\leq\|G_{1}-G_{2}\|_{W^{2,p}_{\lambda}({\mathbb{R}}^{d}_{+})}+|\lambda|\|G_{2}\|_{L^{p}({\mathbb{R}}^{d}_{+})}.\end{split} \tag{3.28}\] For \(\ell=1,2\), set \[w_{\ell}=v_{\ell}^{(0)}+\sum_{j=1}^{d}\partial_{j}v_{\ell}^{(j)}+\sum_{i,j=1}^{d}\partial_{ij}^{2}v_{\ell}^{(ij)}.\] We have \[\operatorname{div}(A\nabla w_{\ell})-\lambda\Sigma_{\ell}w_{\ell}=f_{\ell}+\operatorname{div}(G_{\ell})+\sum_{i,j=1}^{d}\partial_{ij}^{2}r_{\ell}^{(ij)}\text{ in }{\mathbb{R}}^{d}_{+}.\] Moreover, \[C|\lambda|\|(w_{1},w_{2})\|_{L^{p}({\mathbb{R}}^{d}_{+})}\\ \stackrel{{\eqref{eq:w_1}}}{{\leq}}\|(f_{1},f_{2})\|_{L^{p}({\mathbb{R}}^{d}_{+})}+|\lambda|^{1/2}\|(G_{1},G_{2})\|_{L^{p}({\mathbb{R}}^{d}_{+})}+|\lambda|\sum_{i,j=1}^{d}\|(r_{1}^{(ij)},r_{2}^{(ij)})\|_{L^{p}({\mathbb{R}}^{d}_{+})}. \tag{3.29}\] Using (3.27) and the trace theory, we derive that \[\|w_{1}-w_{2}\|_{W^{2,p}_{\lambda}({\mathbb{R}}^{d}_{+})}+\|w_{1}-w_{2}\|_{W^{2-1/p,p}_{\lambda}({\mathbb{R}}^{d}_{0})}+\|A\nabla(w_{1}-w_{2})\cdot e_{d}\|_{W^{1-1/p,p}_{\lambda}({\mathbb{R}}^{d}_{0})}\\ \leq C\Big{(}\|f_{2}\|_{L^{p}({\mathbb{R}}^{d}_{+})}+|\lambda|^{1/2}\|(G_{1},G_{2})\|_{L^{p}({\mathbb{R}}^{d}_{+})}+\sum_{i,j=1}^{d}|\lambda|\|r_{2}^{(ij)}\|_{L^{p}({\mathbb{R}}^{d}_{+})}\\ +\|f_{1}-f_{2}\|_{L^{p}({\mathbb{R}}^{d}_{+})}+\|G_{1}-G_{2}\|_{W^{1,p}_{\lambda}({\mathbb{R}}^{d}_{+})}+\sum_{i,j=1}^{d}\|r_{1}^{(ij)}-r_{2}^{(ij)}\|_{W^{2,p}_{\lambda}({\mathbb{R}}^{d}_{+})}\Big{)}. \tag{3.30}\] Considering the system of \((u_{1}-w_{1},u_{2}-w_{2})\), and using (3.29) and (3.30), assertion (3.22) now follows from Case 1.
To deal with assertion (3.23), instead of (3.29) and (3.30), we use, since \(r_{1}^{(ij)}=r_{2}^{(ij)}=0\),
\[|\lambda|\|(w_{1},w_{2})\|_{W^{1,p}_{\lambda}({\mathbb{R}}^{d}_{+})}\overset{(3.26)}{\leq}C\Big{(}|\lambda|^{1/2}\|(f_{1},f_{2})\|_{L^{p}({\mathbb{R}}^{d}_{+})}+|\lambda|\|(G_{1},G_{2})\|_{L^{p}({\mathbb{R}}^{d}_{+})}\Big{)}, \tag{3.31}\]
and
\[\|w_{1}-w_{2}\|_{W^{3,p}_{\lambda}(\mathbb{R}^{d}_{+})}+\|w_{1}-w_{2}\|_{W^{3-1/p,p}_{\lambda}(\mathbb{R}^{d}_{0})}+\|A\nabla(w_{1}-w_{2})\cdot e_{d}\|_{W^{2-1/p,p}_{\lambda}(\mathbb{R}^{d}_{0})}\\ \overset{(3.28)}{\leq}C\Big{(}\|f_{1}-f_{2}\|_{W^{1,p}_{\lambda}(\mathbb{R}^{d}_{+})}+\|G_{1}-G_{2}\|_{W^{2,p}_{\lambda}(\mathbb{R}^{d}_{+})}\Big{)}. \tag{3.32}\]
By considering the system for \((u_{1}-w_{1},u_{2}-w_{2})\), assertion (3.23) now follows from Case 1. The proof is complete.

### Proof of Theorem 3.1

The proof is divided into two steps:

* Step 1: Assuming the solution exists, we establish (3.3) and (3.4).
* Step 2: We establish the existence of the solutions.

We now proceed with these two steps.

_Step 1:_ For \((f_{1},f_{2})\in[L^{p}(\Omega)]^{2}\), let \((u_{1},u_{2})\in[L^{p}(\Omega)]^{2}\) with \(u_{1}-u_{2}\in W^{2,p}(\Omega)\) be a solution of (3.1). We prove that (3.3) and (3.4) hold. Applying Lemma 3.2 and the freezing coefficient technique, we deduce that there exists \(\tau_{*}\in(0,\tau/2)\), depending only on \(\Omega\), \(\Lambda\), \(\tau\), and \(p\), such that
\[C\left(|\lambda|\|(u_{1},u_{2})\|_{L^{p}(\Omega_{\tau_{*}})}+\|u_{1}-u_{2}\|_{W^{2,p}_{\lambda}(\Omega_{\tau_{*}})}\right)\\ \leq\|(f_{1},f_{2})\|_{L^{p}(\Omega)}+|\lambda|^{1/2}\|(u_{1},u_{2})\|_{L^{p}(\Omega_{\tau})}+\|u_{1}-u_{2}\|_{W^{1,p}_{\lambda}(\Omega_{\tau})} \tag{3.33}\]
and
\[C\left(|\lambda|\|(u_{1},u_{2})\|_{W^{1,p}_{\lambda}(\Omega_{\tau_{*}})}+\|u_{1}-u_{2}\|_{W^{3,p}_{\lambda}(\Omega_{\tau_{*}})}\right)\\ \leq|\lambda|^{1/2}\|(f_{1},f_{2})\|_{L^{p}(\Omega)}+\|f_{1}-f_{2}\|_{W^{1,p}_{\lambda}(\Omega)}\\ +|\lambda|\|(u_{1},u_{2})\|_{L^{p}(\Omega_{\tau})}+\|u_{1}-u_{2}\|_{W^{2,p}_{\lambda}(\Omega_{\tau})}, \tag{3.34}\]
for every \(\lambda\in\mathbb{C}\) with \(|\Im(\lambda)|\geq c|\lambda|\) and \(|\lambda|\geq 1\). Here and in what follows, \(C\) denotes a positive constant depending only on \(\Omega\), \(\Lambda\), \(\tau\), and \(p\). Let us emphasize here that the terms \((r_{1}^{(ij)},r_{2}^{(ij)})\) in Lemma 3.2 play a crucial role in the proof of (3.33), since the solutions \((u_{1},u_{2})\) considered are only in \([L^{p}(\Omega)]^{2}\) and not necessarily in \([W^{1,p}(\Omega)]^{2}\). Indeed, let us consider a small neighborhood of \(x_{0}\in\Gamma\). Using a change of variables, without loss of generality, one may assume that the boundary in this neighborhood is already _flat_ and \(A_{1}=A_{2}=A\) there.
In the freezing process, one has, in such a neighborhood,
\[\operatorname{div}(A(x_{0})\nabla u_{\ell})-\lambda\Sigma_{\ell}(x_{0})u_{\ell}=\operatorname{div}\big{(}(A(x_{0})-A(x))\nabla u_{\ell}\big{)}+\operatorname{div}\big{(}A(x)\nabla u_{\ell}\big{)}-\lambda\Sigma_{\ell}(x_{0})u_{\ell}\\ =\sum_{i,j=1}^{d}\partial_{ij}^{2}\Big{(}(A_{ij}(x_{0})-A_{ij}(x))u_{\ell}\Big{)}-\sum_{i,j=1}^{d}\partial_{i}\Big{(}u_{\ell}\partial_{j}(A_{ij}(x_{0})-A_{ij}(x))\Big{)}+f_{\ell}+\lambda\big{(}\Sigma_{\ell}(x)-\Sigma_{\ell}(x_{0})\big{)}u_{\ell}.\]
Let \(\chi\in C^{\infty}(\mathbb{R}^{d})\) with support in a sufficiently small neighborhood of \(x_{0}\); then, with \(v_{\ell}=\chi u_{\ell}\) for \(\ell=1,2\), we have
\[\operatorname{div}(A(x_{0})\nabla v_{\ell})-\lambda\Sigma_{\ell}(x_{0})v_{\ell} \tag{3.35}\]
\[=\chi\sum_{i,j=1}^{d}\partial_{ij}^{2}\Big{(}(A_{ij}(x_{0})-A_{ij}(x))u_{\ell}\Big{)}-\sum_{i,j=1}^{d}\chi\partial_{i}\Big{(}u_{\ell}\partial_{j}(A_{ij}(x_{0})-A_{ij}(x))\Big{)}\]
\[+\chi f_{\ell}+\lambda\big{(}\Sigma_{\ell}(x)-\Sigma_{\ell}(x_{0})\big{)}v_{\ell}-u_{\ell}\operatorname{div}(A(x_{0})\nabla\chi)+2\operatorname{div}(u_{\ell}A(x_{0})\nabla\chi).\]
The terms \(r_{\ell}^{(ij)}\) are then \((A_{ij}(x_{0})-A_{ij}(x))\chi u_{\ell}=(A_{ij}(x_{0})-A_{ij}(x))v_{\ell}\). Since \(A_{1}=A_{2}=A\) in \(\Omega_{\tau}\), \(u_{1}-u_{2}=0\) on \(\Gamma\), and \(A\nabla(u_{1}-u_{2})\cdot\nu=0\) on \(\Gamma\), it follows that
\[v_{1}-v_{2}=0\text{ on }\Gamma\quad\text{ and }\quad A(x_{0})\nabla(v_{1}-v_{2})\cdot\nu=\chi(A(x_{0})-A(x))\nabla(u_{1}-u_{2})\cdot\nu\text{ on }\Gamma.\]
We are thus in the situation to apply Lemma 3.2 and the freezing coefficient technique to derive (3.33). Concerning (3.34), in (3.35), one writes \(\partial_{ij}^{2}\Big{(}(A_{ij}(x_{0})-A_{ij}(x))u_{\ell}(x)\Big{)}\) in the form
\[\partial_{i}\Big{(}(A_{ij}(x_{0})-A_{ij}(x))\partial_{j}u_{\ell}\Big{)}+\partial_{i}\Big{(}\partial_{j}(A_{ij}(x_{0})-A_{ij}(x))u_{\ell}\Big{)}.\]
We are thus again in the situation to apply Lemma 3.2 and the freezing coefficient technique to derive (3.34). The details of the rest of the proof of (3.33) and (3.34) are omitted.

On the other hand, since
\[\operatorname{div}(A_{\ell}\nabla u_{\ell})-\lambda\Sigma_{\ell}u_{\ell}=f_{\ell}\quad\text{ in }\Omega,\]
we have, for \(|\lambda|\geq 1\),
\[\|u_{\ell}\|_{W^{1,p}_{\lambda}(\Omega\setminus\Omega_{\tau_{*}/4})}\leq C\Big{(}|\lambda|^{-1/2}\|f_{\ell}\|_{L^{p}(\Omega)}+\|u_{\ell}\|_{L^{p}(\Omega_{\tau_{*}})}\Big{)}, \tag{3.36}\]
and
\[\|u_{\ell}\|_{W^{2,p}_{\lambda}(\Omega\setminus\Omega_{\tau_{*}/2})}\leq C\Big{(}\|f_{\ell}\|_{L^{p}(\Omega)}+\|u_{\ell}\|_{W^{1,p}_{\lambda}(\Omega_{\tau_{*}}\setminus\Omega_{\tau_{*}/4})}\Big{)}. \tag{3.37}\]
Combining (3.36) and (3.37) yields
\[\|u_{\ell}\|_{W^{2,p}_{\lambda}(\Omega\setminus\Omega_{\tau_{*}/2})}\leq C\left(\|f_{\ell}\|_{L^{p}(\Omega)}+\|u_{\ell}\|_{L^{p}(\Omega_{\tau_{*}})}\right). \tag{3.38}\]
From (3.33) and (3.38), we obtain
\[|\lambda|\|(u_{1},u_{2})\|_{L^{p}(\Omega)}+\|u_{1}-u_{2}\|_{W^{2,p}_{\lambda}(\Omega_{\tau_{*}})}\leq C\|(f_{1},f_{2})\|_{L^{p}(\Omega)}\]
for \(|\lambda|\geq\lambda_{0}\) and for \(\lambda_{0}\) large enough. This completes the proof of (3.3).
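To make the absorption mechanism behind this last step explicit, here is a minimal sketch, assuming the standard property \(\|\cdot\|_{W^{1,p}_{\lambda}}\leq|\lambda|^{-1/2}\|\cdot\|_{W^{2,p}_{\lambda}}\) of the \(\lambda\)-weighted norms and carrying out the bookkeeping between \(\Omega_{\tau_{*}}\), \(\Omega_{\tau}\), and \(\Omega\setminus\Omega_{\tau_{*}/2}\) as in (3.33) and (3.36)-(3.38): one arrives at an inequality of the form
\[|\lambda|\|(u_{1},u_{2})\|_{L^{p}(\Omega)}+\|u_{1}-u_{2}\|_{W^{2,p}_{\lambda}(\Omega_{\tau_{*}})}\leq C\|(f_{1},f_{2})\|_{L^{p}(\Omega)}+C|\lambda|^{-1/2}\Big{(}|\lambda|\|(u_{1},u_{2})\|_{L^{p}(\Omega)}+\|u_{1}-u_{2}\|_{W^{2,p}_{\lambda}(\Omega_{\tau_{*}})}\Big{)},\]
and the last term is absorbed into the left-hand side as soon as \(C|\lambda|^{-1/2}\leq\frac{1}{2}\), that is, for \(|\lambda|\geq\lambda_{0}:=4C^{2}\).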
From (3.34) and (3.36), after using (3.3), we obtain
\[|\lambda|\|(u_{1},u_{2})\|_{W^{1,p}_{\lambda}(\Omega)}+\|u_{1}-u_{2}\|_{W^{3,p}_{\lambda}(\Omega_{\tau_{*}})}\leq C\left(|\lambda|^{1/2}\|(f_{1},f_{2})\|_{L^{p}(\Omega)}+\|f_{1}-f_{2}\|_{W^{1,p}_{\lambda}(\Omega)}\right)\]
for \(|\lambda|\geq\lambda_{0}\) and for \(\lambda_{0}\) large enough. This completes the proof of (3.4).

_Step 2:_ Set
\[X=\Big{\{}(u_{1},u_{2})\in[L^{p}(\Omega)]^{2}:\operatorname{div}(A_{1}\nabla u_{1}),\operatorname{div}(A_{2}\nabla u_{2})\in L^{p}(\Omega),\]
\[u_{1}-u_{2}\in W^{2,p}(\Omega),\ u_{1}-u_{2}=0\text{ on }\Gamma,\text{ and }(A_{1}\nabla u_{1}-A_{2}\nabla u_{2})\cdot\nu=0\text{ on }\Gamma\Big{\}}.\]
The space \(X\) is a Banach space endowed with the norm
\[\|(u_{1},u_{2})\|_{X}:=\|(u_{1},u_{2})\|_{L^{p}(\Omega)}+\|(\operatorname{div}(A_{1}\nabla u_{1}),\operatorname{div}(A_{2}\nabla u_{2}))\|_{L^{p}(\Omega)}+\|u_{1}-u_{2}\|_{W^{2,p}(\Omega)}. \tag{3.39}\]
Define
\[B_{\lambda}:X\to[L^{p}(\Omega)]^{2}\]
by
\[B_{\lambda}(u_{1},u_{2})=(\operatorname{div}(A_{1}\nabla u_{1})-\lambda\Sigma_{1}u_{1},\operatorname{div}(A_{2}\nabla u_{2})-\lambda\Sigma_{2}u_{2}).\]
Clearly, \(B_{\lambda}\) is linear and continuous on \(X\). We claim that
\[B_{\lambda}\text{ has a closed and dense range}. \tag{3.40}\]
Assuming this, we derive that
\[B_{\lambda}(X)=[L^{p}(\Omega)]^{2}, \tag{3.41}\]
which yields the existence of the solutions.

It remains to prove (3.40). We first prove that \(B_{\lambda}\) has a closed range. Let \(((u_{1,n},u_{2,n}))_{n}\subset X\) be such that \((f_{1,n},f_{2,n}):=B_{\lambda}(u_{1,n},u_{2,n})\to(f_{1},f_{2})\) in \([L^{p}(\Omega)]^{2}\). It follows from (3.3) in Step 1 that \(((u_{1,n},u_{2,n}))_{n}\) is a Cauchy sequence in \(X\). Let \((u_{1},u_{2})\) denote its limit. Since \(B_{\lambda}\) is continuous, we then have \(B_{\lambda}(u_{1},u_{2})=(f_{1},f_{2})\). Thus \(B_{\lambda}\) has a closed range.

We next establish that the range of \(B_{\lambda}\) is dense. To this end, it suffices to show that if \((f_{1},f_{2})\in[L^{q}(\Omega)]^{2}\) with \(\frac{1}{p}+\frac{1}{q}=1\) is such that
\[\int_{\Omega}\langle B_{\lambda}(u_{1},u_{2}),(f_{1},f_{2})\rangle dx=0\qquad\text{ for all }(u_{1},u_{2})\in X, \tag{3.42}\]
then \((f_{1},f_{2})=(0,0)\). Since (3.42) holds for all \((u_{1},u_{2})\in[C_{c}^{\infty}(\Omega)]^{2}\subset X\), it follows that, for \(\ell=1,2\),
\[\operatorname{div}(A_{\ell}\nabla f_{\ell})-\overline{\lambda}\Sigma_{\ell}f_{\ell}=0\text{ in }\Omega. \tag{3.43}\]
Since \(A_{\ell}\in C^{1}(\bar{\Omega})\) and \(f_{\ell}\in L^{q}(\Omega)\), using the standard regularity theory in the \(L^{q}\)-scale, see also [18, Lemma 17.1.5], one has
\[f_{\ell}\in W^{2,q}_{\text{\tiny loc}}(\Omega).\]
Set, in \(\Omega\),
\[g_{1}=f_{1}\quad\text{ and }\quad g_{2}=-f_{2}. \tag{3.44}\]
Then, by (3.43),
\[\operatorname{div}(A_{\ell}\nabla g_{\ell})-\overline{\lambda}\Sigma_{\ell}g_{\ell}=0\text{ in }\Omega, \tag{3.45}\]
and, by (3.42), for \((u_{1},u_{2})\in X\),
\[\int_{\Omega}\big{(}\operatorname{div}(A_{1}\nabla u_{1})\bar{g}_{1}-\lambda\Sigma_{1}u_{1}\bar{g}_{1}\big{)}-\int_{\Omega}\big{(}\operatorname{div}(A_{2}\nabla u_{2})\bar{g}_{2}-\lambda\Sigma_{2}u_{2}\bar{g}_{2}\big{)}=0.
\tag{3.46}\]
From (3.46), we have, taking \((u_{1},u_{2})\in X\cap[W^{2,p}(\Omega)]^{2}\),
\[\int_{\Omega}\operatorname{div}\big{(}A_{1}\nabla(u_{1}-u_{2})\big{)}\bar{g}_{1}+\operatorname{div}(A_{1}\nabla u_{2})(\bar{g}_{1}-\bar{g}_{2})+\operatorname{div}\big{(}(A_{1}-A_{2})\nabla u_{2}\big{)}\bar{g}_{2}\\ -\lambda\Sigma_{1}u_{1}\bar{g}_{1}+\lambda\Sigma_{2}u_{2}\bar{g}_{2}=0. \tag{3.47}\]
Using that \(g_{2}\in W^{2,q}_{loc}(\Omega)\) and \(A_{1}=A_{2}\) in \(\Omega_{\tau}\), an integration by parts leads to
\[\int_{\Omega}\operatorname{div}\big{(}(A_{1}-A_{2})\nabla u_{2}\big{)}\bar{g}_{2}=\int_{\Omega}\operatorname{div}\big{(}(A_{1}-A_{2})\nabla\bar{g}_{2}\big{)}u_{2}. \tag{3.48}\]
Since \(u_{1}-u_{2}\in W^{2,p}(\Omega)\), \(u_{1}-u_{2}=0\) on \(\Gamma\), and \(A\nabla(u_{1}-u_{2})\cdot\nu=0\) on \(\Gamma\), there exists a sequence \((v_{n})_{n}\subset C^{2}_{c}(\Omega)\) such that \(v_{n}\to u_{1}-u_{2}\) in \(W^{2,p}(\Omega)\). An integration by parts yields
\[\int_{\Omega}\operatorname{div}\big{(}A_{1}\nabla(u_{1}-u_{2})\big{)}\bar{g}_{1}=\lim_{n\to+\infty}\int_{\Omega}\operatorname{div}\big{(}A_{1}\nabla v_{n}\big{)}\bar{g}_{1}\\ =\lim_{n\to+\infty}\int_{\Omega}\operatorname{div}\big{(}A_{1}\nabla\bar{g}_{1}\big{)}v_{n}=\int_{\Omega}\operatorname{div}\big{(}A_{1}\nabla\bar{g}_{1}\big{)}(u_{1}-u_{2}). \tag{3.49}\]
Combining (3.47), (3.48), and (3.49) yields
\[\int_{\Omega}\operatorname{div}(A_{1}\nabla u_{2})(\bar{g}_{1}-\bar{g}_{2})=-\int_{\Omega}\operatorname{div}\big{(}(A_{1}-A_{2})\nabla\bar{g}_{2}\big{)}u_{2}-\int_{\Omega}\operatorname{div}\big{(}A_{1}\nabla\bar{g}_{1}\big{)}(u_{1}-u_{2})\\ +\int_{\Omega}\lambda\Sigma_{1}u_{1}\bar{g}_{1}-\int_{\Omega}\lambda\Sigma_{2}u_{2}\bar{g}_{2}\overset{(3.45)}{=}-\int_{\Omega}\operatorname{div}\big{(}(A_{1}-A_{2})\nabla\bar{g}_{2}\big{)}u_{2}+\lambda\int_{\Omega}\Sigma_{1}u_{2}\bar{g}_{1}-\lambda\int_{\Omega}\Sigma_{2}u_{2}\bar{g}_{2}. \tag{3.50}\]
For \(1<p<+\infty\), \(\lambda\in\mathbb{C}\), and \((f,g)\in[L^{p}(\Omega)]^{2}\), we consider the Cauchy problem, for \((u,v)\in W^{2,p}(\Omega)\times L^{p}(\Omega)\),
\[\left\{\begin{array}{cl}\operatorname{div}(A\nabla u)-\lambda\Sigma_{1}u-(\Sigma_{1}-\Sigma_{2})v=\Sigma_{1}f&\text{ in }\Omega,\\ \operatorname{div}(A\nabla v)-\lambda\Sigma_{2}v=\Sigma_{2}g&\text{ in }\Omega,\\ u=0,\quad A\nabla u\cdot\nu=0&\text{ on }\Gamma.\end{array}\right. \tag{4.1}\]
**Proposition 4.1**.: _Assume (1.1)-(1.2) and (1.6)-(1.8). Let \(c\in(0,1)\) and \(1<p<+\infty\). There exists \(\lambda_{0}>0\) depending on \(p\), \(c\), \(\Lambda\), and \(\Omega\) such that the following holds: for \((f,g)\in[L^{p}(\Omega)]^{2}\) and for \(\lambda\in\mathbb{C}\) with \(|\Im(\lambda)|\geq c|\lambda|\) and \(|\lambda|>\lambda_{0}\), there exists a unique solution \((u,v)\in W^{2,p}(\Omega)\times L^{p}(\Omega)\) of the Cauchy problem (4.1); moreover, we have_
\[\|v\|_{L^{p}(\Omega)}+\|u\|_{W^{2,p}_{\lambda}(\Omega)}\leq C|\lambda|^{-1/2}\left(|\lambda|^{1/2}\|f\|_{L^{p}(\Omega)}+|\lambda|^{-1/2}\|g\|_{L^{p}(\Omega)}\right) \tag{4.2}\]
_and_
\[\|v\|_{W^{1,p}_{\lambda}(\Omega)}+\|u\|_{W^{3,p}_{\lambda}(\Omega)}\leq C\left(\|f\|_{W^{1,p}_{\lambda}(\Omega)}+|\lambda|^{-1/2}\|g\|_{L^{p}(\Omega)}\right), \tag{4.3}\]
_for some positive constant \(C\) independent of \(\lambda\), \(f\), and \(g\)._

As a consequence, we have

**Corollary 4.1**.: _Assume (1.1)-(1.2) and (1.6)-(1.8). Let \(c\in(0,1)\) and \(1<p<+\infty\). There exists \(\lambda_{0}>0\) depending on \(p\), \(c\), \(\Lambda\), and \(\Omega\) such that the following holds: for \((f,g)\in W^{1,p}(\Omega)\times L^{p}(\Omega)\), and for \(\lambda\in\mathbb{C}\) with \(|\Im(\lambda)|\geq c|\lambda|\) and \(|\lambda|>\lambda_{0}\), there exists a unique solution \((u,v)\in W^{3,p}(\Omega)\times W^{1,p}(\Omega)\) of (4.1); moreover, for_

1. \(1<p<d\) _and_ \(p\leq q\leq\frac{dp}{d-p}\)_,_
2. \(d=p\leq q<+\infty\)_, or_
3. \(p>d\) _and_ \(q=+\infty\)_,_

_we have_
\[\|v\|_{L^{q}(\Omega)}+\|u\|_{W^{2,q}_{\lambda}(\Omega)}\leq C|\lambda|^{\frac{d}{2}\left(\frac{1}{p}-\frac{1}{q}\right)-\frac{1}{2}}\left(\|f\|_{W^{1,p}_{\lambda}(\Omega)}+|\lambda|^{-1/2}\|g\|_{L^{p}(\Omega)}\right), \tag{4.4}\]
_for some positive constant \(C\) independent of \(\lambda\), \(f\), and \(g\)._

**Remark 4.1**.: In case (3) of Corollary 4.1, we derive that \((u,v)\in C^{2}(\overline{\Omega})\times C(\overline{\Omega})\).

Proof.: Choose \(\lambda_{0}\) such that the conclusion of Proposition 4.1 holds. By the Gagliardo-Nirenberg interpolation inequalities (see [14, 36]), we have
\[\|v\|_{L^{q}(\Omega)}\leq C_{p,q,\Omega}\|v\|_{L^{p}(\Omega)}^{1-a}\|v\|_{W^{1,p}(\Omega)}^{a}\leq C_{p,q,\Omega}\|v\|_{L^{p}(\Omega)}^{1-a}\|v\|_{W^{1,p}_{\lambda}(\Omega)}^{a},\]
where
\[a=d\left(\frac{1}{p}-\frac{1}{q}\right).\]
This implies
\[\|v\|_{L^{q}(\Omega)}\leq C_{p,q,\Omega}|\lambda|^{-\frac{1}{2}(1-a)}\left(\|f\|_{W^{1,p}_{\lambda}(\Omega)}+|\lambda|^{-1/2}\|g\|_{L^{p}(\Omega)}\right),\]
which is the bound on \(\|v\|_{L^{q}(\Omega)}\) in (4.4), since \(-\frac{1}{2}(1-a)=\frac{d}{2}\left(\frac{1}{p}-\frac{1}{q}\right)-\frac{1}{2}\). The other assertions can be proved similarly.

**Definition 4.1**.: _Assume (1.1)-(1.2) and (1.6)-(1.8). Let \(1<p<+\infty\) and \(\lambda\in\mathbb{C}\). System (4.1) is said to be well-posed in \(L^{p}(\Omega)\times L^{p}(\Omega)\) if the existence and uniqueness hold, and (4.2) and (4.3) are satisfied, for \((f,g)\in L^{p}(\Omega)\times L^{p}(\Omega)\).
For \(p=2\) and \(\lambda\in\mathbb{C}\) being such that (4.1) is well-posed in \(L^{2}(\Omega)\times L^{2}(\Omega)\), we define_
\[\begin{array}{ccc}T_{\lambda}:&L^{2}(\Omega)\times L^{2}(\Omega)&\to&L^{2}(\Omega)\times L^{2}(\Omega)\\ &(f,g)&\mapsto&(u,v)\end{array} \tag{4.5}\]
_where \((u,v)\) is the unique solution of (4.1)._

**Remark 4.2**.: Let \(\lambda\in\mathbb{C}\) satisfy the conclusion of Proposition 4.1 with \(p=2\). Then system (4.1) is well-posed in \(L^{2}(\Omega)\times L^{2}(\Omega)\) and \(T_{\lambda}\) is defined.

**Remark 4.3**.: Let \(\lambda^{*}\in\mathbb{C}\) be such that \(T_{\lambda^{*}}\) is defined. If \(\mu\) is a characteristic value of the operator \(T_{\lambda^{*}}\) associated with an eigenfunction \((u,v)\) and if \(\lambda^{*}+\mu\neq 0\), we have
\[\lambda^{*}+\mu\text{ is a transmission eigenvalue of (1.3)} \tag{4.6}\]
with an eigenfunction pair \((u_{1},u_{2})\) given by
\[u_{1}=u+\frac{1}{\lambda^{*}+\mu}v\qquad\text{ and }\qquad u_{2}=\frac{1}{\lambda^{*}+\mu}v. \tag{4.7}\]
Moreover, the converse holds (see Remark 4.5).

**Remark 4.4**.: Let \(\lambda^{*}\in\mathbb{C}\) be such that \(T_{\lambda^{*}}\) is defined. By (4.2) and (4.3), the range of \(T_{\lambda^{*}}^{2}\) is a subset of \(H^{1}(\Omega)\times H^{1}(\Omega)\). It follows that the operator \(T_{\lambda^{*}}^{2}\) is compact from \(L^{2}(\Omega)\times L^{2}(\Omega)\) into itself. By the spectral theory of compact operators, see, e.g., [5], the spectrum of \(T_{\lambda^{*}}^{2}\) consists of a discrete set of eigenvalues, and the generalized eigenspace associated with each eigenvalue is of finite dimension. As a consequence, the set of eigenvalues of \(T_{\lambda^{*}}\) is discrete. This in turn implies that the set of the transmission eigenvalues of (1.3) is discrete. This fact was previously established in [33], but the arguments presented here are different.

**Remark 4.5**.: Let \(\lambda^{*}\in\mathbb{C}\) be such that \(T_{\lambda^{*}}\) is defined. If \(\lambda_{j}\) is an eigenvalue of the transmission eigenvalue problem, then \(\lambda_{j}\neq\lambda^{*}\) and \(\lambda_{j}-\lambda^{*}\) is a characteristic value of \(T_{\lambda^{*}}\). One can show that the multiplicities of the characteristic values \(\lambda_{j}-\lambda^{*}\) and \(\lambda_{j}-\hat{\lambda}\) associated with \(T_{\lambda^{*}}\) and \(T_{\hat{\lambda}}\) are the same; hence the multiplicity of the characteristic value \(\lambda_{j}-\lambda^{*}\) of \(T_{\lambda^{*}}\) is independent of \(\lambda^{*}\). With this observation, we define the multiplicity of \(\lambda_{j}\) as that of the characteristic value \(\lambda_{j}-\lambda^{*}\) of \(T_{\lambda^{*}}\).

The rest of this section is devoted to characterizing the adjoint \(T_{\lambda}^{*}\) of \(T_{\lambda}\). This will be used in the proof of Proposition 4.2. To this end, for \((\widetilde{f},\widetilde{g})\in[L^{p}(\Omega)]^{2}\) with \(1<p<+\infty\), we consider the following system for \((\widetilde{u},\widetilde{v})\in W^{1,p}(\Omega)\times L^{p}(\Omega)\):

Footnote 5: We emphasize here that in the first equation of (4.8), we have \(\Sigma_{2}\widetilde{u}\), not \(\Sigma_{1}\widetilde{u}\); compare with (4.1).
\[\left\{\begin{array}{cl}\operatorname{div}(A\nabla\widetilde{u})-\lambda\Sigma_{2}\widetilde{u}-(\Sigma_{1}-\Sigma_{2})\widetilde{v}=\Sigma_{2}\widetilde{f}&\text{ in }\Omega,\\ \operatorname{div}(A\nabla\widetilde{v})-\lambda\Sigma_{1}\widetilde{v}=\Sigma_{1}\widetilde{g}&\text{ in }\Omega,\\ \widetilde{u}=0,\quad A\nabla\widetilde{u}\cdot\nu=0&\text{ on }\Gamma.\end{array}\right. \tag{4.8}\]
Assume (1.1)-(1.2) and (1.6)-(1.8). Let \(c\in(0,1)\) and \(1<p<+\infty\). By Proposition 4.1, there exists \(\lambda_{0}>0\) depending on \(p\), \(c\), \(\Lambda\), and \(\Omega\) such that (4.8) is well-posed in \(L^{p}(\Omega)\times L^{p}(\Omega)\) for \(\lambda\in\mathbb{C}\) with \(|\Im(\lambda)|\geq c|\lambda|\) and \(|\lambda|>\lambda_{0}\), i.e., for \((\widetilde{f},\widetilde{g})\in L^{p}(\Omega)\times L^{p}(\Omega)\), there exists a unique solution \((\widetilde{u},\widetilde{v})\in W^{1,p}(\Omega)\times L^{p}(\Omega)\) of (4.8); moreover,
\[\|\widetilde{u}\|_{W^{2,p}_{\lambda}(\Omega)}+\|\widetilde{v}\|_{L^{p}(\Omega)}\leq C|\lambda|^{-1/2}\left(|\lambda|^{1/2}\|\widetilde{f}\|_{L^{p}(\Omega)}+|\lambda|^{-1/2}\|\widetilde{g}\|_{L^{p}(\Omega)}\right)\]
and
\[\|\widetilde{u}\|_{W^{3,p}_{\lambda}(\Omega)}+\|\widetilde{v}\|_{W^{1,p}_{\lambda}(\Omega)}\leq C\left(\|\widetilde{f}\|_{W^{1,p}_{\lambda}(\Omega)}+|\lambda|^{-1/2}\|\widetilde{g}\|_{L^{p}(\Omega)}\right).\]

**Definition 4.2**.: _Assume (1.1)-(1.2) and (1.6)-(1.8). For \(p=2\) and for \(\lambda\in\mathbb{C}\) being such that (4.8) is well-posed in \(L^{2}(\Omega)\times L^{2}(\Omega)\), we define_
\[\begin{array}{ccc}\widetilde{T}_{\lambda}:&L^{2}(\Omega)\times L^{2}(\Omega)&\rightarrow&L^{2}(\Omega)\times L^{2}(\Omega)\\ &(\widetilde{f},\widetilde{g})&\mapsto&(\widetilde{u},\widetilde{v})\end{array} \tag{4.9}\]
_where \((\widetilde{u},\widetilde{v})\) is the unique solution of (4.8)._

**Lemma 4.1**.: _Assume (1.1)-(1.2) and (1.6)-(1.8). Let \(p=2\) and let \(\lambda\) be such that \(T_{\lambda}\) and \(\widetilde{T}_{\bar{\lambda}}\) are defined. Set, for \(x\in\Omega\),_
\[P(x)=\begin{pmatrix}0&\Sigma_{1}(x)\\ \Sigma_{2}(x)&0\end{pmatrix}. \tag{4.10}\]
_We have_
\[T_{\lambda}^{*}=P\widetilde{T}_{\overline{\lambda}}P^{-1}. \tag{4.11}\]

Proof.: Fix \((f,g)\in[L^{2}(\Omega)]^{2}\) and \((f^{*},g^{*})\in[L^{2}(\Omega)]^{2}\). Set \((u,v)=T_{\lambda}(f,g)\) and \((u^{*},v^{*})=\widetilde{T}_{\overline{\lambda}}P^{-1}(f^{*},g^{*})\). Then
\[\int_{\Omega}\langle(f,g),P\widetilde{T}_{\overline{\lambda}}P^{-1}(f^{*},g^{*})\rangle=\int_{\Omega}\Sigma_{1}f\overline{v^{*}}+\Sigma_{2}g\overline{u^{*}}. \tag{4.12}\]
Since \((u,v)=T_{\lambda}(f,g)\), we have
\[\int_{\Omega}\Sigma_{1}f\overline{v^{*}}+\Sigma_{2}g\overline{u^{*}}=\int_{\Omega}(\operatorname{div}(A\nabla u)-\lambda\Sigma_{1}u-(\Sigma_{1}-\Sigma_{2})v)\overline{v^{*}}+\int_{\Omega}(\operatorname{div}(A\nabla v)-\lambda\Sigma_{2}v)\overline{u^{*}}. \tag{4.13}\]
As in Step 2 of the proof of Theorem 3.1, an integration by parts yields
\[\int_{\Omega}(\operatorname{div}(A\nabla u)-\lambda\Sigma_{1}u-(\Sigma_{1}-\Sigma_{2})v)\overline{v^{*}}+\int_{\Omega}(\operatorname{div}(A\nabla v)-\lambda\Sigma_{2}v)\overline{u^{*}}\\ =\int_{\Omega}u(\overline{\operatorname{div}(A\nabla v^{*})-\overline{\lambda}\Sigma_{1}v^{*}})+\int_{\Omega}v(\overline{\operatorname{div}(A\nabla u^{*})-\overline{\lambda}\Sigma_{2}u^{*}-(\Sigma_{1}-\Sigma_{2})v^{*}}).
\tag{4.14}\]
Since \((u^{*},v^{*})=\widetilde{T}_{\overline{\lambda}}P^{-1}(f^{*},g^{*})\), we have
\[\int_{\Omega}u(\overline{\operatorname{div}(A\nabla v^{*})-\overline{\lambda}\Sigma_{1}v^{*}})+\int_{\Omega}v(\overline{\operatorname{div}(A\nabla u^{*})-\overline{\lambda}\Sigma_{2}u^{*}-(\Sigma_{1}-\Sigma_{2})v^{*}})\\ =\int_{\Omega}\langle T_{\lambda}(f,g),(f^{*},g^{*})\rangle. \tag{4.15}\]
Combining (4.12)-(4.15) yields
\[\int_{\Omega}\langle(f,g),P\widetilde{T}_{\overline{\lambda}}P^{-1}(f^{*},g^{*})\rangle=\int_{\Omega}\langle T_{\lambda}(f,g),(f^{*},g^{*})\rangle, \tag{4.16}\]
and the conclusion follows.

### Hilbert-Schmidt operators

In this section, we recall the definition and several properties of Hilbert-Schmidt operators. We begin with

**Definition 4.3**.: _Let \(H\) be a separable Hilbert space and let \((\phi_{j})_{j=1}^{\infty}\) be an orthonormal basis of \(H\)._

1. _Let_ \(\mathcal{T}\) _be a linear and bounded operator on_ \(H\)_. We say that_ \(\mathcal{T}\) _is Hilbert-Schmidt if its_ double norm _is finite, i.e.,_ \[\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathcal{T}\right|\kern-1.075pt\right|\kern-1.075pt\right|:=\left(\sum_{j=1}^{\infty}\|\mathcal{T}\phi_{j}\|_{H}^{2}\right)^{1/2}<+\infty.\]
2. _Let_ \(\mathcal{T}_{1}\) _and_ \(\mathcal{T}_{2}\) _be two Hilbert-Schmidt operators on_ \(H\)_. The_ trace _of the composition_ \(\mathcal{T}_{1}\mathcal{T}_{2}\) _is defined by_ \[\operatorname{trace}(\mathcal{T}_{1}\mathcal{T}_{2}):=\sum_{j=1}^{\infty}(\mathcal{T}_{1}\mathcal{T}_{2}\phi_{j},\phi_{j})_{H}.\]

**Remark 4.6**.: One can check that Definition 4.3 does not depend on the choice of the basis \((\phi_{j})_{j=1}^{\infty}\) and that the trace of \(\mathcal{T}_{1}\mathcal{T}_{2}\) is well defined as an absolutely convergent series (see [1, Theorems 12.9 and 12.12]).

Let \(m\in\mathbb{N}\) and \(\mathbf{T}:[L^{2}(\Omega)]^{m}\to[L^{2}(\Omega)]^{m}\) be a Hilbert-Schmidt operator. There exists a unique kernel \(\mathbf{K}\in[L^{2}(\Omega\times\Omega)]^{m\times m}\), see, e.g., [1, Theorems 12.18 and 12.19], such that
\[(\mathbf{T}u)(x)=\int_{\Omega}\mathbf{K}(x,y)u(y)dy\quad\text{ for a.e. }x\in\Omega,\ \text{for all }u\in[L^{2}(\Omega)]^{m}. \tag{4.17}\]
Moreover,
\[\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbf{T}\right|\kern-1.075pt\right|\kern-1.075pt\right|^{2}=\iint\limits_{\Omega\times\Omega}|\mathbf{K}(x,y)|^{2}\,dx\,dy. \tag{4.18}\]
Note that [1, Theorems 12.18 and 12.19] are stated for \(m=1\); nevertheless, the same arguments hold for \(m\in\mathbb{N}\), as noted in [34]. We have, see [1] (see also [34, Lemma 4]):

**Lemma 4.1**.: _Let \(m\in\mathbb{N}\) and let \(\mathbf{T}_{1},\mathbf{T}_{2}\) be two Hilbert-Schmidt operators in \([L^{2}(\Omega)]^{m}\) with the corresponding kernels \(\mathbf{K}_{1}\) and \(\mathbf{K}_{2}\). Then \(\mathbf{T}:=\mathbf{T}_{1}\mathbf{T}_{2}\) is a Hilbert-Schmidt operator with the kernel \(\mathbf{K}\) given by_
\[\mathbf{K}(x,y)=\int_{\Omega}\mathbf{K}_{1}(x,z)\mathbf{K}_{2}(z,y)\,dz. \tag{4.19}\]
_Moreover,_
\[\operatorname{trace}(\mathbf{T}_{1}\mathbf{T}_{2})=\int_{\Omega}\operatorname{trace}(\mathbf{K}(x,x))dx. \tag{4.20}\]

We have, see, e.g., [34, Lemma 3]:

**Lemma 4.2**.: _Let \(d\geq 2\), \(m\in\mathbb{N}\), and let \(\mathbf{T}:[L^{2}(\Omega)]^{m}\to[L^{2}(\Omega)]^{m}\) be such that \(\mathbf{T}(\phi)\in[C(\bar{\Omega})]^{m}\) for \(\phi\in[L^{2}(\Omega)]^{m}\), and_
\[\|\mathbf{T}(\phi)\|_{L^{\infty}(\Omega)}\leq M\|\phi\|_{L^{2}(\Omega)}, \tag{4.21}\]
_for some \(M\geq 0\)._
_Then \(\mathbf{T}\) is a Hilbert-Schmidt operator,_
\[\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbf{T}\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq C_{m}|\Omega|^{1/2}M, \tag{4.22}\]
_and the kernel \(\mathbf{K}\) of \(\mathbf{T}\) satisfies_
\[\sup_{x\in\Omega}\left(\int_{\Omega}|\mathbf{K}(x,y)|^{2}dy\right)^{1/2}\leq C_{m}|\Omega|^{1/2}M. \tag{4.23}\]
_Assume in addition that_
\[\|\mathbf{T}(\phi)\|_{L^{\infty}(\Omega)}\leq\widetilde{M}\|\phi\|_{L^{1}(\Omega)}\text{ for }\phi\in[L^{2}(\Omega)]^{m}, \tag{4.24}\]
_for some \(\widetilde{M}\geq 0\); then the kernel \(\mathbf{K}\) of \(\mathbf{T}\) satisfies_
\[|\mathbf{K}(x,y)|\leq\widetilde{M}\quad\text{for a.e. }x,y\in\Omega. \tag{4.25}\]
_Here \(C_{m}\) denotes a positive constant depending only on \(m\)._

As a consequence of Lemma 4.2, we derive the following result.

**Corollary 4.2**.: _Let \(\mathbf{T}_{1}\) and \(\mathbf{T}_{2}\) be two Hilbert-Schmidt operators on \([L^{2}(\Omega)]^{m}\) such that the ranges of \(\mathbf{T}_{1}\) and \(\mathbf{T}_{2}^{*}\) are in \([C(\bar{\Omega})]^{m}\) and (4.21) holds for \(\mathbf{T}_{1}\) and \(\mathbf{T}_{2}^{*}\). Assume that (4.24) holds for \(\mathbf{T}=\mathbf{T}_{1}\mathbf{T}_{2}\). Then the kernel \(\mathbf{K}\) of \(\mathbf{T}\) is continuous on \(\bar{\Omega}\times\bar{\Omega}\) and (4.25) holds for every \((x,y)\in\Omega\times\Omega\)._

Proof.: Let \(\mathbf{K}_{1}\) (resp. \(\mathbf{K}_{2}\)) be the kernel of \(\mathbf{T}_{1}\) (resp. \(\mathbf{T}_{2}\)) and let \(\mathbf{K}_{2}^{*}\) be the kernel of \(\mathbf{T}_{2}^{*}\). We claim that for \(\varepsilon>0\), there exists \(\delta>0\) such that for every \((x,x^{\prime})\in\Omega\times\Omega\) with \(|x-x^{\prime}|<\delta\), we have
\[\left(\int_{\Omega}|\mathbf{K}_{1}(x,z)-\mathbf{K}_{1}(x^{\prime},z)|^{2}dz\right)^{1/2}\leq\varepsilon\quad\text{ and }\quad\left(\int_{\Omega}|\mathbf{K}_{2}^{*}(x,z)-\mathbf{K}_{2}^{*}(x^{\prime},z)|^{2}dz\right)^{1/2}\leq\varepsilon. \tag{4.26}\]
Admitting (4.26), we continue the proof. We have, see, e.g., [1, Theorem 12.20],
\[\mathbf{K}_{2}(z,y)=\overline{\mathbf{K}_{2}^{*}(y,z)}. \tag{4.27}\]
Since
\[\mathbf{K}(x,y)\overset{(4.19),\,(4.27)}{=}\int_{\Omega}\mathbf{K}_{1}(x,z)\overline{\mathbf{K}_{2}^{*}(y,z)}dz,\]
it follows from (4.26) that \(\mathbf{K}\) is continuous on \(\bar{\Omega}\times\bar{\Omega}\). This in turn implies (4.25) by Lemma 4.2 applied to \(\mathbf{T}\).

It remains to prove (4.26). We have
\[\left(\int_{\Omega}|\mathbf{K}_{1}(x,z)-\mathbf{K}_{1}(x^{\prime},z)|^{2}dz\right)^{1/2}\leq C\sup_{\varphi\in[L^{2}(\Omega)]^{m}:\|\varphi\|_{L^{2}(\Omega)}\leq 1}\left|\int_{\Omega}(\mathbf{K}_{1}(x,z)-\mathbf{K}_{1}(x^{\prime},z))\varphi(z)dz\right|.\]
Given \(\varepsilon>0\), let \(\varphi_{\varepsilon}\in[L^{2}(\Omega)]^{m}\) with \(\|\varphi_{\varepsilon}\|_{L^{2}(\Omega)}\leq 1\) be such that
\[\left(\int_{\Omega}|\mathbf{K}_{1}(x,z)-\mathbf{K}_{1}(x^{\prime},z)|^{2}dz\right)^{1/2}\leq\left|\int_{\Omega}(\mathbf{K}_{1}(x,z)-\mathbf{K}_{1}(x^{\prime},z))\varphi_{\varepsilon}(z)dz\right|+\frac{\varepsilon}{2}.\]
This yields
\[\left(\int_{\Omega}|\mathbf{K}_{1}(x,z)-\mathbf{K}_{1}(x^{\prime},z)|^{2}dz\right)^{1/2}\leq|(\mathbf{T}_{1}\varphi_{\varepsilon})(x)-(\mathbf{T}_{1}\varphi_{\varepsilon})(x^{\prime})|+\frac{\varepsilon}{2}. \tag{4.28}\]
The first inequality of (4.26) now follows from (4.28) and the fact that \(\mathbf{T}_{1}\varphi_{\varepsilon}\in[C(\bar{\Omega})]^{m}\). Similarly, we obtain the second inequality of (4.26).
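As a sanity check on (4.19) and (4.20), consider the following rank-one illustration with \(m=1\) (this example is ours and is not taken from [1] or [34]): for \(\varphi_{1},\varphi_{2},\psi_{1},\psi_{2}\in L^{2}(\Omega)\), let \(\mathbf{K}_{1}(x,y)=\varphi_{1}(x)\psi_{1}(y)\) and \(\mathbf{K}_{2}(x,y)=\varphi_{2}(x)\psi_{2}(y)\). Then (4.19) gives
\[\mathbf{K}(x,y)=\Big{(}\int_{\Omega}\psi_{1}\varphi_{2}\Big{)}\varphi_{1}(x)\psi_{2}(y),\]
so that, by (4.20),
\[\operatorname{trace}(\mathbf{T}_{1}\mathbf{T}_{2})=\Big{(}\int_{\Omega}\psi_{1}\varphi_{2}\Big{)}\Big{(}\int_{\Omega}\varphi_{1}\psi_{2}\Big{)},\]
which agrees with the fact that \(\mathbf{T}_{1}\mathbf{T}_{2}\) has rank at most one, with \(\big{(}\int_{\Omega}\psi_{1}\varphi_{2}\big{)}\big{(}\int_{\Omega}\varphi_{1}\psi_{2}\big{)}\) as its only possibly nonzero eigenvalue.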
### The operators \(\mathbf{T}_{\theta,t}\) and their properties

Denote
\[k=\left[\frac{d}{2}\right]+1, \tag{4.29}\]
the smallest integer greater than \(d/2\). Fix
\[2=p_{1}<p_{2}<\cdots<p_{k}<+\infty \tag{4.30}\]
such that, for \(2\leq j\leq k\),
\[p_{j-1}<p_{j}<\frac{dp_{j-1}}{d-p_{j-1}}\quad\text{ and }\quad p_{k}>d. \tag{4.31}\]
Denote
\[\lambda^{*}=t^{*}e^{i\frac{\pi}{2}}, \tag{4.32}\]
for some large \(t^{*}>0\) such that, for \(t\geq t^{*}\), (4.1) with \(\lambda=te^{i\frac{\pi}{2}}\) is well-posed in \(L^{p}(\Omega)\times L^{p}(\Omega)\) and (4.8) with \(\lambda=te^{-i\frac{\pi}{2}}\) is well-posed in \(L^{p}(\Omega)\times L^{p}(\Omega)\) with \(p=p_{1}\), \(\cdots\), \(p_{k}\). Let
\[\omega_{j}\in\mathbb{C}\text{ with }1\leq j\leq k+1\text{ be the (distinct) }(k+1)\text{-th roots of }1\text{ (thus }\omega_{j}^{k+1}=1) \tag{4.33}\]
and let
\[\Theta=\mathbb{R}\setminus\left\{\frac{\pi}{k+1}\mathbb{Z}\right\}. \tag{4.34}\]

**Definition 4.4**.: _For \(\theta\in\Theta\), \(1\leq j\leq k+1\), and \(t>0\), we define_
\[\lambda_{j,\theta,t}=\lambda^{*}+\omega_{j}te^{i\theta}, \tag{4.35}\]
_and we fix \(t_{\theta}>t^{*}\) such that the following properties hold for \(t\geq t_{\theta}\):_
\[\begin{split}&\text{(4.1) with }\lambda=\lambda_{j,\theta,t}\text{ is well-posed in }L^{p}(\Omega)\times L^{p}(\Omega)\\ &\text{and (4.8) with }\lambda=\bar{\lambda}_{j,\theta,t}\text{ is well-posed in }L^{p}(\Omega)\times L^{p}(\Omega)\text{ with }p=p_{1},\cdots,p_{k},\end{split} \tag{4.36}\]
\[\frac{t}{2}\leq|\lambda_{j,\theta,t}|<2t. \tag{4.37}\]

Such a \(t_{\theta}>t^{*}\) exists by Proposition 4.1, after noting that, for \(\theta\in\Theta\),
\[\Im\left(\omega_{j}e^{i\theta}\right)\neq 0,\]
and, for \(1\leq j\leq k+1\),
\[\lim_{t\to+\infty}\frac{\left|\Im\left(\lambda^{*}+t\omega_{j}e^{i\theta}\right)\right|}{|\lambda^{*}+t\omega_{j}e^{i\theta}|}=\left|\Im\left(\omega_{j}e^{i\theta}\right)\right|>0.\]
In view of (4.2) and (4.3), it is convenient to modify \(T_{\lambda_{j,\theta,t}}\) to capture the scaling with respect to \(t\sim\lambda_{j,\theta,t}\) there, as in [39]. Denote
\[M_{t}=\begin{pmatrix}t^{1/2}&0\\ 0&t^{-1/2}\end{pmatrix}. \tag{4.38}\]
Let \(\theta\in\Theta\) and \(t\geq t_{\theta}\). Define, for \(1\leq j\leq k+1\),
\[T_{j,\theta,t}=M_{t}T_{\lambda_{j,\theta,t}}M_{t}^{-1}\qquad\text{ and }\qquad\mathbf{T}_{\theta,t}=T_{k+1,\theta,t}\circ T_{k,\theta,t}\circ\cdots\circ T_{1,\theta,t}. \tag{4.39}\]
Here is the main result of this section.

**Proposition 4.2**.: _Let \(\theta\in\Theta\) and let \(t_{\theta}\) be given in Definition 4.4. Then, for \(t\geq t_{\theta}\),_
\[\|\mathbf{T}_{\theta,t}\|_{L^{2}(\Omega)\to L^{2}(\Omega)}\leq Ct^{-k-1}, \tag{4.40}\]
_the range of \(\mathbf{T}_{\theta,t}\) is in \([C(\bar{\Omega})]^{2}\),_
\[\|\mathbf{T}_{\theta,t}\|_{L^{2}(\Omega)\to L^{\infty}(\Omega)}\leq Ct^{-k-1+\frac{d}{4}}, \tag{4.41}\]
_and_
\[\|\mathbf{T}_{\theta,t}\|_{L^{1}(\Omega)\to L^{2}(\Omega)}\leq Ct^{-k-1+\frac{d}{4}} \tag{4.42}\]
_for some positive constant \(C\) independent of \(t\). Similar facts hold for \(\mathbf{T}_{\theta,t}^{*}\)._

As a direct consequence of Lemma 4.2 and Proposition 4.2, we obtain

**Corollary 4.3**.: _Let \(\theta\in\Theta\) and let \(t_{\theta}\) be given in Definition 4.4. Then, for \(t\geq t_{\theta}\), the operator \(\mathbf{T}_{\theta,t}\) is Hilbert-Schmidt, and_
\[\left|\!\left|\!\left|\mathbf{T}_{\theta,t}\right|\!\right|\!\right|\leq Ct^{-k-1+\frac{d}{4}}, \tag{4.43}\]
_for some positive constant \(C\) independent of \(t\)._

We now give

Proof of Proposition 4.2.: We first deal with (4.40).
By using (4.37), we derive that
\[\|T_{j,\theta,t}\|_{L^{2}(\Omega)\to L^{2}(\Omega)}\overset{\text{Proposition 4.1}}{\leq}Ct^{-1},\]
and hence
\[\|\mathbf{T}_{\theta,t}\|_{L^{2}(\Omega)\to L^{2}(\Omega)}\leq\prod_{j=1}^{k+1}\|T_{j,\theta,t}\|_{L^{2}(\Omega)\to L^{2}(\Omega)}\leq Ct^{-k-1}.\]
This establishes (4.40).

Next we deal with (4.41). For \(j=1,\cdots,k+1\) and \((f,g)\in[L^{2}(\Omega)]^{2}\), we write
\[(u^{(j)},v^{(j)})=T_{j,\theta,t}\circ T_{j-1,\theta,t}\circ\cdots\circ T_{1,\theta,t}(f,g). \tag{4.44}\]
By (4.37), we have
\[t^{-1/2}\|u^{(1)}\|_{W^{1,2}_{t}(\Omega)}+\|v^{(1)}\|_{L^{2}(\Omega)}\overset{\text{Proposition 4.1}}{\leq}Ct^{-1}\|(f,g)\|_{L^{2}(\Omega)\times L^{2}(\Omega)}, \tag{4.45}\]
and, for \(2\leq j\leq k\), by (4.31),
\[t^{-1/2}\|u^{(j)}\|_{W^{1,p_{j}}_{t}(\Omega)}+\|v^{(j)}\|_{L^{p_{j}}(\Omega)}\overset{\text{Corollary 4.1}}{\leq}Ct^{\frac{d}{2}\left(\frac{1}{p_{j-1}}-\frac{1}{p_{j}}\right)-1}\left(t^{-1/2}\|u^{(j-1)}\|_{W^{1,p_{j-1}}_{t}(\Omega)}+\|v^{(j-1)}\|_{L^{p_{j-1}}(\Omega)}\right). \tag{4.46}\]
Since \(p_{k}>d\), applying Corollary 4.1 with \(q=+\infty\) gives
\[\|(u^{(k+1)},v^{(k+1)})\|_{L^{\infty}(\Omega)}\leq Ct^{\frac{d}{2p_{k}}-1}\left(t^{-1/2}\|u^{(k)}\|_{W^{1,p_{k}}_{t}(\Omega)}+\|v^{(k)}\|_{L^{p_{k}}(\Omega)}\right). \tag{4.47}\]
Combining (4.45)-(4.47) and noting that the exponents telescope, with total \(-(k+1)+\frac{d}{2}\big{(}\frac{1}{2}-\frac{1}{p_{k}}\big{)}+\frac{d}{2p_{k}}=-k-1+\frac{d}{4}\), we obtain
\[\|\mathbf{T}_{\theta,t}(f,g)\|_{L^{\infty}(\Omega)}\leq Ct^{-k-1+\frac{d}{4}}\|(f,g)\|_{L^{2}(\Omega)\times L^{2}(\Omega)}. \tag{4.48}\]
Moreover, by Remark 4.1, the range of \(\mathbf{T}_{\theta,t}\) is in \([C(\bar{\Omega})]^{2}\). This proves (4.41).

We finally deal with (4.42). By Lemma 4.1, for \(1\leq j\leq k+1\),
\[T^{*}_{j,\theta,t}=\big{(}M_{t}T_{\lambda_{j,\theta,t}}M_{t}^{-1}\big{)}^{*}=M_{t}^{-1}P\widetilde{T}_{\overline{\lambda}_{j,\theta,t}}P^{-1}M_{t},\]
where \(P\) is given by (4.10). This implies
\[\mathbf{T}^{*}_{\theta,t}=M_{t}^{-1}P\widetilde{T}_{\overline{\lambda}_{1,\theta,t}}\circ\cdots\circ\widetilde{T}_{\overline{\lambda}_{k+1,\theta,t}}P^{-1}M_{t}.\]
Similarly to (4.48), we have
\[\|\mathbf{T}^{*}_{\theta,t}\|_{L^{2}(\Omega)\to L^{\infty}(\Omega)}\leq Ct^{-k-1+\frac{d}{4}}. \tag{4.49}\]
By a standard dual argument, we derive from (4.49) that
\[\|\mathbf{T}_{\theta,t}\|_{L^{1}(\Omega)\to L^{2}(\Omega)}\leq Ct^{-k-1+\frac{d}{4}}.\]
The properties for \(\mathbf{T}_{\theta,t}\) are established. The properties for \(\mathbf{T}^{*}_{\theta,t}\) can be derived similarly.

### The approximation of the trace of a kernel

Denote
\[\alpha=\frac{\pi}{4(k+1)}\qquad\text{ and }\qquad\beta=\frac{5\pi}{4(k+1)}.
\tag{4.50}\]
Then
\[\alpha,\beta\in\Theta\quad\text{ and }\quad e^{i\alpha(k+1)}+e^{i\beta(k+1)}=0. \tag{4.51}\]
Recall that \(\Theta\) is defined in (4.34).

**Lemma 4.2**.: _For \(t\geq\max\{t_{\alpha},t_{\beta}\}\), where \(t_{\alpha}\) and \(t_{\beta}\) are given in Definition 4.4, we have_

1. _the operator_ \(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\) _is Hilbert-Schmidt, and_ \[\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq Ct^{-2k-2+\frac{d}{2}}; \tag{4.52}\]
2. _the range of_ \(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\) _is in_ \([C(\bar{\Omega})]^{2}\)_, and_ \[\|\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\|_{L^{1}(\Omega)\to L^{\infty}(\Omega)}\leq Ct^{-2k-2+\frac{d}{2}}; \tag{4.53}\]
3. _the kernel_ \(\mathbf{K}_{t}\) _of_ \(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\) _is continuous in_ \(\Omega\times\Omega\)_, and_ \[|\mathbf{K}_{t}(x,y)|\leq Ct^{-2k-2+\frac{d}{2}}\quad\text{ for all }(x,y)\in\Omega\times\Omega; \tag{4.54}\]

_for some positive constant \(C\) independent of \(t\)._

Proof.: Assertion (4.52) follows from Corollary 4.3. Applying Proposition 4.2 and using the fact
\[\|\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\|_{L^{1}(\Omega)\to L^{\infty}(\Omega)}\leq\|\mathbf{T}_{\alpha,t}\|_{L^{2}(\Omega)\to L^{\infty}(\Omega)}\|\mathbf{T}_{\beta,t}\|_{L^{1}(\Omega)\to L^{2}(\Omega)},\]
we obtain (4.53). Since the ranges of both \(\mathbf{T}_{\alpha,t}\) and \(\mathbf{T}^{*}_{\beta,t}\) are contained in \([C(\bar{\Omega})]^{2}\), the continuity of \(\mathbf{K}_{t}\) and (4.54) follow from Corollary 4.2 and (4.53).

For \(\ell=1\), \(2\), \(\theta\in\Theta\), and \(t>1\), consider, with \(\lambda=te^{i\theta}\),
\[\begin{array}{ccc}S_{\ell,\lambda,x_{0}}:&L^{2}(\mathbb{R}^{d})&\to&L^{2}(\mathbb{R}^{d})\\ &f_{\ell}&\mapsto&v_{\ell}\end{array} \tag{4.55}\]
where \(v_{\ell}\in H^{1}(\mathbb{R}^{d})\) is the unique solution of
\[\operatorname{div}(A(x_{0})\nabla v_{\ell})-\lambda\Sigma_{\ell}(x_{0})v_{\ell}=\Sigma_{\ell}(x_{0})f_{\ell}\text{ in }\mathbb{R}^{d}. \tag{4.56}\]
One then has
\[S_{\ell,\lambda,x_{0}}f(x)=\int_{\mathbb{R}^{d}}F_{\ell,\lambda}(x_{0},x-y)f(y)dy, \tag{4.57}\]
where
\[F_{\ell,\lambda}(x_{0},z)=-\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\frac{e^{iz\cdot\xi}}{\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi+\lambda}d\xi. \tag{4.58}\]
Set, for \(\ell=1,2\),
\[\mathcal{S}_{\ell,t,x_{0}}=S_{\ell,\lambda_{k+1,\alpha,t},x_{0}}\circ\cdots\circ S_{\ell,\lambda_{1,\alpha,t},x_{0}}\circ S_{\ell,\lambda_{k+1,\beta,t},x_{0}}\circ\cdots\circ S_{\ell,\lambda_{1,\beta,t},x_{0}}.\]
Define, for \(\ell=1,2\),
\[\mathcal{F}_{\ell,t}(x_{0},z)=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\frac{e^{iz\cdot\xi}\,d\xi}{\prod_{j=1}^{k+1}\left(\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi+\lambda_{j,\alpha,t}\right)\left(\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi+\lambda_{j,\beta,t}\right)}. \tag{4.59}\]
Then
\[\mathcal{S}_{\ell,t,x_{0}}f_{\ell}(x)=\int_{\mathbb{R}^{d}}\mathcal{F}_{\ell,t}(x_{0},x-y)f_{\ell}(y)dy. \tag{4.60}\]
Since \(2k+2>d\), the integrand appearing in (4.59) belongs to \(L^{1}(\mathbb{R}^{d})\cap L^{2}(\mathbb{R}^{d})\), and thus
\[z\mapsto\mathcal{F}_{\ell,t}(x_{0},z)\text{ is continuous and belongs to }L^{2}(\mathbb{R}^{d}). \tag{4.61}\]
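For the reader's convenience, the kernel formula (4.58) can be recovered by taking Fourier transforms in (4.56); this is a routine computation, with the convention \(\widehat{v}(\xi)=\int_{\mathbb{R}^{d}}v(x)e^{-ix\cdot\xi}\,dx\):
\[-\big{(}A(x_{0})\xi\cdot\xi\big{)}\widehat{v_{\ell}}(\xi)-\lambda\Sigma_{\ell}(x_{0})\widehat{v_{\ell}}(\xi)=\Sigma_{\ell}(x_{0})\widehat{f_{\ell}}(\xi),\qquad\text{ i.e., }\qquad\widehat{v_{\ell}}(\xi)=-\frac{\widehat{f_{\ell}}(\xi)}{\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi+\lambda},\]
and inverting the Fourier transform yields (4.57) with the kernel \(F_{\ell,\lambda}\) given by (4.58).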
To introduce the freezing coefficient version of (4.1) in the whole space, we use the following result, in which (4.62) is the system satisfied by \((u,v):=(v_{1}-v_{2},\lambda v_{2})\), where \(v_{\ell}\) (\(\ell=1,2\)) is defined by (4.56).

**Lemma 4.3**.: _Let \(x_{0}\in\Omega\), \(c\in(0,1)\), \(\lambda\in\mathbb{C}\) with \(|\lambda|\geq 1\) and \(|\Im(\lambda)|\geq c|\lambda|\). Let \(p>1\) and let \((f,g)\in[L^{p}(\mathbb{R}^{d})]^{2}\). Then there exists a unique solution \((u,v)\in[W^{1,p}(\mathbb{R}^{d})]^{2}\) of_
\[\left\{\begin{array}{rl}\operatorname{div}(A(x_{0})\nabla u)-\lambda\Sigma_{1}(x_{0})u-(\Sigma_{1}(x_{0})-\Sigma_{2}(x_{0}))v&=\Sigma_{1}(x_{0})f\quad\text{ in }\mathbb{R}^{d},\\ \operatorname{div}(A(x_{0})\nabla v)-\lambda\Sigma_{2}(x_{0})v&=\Sigma_{2}(x_{0})g\quad\text{ in }\mathbb{R}^{d}.\end{array}\right. \tag{4.62}\]
_Moreover,_
\[\|u\|_{W^{2,p}_{\lambda}(\mathbb{R}^{d})}+|\lambda|^{-1}\|v\|_{W^{2,p}_{\lambda}(\mathbb{R}^{d})}\leq C\left(\|f\|_{L^{p}(\mathbb{R}^{d})}+|\lambda|^{-1}\|g\|_{L^{p}(\mathbb{R}^{d})}\right), \tag{4.63}\]
_for some \(C>0\) depending only on \(\Lambda\), \(c\), and \(p\). As a consequence, for_

1. \(1<p<d\) _and_ \(p\leq q\leq\frac{dp}{d-p}\)_,_
2. \(d=p\leq q<+\infty\)_, or_
3. \(p>d\) _and_ \(q=+\infty\)_,_

_we have_
\[\|u\|_{W^{1,q}_{\lambda}(\mathbb{R}^{d})}+|\lambda|^{-1}\|v\|_{W^{1,q}_{\lambda}(\mathbb{R}^{d})}\leq C|\lambda|^{\frac{d}{2}\left(\frac{1}{p}-\frac{1}{q}\right)-\frac{1}{2}}\left(\|f\|_{L^{p}(\mathbb{R}^{d})}+|\lambda|^{-1}\|g\|_{L^{p}(\mathbb{R}^{d})}\right).\]

Proof.: We emphasize here that (4.62) is a system with constant coefficients imposed in \(\mathbb{R}^{d}\). The proof is quite standard. The idea is first to obtain the existence, the uniqueness, and the estimate for \(v\) using the second equation of (4.62), and then to use these to derive the corresponding assertions for \(u\) using the first equation of (4.62). The details are omitted.

For \(x_{0}\in\Omega\), \(j=1,\cdots,k+1\), \(\theta\in\Theta\), and \(t>1\), define, for \(1<p<+\infty\),
\[\begin{array}{rcl}R_{\lambda_{j,\theta,t},x_{0}}:\ \ [L^{p}(\mathbb{R}^{d})]^{2}&\to&[L^{p}(\mathbb{R}^{d})]^{2}\\ (f,g)&\mapsto&(u,v)\end{array}\]
where \((u,v)\in[W^{1,p}(\mathbb{R}^{d})]^{2}\) is the unique solution of (4.62) with \(\lambda=\lambda_{j,\theta,t}\). Recall that \(\lambda_{j,\theta,t}\) is defined in (4.35). We also introduce
\[R_{j,\theta,t,x_{0}}=M_{t}R_{\lambda_{j,\theta,t},x_{0}}M_{t}^{-1}\qquad\text{ and }\qquad\mathbf{R}_{\theta,t,x_{0}}=R_{k+1,\theta,t,x_{0}}\circ\cdots\circ R_{1,\theta,t,x_{0}}. \tag{4.64}\]
As in the proof of Proposition 4.2, but using Lemma 4.3 instead of Proposition 4.1 and Corollary 4.1, we obtain

**Lemma 4.4**.: _Let \(\theta\in\Theta\) and \(t>1\). Then the ranges of \(\mathbf{R}_{\theta,t,x_{0}}\) and \(\mathbf{R}_{\theta,t,x_{0}}^{*}\) are in \([C(\mathbb{R}^{d})]^{2}\) for all \(t>1\). Moreover,_
\[\|\mathbf{R}_{\theta,t,x_{0}}\|_{L^{2}(\mathbb{R}^{d})\to L^{\infty}(\mathbb{R}^{d})}+\|\mathbf{R}_{\theta,t,x_{0}}\|_{L^{1}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d})}\leq Ct^{-k-1+\frac{d}{4}} \tag{4.65}\]
_and_
\[\|\mathbf{R}_{\theta,t,x_{0}}^{*}\|_{L^{2}(\mathbb{R}^{d})\to L^{\infty}(\mathbb{R}^{d})}+\|\mathbf{R}_{\theta,t,x_{0}}^{*}\|_{L^{1}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d})}\leq Ct^{-k-1+\frac{d}{4}}, \tag{4.66}\]
_for some positive constant \(C\) independent of \(t\)._

Define
\[\mathbf{R}_{t,x_{0}}=\mathbf{R}_{\alpha,t,x_{0}}\mathbf{R}_{\beta,t,x_{0}}.
\tag{4.67}\]
One can then write \(\mathbf{R}_{t,x_{0}}\) in the form
\[\mathbf{R}_{t,x_{0}}=\begin{pmatrix}(\mathbf{R}_{t,x_{0}})_{11}&(\mathbf{R}_{t,x_{0}})_{12}\\ (\mathbf{R}_{t,x_{0}})_{21}&(\mathbf{R}_{t,x_{0}})_{22}\end{pmatrix}.\]
Note that, by the definition of \(S_{\ell,\lambda,x_{0}}\),
\[R_{\lambda_{j,\theta,t},x_{0}}=\begin{pmatrix}S_{1,\lambda_{j,\theta,t},x_{0}}&\Sigma_{1}(x_{0})^{-1}(\Sigma_{1}(x_{0})-\Sigma_{2}(x_{0}))S_{1,\lambda_{j,\theta,t},x_{0}}S_{2,\lambda_{j,\theta,t},x_{0}}\\ 0&S_{2,\lambda_{j,\theta,t},x_{0}}\end{pmatrix}.\]
It follows that \(R_{\lambda_{j,\theta,t},x_{0}}\) is an upper triangular matrix operator, and so is \(\mathbf{R}_{t,x_{0}}\). We deduce that
\[(\mathbf{R}_{t,x_{0}})_{21}=0\]
and, for \(\ell=1,2\),
\[(\mathbf{R}_{t,x_{0}})_{\ell\ell}=\mathcal{S}_{\ell,t,x_{0}}.\]
These simple observations are useful in computing the approximation of the trace of the kernel of \(\mathbf{R}_{t,x_{0}}\). As an immediate consequence of (4.60), \(\mathbf{R}_{t,x_{0}}\) is an integral operator whose kernel \(\mathbf{K}_{t,x_{0}}\) verifies, for \(\ell=1,2\),
\[(\mathbf{K}_{t,x_{0}})_{\ell\ell}(x,y)=\mathcal{F}_{\ell,t}(x_{0},x-y)\text{ for }x,y\in\mathbb{R}^{d}. \tag{4.68}\]
Further properties of \(\mathbf{K}_{t,x_{0}}\) are given in the following lemma.

**Lemma 4.5**.: _Let \(t\geq 1\) and \(x_{0}\in\Omega\). Then \(\mathbf{K}_{t,x_{0}}\) is continuous on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\), and, for \((x,y)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\), it holds, for \(\ell=1,2\),_
\[|(\mathbf{K}_{t,x_{0}})_{\ell\ell}(x,y)|\leq Ct^{-2k-2+\frac{d}{2}}. \tag{4.69}\]
_Moreover,_
\[\text{trace}(\mathbf{K}_{t,x_{0}}(x_{0},x_{0}))\\ =\frac{t^{-2k-2+\frac{d}{2}}}{(2\pi)^{d}}\sum_{\ell=1}^{2}\int_{\mathbb{R}^{d}}\frac{d\xi}{(\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi)^{2k+2}-i}+o(t^{-2k-2+\frac{d}{2}})\text{ as }t\to+\infty. \tag{4.70}\]

**Remark 4.1**.: Assertion (4.70) holds uniformly with respect to \(x_{0}\in\Omega\).

Proof.: From (4.68), it follows that \((\mathbf{K}_{t,x_{0}})_{\ell\ell}(x,y)\) is continuous on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\). By the choice of \(\alpha\), \(\beta\), and \(\omega_{j}\) in (4.50), (4.51), and (4.33), one has
\[\prod_{j=1}^{k+1}\left(\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi+\lambda^{*}+\omega_{j}te^{i\alpha}\right)\left(\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi+\lambda^{*}+\omega_{j}te^{i\beta}\right)\\ =(\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi+\lambda^{*})^{2(k+1)}-it^{2(k+1)}.\]
Indeed, since the \(\omega_{j}\) are the \((k+1)\)-th roots of unity, \(\prod_{j=1}^{k+1}(s+\omega_{j}a)=s^{k+1}+(-1)^{k}a^{k+1}\) for all \(s,a\in\mathbb{C}\), and the two resulting factors multiply to \(s^{2(k+1)}-t^{2(k+1)}e^{i\frac{\pi}{2}}\) because \(e^{i\beta(k+1)}=-e^{i\alpha(k+1)}\) by (4.51). It follows from (4.59) that, for every \(x_{0}\in\Omega\) and every \(z\in\mathbb{R}^{d}\),
\[\mathcal{F}_{\ell,t}(x_{0},z)=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\frac{e^{iz\cdot\xi}\,d\xi}{(\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi+\lambda^{*})^{2(k+1)}-it^{2(k+1)}}.\]
A change of variables (\(\xi\mapsto t^{1/2}\xi\)) yields
\[\mathcal{F}_{\ell,t}(x_{0},z)=\frac{t^{-2k-2+\frac{d}{2}}}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\frac{e^{it^{1/2}z\cdot\xi}\,d\xi}{(\Sigma_{\ell}(x_{0})^{-1}A(x_{0})\xi\cdot\xi+t^{-1}\lambda^{*})^{2(k+1)}-i}. \tag{4.71}\]
Assertion (4.69) follows from (4.71), since \(|e^{it^{1/2}z\cdot\xi}|=1\) and \(\lambda^{*}t^{-1}\) is uniformly bounded with respect to \(t\geq 1\). By taking \(z=0\) in (4.71), we obtain (4.70) after using the dominated convergence theorem. The proof is complete.

We now prove the main result of this section concerning the trace of \(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\), where \(\alpha,\beta\) are given in (4.50) and \(\mathbf{T}_{\theta,t}\) is defined in (4.39).
**Proposition 4.3**.: _We have_
\[\operatorname{trace}(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t})=\mathbf{c}t^{-2k-2+\frac{d}{2}}+o(t^{-2k-2+\frac{d}{2}})\quad\text{ as }\quad t\to+\infty,\]
_where_
\[\mathbf{c}=\frac{1}{(2\pi)^{d}}\sum_{\ell=1}^{2}\int_{\Omega}\int_{\mathbb{R}^{d}}\frac{d\xi\,dx}{\left(\Sigma_{\ell}^{-1}(x)A(x)\xi\cdot\xi\right)^{2k+2}-i}. \tag{4.72}\]

The proof of Proposition 4.3 uses the following result.

**Lemma 4.6**.: _Let \(\delta_{0}\in(0,1)\) and \(\theta\in\Theta\). For every \(\varepsilon>0\), there exists \(\delta_{\varepsilon}\in(0,\delta_{0}/2)\) depending on \(\varepsilon\) such that the following holds: there exists \(t_{\varepsilon}>0\) depending on \(\varepsilon\) and \(\delta_{\varepsilon}\) such that for every \(t>t_{\varepsilon}\) and every \(x_{0}\in\Omega\setminus\overline{\Omega_{\delta_{0}}}\), we have_
\[\|\mathbf{T}_{\theta,t}-\mathbf{R}_{\theta,t,x_{0}}\mathds{1}_{\Omega}\|_{L^{2}(\Omega)\to L^{\infty}(B(x_{0},\delta_{\varepsilon}))}\leq\varepsilon t^{-k-1+\frac{d}{4}} \tag{4.73}\]
_and_
\[\|\mathbf{T}_{\theta,t}\mathds{1}_{\Omega}-\mathbf{R}_{\theta,t,x_{0}}\|_{L^{2}(\mathbb{R}^{d})\to L^{\infty}(B(x_{0},\delta_{\varepsilon}))}\leq\varepsilon t^{-k-1+\frac{d}{4}}, \tag{4.74}\]
_and similar facts hold for \(\mathbf{T}^{*}_{\theta,t}\) and \(\mathbf{R}^{*}_{\theta,t,x_{0}}\)._

Recall that \(\mathbf{R}_{\theta,t,x_{0}}\) is defined in (4.64). We admit Lemma 4.6 and give the proof of Proposition 4.3; the proof of Lemma 4.6 is presented right after the one of Proposition 4.3.

Proof of Proposition 4.3.: For \(\varepsilon>0\), let \(\delta_{0}>0\) be such that
\[|\Omega_{2\delta_{0}}|<\varepsilon, \tag{4.75}\]
where \(\Omega_{\tau}\) is given in (2.1). We claim that there exists \(\tau_{*}>0\), depending on \(\Omega\) and \(\varepsilon\) but independent of \(x_{0}\), and a positive constant \(C\), independent of \(\varepsilon\) and \(x_{0}\), such that, for \(t>\tau_{*}\),
\[|\mathrm{trace}(\mathbf{K}_{t}(x_{0},x_{0}))-\mathrm{trace}(\mathbf{K}_{t,x_{0}}(x_{0},x_{0}))|\leq C\varepsilon t^{-2k-2+\frac{d}{2}}\quad\text{ for }x_{0}\in\Omega\setminus\Omega_{\delta_{0}}. \tag{4.76}\]
Indeed, let \(\chi\in C_{c}^{\infty}(\mathbb{R}^{d})\) be such that \(\chi=1\) in \(B_{1}\) and \(\mathrm{supp}\,\chi\subset B_{2}\). Denote, for \(\delta\in(0,\delta_{0}/10)\),
\[\chi_{\delta,x_{0}}=\chi\big{(}(\cdot-x_{0})/\delta\big{)},\]
and define
\[\left\{\begin{array}{l}\mathbf{P}_{1,t,\delta}=\chi_{\delta,x_{0}}(\mathbf{R}_{\alpha,t,x_{0}}\mathds{1}_{\Omega}-\mathbf{T}_{\alpha,t})\mathbf{T}_{\beta,t}\chi_{\delta,x_{0}},\\ \mathbf{P}_{2,t,\delta}=\chi_{\delta,x_{0}}\mathbf{R}_{\alpha,t,x_{0}}(\mathbf{R}_{\beta,t,x_{0}}-\mathds{1}_{\Omega}\mathbf{T}_{\beta,t})\chi_{\delta,x_{0}}.\end{array}\right. \tag{4.77}\]
Then
\[\chi_{\delta,x_{0}}(\mathbf{R}_{\alpha,t,x_{0}}\mathbf{R}_{\beta,t,x_{0}}-\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t})\chi_{\delta,x_{0}}=\mathbf{P}_{1,t,\delta}+\mathbf{P}_{2,t,\delta}.
\tag{4.78}\] By applying Lemma 4.6 below with \(\theta\in\{\alpha,\beta\}\), there exist \(\delta_{\varepsilon}>0\) and \(t_{\varepsilon}>0\) depending on \(\varepsilon\) such that for every \(t>t_{\varepsilon}\), \[\|\chi_{\delta_{\varepsilon},x_{0}}(\mathbf{T}_{\alpha,t}-\mathbf{R}_{\alpha, t,x_{0}}\mathds{1}_{\Omega})\|_{L^{2}(\Omega)\to L^{\infty}(\Omega)}\leq \varepsilon t^{-k-1+\frac{d}{4}} \tag{4.79}\] and \[\|\chi_{\delta_{\varepsilon},x_{0}}(\mathbf{T}^{*}_{\beta,t}\mathds{1}_{ \Omega}-\mathbf{R}^{*}_{\beta,t,x_{0}})\|_{L^{2}(\mathbb{R}^{d})\to L^{\infty} (\Omega)}\leq\varepsilon t^{-k-1+\frac{d}{4}}. \tag{4.80}\] Since \[\Big{(}(\mathds{1}_{\Omega}\mathbf{T}_{\beta,t}-\mathbf{R}_{\beta,t,x_{0}}) \chi_{\delta_{\varepsilon},x_{0}}\Big{)}^{*}=\chi_{\delta_{\varepsilon},x_{0} }(\mathbf{T}^{*}_{\beta,t}\mathds{1}_{\Omega}-\mathbf{R}^{*}_{\beta,t,x_{0}}),\] we derive from (4.80), using a dual argument, that \[\|(\mathds{1}_{\Omega}\mathbf{T}_{\beta,t}-\mathbf{R}_{\beta,t,x_{0}})\chi_{ \delta_{\varepsilon},x_{0}}\|_{L^{1}(\Omega)\to L^{2}(\mathbb{R}^{d})}\leq \varepsilon t^{-k-1+\frac{d}{4}}. \tag{4.81}\] By Proposition 4.2 and Lemma 4.4, we have \[\|\mathbf{T}_{\beta,t}\chi_{\delta_{\varepsilon},x_{0}}\|_{L^{1}(\Omega)\to L ^{2}(\Omega)}+\|\chi_{\delta_{\varepsilon},x_{0}}\mathbf{R}_{\alpha,t,x_{0}}\| _{L^{2}(\mathbb{R}^{d})\to L^{\infty}(\Omega)}\leq Ct^{-k-1+\frac{d}{4}} \tag{4.82}\] for some constant \(C>0\) independent of \(\varepsilon\) and \(t\). Using the fact, for appropriate linear operators \(L_{1}\) and \(L_{2}\), \[\|L_{1}L_{2}\|_{L^{1}(\Omega)\to L^{\infty}(\Omega)}\leq\|L_{1}\|_{L^{2}( \Omega)\to L^{\infty}(\Omega)}\|L_{2}\|_{L^{1}(\Omega)\to L^{2}(\Omega)},\] and \[\|L_{1}L_{2}\|_{L^{1}(\Omega)\to L^{\infty}(\Omega)}\leq\|L_{1}\|_{L^{2}( \mathbb{R}^{d})\to L^{\infty}(\Omega)}\|L_{2}\|_{L^{1}(\Omega)\to L^{2}( \mathbb{R}^{d})},\] we derive from (4.79), (4.81), and (4.82) that \[\|\mathbf{P}_{1,t,\delta_{\varepsilon}}+\mathbf{P}_{2,t,\delta_{\varepsilon}} \|_{L^{1}(\Omega)\to L^{\infty}(\Omega)}\leq\|\mathbf{P}_{1,t,\delta_{ \varepsilon}}\|_{L^{1}(\Omega)\to L^{\infty}(\Omega)}+\|\mathbf{P}_{2,t,\delta_ {\varepsilon}}\|_{L^{1}(\Omega)\to L^{\infty}(\Omega)}\leq C\varepsilon t^{-2 k-2+\frac{d}{2}}.\] This yields, by (4.78), \[\|\chi_{\delta_{\varepsilon},x_{0}}(\mathbf{R}_{\alpha,t,x_{0}}\mathbf{R}_{ \beta,t,x_{0}}-\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t})\chi_{\delta_{ \varepsilon},x_{0}}\|_{L^{1}(\Omega)\to L^{\infty}(\Omega)}\leq C\varepsilon t ^{-2k-2+\frac{d}{2}}. \tag{4.83}\] Since, for \(x\in\Omega\), \(\ell=1,2\) and \(f\in L^{2}(\Omega)\), \[\chi_{\delta_{\varepsilon},x_{0}}\Big{(}(\mathbf{R}_{\alpha,t,x_{0 }}\mathbf{R}_{\beta,t,x_{0}})_{\ell\ell}-(\mathbf{T}_{\alpha,t}\mathbf{T}_{ \beta,t})_{\ell\ell}\Big{)}\chi_{\delta_{\varepsilon},x_{0}}f(x)\\ =\chi_{\delta_{\varepsilon},x_{0}}(x)\int_{\Omega}\chi_{\delta_{ \varepsilon},x_{0}}(y)\Big{(}(\mathbf{K}_{t,x_{0}})_{\ell\ell}(x,y)-(\mathbf{K }_{t})_{\ell\ell}(x,y)\Big{)}f(y)dy,\] it follows that \(\chi_{\delta_{\varepsilon},x_{0}}(x)\chi_{\delta_{\varepsilon},x_{0}}(y)(( \mathbf{K}_{t,x_{0}})_{\ell\ell}(x,y)-(\mathbf{K}_{t}(x,y))_{\ell\ell})\) is the kernel of the operator \[\chi_{\delta_{\varepsilon},x_{0}}\Big{(}(\mathbf{R}_{\alpha,t,x_{0}}\mathbf{ R}_{\beta,t,x_{0}})_{\ell\ell}-(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t})_{ \ell\ell}\Big{)}\chi_{\delta_{\varepsilon},x_{0}}.\] By Lemma 4.2 and Lemma 4.5, this kernel is continuous on \(\bar{\Omega}\times\bar{\Omega}\). 
Using (4.83) and applying Lemma 4.2, we derive that, since \(\chi_{\delta_{\varepsilon},x_{0}}(x_{0})=1\), \[|\mathrm{trace}(\mathbf{K}_{t}(x_{0},x_{0}))-\mathrm{trace}(\mathbf{K}_{t,x_{0}}(x_{0},x_{0}))|\leq C\varepsilon t^{-2k-2+\frac{d}{2}}\quad\text{ for all }t>t_{\varepsilon}. \tag{4.84}\] Since the LHS of (4.84) does not depend on \(\varepsilon>0\), the claim (4.76) is proved. By Lemma 4.2 we have, for \(t>0\) large enough, \[\int_{\Omega_{2\delta_{0}}}|\mathrm{trace}(\mathbf{K}_{t}(x,x))|dx\leq C|\Omega_{2\delta_{0}}|t^{-2k-2+\frac{d}{2}}\overset{(4.75)}{\leq}C\varepsilon t^{-2k-2+\frac{d}{2}} \tag{4.85}\] and, similarly by Lemma 4.5, \[\int_{\Omega_{2\delta_{0}}}|\mathrm{trace}(\mathbf{K}_{t,x}(x,x))|dx\leq C\varepsilon t^{-2k-2+\frac{d}{2}}. \tag{4.86}\] Combining (4.76), (4.85), and (4.86) yields \[\int_{\Omega}|\mathrm{trace}(\mathbf{K}_{t}(x,x))-\mathrm{trace}(\mathbf{K}_{t,x}(x,x))|dx\leq C\varepsilon t^{-2k-2+\frac{d}{2}}\quad\text{ for all }t>t_{\varepsilon}. \tag{4.87}\] The conclusion follows from Lemma 4.5 and (4.87). We now give

Proof of Lemma 4.6.: Let \(\varepsilon>0\) and \(\theta\in\Theta\). First, we prove (4.73). Fix \(\chi\in C_{c}^{\infty}(\mathbb{R}^{d})\) such that \(\mathrm{supp}\,\chi\subset B_{2}\) and \(\chi=1\) in \(B_{1}\). Set, for \(0<\delta<\delta_{0}/100\), \[\chi_{\delta}=\chi\big{(}(\cdot-x_{0})/\delta\big{)}\text{ in }\mathbb{R}^{d}.\] Define, for \((f,g)\in[L^{2}(\Omega)]^{2}\), and for \(j=1,\cdots,k+1=[d/2]+2\), \[(u^{j},v^{j})=T_{\lambda_{j,\theta,t}}\circ\cdots\circ T_{\lambda_{1,\theta,t}}(u^{0},v^{0})\quad\text{ and }\quad(u^{j}_{0},v^{j}_{0})=S_{\lambda_{j,\theta,t},x_{0}}\circ\cdots\circ S_{\lambda_{1,\theta,t},x_{0}}(u^{0}_{0},v^{0}_{0}),\] where \[(u^{0},v^{0})=(f,g)\qquad\text{ and }\qquad(u^{0}_{0},v^{0}_{0})=(\mathds{1}_{\Omega}f,\mathds{1}_{\Omega}g)\quad\text{ in }\Omega. \tag{4.88}\] Set, for \(0\leq j\leq k+1\), \[(u^{j,\delta},v^{j,\delta})=(\chi_{\delta}u^{j},\chi_{\delta}v^{j})\qquad\text{ and }\qquad(u^{j,\delta}_{0},v^{j,\delta}_{0})=(\chi_{\delta}u^{j}_{0},\chi_{\delta}v^{j}_{0}).\] We have \[(u^{j,\delta},v^{j,\delta})=S_{\lambda_{j,\theta,t},x_{0}}(u^{j-1,\delta},v^{j-1,\delta})+S_{\lambda_{j,\theta,t},x_{0}}(f^{j,\delta},g^{j,\delta}), \tag{4.89}\] where \[\Sigma_{1}(x_{0})f^{j,\delta}=(\Sigma_{1}(x)-\Sigma_{1}(x_{0}))u^{j-1,\delta}-\lambda_{j,\theta,t}(\Sigma_{1}(x_{0})-\Sigma_{1}(x))u^{j,\delta}+A(x)\nabla\chi_{\delta}\cdot\nabla u^{j}\\ -(\Sigma_{1}(x_{0})-\Sigma_{1}(x)-\Sigma_{2}(x_{0})+\Sigma_{2}(x))v^{j,\delta}+\operatorname{div}\Big{(}(A(x_{0})-A(x))\nabla u^{j,\delta}+u^{j}A(x)\nabla\chi_{\delta}\Big{)} \tag{4.90}\] and \[\Sigma_{2}(x_{0})g^{j,\delta}=(\Sigma_{2}(x)-\Sigma_{2}(x_{0}))v^{j-1,\delta}-\lambda_{j,\theta,t}(\Sigma_{2}(x_{0})-\Sigma_{2}(x))v^{j,\delta}+A(x)\nabla\chi_{\delta}\cdot\nabla v^{j}\\ +\operatorname{div}\Big{(}(A(x_{0})-A(x))\nabla v^{j,\delta}+v^{j}A(x)\nabla\chi_{\delta}\Big{)}.
\tag{4.91}\] Similarly, we have \[(u_{0}^{j,\delta},v_{0}^{j,\delta})=S_{\lambda_{j,\theta,t},x_{0}}(u_{0}^{j- 1,\delta},v_{0}^{j-1,\delta})+S_{\lambda_{j,\theta,t},x_{0}}(f_{0}^{j,\delta},g_{0}^{j,\delta}),\] where \[\Sigma_{1}(x_{0})f_{0}^{j,\delta}=A(x_{0})\nabla\chi_{\delta}\cdot\nabla u_{0} ^{j}+\operatorname{div}\Big{(}u_{0}^{j}A(x_{0})\nabla\chi_{\delta}\Big{)}\] and \[\Sigma_{2}(x_{0})g_{0}^{j,\delta}=A(x_{0})\nabla\chi_{\delta}\cdot\nabla v_{0} ^{j}+\operatorname{div}\Big{(}v_{0}^{j}A(x_{0})\nabla\chi_{\delta}\Big{)}.\] For \(r>0\), define \[\Phi(r)=\min\left\{1,\sup_{|x-y|<r}\left(|A(x)-A(y)|+\sum_{\ell=1}^{2}|\Sigma_ {\ell}(x)-\Sigma_{\ell}(y)|\right)\right\}.\] We claim that \[\|f^{j,\delta}\|_{L^{p_{j-1}}(\Omega\setminus\Omega_{\delta_{0}/ 2})}+t^{-1}\|g^{j,\delta}\|_{L^{p_{j-1}}(\Omega\setminus\Omega_{\delta_{0}/2})} \\ \leq C_{\delta_{0}}\left(\Phi(\delta)+\frac{1}{\delta t^{1/2}}+ \frac{1}{\delta^{2}t}\right)\left(\|u^{j-1}\|_{L^{p_{j-1}}(\Omega)}+t^{-1}\|v ^{j-1}\|_{L^{p_{j-1}}(\Omega)}\right), \tag{4.92}\] and \[\|f_{0}^{j,\delta}\|_{L^{p_{j-1}}(\Omega\setminus\Omega_{\delta_ {0}/2})}+t^{-1}\|g_{0}^{j,\delta}\|_{L^{p_{j-1}}(\Omega\setminus\Omega_{ \delta_{0}/2})}\\ \leq C_{\delta_{0}}\left(\frac{1}{\delta t^{1/2}}+\frac{1}{ \delta^{2}t}\right)\left(\|u_{0}^{j-1}\|_{L^{p_{j-1}}(\Omega)}+t^{-1}\|v_{0}^{ j-1}\|_{L^{p_{j-1}}(\Omega)}\right). \tag{4.93}\] We first admit (4.92) and (4.93) and continue the proof. Since, in \(\Omega\), \[(u^{0,\delta},v^{0,\delta})=(u_{0}^{0,\delta},v_{0}^{0,\delta}),\] using (4.89), (4.92) and (4.93) and Lemma 4.3, for \(j=1\) and then for \(j=2,\ldots,k+1\), we have \[\|u^{j,\delta}-u_{0}^{j,\delta}\|_{L^{p_{j}}(\Omega\setminus \Omega_{\delta_{0}/2})}+t^{-1}\|v^{j,\delta}-v_{0}^{j,\delta}\|_{L^{p_{j}}( \Omega\setminus\Omega_{\delta_{0}/2})}\\ \leq C_{\delta_{0}}\left(\Phi(\delta)+\frac{1}{\delta t^{1/2}}+ \frac{1}{\delta^{2}t}\right)t^{-\frac{d}{2p_{j}}-j+\frac{d}{4}}\left(\|f\|_{L ^{2}(\Omega)}+t^{-1}\|g\|_{L^{2}(\Omega)}\right). \tag{4.94}\] Fix \(\delta=\delta_{\varepsilon}>0\) such that \(C_{\delta_{0}}\Phi(\delta_{\varepsilon})<\varepsilon/2\). Take \(t_{\varepsilon}>0\) sufficiently large such that \(C_{\delta_{0}}(\delta_{\varepsilon}^{-1}t^{-1/2}+\delta_{\varepsilon}^{-2}t^ {-1})<\varepsilon/2\) for every \(t>t_{\varepsilon}\). Taking \(j=k+1\) in (4.94) gives (4.73). The proof of (4.74) is similar to the one of (4.73) by considering \((u^{0},v^{0})\) and \((u^{0}_{0},v^{0}_{0})\) defined as follows \[(u^{0},v^{0})=\mathds{1}_{\Omega}(f,g)\qquad\text{ and }\qquad(u^{0}_{0},v^{0}_{0})= (f,g)\quad\text{ in }\mathbb{R}^{d},\] instead of (4.88). Similar facts for \(\mathbf{T}^{*}_{\theta,t}\) and \(\mathbf{R}^{*}_{\theta,t,x_{0}}\) by analogous arguments. It remains to establish (4.92) and (4.93). From the definition of \((u^{j},v^{j})\) and the theory of elliptic equations (see e.g. [15, Theorem 9.11]), we have, for \(\Omega_{1}\Subset\Omega_{2}\subset\Omega\), \[\|u^{j}\|_{W^{2,p}_{t}(\Omega_{1})}+t^{-1}\|v^{j}\|_{W^{2,p}_{t}(\Omega_{1})} \leq C\left(\|u^{j-1}\|_{L^{p}(\Omega_{2})}+t^{-1}\|v^{j-1}\|_{L^{p}(\Omega_{2 })}\right) \tag{4.95}\] and, similarly, \[\|u^{j}_{0}\|_{W^{2,p}_{t}(\Omega_{1})}+t^{-1}\|v^{j}_{0}\|_{W^{2,p}_{t}(\Omega _{1})}\leq C\left(\|u^{j-1}_{0}\|_{L^{p}(\Omega_{2})}+t^{-1}\|v^{j-1}_{0}\|_{L ^{p}(\Omega_{2})}\right), \tag{4.96}\] for some positive constant \(C\) independent of \(f\), \(g\), and \(t\). 
It follows that \[\|\nabla u^{j,\delta}\|_{L^{p}(\Omega_{1})}+t^{-1}\|\nabla v^{j,\delta}\|_{L^ {p}(\Omega_{1})}\leq C\left(\frac{1}{\delta t}+\frac{1}{t^{1/2}}\right)\left( \|u^{j-1}\|_{L^{p}(\Omega_{2})}+t^{-1}\|v^{j-1}\|_{L^{p}(\Omega_{2})}\right), \tag{4.97}\] \[\|\nabla^{2}u^{j,\delta}\|_{L^{p}(\Omega_{1})}+t^{-1}\|\nabla^{2}v^{j,\delta} \|_{L^{p}(\Omega_{1})}\\ \leq C\left(1+\frac{1}{\delta t^{1/2}}+\frac{1}{\delta^{2}t} \right)\left(\|u^{j-1}\|_{L^{p}(\Omega_{2})}+t^{-1}\|v^{j-1}\|_{L^{p}(\Omega_{ 2})}\right), \tag{4.98}\] and \[\|u^{j,\delta}_{0}\|_{W^{1,p}_{t}(\Omega_{1})}+t^{-1}\|v^{j,\delta}_{0}\|_{W^ {1,p}_{t}(\Omega_{1})}\leq C\left(\frac{1}{\delta t}+\frac{1}{t^{1/2}}\right) \left(\|u^{j-1}_{0}\|_{L^{p}(\Omega_{2})}+t^{-1}\|v^{j-1}_{0}\|_{L^{p}(\Omega_ {2})}\right). \tag{4.99}\] By (4.90) and (4.91), we have \[C\left( \|f^{j,\delta}\|_{L^{p_{j-1}}(\Omega_{1})}+t^{-1}\|g^{j,\delta} \|_{L^{p_{j-1}}(\Omega_{1})}\right)\leq\Phi(\delta)\Big{(}\|u^{j-1}\|_{L^{p_{ j-1}}}+t^{-1}\|v^{j-1}\|_{L^{p_{j-1}}(\Omega_{1})}\Big{)}\] \[+\Big{(}\Phi(\delta)+\frac{1}{\delta^{2}t}+\frac{1}{\delta t^{1/2 }}\Big{)}\Big{(}\|u^{j}\|_{W^{2,p_{j-1}}_{t}(\Omega_{1})}+t^{-1}\|v^{j}\|_{W^{ 2,p_{j-1}}_{t}(\Omega_{1})}\Big{)}\] \[+\|\nabla u^{j,\delta}\|_{L^{p_{j-1}}(\Omega_{1})}+t^{-1}\|\nabla v ^{j,\delta}\|_{L^{p_{j-1}}(\Omega_{1})}+\Phi(\delta)\Big{(}\|\nabla^{2}u^{j, \delta}\|_{L^{p_{j-1}}(\Omega_{1})}+t^{-1}\|\nabla^{2}v^{j,\delta}\|_{L^{p_{j-1} }(\Omega_{1})}\Big{)}.\] Combining (4.95)-(4.99) yields (4.92). Estimate (4.93) follows similarly. The proof is complete. A connection of the counting function and the trace of \(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\) for large \(t\) We start this section by recalling the definition of the modified resolvent of an operator (see, e.g., [1, Definition 12.3]). **Definition 4.5**.: _Let \(H\) be a Hilbert space and \(\mathcal{T}:H\to H\) be a linear and bounded operator. The modified resolvent set \(\rho_{m}(\mathcal{T})\) of \(\mathcal{T}\) is the set of all non-zero complex numbers \(s\) such that \(I-s\mathcal{T}\) is bijective and \((I-s\mathcal{T})^{-1}\) is bounded on \(H\). For \(s\in\rho_{m}(\mathcal{T})\) the transformation \((\mathcal{T})_{s}=\mathcal{T}(I-s\mathcal{T})^{-1}\) is the modified resolvent of \(\mathcal{T}\)._ Recall that, for \(s\in\rho_{m}(\mathcal{T})\), we have \[(\mathcal{T})_{s}=\mathcal{T}(I-s\mathcal{T})^{-1}=(I-s\mathcal{T})^{-1} \mathcal{T}. \tag{4.100}\] Let \(\mathcal{T}:L^{2}(\Omega)\times L^{2}(\Omega)\to L^{2}(\Omega)\times L^{2}(\Omega)\) be a linear and bounded operator. We have for \(z\in\mathbb{C}\) (see e.g. [38]) \[I-z^{k+1}\mathcal{T}^{k+1}=\prod_{j=1}^{k+1}(I-\omega_{j}z\mathcal{T}) \tag{4.101}\] and \[I-z^{k+1}\mathcal{T}^{k+1}\text{ is invertible }\Longleftrightarrow I-\omega_{j}z \mathcal{T}\text{ is invertible for every }j. \tag{4.102}\] Recall that \(\omega_{j}^{k+1}=1\). Using the decomposition (4.101), and the equivalence in (4.102), one can prove the following lemma. **Lemma 4.7**.: _Let \(\widetilde{\theta}\in\mathbb{R}\setminus\{\pi\mathbb{Z}\}\). Set \(\theta:=\frac{\tilde{\theta}}{k+1}\in\Theta\). There exists \(t_{\theta}>1\) such that, for every \(t>t_{\theta}\), it holds_ \[\gamma:=t^{k+1}e^{i\widetilde{\theta}}\in\rho_{m}(T_{\lambda^{*}}^{k+1}) \tag{4.103}\] _and_ \[(T_{\lambda^{*}}^{k+1})_{\gamma}=M_{t}^{-1}\mathbf{T}_{\theta,t}M_{t}. 
\tag{4.104}\]

Proof.: We have, by the definition of \(\gamma\), \[I-\gamma T_{\lambda^{*}}^{k+1}\overset{(4.101)}{=}\prod_{j=1}^{k+1}(I-\omega_{j}te^{i\theta}T_{\lambda^{*}}). \tag{4.105}\] By Proposition 4.1, there exists \(t_{\theta}>0\) such that \(T_{\lambda^{*}+\omega_{j}te^{i\theta}}\) is defined for \(t\geq t_{\theta}\). Hence \[\omega_{j}te^{i\theta}\in\rho_{m}(T_{\lambda^{*}})\quad\text{ and }\quad(T_{\lambda^{*}})_{\omega_{j}te^{i\theta}}=T_{\lambda^{*}+\omega_{j}te^{i\theta}}=T_{\lambda_{j,\theta,t}}\text{ for }t\geq t_{\theta} \tag{4.106}\] (see, e.g., [13, Lemma 3.1] for the arguments in a similar setting). Combining (4.105) and (4.106) leads to \(\gamma\in\rho_{m}(T_{\lambda^{*}}^{k+1})\) for \(t\geq t_{\theta}\). It follows from (4.100) that, for \(t\geq t_{\theta}\), \[T_{\lambda_{j,\theta,t}}=T_{\lambda^{*}}(I-\omega_{j}te^{i\theta}T_{\lambda^{*}})^{-1}=(I-\omega_{j}te^{i\theta}T_{\lambda^{*}})^{-1}T_{\lambda^{*}} \tag{4.107}\] and thus, \[M_{t}^{-1}\mathbf{T}_{\theta,t}M_{t}=\prod_{j=1}^{k+1}T_{\lambda_{j,\theta,t}}\overset{(4.107)}{=}T_{\lambda^{*}}^{k+1}\prod_{j=1}^{k+1}(I-\omega_{j}e^{i\theta}tT_{\lambda^{*}})^{-1}\\ =T_{\lambda^{*}}^{k+1}(I-\gamma T_{\lambda^{*}}^{k+1})^{-1}\overset{\text{def.}}{=}(T_{\lambda^{*}}^{k+1})_{\gamma}. \tag{4.108}\] The proof is complete.

The following proposition establishes a connection between the trace of the operator \(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\) and the counting function for large \(t\). The arguments of the proof are in the spirit of [34] (see also [38]).

**Proposition 4.4**.: _We have_ \[\mathcal{N}(t)=\frac{\Im(\mathbf{c})}{\frac{d}{8(k+1)}\int_{0}^{\infty}s^{\frac{d}{8(k+1)}-1}(1+s)^{-1}ds}t^{\frac{d}{2}}+o(t^{\frac{d}{2}})\qquad\text{ as }t\to+\infty,\] _where \(\mathbf{c}\) is given by (4.72)._

Proof.: For \(t\) sufficiently large, by Lemma 4.7, we have \[(T_{\lambda^{*}}^{k+1})_{t^{k+1}e^{i(k+1)\alpha}}=M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}.\] Note that \[\Big{(}(T_{\lambda^{*}}^{k+1})_{\gamma_{1}}\Big{)}_{\gamma_{2}}=(T_{\lambda^{*}}^{k+1})_{\gamma_{1}+\gamma_{2}} \tag{4.109}\] provided that \(\gamma_{1}\), \(\gamma_{1}+\gamma_{2}\in\rho_{m}(T_{\lambda^{*}}^{k+1})\). It follows from Lemma 4.7 that, for large \(t\) and for \(s\geq 0\), \[-2(t+s)^{k+1}e^{i(k+1)\alpha}\in\rho_{m}\Big{(}M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}\Big{)}.\] Let \(s_{1},s_{2},\dots\) be the characteristic values of \(M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}\) repeated a number of times equal to their multiplicities. Applying [1, Theorem 12.17], we have \[\mathrm{trace}\Big{(}M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}(M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t})_{-2(t+s)^{k+1}e^{i(k+1)\alpha}}\Big{)}=\sum_{j}\frac{1}{s_{j}(s_{j}+2e^{i\alpha(k+1)}(t+s)^{k+1})}\;+\;c_{t}. \tag{4.110}\] We claim that \[c_{t}=0. \tag{4.111}\] Assuming this, we continue the proof. As a consequence of (4.110) with \(s=0\), we have \[\mathrm{trace}\Big{(}M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}(M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t})_{-2t^{k+1}e^{i(k+1)\alpha}}\Big{)}=\sum_{j}\frac{1}{s_{j}(s_{j}+2e^{i\alpha(k+1)}t^{k+1})}. \tag{4.112}\] Let \((\mu_{j})_{j}\) be the set of characteristic values of \(T_{\lambda^{*}}\) repeated according to their multiplicity. It is well-known that \(\mu_{j}^{k+1}\) are the characteristic values of \(T_{\lambda^{*}}^{k+1}\) and the multiplicity of \(\mu_{j}^{k+1}\) is equal to the sum of the multiplicities of the characteristic values \(\mu\) of \(T_{\lambda^{*}}\) such that \(\mu^{k+1}=\mu_{j}^{k+1}\).
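As a quick sanity check, the algebraic identities driving this argument — the modified-resolvent relation (4.100), the factorization (4.101), and the composition rule (4.109) — can be verified numerically. The following minimal Python sketch is our own illustration, with a small random matrix standing in for \(T_{\lambda^{*}}\); it is not part of the original proof.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 2, 6                           # k+1 factors, toy matrix size
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T /= 10 * np.linalg.norm(T)           # keep the spectrum small so inverses exist
I = np.eye(n)
z = 1.3 - 0.7j

# (4.101): I - z^{k+1} T^{k+1} = prod_j (I - w_j z T), w_j the (k+1)-th roots of unity.
roots = np.exp(2j * np.pi * np.arange(k + 1) / (k + 1))
lhs = I - z ** (k + 1) * np.linalg.matrix_power(T, k + 1)
rhs = I
for w in roots:
    rhs = rhs @ (I - w * z * T)
assert np.allclose(lhs, rhs)

# (4.100): (T)_s = T (I - sT)^{-1} = (I - sT)^{-1} T.
def mod_resolvent(A, s):
    return A @ np.linalg.inv(np.eye(len(A)) - s * A)

s1, s2 = 0.4 + 0.2j, -1.1 + 0.5j
assert np.allclose(mod_resolvent(T, s1), np.linalg.inv(I - s1 * T) @ T)

# (4.109): ((T)_{s1})_{s2} = (T)_{s1 + s2}.
assert np.allclose(mod_resolvent(mod_resolvent(T, s1), s2),
                   mod_resolvent(T, s1 + s2))
print("identities (4.100), (4.101), (4.109) verified")
```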
By Lemma 4.7, for large \(t\), \(e^{i\alpha(k+1)}t^{k+1}\) is not a characteristic value of \(T_{\lambda^{*}}^{k+1}\). We obtain, by [1, Theorem 12.4], that the set of the characteristic values of \((T_{\lambda^{*}}^{k+1})_{t^{k+1}e^{i(k+1)\alpha}}\) is given by \[\Big{\{}\mu_{j}^{k+1}-t^{k+1}e^{i(k+1)\alpha}\;;\;j\geq 1\Big{\}}.\] We now derive from (4.112) that \[\mathrm{trace}\Big{(}M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}(M_{t}^ {-1}\mathbf{T}_{\alpha,t}M_{t})_{-2t^{k+1}e^{i(k+1)\alpha}}\Big{)}\\ =\sum_{j}\frac{1}{(\mu_{j}^{k+1}-t^{k+1}e^{i(k+1)\alpha})(\mu_{j} ^{k+1}+t^{k+1}e^{i(k+1)\alpha})},\] which yields, since \(\alpha=\frac{\pi}{4(k+1)}\), \[\mathrm{trace}\Big{(}M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}(M_{t}^{-1}\mathbf{T }_{\alpha,t}M_{t})_{-2t^{k+1}e^{i(k+1)\alpha}}\Big{)}=\sum_{j}\frac{1}{\mu_{j} ^{2(k+1)}-it^{2(k+1)}}. \tag{4.113}\] We have, by Proposition 4.1, \[\limsup_{|\mu_{j}|\to+\infty}\left|\frac{\Im(\mu_{j})}{\mu_{j}}\right|=0.\] As a consequence and as in [34, Proof of Corollary 3], we derive that \[\sum_{j}\frac{1}{\mu_{j}^{2k+2}-it^{2k+2}}-\sum_{j}\frac{1}{|\mu_{j}|^{2k+2}- it^{2k+2}}=o(t^{-2k-2+\frac{d}{2}})\text{ as }t\to+\infty. \tag{4.114}\] Combining (4.113) and (4.114) yield \[\text{trace}\Big{(}M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}(M_{t}^{- 1}\mathbf{T}_{\alpha,t}M_{t})_{-2t^{k+1}e^{i(k+1)\alpha}}\Big{)}\\ =\sum_{j}\frac{1}{|\mu_{j}|^{2(k+1)}-it^{2(k+1)}}+o(t^{-2k-2+\frac {d}{2}})\text{ as }t\to+\infty. \tag{4.115}\] Applying (4.109) with \(\gamma_{1}=t^{k+1}e^{i(k+1)\alpha}\) and \(\gamma_{2}=-2t^{k+1}e^{i(k+1)\alpha}\) and using Lemma 4.7, we derive that \[(M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t})_{-2t^{k+1}e^{i(k+1)\alpha}}=M_{t}^{-1} \mathbf{T}_{\beta,t}M_{t}. \tag{4.116}\] Since \[\text{trace}\left(M_{t}^{-1}\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}M_{t} \right)=\text{trace}\left(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\right),\] it follows from (4.115) and (4.116) that \[\text{trace}\left(\mathbf{T}_{\alpha,t}\mathbf{T}_{\beta,t}\right)=\sum_{j} \frac{1}{|\mu_{j}|^{2(k+1)}-it^{2(k+1)}}+o(t^{-2k-2+\frac{d}{2}})\text{ as }t\to+\infty. \tag{4.117}\] Applying Proposition 4.3, we derive from (4.117) that, as \(t\to+\infty\) \[\sum_{j}\frac{1}{|\mu_{j}|^{2k+2}-it^{2k+2}}=\mathbf{c}t^{-2k-2+\frac{d}{2}}+ o(t^{-2k-2+\frac{d}{2}}). \tag{4.118}\] Considering the imaginary part of (4.118) we get, for \(\tau=t^{4k+4}\), \[\sum_{j}\frac{1}{|\mu_{j}|^{4k+4}+\tau}=\Im(\mathbf{c})\tau^{\frac{d}{8k+8}-1 }+o(\tau^{\frac{d}{8k+8}-1})\text{ as }\tau\to+\infty.\] Since \(\lambda_{j}=\mu_{j}+\lambda^{*}\), it follows that, as \(\tau\to+\infty\), \[\sum_{j}\frac{1}{|\lambda_{j}|^{4k+4}+\tau}=\Im(\mathbf{c})\tau^{\frac{d}{8k+8 }-1}+o(\tau^{\frac{d}{8k+8}-1}). \tag{4.119}\] Using the fact \[\sum_{j}\frac{1}{|\lambda_{j}|^{4k+4}+\tau}=\int_{0}^{\infty}\frac{d\mathcal{N }(s^{\frac{1}{4(k+1)}})}{s+\tau},\] we derive that \[\int_{0}^{\infty}\frac{d\mathcal{N}(s^{\frac{1}{4(k+1)}})}{s+\tau}=\Im( \mathbf{c})\tau^{\frac{d}{8k+8}-1}+o(\tau^{\frac{d}{8k+8}-1})\text{ as }\tau\to+\infty. \tag{4.120}\] Applying a Tauberian Theorem of Hardy and Littlewood (see, e.g., [1, Theorem 14.5]), we obtain \[\mathcal{N}(t)=\frac{\Im(\mathbf{c})}{\frac{d}{8(k+1)}\int_{0}^{\infty}s^{\frac{ d}{8(k+1)}-1}(1+s)^{-1}ds}t^{\frac{d}{2}}+o(t^{\frac{d}{2}})\qquad\text{ as }t\to+\infty,\] which is the conclusion. It remains to prove (4.111). 
Applying (4.109) with \(\gamma_{1}=t^{k+1}e^{i(k+1)\alpha}\) and \(\gamma_{2}=-2(t+s)^{k+1}e^{i(k+1)\alpha}\) and using Lemma 4.7, we derive that \[(M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t})_{-2(t+s)^{k+1}e^{i(k+1)\alpha}}=M_{r}^{-1}\mathbf{T}_{\widetilde{\alpha},r}M_{r},\] where \[\widetilde{\alpha}=\alpha+\frac{\pi}{k+1}\quad\text{ and }\quad r=(2(t+s)^{k+1}-t^{k+1})^{\frac{1}{k+1}}.\] Thus by [1, Theorem 12.14], \[\Big{\{}s_{j}+2(t+s)^{k+1}e^{i\alpha(k+1)}\ ;\ j\geq 1\Big{\}}\text{ is the set of characteristic values of }M_{r}^{-1}\mathbf{T}_{\widetilde{\alpha},r}M_{r}. \tag{4.121}\] Applying Corollary 4.3 and using (4.38), we have \[|\!|\!|M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}|\!|\!|\leq Ct^{-k+\frac{d}{4}}\quad\text{ and }\quad|\!|\!|M_{r}^{-1}\mathbf{T}_{\widetilde{\alpha},r}M_{r}|\!|\!|\leq Cr^{-k+\frac{d}{4}} \tag{4.122}\] for some constant \(C>0\) which does not depend on \(s\) (and \(t\)). By [1, Theorem 12.12] we have \[|\mathrm{trace}\Big{(}M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}(M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t})_{-2(t+s)^{k+1}e^{i(k+1)\alpha}}\Big{)}|\leq|\!|\!|M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}|\!|\!|\,|\!|\!|M_{r}^{-1}\mathbf{T}_{\widetilde{\alpha},r}M_{r}|\!|\!|. \tag{4.123}\] Since \(-k+\frac{d}{4}<0\), it follows from (4.122) and (4.123) that \[\lim_{s\to+\infty}\mathrm{trace}\Big{(}M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}(M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t})_{-2(t+s)^{k+1}e^{i(k+1)\alpha}}\Big{)}=0. \tag{4.124}\] On the other hand, by [1, Theorem 12.14], \[\Big{|}\sum_{j}\frac{1}{s_{j}(s_{j}+2e^{i\alpha(k+1)}(t+s)^{k+1})}\Big{|}^{2}\leq\sum_{j}\frac{1}{|s_{j}|^{2}}\sum_{j}\frac{1}{|s_{j}+2e^{i\alpha(k+1)}(t+s)^{k+1}|^{2}}\overset{(4.121)}{\leq}|\!|\!|M_{t}^{-1}\mathbf{T}_{\alpha,t}M_{t}|\!|\!|^{2}\,|\!|\!|M_{r}^{-1}\mathbf{T}_{\widetilde{\alpha},r}M_{r}|\!|\!|^{2}\overset{(4.122)}{\longrightarrow}0\text{ as }s\to+\infty. \tag{4.125}\] Combining (4.124) and (4.125) yields \(c_{t}=0\), which is (4.111). The proof is complete.

### Proof of Theorem 1.1

As in [34, p.34], we derive from Proposition 4.3 that \[\Im(\mathbf{c})=\frac{1}{(2\pi)^{d}}\int_{\Omega}\sum_{\ell=1}^{2}\int_{\mathbb{R}^{d}}\frac{1}{\left(\Sigma_{\ell}(x)^{-1}A(x)\xi\cdot\xi\right)^{4k+4}+1}d\xi\,dx\] \[=\frac{1}{(2\pi)^{d}}\sum_{\ell=1}^{2}\int_{\Omega}|\{\xi:A(x)\xi\cdot\xi<\Sigma_{\ell}(x)\}|\,dx\,\frac{d}{8(k+1)}\int_{0}^{\infty}s^{\frac{d}{8(k+1)}-1}(1+s)^{-1}ds.\] The conclusion now follows from Proposition 4.4.

Completeness of the generalized eigenfunctions of the transmission eigenvalue problem - Proof of Theorem 1.2

By Lemma 4.7, for all \(\widetilde{\theta}\in(0,2\pi)\setminus\{\pi\}\), there exists \(t_{\widetilde{\theta}}>0\) such that, for \(t>t_{\widetilde{\theta}}\), \[(T_{\lambda^{*}}^{k+1})_{te^{i\widetilde{\theta}}}=M_{t_{k}}^{-1}\mathbf{T}_{\theta,t_{k}}M_{t_{k}}, \tag{5.1}\] where \[\theta=\frac{\widetilde{\theta}}{k+1}\quad\text{ and }\quad t_{k}=t^{\frac{1}{k+1}}.\] By Proposition 4.1 and Corollary 4.3, \[\left|\!\left|\!\left|M_{t_{k}}^{-1}\mathbf{T}_{\theta,t_{k}}M_{t_{k}}\right|\!\right|\!\right|\leq Ct_{k}^{-k+\frac{d}{4}}\qquad\text{ and }\qquad\left|M_{t_{k}}^{-1}\mathbf{T}_{\theta,t_{k}}M_{t_{k}}\right|_{L^{2}(\Omega)\to L^{2}(\Omega)}\leq Ct_{k}^{-k}. \tag{5.2}\] In particular, \(T_{\lambda^{*}}^{k+1}\) is a Hilbert-Schmidt operator; moreover, for \(t>t_{\widetilde{\theta}}\), \[\left|\!\left|\!\left|\left(T_{\lambda^{*}}^{k+1}\right)_{te^{i\widetilde{\theta}}}\right|\!\right|\!\right|\leq C_{\widetilde{\theta}}t^{-1+\frac{1}{k+1}+\frac{d}{4(k+1)}}.
\tag{5.3}\] Since \(k=[d/2]+1\), it follows that \(-1+\frac{1}{k+1}+\frac{d}{4(k+1)}\leq 0\). This implies that \[\text{ for all }\widetilde{\theta}\in(0,2\pi)\setminus\{\pi\}\text{ there exist }t_{\widetilde{\theta}}>0\text{ and }C_{\widetilde{\theta}}>0\text{ such that }\sup_{t>t_{\widetilde{\theta}}}\left|\!\left|\!\left|\left(T_{\lambda^{*}}^{k+1}\right)_{te^{i\widetilde{\theta}}}\right|\!\right|\!\right|\leq C_{\widetilde{\theta}}. \tag{5.4}\] Since \(T_{\lambda^{*}}^{k+1}\) is a Hilbert-Schmidt operator, it follows from [1, Theorem 16.4] that

i) the space spanned by the generalized eigenfunctions of \(T_{\lambda^{*}}^{k+1}\) is equal to \(\overline{\text{range}(T_{\lambda^{*}}^{k+1})}\), the closure of the range of \(T_{\lambda^{*}}^{k+1}\) with respect to the \(L^{2}\)-topology.

In fact, in order to be able to apply [1, Theorem 16.4], one requires assumptions on the directions of minimal growth of the modified resolvent of \(T_{\lambda^{*}}^{k+1}\). We have only proved (5.3) and (5.4) instead of this requirement. Nevertheless, this is sufficient to derive i) using almost the same arguments as in [1] (see also [39]). The rest of the proof is as in [34, 13]. We also have

ii) range \(T_{\lambda^{*}}^{k+1}\) is dense in \([L^{2}(\Omega)]^{2}\), since range \(T_{\lambda^{*}}\) is dense in \([L^{2}(\Omega)]^{2}\) and \(T_{\lambda^{*}}\) is continuous;

iii) the space spanned by the generalized eigenfunctions of \(T_{\lambda^{*}}^{k+1}\) associated to the non-zero eigenvalues of \(T_{\lambda^{*}}^{k+1}\) is equal to the space spanned by the generalized eigenfunctions of \(T_{\lambda^{*}}\) associated to the non-zero eigenvalues of \(T_{\lambda^{*}}\); this can be done as in the last part of the proof of [1, Theorem 16.5].

Consequently, the space spanned by all generalized eigenfunctions of \(T_{\lambda^{*}}^{k+1}\) is equal to the space spanned by all generalized eigenfunctions of \(T_{\lambda^{*}}\). The conclusion now follows from i), ii), and iii).

**Acknowledgement.** The authors thank Fioralba Cakoni for attracting their attention to the problem and stimulating discussions.
2307.07310
Unsourced Random Access Using Multiple Stages of Orthogonal Pilots: MIMO and Single-Antenna Structures
We study the problem of unsourced random access (URA) over Rayleigh block-fading channels with a receiver equipped with multiple antennas. We propose a slotted structure with multiple stages of orthogonal pilots, each of which is randomly picked from a codebook. In the proposed signaling structure, each user encodes its message using a polar code and appends it to the selected pilot sequences to construct its transmitted signal. Accordingly, the transmitted signal is composed of multiple orthogonal pilot parts and a polar-coded part, which is sent through a randomly selected slot. The performance of the proposed scheme is further improved by randomly dividing users into different groups each having a unique interleaver-power pair. We also apply the idea of multiple stages of orthogonal pilots to the case of a single receive antenna. In all the set-ups, we use an iterative approach for decoding the transmitted messages along with a suitable successive interference cancellation technique. The use of orthogonal pilots and the slotted structure lead to improved accuracy and reduced computational complexity in the proposed set-ups, and make the implementation with short blocklengths more viable. Performance of the proposed set-ups is illustrated via extensive simulation results which show that the proposed set-ups with multiple antennas perform better than the existing MIMO URA solutions for both short and large blocklengths, and that the proposed single-antenna set-ups are superior to the existing single-antenna URA schemes.
Mohammad Javad Ahmadi, Mohammad Kazemi, Tolga M. Duman
2023-07-14T12:43:25Z
http://arxiv.org/abs/2307.07310v1
Unsourced Random Access Using Multiple Stages of Orthogonal Pilots: MIMO and Single-Antenna Structures ###### Abstract We study the problem of unsourced random access (URA) over Rayleigh block-fading channels with a receiver equipped with multiple antennas. We propose a slotted structure with multiple stages of orthogonal pilots, each of which is randomly picked from a codebook. In the proposed signaling structure, each user encodes its message using a polar code and appends it to the selected pilot sequences to construct its transmitted signal. Accordingly, the transmitted signal is composed of multiple orthogonal pilot parts and a polar-coded part, which is sent through a randomly selected slot. The performance of the proposed scheme is further improved by randomly dividing users into different groups each having a unique interleaver-power pair. We also apply the idea of multiple stages of orthogonal pilots to the case of a single receive antenna. In all the set-ups, we use an iterative approach for decoding the transmitted messages along with a suitable successive interference cancellation technique. The use of orthogonal pilots and the slotted structure lead to improved accuracy and reduced computational complexity in the proposed set-ups, and make the implementation with short blocklengths more viable. Performance of the proposed set-ups is illustrated via extensive simulation results which show that the proposed set-ups with multiple antennas perform better than the existing MIMO URA solutions for both short and large blocklengths, and that the proposed single-antenna set-ups are superior to the existing single-antenna URA schemes. Unsourced random access (URA), internet of things (IoT), orthogonal pilots, massive MIMO, pilot detection, power diversity, CRC check, performance analysis, fading channel. ## I Introduction In contrast to the conventional grant-based multiple access, where the base station (BS) waits for the preamble from devices to allocate resources to them, in grant-free random access, users transmit their data without any coordination. Removing the need for scheduling results in some benefits, such as reducing the latency and signaling overhead, which makes the grant-free set-up interesting for serving many users. Sourced and unsourced random access schemes are the main categories of grant-free random access. In the former, both the messages and identities of the users are important to the BS, so each user is assigned a unique pilot. However, this is inefficient, especially considering the next-generation wireless networks with a massive number of connected devices [1, 2]. In the so-called _unsourced random access_ (URA), which was introduced by Polyanskiy in [3], the BS cares only about the transmitted messages, i.e., the identity of the users is not a concern. The BS is connected to millions of cheap devices, a small fraction of which are active at a given time. In this set-up, the users employ a common codebook, and they share a short frame for transmitting their messages. In URA, the per-user probability of error (PUPE) is adopted as the performance criterion. Many low-complexity coding schemes are devised for URA over a Gaussian multiple-access channel (GMAC) including T-fold slotted ALOHA (SA) [4, 5, 6, 7], sparse codes [8, 9, 10, 11], and random spreading [12, 13, 14]. However, GMAC is not a fully realistic channel model for wireless communications. 
Therefore, in [15, 16, 17, 18, 19], the synchronous Rayleigh quasi-static fading MAC is investigated, and the asynchronous set-up is considered in [20, 21]. Recently, several studies have investigated Rayleigh block-fading channels in a massive MIMO setting [22, 23, 24, 25]. In [22], a covariance-based activity detection (AD) algorithm is used to detect the active messages. A pilot-based scheme is introduced in [23], where non-orthogonal pilots are employed for detection and channel estimation, and a polar list decoder is used for decoding messages. Furthermore, in a scheme called FASURA [24], each user transmits a signal containing a non-orthogonal pilot and a randomly spread polar code. The coherence blocklength is defined as the period over which the channel coefficients stay constant. As discussed in [23], the coherence time can be approximated as \(T_{c}\approx 1/(4D_{s})\), where \(D_{s}\) is the maximal Doppler spread. For a typical carrier frequency of \(2\) GHz, the coherence time may vary in the range of \(1\) ms-\(45\) ms (corresponding to transmitter speeds between \(3\) km/h-\(120\) km/h). Moreover, the sampling frequency should be chosen in the order of the coherence bandwidth, whose typical value is between \(100\) kHz-\(500\) kHz in outdoor environments. Consequently, the coherence blocklength \(L_{c}\) can range from \(100\) to \(20000\) samples. Although the AD algorithm in [22] performs well in the fast fading scenario (e.g., when \(L_{c}\leq 320\)), it is not implementable with larger blocklengths due to run-time complexity scaling with \(L_{c}^{2}\). In contrast, the schemes in [23, 24] work well in the large-blocklength regime (e.g., for \(L_{c}=3200\)); that is, in a slow fading environment where large blocklengths can be employed, their decoding performance is better than that of [22]. Most coding schemes in URA employ non-orthogonal pilots/sequences for identification and estimation purposes [12, 13, 14, 15, 16, 17, 18, 19, 22, 23, 24]. Performance of detectors and channel estimators may be improved in terms of accuracy and computational complexity by employing a codebook of orthogonal pilots; however, this significantly increases the amount of collisions due to the limited number of available orthogonal pilot sequences. To address this problem, the proposed schemes in this paper employ multiple stages of orthogonal pilots combined with an iterative detector. In the proposed scheme, the transmitted signal of each user is composed of \(J+1\) stages: a polar codeword appended to \(J\) independently generated orthogonal pilots. Thus, the scheme is called multi-stage set-up with multiple receive antennas (MS-MRA). At each iteration of MS-MRA at the receiver side, only one of the pilot parts is employed for pilot detection and channel estimation, and the polar codeword is decoded using a polar list decoder. Therefore, the transmitted pilots in the remaining \(J-1\) pilot parts are still unknown. To determine the active pilots in these parts, we adopt two approaches. In the first one, all the pilot bits are coded jointly with the data bits and cyclic redundancy check (CRC) bits (therefore, the transmitted bits of all the pilot parts are detected after successful polar decoding). As a second approach, to avoid waste of resources, we propose an enhanced version of the MS-MRA, where only data and CRC bits are fed to the polar encoder.
At the receiver side, the decoder iteratively moves through the \(J+1\) different parts of the signal to detect all the parts of an active user's message. Since it does not encode the pilot bits, this scheme is called MS-MRA without pilot bits encoding (MS-MRA-WOPBE). We further improve the performance of the MS-MRA by randomly dividing users into different groups. In this scheme (called multi-stage set-up with user grouping for multiple receive antennas (MSUG-MRA)), each group is assigned a unique interleaver-power pair. Transmission with different power levels increases the decoding probability of the users with the highest power (because they are perturbed by interfering users with low power levels). Since successfully decoded signals are removed using successive interference cancellation (SIC), users with lower power levels have an increased chance of being decoded in the subsequent steps. By repeating each user's signal multiple times, we further extend the idea in MS-MRA and MSUG-MRA to the case of a single receive antenna. These extensions are called multi-stage set-up with a single receive antenna (MS-SRA) and multi-stage set-up with user grouping for a single receive antenna (MSUG-SRA). We demonstrate that, while the covariance-based AD algorithm in [22] suffers from performance degradation with large blocklengths, and the algorithms in [23, 24] do not work well in the short blocklength regime (hence are not suitable for fast fading scenarios), the MS-MRA and MSUG-MRA have a superior performance in both regimes. Furthermore, the MS-SRA and MSUG-SRA show better performance compared to similar solutions with a single receive antenna over fading channels [17, 19, 20]. Our contributions are as follows:

* We propose a URA set-up with multiple receive antennas, namely MS-MRA. The proposed set-up offers comparable performance with the existing schemes with large blocklengths, while having lower computational complexity. Moreover, for the short-blocklength scenario, it significantly improves the state-of-the-art.
* We provide a theoretical analysis to predict the error probability of the MS-MRA, taking into account all the sources of error, namely, errors resulting from pilot detection, channel estimation, channel decoding, SIC, and collisions.
* We extend the MS-MRA set-up by randomly dividing the users into groups, i.e., MSUG-MRA, which is more energy-efficient than MS-MRA and other MIMO URA schemes.
* Two URA set-ups with a single receive antenna, called MS-SRA and MSUG-SRA, are provided by adopting the ideas of the MS-MRA and MSUG-MRA to the case of a single receive antenna. They perform better than the alternative solutions over fading channels.

The rest of the paper is organized as follows. Section II presents the system model for the proposed framework. The encoding and decoding procedures of the proposed schemes are introduced in Section III. In Section IV, extensive numerical results and examples are provided. Finally, Section V provides our conclusions. The following notation is adopted throughout the paper. We denote the sets of real and complex numbers by \(\mathbb{R}\) and \(\mathbb{C}\), respectively.
\([\mathbf{T}]_{(l,:)}\) and \([\mathbf{T}]_{(:,l)}\) represent the \(l\)th row and the \(l\)th column of \(\mathbf{T}\), respectively; \(\mathrm{Re}\left(\mathbf{t}\right)\) and \(\mathrm{Im}\left(\mathbf{t}\right)\) give the real and imaginary parts of \(\mathbf{t}\), respectively; the transpose and Hermitian of matrix \(\mathbf{T}\) are denoted by \(\mathbf{T}^{T}\) and \(\mathbf{T}^{H}\), respectively; \(|.|\) denotes the cardinality of a set; \(\mathbf{I}_{M}\) and \(\mathbf{1}_{s}\) denote the \(M\times M\) identity matrix and the \(1\times s\) all-ones vector, respectively; we use \([a_{1}:a_{2}]\) to denote \(\{i\in\mathbb{Z}:a_{1}\leq i\leq a_{2}\}\); and \(\delta_{i,j}\) is the Kronecker delta.

## II System Model

We consider an unsourced random access model over a block-fading wireless channel. The BS, equipped with \(M\) receive antennas, serves \(K_{T}\) potential users, of which \(K_{a}\) are active in a given frame. Assuming that the channel coherence time is larger than \(L\), we divide the length-\(n\) time-frame into \(S\) slots of length \(L\) each (\(n=SL\)). Each active user randomly selects a single slot to transmit \(B\) bits of information. In the absence of synchronization errors, the received signal vector corresponding to the \(s\)th slot at the \(m\)th antenna is written as \[\mathbf{y}_{m,s}=\sum_{i\in\mathcal{K}_{s}}h_{m,i}\mathbf{x}\left(\mathbf{w}(i)\right)+\mathbf{z}_{m,s}, \tag{1}\] where \(\mathbf{y}_{m,s}\in\mathbb{C}^{1\times L}\), \(\mathcal{K}_{s}\) denotes the set of active user indices available in the \(s\)th slot, \(K_{s}:=|\mathcal{K}_{s}|\), \(\mathbf{x}\left(\mathbf{w}(i)\right)\in\mathbb{C}^{1\times L}\) is the encoded and modulated signal corresponding to the message bit sequence \(\mathbf{w}(i)\in\{0,1\}^{B}\) of the \(i\)th user, \(h_{m,i}\sim\mathcal{CN}(0,1)\) is the channel coefficient between the \(i\)th user and the \(m\)th receive antenna, and \(\mathbf{z}_{m,s}\sim\mathcal{CN}(\mathbf{0},\sigma_{z}^{2}\mathbf{I}_{L})\) is the circularly symmetric complex white Gaussian noise vector. Letting \(\mathcal{K}_{a}\) and \(\mathcal{L}_{d}\) be the set of active user indices and the list of decoded messages, respectively, the PUPE of the system is defined in terms of the probability of false-alarm, \(p_{fa}\), and the probability of missed-detection, \(p_{md}\), as \[P_{e}=p_{fa}+p_{md}, \tag{2}\] where \(p_{md}=\dfrac{1}{K_{a}}\sum_{i\in\mathcal{K}_{a}}\Pr(\mathbf{w}(i)\notin\mathcal{L}_{d})\) and \(p_{fa}=\mathbb{E}\left\{\dfrac{n_{fa}}{|\mathcal{L}_{d}|}\right\}\), with \(n_{fa}\) being the number of decoded messages that were indeed not sent. The energy-per-bit of the set-up can be written as \(\dfrac{E_{b}}{N_{0}}=\dfrac{LP}{\sigma_{z}^{2}B}\), where \(P\) denotes the average power of each user per channel use. The objective is to minimize the required energy-per-bit for a target PUPE.

## III URA with Multiple Stages of Orthogonal Pilots

### _MS-MRA Encoder_

In this part, we introduce a multi-stage signal structure which is used in both of the proposed URA set-ups. As shown in Fig. 1, we divide the message of the \(i\)th user into \(J+1\) parts (one coded part and \(J\) pilot parts) denoted by \(\mathbf{w}_{c}(i)\) and \(\mathbf{w}_{p_{j}}(i),j=1,2,...,J\), with lengths \(B_{c}\) and \(B_{p}\), respectively, where \(B_{c}+JB_{p}=B\).
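Before detailing the pilot construction, the following minimal Python sketch illustrates the slotted model (1) and the energy-per-bit definition. All parameter values are illustrative assumptions, and an i.i.d. Gaussian waveform is used as a placeholder for the actual signal \(\mathbf{x}(\mathbf{w}(i))\) constructed in the sequel.

```python
import numpy as np

rng = np.random.default_rng(1)
K_a, M, S, L, B = 50, 16, 4, 400, 100        # toy parameters (assumed)
sigma_z2, P = 1.0, 0.05                      # noise variance and per-use power

slots = rng.integers(0, S, size=K_a)         # each user picks a slot at random

Y = np.zeros((S, M, L), dtype=complex)       # received signal, one slice per slot
for s in range(S):
    users = np.flatnonzero(slots == s)       # the set K_s of users in slot s
    K_s = len(users)
    # Placeholder for x(w(i)): i.i.d. complex Gaussian rows with power P.
    X = np.sqrt(P / 2) * (rng.normal(size=(K_s, L)) + 1j * rng.normal(size=(K_s, L)))
    # h_{m,i} ~ CN(0, 1) and z_{m,s} ~ CN(0, sigma_z^2 I_L).
    H = (rng.normal(size=(M, K_s)) + 1j * rng.normal(size=(M, K_s))) / np.sqrt(2)
    Z = np.sqrt(sigma_z2 / 2) * (rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L)))
    Y[s] = H @ X + Z                         # slot-level model (1)

EbN0 = L * P / (sigma_z2 * B)                # E_b/N_0 = LP / (sigma_z^2 B)
print(f"Eb/N0 = {10 * np.log10(EbN0):.2f} dB")
```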
The \(i\)th user obtains its \(j\)th pilot sequence, \(\mathbf{b}_{ji}\), with length \(n_{p}=2^{B_{p}}\) by mapping \(\mathbf{w}_{p_{j}}(i)\) to the orthogonal rows of an \(n_{p}\times n_{p}\) Hadamard matrix \(\mathbf{B}_{n_{p}}\), which is generated as \[\mathbf{B}_{2}=\begin{bmatrix}1&1\\ 1&-1\end{bmatrix},\quad\mathbf{B}_{2^{i}}=\mathbf{B}_{2}\otimes\mathbf{B}_{2^{i-1}}\ \ \forall\ \ i=2,3,\ldots,\] where \(\otimes\) represents the Kronecker product. Since the number of possible pilots in the orthogonal Hadamard codebook is limited, it is likely that the users will be in collision in certain pilot segments, that is, they share the same pilots with other users. However, the parameters are chosen such that two different users are in a complete collision in all the pilot parts with a very low probability. To construct the coded sequence of the \(i\)th user, we accumulate all the message bits in a row vector as \[\mathbf{w}(i)=\left[\mathbf{w}_{p_{1}}(i),\mathbf{w}_{p_{2}}(i),\ldots,\mathbf{w}_{p_{J}}(i),\mathbf{w}_{c}(i)\right], \tag{3}\] and pass it to a \((2n_{c},\,B+r)\) polar code, where \(r\) is the number of CRC bits. Note that, contrary to the existing schemes in URA, we feed not only the data bits but also the pilot bit sequences to the encoder. Hence, in the case of successful decoding, all the pilot sequences for the user can be retrieved. The polar codeword is then modulated using quadrature phase shift keying (QPSK), resulting in \(\mathbf{v}_{i}\in\{\sqrt{P_{c}/2}(\pm 1\pm j)\}^{1\times n_{c}}\), where \(P_{c}\) is the average power of the polar coded part, and Gray mapping is used. The overall transmitted signal for the \(i\)th user consists of \(J\) pilot parts and one coded part, i.e., \[\mathbf{x}_{i}=\left[\sqrt{P_{p}}\mathbf{b}_{1i},\sqrt{P_{p}}\mathbf{b}_{2i},\ldots,\sqrt{P_{p}}\mathbf{b}_{Ji},\mathbf{v}_{i}\right]\in\mathbb{C}^{1\times L}, \tag{4}\] where \(L=n_{c}+Jn_{p}\) and \(P_{p}\) denotes the average power of the pilot sequence. Accordingly, the received signal in a slot is composed of \(J+1\) parts, for which, at each iteration, the decoding is done by employing one of the \(J\) pilot parts (sequentially) and the coded part of the received signal. Generally, only the non-colliding users can be decoded. Some non-colliding users in the current pilot stage may experience collisions in the other pilot parts. Therefore, by successfully decoding and removing them using SIC, the collision density is reduced, and with further decoding iterations, the effects of such collisions are ameliorated.

### _MS-MRA Decoder_

We now introduce the decoding steps of MS-MRA, where the transmitted signal in (4) is received by \(M\) antennas through a fading channel. The \(j\)th pilot part and the polar coded part of the received signal in the \(s\)th slot of the MS-MRA can be modeled using (1) as \[\mathbf{Y}_{p_{j}}=\sqrt{P_{p}}\mathbf{H}\mathbf{B}_{j}+\mathbf{Z}_{p_{j}}\in\mathbb{C}^{M\times n_{p}},\ j=1,2,\ldots,J, \tag{5}\] \[\mathbf{Y}_{c}=\mathbf{H}\mathbf{V}+\mathbf{Z}_{c}\in\mathbb{C}^{M\times n_{c}}, \tag{6}\] where \(\mathbf{H}\in\mathbb{C}^{M\times K_{s}}\) is the channel coefficient matrix with \(h_{m,i}\) in its \(m\)th row and \(i\)th column, \(\mathbf{Z}_{p_{j}}\) and \(\mathbf{Z}_{c}\) consist of independent and identically distributed (i.i.d.)
noise samples drawn from \(\mathcal{CN}(0,\sigma_{z}^{2})\) (i.e., a circularly symmetric complex Gaussian distribution), and \(\mathbf{b}_{ji}\) and \(\mathbf{v}_{i}\) determine the rows of \(\mathbf{B}_{j}\in\{\pm 1\}^{K_{s}\times n_{p}}\) and \(\mathbf{V}\in\{\sqrt{P_{c}/2}(\pm 1\pm j)\}^{K_{s}\times n_{c}}\), respectively, with \(i\in\mathcal{K}_{s}\). Note that we have removed the slot indices from the above matrices to simplify the notation. The decoding process is comprised of five different steps that work in tandem. A pilot detector based on a Neyman-Pearson (NP) test identifies the active pilots in the current pilot part; channel coefficients corresponding to the detected pilots are estimated using a channel estimator; maximum-ratio combining (MRC) is used to produce a soft estimate of the modulated signal; after demodulation, the signal is passed to a polar list decoder; and the successfully decoded codewords are added to the list of successfully decoded signals before being subtracted from the received signal via SIC. The process is repeated until there are no successfully decoded users in \(J\) consecutive SIC iterations.

Fig. 1: Illustration of the encoding process in the proposed MS-MRA schemes.

In the following, \(\mathbf{Y}_{p_{j}}^{\prime}\) and \(\mathbf{Y}_{c}^{\prime}\) denote the received signals in (5) and (6) after removing the list of messages successfully decoded in the current slot up to the current iteration.

#### III-B1 Pilot Detection Based on NP Hypothesis Testing

At the \(j\)th pilot part, we can write the following binary hypothesis testing problem: \[\mathbf{u}_{ji}|\mathcal{H}_{0}\sim\mathcal{CN}\left(\mathbf{0},\sigma_{z}^{2}\mathbf{I}_{M}\right),\] \[\mathbf{u}_{ji}|\mathcal{H}_{1}\sim\mathcal{CN}\left(\mathbf{0},\sigma_{1}^{2}\mathbf{I}_{M}\right), \tag{7}\] where \(\sigma_{1}=\sqrt{\sigma_{z}^{2}+m_{ij}n_{p}P_{p}}\), \(\mathbf{u}_{ji}:=\mathbf{Y}_{p_{j}}^{\prime}\mathbf{\tilde{b}}_{i}^{H}/\sqrt{n_{p}}\), with \(\mathbf{\tilde{b}}_{i}=\left[\mathbf{B}_{n_{p}}\right]_{(i,:)}\), \(\mathcal{H}_{1}\) and \(\mathcal{H}_{0}\) are the alternative and null hypotheses indicating the existence and absence of the pilot \(\mathbf{\tilde{b}}_{i}\) in the \(j\)th pilot part, respectively, and \(m_{ij}\) is the number of users that pick the pilot \(\mathbf{\tilde{b}}_{i}\) as their \(j\)th pilot.

**Lemma 1**.: _([25, Appendix A]): Let \(\hat{\mathcal{D}}_{j}\) be the estimate of the set of active rows of \(\mathbf{B}_{n_{p}}\) in the \(j\)th pilot part._
_Using a \(\gamma\)-level Neyman-Pearson hypothesis test (where \(\gamma\) is the bound on the false-alarm probability), \(\hat{\mathcal{D}}_{j}\) can be obtained as_ \[\hat{\mathcal{D}}_{j}=\left\{l:\mathbf{u}_{jl}^{H}\mathbf{u}_{jl}\geq\tau_{0}^{\prime}\right\}, \tag{8}\] _where \(\tau_{0}^{\prime}=0.5\sigma_{z}^{2}\Gamma_{2M}^{-1}(1-\gamma)\), \(\Gamma_{k}(.)\) denotes the cumulative distribution function of the chi-squared distribution with \(k\) degrees of freedom \(\chi_{k}^{2}\), and \(\Gamma_{k}^{-1}(.)\) is its inverse._

The detection probability of a non-colliding user (\(m_{ij}=1\)) is then obtained as \[P_{D}(\delta_{NP})=\mathbb{P}\left(\mathbf{u}_{ji}^{H}\mathbf{u}_{ji}\geq\tau_{0}^{\prime}|\mathcal{H}_{1}\right)\overset{(a)}{=}1-\Gamma_{2M}\left(\frac{2\tau_{0}^{\prime}}{\sigma_{z}^{2}+n_{p}P_{p}}\right)=1-\Gamma_{2M}\left(\frac{\sigma_{z}^{2}\Gamma_{2M}^{-1}(1-\gamma)}{\sigma_{z}^{2}+n_{p}P_{p}}\right), \tag{9}\] where in (a), we use the fact that \(\frac{2}{\sigma_{1}^{2}}\mathbf{u}_{ji}^{H}\mathbf{u}_{ji}|\mathcal{H}_{1}\sim\chi_{2M}^{2}\). Note that a higher probability of detection is obtained in the general case of \(m_{ij}>1\). It is clear that the probability of detection is increased by increasing the parameters \(\gamma\), \(n_{p}\), \(P_{p}\), and \(M\).

#### III-B2 Channel Estimation

Let \(\mathbf{B}_{\hat{\mathcal{D}}_{j}}\in\{\pm 1\}^{|\hat{\mathcal{D}}_{j}|\times n_{p}}\) be a sub-matrix of \(\mathbf{B}_{n_{p}}\) consisting of the detected pilots in (8), and suppose that \(\mathbf{\tilde{b}}_{jk}=\left[\mathbf{B}_{\hat{\mathcal{D}}_{j}}\right]_{(k,:)}\) is the corresponding pilot of the \(i\)th user. Since the rows of the codebook are orthogonal to each other, the channel coefficient vector of the \(i\)th user can be estimated as \[\hat{\mathbf{h}}_{i}=\frac{1}{n_{p}\sqrt{P_{p}}}\mathbf{Y}_{p_{j}}^{\prime}\tilde{\mathbf{b}}_{jk}^{T}. \tag{10}\] If the \(i\)th user is in a collision (\(m_{ij}>1\)), (10) gives an unreliable estimate of the channel coefficient vector. However, this is not important since a CRC check is employed after decoding, and such errors do not propagate.

#### III-B3 MRC, Demodulation, and Channel Decoding

Let \(\mathbf{h}_{i}\) be the channel coefficient vector of the \(i\)th user, where \(i\in\tilde{\mathcal{S}}_{s}\), with \(\tilde{\mathcal{S}}_{s}\) denoting the set of remaining users in the \(s\)th slot. Using \(\hat{\mathbf{h}}_{i}\) in (10), the modulated signal of the \(i\)th user can be estimated employing the MRC technique as \[\hat{\mathbf{v}}_{i}=\hat{\mathbf{h}}_{i}^{H}\mathbf{Y}_{c}^{\prime}. \tag{11}\] Plugging (6) into (11), \(\hat{\mathbf{v}}_{i}\) is written as \[\hat{\mathbf{v}}_{i}=\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{i}\mathbf{v}_{i}+\mathbf{n}_{i}, \tag{12}\] where \(\mathbf{n}_{i}=\sum_{k\in\tilde{\mathcal{S}}_{s},k\neq i}\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{k}\mathbf{v}_{k}+\hat{\mathbf{h}}_{i}^{H}\mathbf{Z}_{c}\). The first and second terms on the right-hand side of (12) are the signal and interference-plus-noise terms, respectively.
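A minimal Python sketch of the receiver chain described so far — the NP threshold test (8), the channel estimate (10), and the MRC combiner (11) — is given below. The parameters and the set of active pilots are toy assumptions of our own; `chi2.ppf` supplies the inverse CDF \(\Gamma_{2M}^{-1}\).

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
M, n_p, P_p, sigma_z2, gamma = 16, 64, 0.1, 1.0, 1e-3

# Orthogonal pilot codebook: rows of the n_p x n_p Hadamard matrix B_{n_p}.
B = np.array([[1.0]])
while B.shape[0] < n_p:
    B = np.block([[B, B], [B, -B]])

active = [3, 17, 42]                          # true pilot indices (assumed)
H = (rng.normal(size=(M, len(active))) + 1j * rng.normal(size=(M, len(active)))) / np.sqrt(2)
Z = np.sqrt(sigma_z2 / 2) * (rng.normal(size=(M, n_p)) + 1j * rng.normal(size=(M, n_p)))
Yp = np.sqrt(P_p) * H @ B[active, :] + Z      # pilot part of the slot, cf. (5)

# NP test (8): u_{jl} = Y'_{p_j} b_l^H / sqrt(n_p) against the threshold tau_0'.
U = Yp @ B.T / np.sqrt(n_p)                   # column l is u_{jl}
tau0 = 0.5 * sigma_z2 * chi2.ppf(1 - gamma, df=2 * M)
detected = np.flatnonzero(np.sum(np.abs(U) ** 2, axis=0) >= tau0)
print("detected pilots:", detected)           # typically recovers {3, 17, 42}

# Channel estimate (10) for the first detected pilot; MRC (11) would then form
# v_hat = h_hat.conj() @ Yc once the coded part Yc is available.
h_hat = Yp @ B[detected[0], :].T / (n_p * np.sqrt(P_p))
```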
We can approximate \(\mathbf{n}_{i}\) to be Gaussian distributed, i.e., \(\mathbf{n}_{i}\sim\mathcal{CN}(\mathbf{0},\sigma_{oi}^{2}\mathbf{I}_{n_{c}})\), where \(\sigma_{oi}^{2}=\frac{1}{n_{c}}\mathbb{E}\{\mathbf{n}_{i}\mathbf{n}_{i}^{H}\}=P_{c}\sum_{k\in\hat{\mathcal{D}}_{j},k\neq i}|\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{k}|^{2}+\sigma_{z}^{2}\|\hat{\mathbf{h}}_{i}\|^{2}\), which is obtained by treating the coded data sequences of different users as uncorrelated. The demodulated signal can be obtained as \[\mathbf{g}_{i}=\left[\operatorname{Im}\left(\vartheta_{1i}\right),\operatorname{Re}\left(\vartheta_{1i}\right),\ldots,\operatorname{Im}\left(\vartheta_{n_{c}i}\right),\operatorname{Re}\left(\vartheta_{n_{c}i}\right)\right], \tag{13}\] where \(\vartheta_{ti}=[\hat{\mathbf{v}}_{i}]_{(:,t)}\). From (12) and (13), and using \(\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{i}\approx\|\hat{\mathbf{h}}_{i}\|^{2}\), each sample of \(\mathbf{g}_{i}\) can be approximated as \(\pm\sqrt{P_{c}/2}\|\hat{\mathbf{h}}_{i}\|^{2}+n^{\prime}\), where \(n^{\prime}\sim\mathcal{N}\left(0,\frac{\sigma_{oi}^{2}}{2}\right)\). The following log-likelihood ratio (LLR) is obtained as the input to the polar list decoder \[\mathbf{f}_{i}=\frac{2\sqrt{2P_{c}}\|\hat{\mathbf{h}}_{i}\|^{2}}{\hat{\sigma}_{oi}^{2}}\mathbf{g}_{i}, \tag{14}\] where \(\hat{\sigma}_{oi}^{2}\) is an approximation of \(\sigma_{oi}^{2}\), which is obtained by replacing the \(\mathbf{h}_{k}\)'s with their estimates. At the \(j\)th pilot part, the \(i\)th user is declared to be successfully decoded if 1) its decoder output satisfies the CRC check, and 2) mapping the \(j\)th pilot part of its decoded message to the Hadamard codebook yields \(\mathbf{\tilde{b}}_{jk}\). Then, all the successfully decoded messages (in the current and previous iterations) are accumulated in the set \(\mathcal{S}_{s}\), where \(|\mathcal{S}_{s}|+|\tilde{\mathcal{S}}_{s}|=K_{s}\).

#### III-B4 SIC

We can see in (3) that the successfully decoded messages contain the bit sequences of the pilot parts and the coded part (\(\mathbf{w}_{p_{j}}(i),j=1,2,...,J\), and \(\mathbf{w}_{c}(i)\)). Having the bit sequences of the successfully decoded messages, we can construct the corresponding transmitted signals using (4). The received signal matrix can be written as \[\mathbf{Y}=\mathbf{H}_{\mathcal{S}_{s}}\mathbf{X}_{\mathcal{S}_{s}}+\mathbf{H}_{\tilde{\mathcal{S}}_{s}}\mathbf{X}_{\tilde{\mathcal{S}}_{s}}+\mathbf{Z}_{s}, \tag{15}\] where \(\mathbf{Y}\) is obtained by merging the received signal matrices of the different parts, i.e., \[\mathbf{Y}=[\mathbf{Y}_{p_{1}},\ldots,\mathbf{Y}_{p_{J}},\mathbf{Y}_{c}]\in\mathbb{C}^{M\times L},\] with \(\mathbf{X}_{\mathcal{S}_{s}}\in\mathbb{C}^{|\mathcal{S}_{s}|\times L}\) and \(\mathbf{X}_{\tilde{\mathcal{S}}_{s}}\in\mathbb{C}^{|\tilde{\mathcal{S}}_{s}|\times L}\) including the signals in the sets \(\mathcal{S}_{s}\) and \(\tilde{\mathcal{S}}_{s}\), and \(\mathbf{H}_{\mathcal{S}_{s}}\in\mathbb{C}^{M\times|\mathcal{S}_{s}|}\) and \(\mathbf{H}_{\tilde{\mathcal{S}}_{s}}\in\mathbb{C}^{M\times|\tilde{\mathcal{S}}_{s}|}\) containing the corresponding channel coefficient vectors. To perform SIC, the channel coefficients of the successfully decoded users are re-estimated using the MMSE estimator \[\hat{\mathbf{H}}_{\mathcal{S}_{s}}=\mathbf{Y}\mathbf{X}_{\mathcal{S}_{s}}^{H}\left(\mathbf{X}_{\mathcal{S}_{s}}\mathbf{X}_{\mathcal{S}_{s}}^{H}+\sigma_{z}^{2}\mathbf{I}_{|\mathcal{S}_{s}|}\right)^{-1}, \tag{16}\] where \(\mathbf{Y}\) is the initially received signal
We note that if no user is successfully decoded in \(J\) consecutive iterations (corresponding to \(J\) different pilot parts), the algorithm is stopped. The details of the decoding stages of MS-MRA are shown in Fig. 2 and Algorithm 1. Note that we will discuss MS-MRA-WOPBE, which deviates from the above model, in Section III-D. **Theorem 1**.: _The signal-to-interference-plus-noise ratio (SINR) at the output of MRC for a non-colliding user in the \(s\)th slot can be approximated as_ \[\beta_{s}\approx\frac{\omega_{e_{s}}P_{c}\left(\omega_{p_{s}}\mathbb{E}\{\| \mathbf{h}_{i}\|^{4}\}+\frac{\sigma_{z}^{2}}{n_{p}P_{p}}\mathbb{E}\{\|\mathbf{ h}_{i}\|^{2}\}\right)}{\left(P_{c}(|\mathcal{\bar{S}}_{s}|-1)+\sigma_{z}^{2} \right)\left(\omega_{p_{s}}\mathbb{E}\{\|\mathbf{h}_{i}\|^{2}\}+\frac{M\sigma_ {z}^{2}}{n_{p}P_{p}}\right)}, \tag{18}\] _where \(\omega_{p_{s}}=\omega_{c_{s}}=1-\frac{|\mathcal{S}_{s}|}{L}\) if the transmitted signals are randomly interleaved, and \(\omega_{p_{s}}=1-\frac{1}{E_{x}}P_{p}|\mathcal{S}_{s}|\), \(\omega_{c_{s}}=1-\frac{1}{E_{x}}P_{c}|\mathcal{S}_{s}|\), otherwise, with \(E_{x}=Jn_{p}P_{p}+n_{c}P_{c}\)._ Proof.: See Appendix A. We employ the above approximate SINR expression 1) to estimate the error probability of MS-MRA analytically, and 2) to determine the optimal power allocation for each group in MSUG-MRA. We further note that using this SINR approximation, the performance of the MS-MRA is well predicted in the low and medium \(K_{a}\) regimes (see Fig. 6). The reason why the SINR approximation does not work well in the high \(K_{a}\) regime is the employed approximations in Lemma 4 (see the Appendix A for details). ### _Analysis of MS-MRA_ In this part, the PUPE of the MS-MRA is analytically calculated, where errors resulting from the collision, pilot detection, and polar decoder are considered. For our analyses, we assume that after successfully decoding and removing a user using a pilot part, the decoder moves to the next pilot part. Hence, in the \(t\)th iteration of the \(s\)th slot, we have \[|\mathcal{S}_{s}| =t-1, \tag{19a}\] \[|\mathcal{\bar{S}}_{s}| =K_{s}-t+1. \tag{19b}\] **Lemma 2**.: _Let \(\xi_{k}\) be the event that \(k\) out of \(K_{s}\) users remain in the \(s\)th slot, and define \(\eta_{i}:=\|\mathbf{h}_{i}\|^{2}\), where \(\mathbf{h}_{i}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{M})\). Assuming that the strongest users with highest \(\eta_{i}\) values are decoded first, we have_ \[\mathbb{E}\{\eta_{i}^{m}|\xi_{k}\}=\mu_{(k,m)}, \tag{20}\] _where \(\mu_{(k,m)}:=\frac{\int_{-\infty}^{\hat{x}_{k}}\eta^{m}f_{2M}^{\hat{x}^{2}}( 2\eta)d\eta}{\int_{-\infty}^{\hat{x}_{k}}f_{2M}^{\hat{x}^{2}}(2\eta)d\eta}\), with \(f_{k}^{\hat{x}^{2}}(.)\) denoting the PDF of the chi-squared distribution with \(k\) degrees of Fig. 2: The decoding process of MS-MRA at the \(j\)th pilot part and the \(s\)th slot. freedom and \(\bar{x}_{k}=0.5\Gamma_{2M}^{-1}(k/K_{s})\)._ Proof.: In the first iteration of the \(s\)th slot for which no user is decoded yet (all the \(K_{s}\) active users are available), since \(\mathbf{h}_{i}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{M})\), we have \(2\eta_{i}|\xi_{K_{s}}\sim\chi_{2M}^{2}\). We assume that the users with higher values of \(\eta_{i}\) are decoded first. 
Hence, if in an iteration, \(k\) out of \(K_{s}\) users remain in the slot, the distribution of \(\eta_{i}\) is obtained by \(2\eta_{i}|\xi_{k}\sim\left\{\chi_{2M}^{2}\right\}_{k/K_{s}},\) where \(\left\{.\right\}_{\beta}\) removes the \(1-\beta\) portion of the samples with higher values from the distribution and normalizes the distribution of the remaining samples, i.e., \[\mathbb{P}(\eta_{i}=y|\xi_{k})=\frac{f_{2M}^{\chi^{2}}(2y)}{\int_{-\infty}^{\bar{x}_{k}}f_{2M}^{\chi^{2}}(2y)dy},\quad y<\bar{x}_{k},\] where \(\bar{x}_{k}\) is obtained by solving the equation \(\mathbb{P}\left(\eta_{i}<\bar{x}_{k}|\xi_{K_{s}}\right)=k/K_{s}\), which results in \(\bar{x}_{k}=0.5\Gamma_{2M}^{-1}(k/K_{s})\). Therefore, we obtain \[\mathbb{E}\{\eta_{i}^{m}|\xi_{k}\}=\frac{\int_{-\infty}^{\bar{x}_{k}}\eta^{m}f_{2M}^{\chi^{2}}(2\eta)d\eta}{\int_{-\infty}^{\bar{x}_{k}}f_{2M}^{\chi^{2}}(2\eta)d\eta}. \tag{21}\]

We can see from (13) and (14) that the input of the polar decoder is a \(1\times 2n_{c}\) real codeword. Thus, the average decoding error probability of a non-colliding user in the \(t\)th iteration of a slot with \(K_{s}\) users can be approximated as (see [27]) \[P_{K_{s},t}^{dec}\approx Q\left(\frac{0.5\log\left(1+\alpha_{K_{s},t}\right)-\frac{B+r}{2n_{c}}}{\sqrt{\frac{1}{2n_{c}}\frac{\alpha_{K_{s},t}(\alpha_{K_{s},t}+2)\log^{2}e}{2(\alpha_{K_{s},t}+1)^{2}}}}\right), \tag{22}\] where \(Q(.)\) denotes the standard \(Q\)-function, and \(\alpha_{K_{s},t}\) is the SINR of a non-colliding user in the \(t\)th iteration of a slot with \(K_{s}\) users, which is calculated using Theorem 1, Lemma 2, and (19) as \[\alpha_{K_{s},t}\approx\frac{s_{c_{t}}P_{c}\left(s_{p_{t}}\mu_{(K_{s}-t+1,2)}+\frac{\sigma_{z}^{2}}{n_{p}P_{p}}\mu_{(K_{s}-t+1,1)}\right)}{\left(P_{c}(K_{s}-t)+\sigma_{z}^{2}\right)\left(s_{p_{t}}\mu_{(K_{s}-t+1,1)}+\frac{M\sigma_{z}^{2}}{n_{p}P_{p}}\right)}, \tag{23}\] where \(s_{p_{t}}=1-P_{p}\frac{t-1}{E_{x}}\) and \(s_{c_{t}}=1-P_{c}\frac{t-1}{E_{x}}\). Note that since the powers of the signal and interference-plus-noise terms of \(\hat{\mathbf{v}}_{i}\) are equal in their real and imaginary parts, the SINRs of \(\mathbf{f}_{i}\) in (14) and \(\hat{\mathbf{v}}_{i}\) are the same. Therefore, in (22), we employ the SINR calculated in Theorem 1 for the input of the polar list decoder. Since decoding in the initial iterations well represents the overall decoding performance of the MS-MRA, we approximate the SINR of the first iteration by setting \(t=1\) in (23) as \[\alpha_{K_{s},1}\approx\frac{P_{c}M}{(\sigma_{z}^{2}+P_{c}K_{s})\left(1+\frac{\sigma_{z}^{2}}{n_{p}P_{p}}\right)}. \tag{24}\] Concentrating on (22), we notice that \(P_{K_{s},1}^{dec}\) is a decreasing function of \(n_{c}\) and \(\alpha_{K_{s},1}\). Besides, (24) shows that \(\alpha_{K_{s},1}\) increases by decreasing \(n_{c}\) and \(J\) (considering \(K_{s}\approx K_{a}(Jn_{p}+n_{c})/n\)), and increasing \(M\), \(P_{c}\), and \(P_{p}\); however, it is not a strictly monotonic function of \(n_{p}\). Since our goal is to achieve the lowest \(P_{K_{s},t}^{dec}\) by spending the minimum \(E_{b}/N_{0}=(n_{c}P_{c}+Jn_{p}P_{p})/B\), we can optimize the parameters \(n_{c}\), \(n_{p}\), \(P_{c}\), and \(P_{p}\).
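For concreteness, the following short Python sketch evaluates the first-iteration SINR approximation (24) and plugs it into the normal approximation (22); `norm.sf` implements the \(Q\)-function, and the operating point is an assumed toy example of our own, not a simulation setting from this paper.

```python
import numpy as np
from scipy.stats import norm

def sinr_first_iter(M, K_s, P_c, P_p, n_p, sigma_z2):
    """First-iteration SINR approximation (24)."""
    return P_c * M / ((sigma_z2 + P_c * K_s) * (1 + sigma_z2 / (n_p * P_p)))

def p_dec(alpha, n_c, B, r):
    """Decoding-error approximation (22) for a (2 n_c, B + r) polar code."""
    cap = 0.5 * np.log2(1 + alpha)                       # 0.5 log(1 + alpha)
    rate = (B + r) / (2 * n_c)
    disp = alpha * (alpha + 2) * np.log2(np.e) ** 2 / (2 * (alpha + 1) ** 2)
    return norm.sf((cap - rate) / np.sqrt(disp / (2 * n_c)))

# Assumed operating point.
M, K_s, n_p, n_c, B, r = 32, 25, 64, 256, 100, 11
sigma_z2, P_p, P_c = 1.0, 0.1, 0.05
alpha = sinr_first_iter(M, K_s, P_c, P_p, n_p, sigma_z2)
print(f"SINR = {alpha:.3f}, P_dec ~ {p_dec(alpha, n_c, B, r):.3e}")
```

Sweeping such a routine over \(n_{c}\), \(n_{p}\), \(P_{c}\), and \(P_{p}\) is one way to carry out the parameter optimization mentioned above.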
**Theorem 2**.: _In the \(t\)th iteration of the \(s\)th slot, the probability of collision for a remaining user \(i\in\tilde{\mathcal{S}}_{s}\) can be approximated as_ \[P_{K_{s},t}^{col}\approx 1-\frac{N_{1}^{(t)}}{K_{s}-t+1}, \tag{25}\] _where \(N_{i}^{(k)}\) denotes the average number of pilots that are in \(i\)-collision (selected by \(i\) different users) in the \(k\)th iteration, which is calculated as_ \[N_{i}^{(k+1)}\approx N_{i}^{(k)}+\begin{cases}\kappa_{k}\left((i+1)N_{i+1}^{(k)}-iN_{i}^{(k)}\right)&i\geq 2\\ \kappa_{k}\left(2N_{2}^{(k)}-N_{1}^{(k)}\right)-\frac{1}{J}&i=1\end{cases}, \tag{26}\] _where \(\kappa_{k}=\frac{J-1}{J(K_{s}-k+1)}\), and \(N_{i}^{(1)}\approx n_{p}f_{p}(i;K_{s}/n_{p})\), with \(f_{p}(i;a)\) denoting the probability mass function (PMF) of the Poisson distribution with parameter \(a\)._

Proof.: See Appendix B.

Note that to extend the result in Theorem 2 to an SIC-based system with only one pilot sequence (orthogonal or non-orthogonal), we only need to set \(J=1\) in the above expressions. From (25), the collision probability in the first iteration can be calculated as \(P_{K_{s},1}^{col}\approx 1-e^{-K_{s}/n_{p}}\), which is a decreasing function of \(n_{p}\). Since the overall decoding performance of the system depends dramatically on the collision probability in the first iteration, we can increase \(n_{p}\); however, this results in additional overhead.

**Corollary 1**.: _Assuming a relatively large CRC length (hence negligible \(p_{fa}\)), the PUPE of the MS-MRA with \(S\) slots and \(K_{a}\) active users can be approximated as_ \[P_{e}\approx 1-\sum_{r=1}^{K_{a}}(1-\epsilon_{r})\binom{K_{a}-1}{r-1}\left(\frac{1}{S}\right)^{r-1}\left(1-\frac{1}{S}\right)^{K_{a}-r}, \tag{27}\] _where \(\epsilon_{r}\) denotes the PUPE of a slot with \(r\) users, which is obtained as_ \[\epsilon_{r}\approx\sum_{j=1}^{r}\frac{r-j+1}{r}p_{j,r}, \tag{28}\] _with \(p_{j,r}=(e_{j,r})^{r-j+1}\prod_{f=1}^{j-1}\left(1-(e_{f,r})^{r-f+1}\right)\), and_ \[e_{t,r}=1-P_{D}(\delta_{NP})\left(1-P_{r,t}^{dec}\right)\left(1-P_{r,t}^{col}\right), \tag{29}\] _where \(P_{r,t}^{dec}\), \(P_{r,t}^{col}\), and \(P_{D}(\delta_{NP})\) are computed in (22), Theorem 2, and (9), respectively._

Note that the result in Corollary 1 can also be used in any other slotted system with SIC by replacing the appropriate \(e_{j,r}\).

### _MS-MRA-WOPBE_

As discussed in Section III-A, in the MS-MRA scheme, the pilot bits are fed to the polar encoder along with the data and CRC bits. To improve the performance by decreasing the coding rate, the MS-MRA-WOPBE scheme passes only the data and CRC bits to the encoder. To detect the bit sequences of the different parts of the message, it employs an extra iterative decoding block called the iterative inter-symbol decoder (IISD), described in Section III-D2. At each step of IISD, the receiver detects one part of a user's signal (the polar part or a pilot part), appends the detected part to the current pilot (which was used for channel estimation in the previous step) to obtain an extended pilot, and re-estimates the channel coefficients accordingly. The encoding and decoding procedures of MS-MRA-WOPBE are described below.

#### III-D1 Encoder

The \(i\)th user encodes its bits using the following steps (the general construction is shown in Fig. 1). Similar to the MS-MRA encoder in Section III-A, \(B\) information bits are divided into \(J+1\) parts as in (3), and the transmitted signal is generated as in (4). The only difference is in the construction of the QPSK signal.
The encoder in MS-MRA-WOPBE defines two CRC bit sequences as \(\mathbf{c}_{2}(i)=\mathbf{w}(i)\mathbf{G}_{2}\) and \(\mathbf{c}_{1}(i)=[\mathbf{w}_{c}(i),\mathbf{c}_{2}(i)]\mathbf{G}_{1}\), where \(\mathbf{G}_{2}\in\{0,1\}^{B\times r_{2}}\) and \(\mathbf{G}_{1}\in\{0,1\}^{(B_{c}+r_{2})\times r_{1}}\) are generator matrices known by the BS and users. Then, it passes \([\mathbf{w}_{c}(i),\mathbf{c}_{2}(i),\mathbf{c}_{1}(i)]\) to a \((2n_{c},~{}B_{c}+r_{1}+r_{2})\) polar encoder, and modulates the output by QPSK to obtain \(\mathbf{v}_{i}\in\{\sqrt{P_{c}/2}(\pm 1\pm j)\}^{1\times n_{c}}\). #### III-D2 Decoder As shown in Algorithm 1, MS-MRA-WOPBE exploits the same decoding steps as the MS-MRA scheme, except for the IISD step. We can see in Algorithm 1 that the \(j\)th pilot of the \(i\)th user is detected before employing the IISD. Then, IISD must detect the data (polar) sequence and the \(f\)th pilot of the \(i\)th user, where \(f=1,...,J,\,f\neq j\). In the following, IISD is described in detail. **Step 1** [Detecting \(\mathbf{w}_{c}(i)\,\forall i\in\hat{\mathcal{D}}_{j}\)]: We first obtain \(\mathbf{g}_{i}\) using (13), where \(\hat{\mathbf{v}}_{i}=\hat{\mathbf{h}}_{i}^{H}\mathbf{R}_{h}^{-1}\mathbf{Y^{\prime}}_{c}\), and \(\mathbf{R}_{h}=\sigma_{z}^{2}\mathbf{I}_{M}+P_{c}\sum_{l\in\hat{\mathcal{D}}_{j}}\hat{\mathbf{h}}_{l}\hat{\mathbf{h}}_{l}^{H}\). Then, we pass \(\mathbf{f}_{i}=\frac{2\sqrt{2P_{c}}}{1-P_{c}\hat{\mathbf{h}}_{i}^{H}\mathbf{R}_{h}^{-1}\hat{\mathbf{h}}_{i}}\mathbf{g}_{i}\) to the list decoder. A CRC check \(\mathrm{flag}_{\mathrm{CRC1}}(i)\in\{0,1\}\) and an estimate of \([\mathbf{w}_{c}(i),\mathbf{c}_{2}(i),\mathbf{c}_{1}(i)]\)1 are obtained by the polar list decoder. Footnote 1: In the output of the polar list decoder, there is a list of possible messages. If more than one message satisfies the CRC check (\(\mathbf{c}_{1}(i)=[\mathbf{w}_{c}(i),\mathbf{c}_{2}(i)]\mathbf{G}_{1}\)), the most likely of them is returned as the detected message and the CRC flag is set to zero. **Step 2** [Updating \(\hat{\mathbf{h}}_{i}\)]: Since the \(j\)th pilot and the polar codeword of the \(i\)th user have been detected so far, we append them to construct a longer signal \(\mathbf{q}_{i}=[\mathbf{b}_{ji},\mathbf{v}_{i}]\in\mathbb{C}^{1\times(n_{p}+n_{c})}\). Then, we update \(\hat{\mathbf{h}}_{i}\) by MMSE estimation as \(\hat{\mathbf{h}}_{i}=\mathbf{Y^{\prime}}_{q}\mathbf{R}_{q}^{-1}\mathbf{q}_{i}^{H}\), where \(\mathbf{R}_{q}=\sigma_{z}^{2}\mathbf{I}_{(n_{p}+n_{c})}+\sum_{l\in\hat{\mathcal{D}}_{j}}\mathbf{q}_{l}^{H}\mathbf{q}_{l}\), and \(\mathbf{Y^{\prime}}_{q}=[\mathbf{Y^{\prime}}_{p},\mathbf{Y^{\prime}}_{c}]\). **Step 3** [Detecting \(\mathbf{w}_{p_{f}}(i)\,\forall i\in\hat{\mathcal{D}}_{j},f\neq j\)]: Assuming that the \(t\)th row of the Hadamard matrix is active in the \(f\)th pilot part (\(f\neq j\)), we estimate the corresponding channel coefficient as \(\mathbf{s}_{ft}=\frac{1}{n_{p}\sqrt{P_{p}}}\mathbf{Y^{\prime}}_{p_{f}}\tilde{\mathbf{b}}_{ft}^{T}\) (see (10)). To find the \(f\)th pilot sequence of the \(i\)th user, we find the pilot whose corresponding channel coefficient vector is most similar to \(\hat{\mathbf{h}}_{i}\), i.e., we maximize the correlation between \(\hat{\mathbf{h}}_{i}\) and \(\mathbf{s}_{ft}\) as \[\hat{t}_{fi}=\arg\max_{t}\frac{|\hat{\mathbf{h}}_{i}^{H}\mathbf{s}_{ft}|^{2}}{\mathbf{s}_{ft}^{H}\mathbf{s}_{ft}},\quad f=1,...,J,\,f\neq j.
\tag{30}\] **Step 4** [Updating \(\hat{\mathbf{h}}_{i}\)]: Since the bit sequences of all \(J+1\) parts have been detected, we can construct \(\mathbf{x}_{i}\) using (4). The channel coefficient vector can be updated by MMSE as \(\hat{\mathbf{h}}_{i}=\mathbf{Y^{\prime}}\mathbf{R}^{-1}\mathbf{x}_{i}^{H}\), where \(\mathbf{R}=\sigma_{z}^{2}\mathbf{I}_{L}+\sum_{l\in\hat{\mathcal{D}}_{j}}\mathbf{x}_{l}^{H}\mathbf{x}_{l}\). If the number of users that satisfy \(\mathrm{flag}_{\mathrm{CRC1}}(i)=1\) is not changed in an iteration, the iteration is stopped; otherwise, the algorithm goes to Step 1 for another iteration with the updated \(\hat{\mathbf{h}}_{i}\). Users whose bit sequences satisfy \(\mathbf{c}_{2}(i)=\mathbf{w}(i)\mathbf{G}_{2}\) and \(\mathbf{c}_{1}(i)=[\mathbf{w}_{c}(i),\mathbf{c}_{2}(i)]\mathbf{G}_{1}\) are added to the set \(\mathcal{S}_{t^{\prime}j}\) as successfully decoded users of the current iteration.
### _MSUG-MRA_
Different from MS-MRA, where the power of every user is the same and signals are not interleaved, MSUG-MRA defines \(G\) groups, each being assigned a unique interleaver and power pair (\(\pi_{g}(.),P_{p_{g}},P_{c_{g}}\)), \(g=1,2,...,G\). We assume that \(\phi=\frac{P_{p_{g}}}{P_{c_{g}}}\) is constant in all groups, hence each group can be characterized by a unique interleaver-power pair \((\pi_{g}(.),P_{c_{g}})\), which is known at both the transmitter and receiver sides. The details of the encoding and decoding procedures as well as the power selection strategy are explained below. Note that we assume without loss of generality that \(P_{c_{1}}<P_{c_{2}}<...<P_{c_{G}}\). #### III-E1 Encoder The encoding is performed as follows: * Every user randomly selects a group, e.g., with index \(g\). * Each user employs \(P_{c_{g}}\) and \(\phi P_{c_{g}}\) as the powers of the coded and pilot parts, with which it generates its multi-stage signal \(\mathbf{x}_{i}\) similar to MS-MRA (according to (4)). * The transmitted signal is created as \(\tilde{\mathbf{x}}_{i}=\pi_{g}(\mathbf{x}_{i})\). #### III-E2 Decoder In each iteration, the decoder aims to decode the messages belonging to the users of the dominant group (the \(G\)th group, with the highest power level). After decoding and removing users in the \(G\)th group, users in the \((G-1)\)st group become the dominant ones. Using the same trend, all the groups have the chance to be the dominant group at some point. Since users in different groups are interleaved differently, signals of users in other groups are uncorrelated with the signals in the dominant group. Thus, letting the \(g_{0}\)th group be dominant, we approximately model the \(f\)th signal in the \(g\)th group (\(g\neq g_{0}\)) as \[\tilde{\mathbf{x}}_{f}\sim\mathcal{CN}(\mathbf{0},\zeta P_{c_{g}}\mathbf{I}_{L}), \tag{31}\] where \(\zeta=\frac{J\phi n_{p}+n_{c}}{L}\). Therefore, when the \(g_{0}\)th group is dominant (the users in the groups with indices greater than \(g_{0}\) are already removed using SIC), users in the \(g_{0}\)th group are perturbed by i.i.d. noise samples drawn from \(\mathcal{CN}(0,\delta_{g_{0}})\), with \(\delta_{g_{0}}\approx\zeta K_{0}\sum_{g=1}^{g_{0}-1}P_{c_{g}}+\sigma_{z}^{2}\), where \(K_{0}=\frac{K_{a}}{SG}\) is the average number of users in each group of the current slot.
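A small sketch (ours; group powers and sizes below are illustrative assumptions) of this interference model: when the \(g_{0}\)th group is dominant, the not-yet-decoded lower-power groups act as Gaussian noise of level \(\delta_{g_{0}}\).

```python
# Sketch: effective noise level delta_{g0} seen by the dominant group,
# per the model around (31). Inputs are assumed for illustration.
import numpy as np

def delta(g0, P_c_groups, K0, zeta, sigma2):
    # delta_{g0} ~ zeta * K0 * sum_{g < g0} P_c_g + sigma_z^2
    return zeta * K0 * np.sum(P_c_groups[: g0 - 1]) + sigma2

P_c_groups = np.array([0.5, 1.0, 2.0])      # assumed, sorted increasingly
J, n_p, n_c, phi = 2, 64, 512, 1.0
L = J * n_p + n_c
zeta = (J * phi * n_p + n_c) / L
for g0 in range(3, 0, -1):                   # dominant group first (g0 = G, ..., 1)
    print(g0, delta(g0, P_c_groups, K0=10.0, zeta=zeta, sigma2=1.0))
```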
Consequently, by replacing \(\sigma_{z}^{2}\), \(P_{p}\), and \(P_{c}\) with \(\delta_{g_{0}}\), \(\phi P_{c_{g_{0}}}\), and \(P_{c_{g_{0}}}\) in the decoding steps of MS-MRA (in Section III-B), the decoding procedure of MSUG-MRA is obtained as: * Deinterleave the rows of the received signals: \(\mathbf{\tilde{Y}}_{p_{j}}^{\prime}=\pi_{g_{0}}^{-1}(\mathbf{Y^{\prime}}_{p_{j}})\) and \(\mathbf{\tilde{Y}}_{c}^{\prime}=\pi_{g_{0}}^{-1}(\mathbf{Y^{\prime}}_{c})\). * Find active pilots as \[\hat{\mathcal{D}}_{j}=\left\{l:\mathbf{\tilde{u}}_{jl}^{H}\mathbf{\tilde{u}}_{jl}\geq 0.5\delta_{g_{0}}\Gamma_{2M}^{-1}(1-\gamma)\right\},\] where \(\mathbf{\tilde{u}}_{jl}:=\mathbf{\tilde{Y}}_{p_{j}}^{\prime}\mathbf{\tilde{b}}_{l}^{H}/\sqrt{n_{p}}\). * Channel estimation and MRC: \(\hat{\mathbf{v}}_{i}=\hat{\mathbf{h}}_{i}^{H}\mathbf{\tilde{Y}}_{c}^{\prime}\), where \(\hat{\mathbf{h}}_{i}=\frac{1}{n_{p}\sqrt{\phi P_{c_{g_{0}}}}}\mathbf{\tilde{Y}}_{p_{j}}^{\prime}\tilde{\mathbf{b}}_{jk}^{T}\), and \(\tilde{\mathbf{b}}_{jk}\) is one of the detected pilots. * Pass \(\mathbf{f}_{i}=\frac{2\sqrt{2P_{c_{g_{0}}}}\|\hat{\mathbf{h}}_{i}\|^{2}}{\hat{\sigma}_{oi}^{2}}\mathbf{g}_{i}\) to the polar decoder, where \(\hat{\sigma}_{oi}^{2}=P_{c_{g_{0}}}\sum_{k\in\hat{\mathcal{D}}_{j},k\neq i}|\hat{\mathbf{h}}_{i}^{H}\hat{\mathbf{h}}_{k}|^{2}+\delta_{g_{0}}\|\hat{\mathbf{h}}_{i}\|^{2}\), and \(\mathbf{g}_{i}\) is defined in (13). * Regenerate the signals of successfully decoded users according to Section III-E1 (using the \((\pi_{g_{0}}(.),P_{c_{g_{0}}})\) pair), and collect them in the rows of \(\mathbf{\tilde{X}}_{\mathcal{S}_{s}}\). * Apply LS-based SIC similar to (17), i.e., \(\mathbf{Y}^{\prime}=\mathbf{Y}(\mathbf{I}_{L}-\mathbf{\tilde{X}}_{\mathcal{S}_{s}}^{H}(\mathbf{\tilde{X}}_{\mathcal{S}_{s}}\mathbf{\tilde{X}}_{\mathcal{S}_{s}}^{H})^{-1}\mathbf{\tilde{X}}_{\mathcal{S}_{s}})\). Note that this loop is repeated for \(G\) different group indices and \(J\) different pilot parts, and the iteration is stopped if there are no successfully decoded users in \(GJ\) consecutive iterations. #### III-E3 Power Calculation When MSUG-MRA starts the decoding in the \(g_{0}\)th group, there are \(|\mathcal{S}_{s}|\approx K_{0}(G-g_{0})\) successfully decoded users from previous groups (with higher power levels), \(|\mathcal{\tilde{S}}_{s}|=K_{0}\) users remain in the \(g_{0}\)th group, and users in the current group are perturbed with a complex Gaussian noise with covariance matrix \(\delta_{g_{0}}\mathbf{I}_{M}\). Therefore, the SINR of a non-colliding user in the current group can be calculated by replacing \(|\mathcal{\tilde{S}}_{s}|\approx K_{0}\), \(|\mathcal{S}_{s}|=K_{0}(G-g_{0})\), \(\mathbb{E}\{\|\mathbf{h}_{i}\|^{2}\}=M\), \(\mathbb{E}\{\|\mathbf{h}_{i}\|^{4}\}=M^{2}\), \(P_{c}=P_{c_{g_{0}}}\), \(P_{p}=\phi P_{c_{g_{0}}}\), \(\sigma_{z}^{2}\approx\delta_{g_{0}}\), and \(\omega_{p_{s}}=\omega_{c_{s}}=1-\frac{|\mathcal{S}_{s}|}{L}\) in (18) as \[\beta_{g_{0}}^{\prime}\approx\frac{\rho_{g_{0}}MP_{c_{g_{0}}}^{2}+\frac{\delta_{g_{0}}}{n_{p}\phi}P_{c_{g_{0}}}}{\left(P_{c_{g_{0}}}(K_{0}-1)+\delta_{g_{0}}\right)\left(P_{c_{g_{0}}}+\frac{\delta_{g_{0}}}{\rho_{g_{0}}n_{p}\phi}\right)}, \tag{32}\] where \(\rho_{g_{0}}=1-\frac{K_{0}(G-g_{0})}{L}\). To impose similar performance on different groups, we set \(\beta_{1}^{\prime}=\beta_{2}^{\prime}=\ldots=\beta_{G}^{\prime}\).
Imposing \(\beta_{g}^{\prime}=\beta_{g-1}^{\prime}\), the power of the \(g\)th group satisfies \(c_{1}P_{g}^{2}+c_{2}P_{g}+c_{3}=0\), where \(c_{1}=(K_{0}-1)-\frac{\rho_{g}M}{\beta_{g-1}^{\prime}}\), \(c_{2}=\delta_{g}\left(1+\frac{(K_{0}-1)}{\phi n_{p}\rho_{g}}-\frac{1}{\phi n_{p}\beta_{g-1}^{\prime}}\right)\), \(c_{3}=\frac{\delta_{g}^{2}}{\phi n_{p}\rho_{g}}\). Solving this quadratic equation, we have \[P_{g}=\frac{-c_{2}+\sqrt{c_{2}^{2}-4c_{1}c_{3}}}{2c_{1}}, \tag{33}\] \[\mathrm{s.t.}\ \frac{1}{G}\sum_{f=1}^{G}P_{f}=P\ \mathrm{and}\ P_{g}\in\mathbb{R}^{+}.\] Note that the MS-MRA scheme is a special case of the MSUG-MRA with \(G=1\).
### _MS-SRA and MSUG-SRA_
In this part, we apply the proposed MIMO coding schemes to the case of a single receive antenna. To accomplish this, we repeat each user's length-\(L\) signal multiple times to create temporal diversity in MS-SRA and MSUG-SRA. Accordingly, we divide the whole frame into \(V\) sub-frames of length \(n^{\prime}=n/V\), then divide each sub-frame into \(S\) slots of length \(L=n^{\prime}/S\). Each user randomly selects a slot index, namely \(s\), and transmits its signal through the \(s\)th slot of each sub-frame. Assuming the coherence time to be \(L\), each sub-frame is analogous to a receive antenna. Therefore, the transmitted messages in MS-SRA and MSUG-SRA can be decoded using the MS-MRA and MSUG-MRA decoders in Sections III-B and III-E2, respectively, considering \(V\) receive antennas. Since each user repeats its signal \(V\) times, for this case, we have \(E_{b}/N_{0}=\frac{VLP}{\sigma_{z}^{2}B}\).
### _Computational Complexity_
We focus on the number of multiplications as a measure of the computational complexity, and make a complexity comparison among the proposed and existing URA solutions. The per-iteration computational complexity of the MS-MRA in a slot is calculated as follows. The pilot detection in (8) has a complexity of \(\mathcal{O}(n_{p}^{2}MJS)\), corresponding to \(J\) different pilot parts and \(S\) different slots, where \(\mathcal{O}(.)\) is the standard big-O notation, denoting the order of complexity. The channel estimator in (10) does not require any extra computation, because \(\hat{\mathbf{h}}_{i}\) corresponds to \(\mathbf{u}_{ji}\), which was already calculated for pilot detection; the MRC in (11) has a complexity of \(\mathcal{O}(\sum_{j=1}^{J}\lvert\mathcal{D}_{j}\rvert Mn_{c}S)\); to compute the LLR in (14), the required computational complexity is \(\mathcal{O}(\sum_{j=1}^{J}\lvert\mathcal{D}_{j}\rvert^{2}MS)\); the computational complexity of the polar list decoder is [26] \(\mathcal{O}(\sum_{j=1}^{J}\lvert\mathcal{D}_{j}\rvert n_{c}\log n_{c}S)\); and the SIC has a complexity of \(\mathcal{O}(ML|\mathcal{S}_{s}|S+|\mathcal{S}_{s}|^{2}LS)\). We know from (44) that in the first iteration, we have \(\lvert\mathcal{D}_{j}\rvert\approx n_{p}-n_{p}e^{-K_{a}/(n_{p}S)}<K_{a}/S\) and \(\lvert\mathcal{S}_{s}\rvert=0\); in the last iterations, we have \(\lvert\mathcal{S}_{s}\rvert\approx K_{a}/S\) and \(\lvert\mathcal{D}_{j}\rvert\approx 0\). Hence, considering \(M\gg\log n_{c}\) and \(n_{c}\lvert\mathcal{D}_{j}\rvert\gg n_{p}\), we can compute the computational complexity of the MS-MRA in the first and last iterations as \(\mathcal{O}\left(K_{a}MJ(n_{c}+K_{a}/S)\right)\) and \(\mathcal{O}\left(LK_{a}(M+K_{a}/S)\right)\), respectively.
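A brief sketch (ours) of the group-power recursion around (33): each group's power is the positive root of \(c_{1}P_{g}^{2}+c_{2}P_{g}+c_{3}=0\), given the previous group's SINR target. All numeric inputs below are assumptions for illustration, and the returned root must still be checked against the average-power constraint in (33).

```python
# Sketch: solve c1*P^2 + c2*P + c3 = 0 for the g-th group power, cf. Eq. (33).
import numpy as np

def group_power(beta_prev, delta_g, rho_g, K0, M, phi, n_p):
    c1 = (K0 - 1.0) - rho_g * M / beta_prev
    c2 = delta_g * (1.0 + (K0 - 1.0) / (phi * n_p * rho_g)
                    - 1.0 / (phi * n_p * beta_prev))
    c3 = delta_g ** 2 / (phi * n_p * rho_g)
    roots = np.roots([c1, c2, c3])
    # Keep the positive real root, per the constraint P_g in R^+ of (33).
    pos = roots[np.isreal(roots) & (roots.real > 0)].real
    return pos[0] if pos.size else None

# Assumed example values (not from the paper):
print(group_power(beta_prev=2.0, delta_g=1.0, rho_g=0.95,
                  K0=10.0, M=50, phi=0.66, n_p=256))
```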
Considering the computational complexity in the intermediate iterations to be of the same order, the per-iteration computational complexity of the MS-MRA can be bounded by \(\mathcal{O}\Big{(}n_{p}^{2}MJS+\max\left(K_{a}MJ(n_{c}+K_{a}/S),LK_{a}(M+K_{a}/S)\right)\Big{)}\). Note that the computational complexity of MSUG-MRA is of the same order as MS-MRA, and for the MS-SRA and MSUG-SRA schemes, the computational complexity is obtained by replacing \(M\) by \(V\) in the above expressions. Note that by employing a low-complexity adaptive filter [28, 29, 30], we can considerably reduce the computational complexity of the LS-based channel estimator in (16) (and hence the total computational complexity of the proposed schemes). Looking at Algorithm 1, we can infer that MS-MRA-WOPBE is obtained by employing the same pilot detector (with complexity \(\mathcal{O}(n_{p}^{2}MJS)\)), channel estimator (which does not incur any extra computational complexity), and SIC (with complexity \(\mathcal{O}(ML|\mathcal{S}_{s}|S+|\mathcal{S}_{s}|^{2}LS)\)) as in the MS-MRA case, except for employing the IISD block. In Step 1 of IISD, the complexities for computing \(\mathbf{f}_{i}\) and implementing the polar decoder are \(\mathcal{O}((Mn_{c}+M^{2})T_{I}S\sum_{j=1}^{J}|\mathcal{D}_{j}|)\) and \(\mathcal{O}(T_{I}n_{c}\log n_{c}S\sum_{j=1}^{J}|\mathcal{D}_{j}|)\), respectively, where \(T_{I}\) denotes the number of iterations of IISD. In Steps 2 and 3 of IISD, computing \(\hat{\mathbf{h}}_{i}\) and the pilot correlations in (30) has the complexity of \(\mathcal{O}(T_{I}(n_{c}+n_{p})^{2}S\sum_{j=1}^{J}|\mathcal{D}_{j}|+T_{I}(n_{c}+n_{p})MS\sum_{j=1}^{J}|\mathcal{D}_{j}|)\) and \(\mathcal{O}(T_{I}(J-1)n_{p}MS\sum_{j=1}^{J}|\mathcal{D}_{j}|)\), respectively. The computational complexity of obtaining \(\hat{\mathbf{h}}_{i}\) in Step 4 of IISD is \(\mathcal{O}(T_{I}(L^{2}+LM)S\sum_{j=1}^{J}|\mathcal{D}_{j}|)\). Then, replacing \(|\mathcal{D}_{j}|\) and \(|\mathcal{S}_{s}|\) with their approximate values (discussed in the previous paragraph), the overall computational complexity of the MS-MRA-WOPBE is bounded by \(\mathcal{O}\Big{(}n_{p}^{2}MJS+\max\big{(}\big{(}(L^{2}+M^{2})+ML\big{)}\,T_{I}JK_{a},LK_{a}(M+K_{a}/S)\big{)}\Big{)}\). For comparison purposes, the dominant per-iteration computational complexity of FASURA in [24] (which is due to the energy detector and the SIC operation) can also be computed as \(\mathcal{O}\left(M(n_{p}+L^{\prime}n_{c})2^{B_{f}}+K_{a}(nM+n^{2})\right)\), where \(B_{f}\) denotes the number of pilot bits, \(n\) is the frame length, and \(L^{\prime}\) is the length of the spreading sequence.
## IV Numerical Results
We provide a set of numerical results to assess the performance of the proposed URA set-ups. In all the results, we set \(B=100\), the number of CRC bits \(r=11\), the Neyman-Pearson threshold \(\gamma=0.1\), and the list size of the decoder to \(64\). For MS-MRA and MSUG-MRA, we set the frame length \(n\approx 3200\) and \(P_{e}=0.05\). The corresponding values for the MS-SRA and MSUG-SRA are \(n\approx 30000\) and \(P_{e}=0.1\). In Fig. 3, the performance of the proposed MS-MRA and MSUG-MRA is compared with the short blocklength scheme of [22] with the number of antennas \(M=100\) and slot length \(L=200\). (In this scenario, we consider a fast-fading environment, where the coherence blocklength is \(L_{c}=2000\).) To facilitate a fair comparison, we consider \((J,n_{p},n_{c})=(2,32,128)\) (\(L=192\)) and \(P_{p}/P_{c}=1\) (\(\phi=1\) for MSUG-MRA) for all the proposed schemes.
For MSUG-MRA, the value of \(G\) is set as \(G=1\) for \(K_{a}\leq 400\), \(G=3\) for \(K_{a}=500\), \(G=6\) for \(600\leq K_{a}\leq 800\), \(G=8\) for \(900\leq K_{a}\leq 1000\), and \(G=10\) for \(K_{a}>1000\). The superiority of the proposed schemes over the one in [22] is mostly due to the more powerful performance of the polar code compared to the simple coding scheme adopted in [22], and to the use of the SIC block, which significantly diminishes the effect of interference. We also observe that MS-MRA-WOPBE outperforms MS-MRA, which is due to 1) employing IISD, which iteratively improves the accuracy of the channel estimation, and 2) the lower coding rate obtained by not encoding the pilot bits. Besides, the range of the number of active users that can be handled by the MSUG-MRA is wider than those of the MS-MRA and MS-MRA-WOPBE schemes. This improvement results from randomly dividing users into different groups, which provides each group with a lower number of active users (hence a lower effective interference level). In Fig. 4, we compare the proposed MS-MRA and MSUG-MRA with the ones in [23, 24], considering the slow-fading channel with coherence blocklength \(L_{c}=3200\). We set \((J,n_{p},n_{c})=(2,256,512)\), \(M=50\), \(P_{p}/P_{c}=0.66\) for MS-MRA. We choose \((J,n_{p},n_{c},G)=(2,256,512,1)\) for \(K_{a}\leq 700\), \((J,n_{p},n_{c},G)=(2,64,512,6)\) for \(K_{a}=900\), and \((J,n_{p},n_{c},G)=(2,64,512,18)\) for \(K_{a}>900\) with \(\phi=0.66\). Thanks to employing the slotted structure, SIC, and orthogonal pilots, all the proposed schemes have superior performance compared to [23]. Due to employing random spreading and an efficient block called NOPICE, FASURA in [24] performs better than the proposed MS-MRA and MSUG-MRA in the low-\(K_{a}\) regime; however, its performance is worse than that of the MSUG-MRA for higher values of \(K_{a}\) (thanks to the random user grouping employed in MSUG-MRA). The proposed MS-MRA-WOPBE also shows similar performance to FASURA. To achieve the result in Fig. 4, FASURA sets \(n_{p}=896\), \(L^{\prime}=9\), \(n_{c}=256\), \(n=3200\), \(B_{f}=16\), and \(M=50\). The order of computational complexity for these schemes is given in the performance-complexity plot in Fig. 5. The figure shows that the proposed MS-MRA-WOPBE has comparable accuracy to FASURA while offering a lower computational complexity. Note also that despite the higher required \(E_{b}/N_{0}\) compared to FASURA, MS-MRA offers very large savings in terms of computational complexity, which is attributed to employing orthogonal pilots, a slotted structure, and simpler decoding blocks. As a further note, FASURA considers \(2^{B_{f}}\) possible spreading sequences of length \(L^{\prime}\) for each symbol of the polar codeword; hence every transceiver should store \(n_{c}2^{B_{f}}\) vectors of length \(L^{\prime}\), as well as a pilot codebook of size \(2^{B_{f}}\times n_{p}\). For typical values reported in [24], the BS and every user must store \(1.6\times 10^{7}\) vectors of length \(9\) and a matrix of size \(5.8\times 10^{7}\). For the proposed schemes in this paper, every transceiver must store only an orthogonal codebook of size \(n_{p}\times n_{p}\), where \(n_{p}=256\). Thus, FASURA requires about 3000 times larger memory than our proposed schemes, which may be restrictive for some target URA applications such as sensor networks, where a massive number of cheap sensors are deployed.
Fig. 3: The required \(E_{b}/N_{0}\) in the proposed MIMO set-ups and the scheme in [22] for \(L\approx 200\), \(M=100\), and \(P_{e}=0.05\).
Moreover, unlike FASURA, the proposed solutions are implementable with short blocklengths (see Fig. 3), which makes them appropriate for fast fading scenarios as well. In Fig. 6, we compare the theoretical PUPE in (27) with the simulation results of the MS-MRA for three different scenarios (\(M=50,100,200\)) with \(P_{p}/P_{c}=0.66\) and \((J,n_{p},n_{c})=(2,256,512)\). It is shown that the approximate theoretical analysis predicts the performance of the MS-MRA well for \(K_{a}\leq 700\); however, the results are not consistent for higher values of \(K_{a}\). The reason for the mismatch in the \(K_{a}>800\) regime is the approximations employed while analyzing SIC in Lemma 4 (e.g., \(n_{c},n_{p}\gg 1\), \(|\mathcal{S}_{s}|\gg 1\), uncorrelated QPSK codewords of two different users, and uncorrelated samples of \(\mathbf{x}_{i}\)). Fig. 7 compares the MS-SRA and MSUG-SRA with the existing single-antenna solutions [17, 19, 20]. For both setups, we set \((J,n_{p},n_{c})=(2,64,512)\), \(P_{p}/P_{c}=1\) (\(\phi=1\) for MSUG-SRA), \((S,V)=(6,8)\) for \(K_{a}\leq 200\), and \((S,V)=(12,4)\) for \(K_{a}\geq 300\). For MSUG-SRA, we also choose \(G=1\) for \(K_{a}\leq 300\), \(G=3\) for \(500\leq K_{a}\leq 700\), and \(G=6\) for \(K_{a}\geq 900\). It is observed that the proposed MS-SRA has superior performance compared to the existing URA approaches for a low number of active users; however, it performs worse than the scheme in [19] for higher values of \(K_{a}\). Furthermore, the proposed MSUG-SRA outperforms existing solutions, and its effective range of \(K_{a}\) is up to \(1500\) users.
## V Conclusions
We propose a family of unsourced random access solutions for MIMO Rayleigh block fading channels. The proposed approaches employ a slotted structure with multiple stages of orthogonal pilots. The use of a slotted structure along with the orthogonal pilots leads to a lower computational complexity at the receiver, and also makes the proposed designs implementable for fast fading scenarios. We further improve the performance of the proposed solutions when the number of active users is very large by randomly dividing the users into different interleaver-power groups. The results show that the proposed MIMO URA designs are superior for both short and large blocklengths, while offering a lower computational complexity.
Fig. 4: The required \(E_{b}/N_{0}\) in the proposed MIMO set-ups and the results in [23, 24] for \(M=50\).
Fig. 5: Performance-complexity curve for the proposed MIMO schemes and FASURA in [24].
Fig. 6: Comparison of the simulation and analytical performance of the MS-MRA for different values of \(M\).
## Appendix A Proof of Theorem 1
**Lemma 3**.: _Assuming that the transmitted data part contains uncorrelated and equally likely QPSK symbols, for \(i,j\in\mathcal{S}_{s}\), \(i\neq j\), and \(n_{p},n_{c}\rightarrow\infty\), the transmitted signals satisfy_ \[\frac{1}{E_{x}}\mathbf{x}_{i}\mathbf{x}_{j}^{H}\;\overset{p}{\rightarrow}\;0, \tag{34}\] _where \(E_{x}=Jn_{p}P_{p}+n_{c}P_{c}\)._ Proof.: Let \(\mathbf{b}_{ji}\) and \(\mathbf{b}_{jr}\) be the \(j\)th pilots of the \(i\)th and \(r\)th users, and \(\mathbf{v}_{i}\) and \(\mathbf{v}_{r}\) be the corresponding polar-coded and QPSK-modulated signals.
Since \(\mathbf{b}_{ji}\) and \(\mathbf{b}_{jr}\) are randomly chosen rows of the Hadamard matrix, \(\mathbf{b}_{jr}\mathbf{b}_{ji}^{T}=n_{p}\) with probability \(\dfrac{1}{n_{p}}\), and it is zero with probability \(1-\dfrac{1}{n_{p}}\). Besides, for \(n_{c}\rightarrow\infty\), \(v_{it}\) and \(v_{rt}\) are zero-mean and uncorrelated, where \(v_{it}=[\mathbf{v}_{i}]_{(:,t)}\). Therefore, \[\begin{array}{rl}\lim_{n_{p},n_{c}\rightarrow\infty}&\mathbb{P}\left(\dfrac{1}{E_{x}}|\mathbf{x}_{r}\mathbf{x}_{i}^{H}|>0\right)\\ &=\lim_{n_{p},n_{c}\rightarrow\infty}\mathbb{P}\left(\dfrac{1}{E_{x}}\left|P_{p}\sum_{j=1}^{J}\mathbf{b}_{jr}\mathbf{b}_{ji}^{H}+\mathbf{v}_{r}\mathbf{v}_{i}^{H}\right|>0\right)\\ &\leq\lim_{n_{p},n_{c}\rightarrow\infty}\mathbb{P}\left(\dfrac{P_{p}}{E_{x}}\sum_{j=1}^{J}|\mathbf{b}_{jr}\mathbf{b}_{ji}^{H}|+\dfrac{1}{E_{x}}\left|\mathbf{v}_{r}\mathbf{v}_{i}^{H}\right|>0\right)\\ &\leq\lim_{n_{p},n_{c}\rightarrow\infty}\sum_{j=1}^{J}\mathbb{P}\left(\dfrac{P_{p}}{E_{x}}\left|\mathbf{b}_{jr}\mathbf{b}_{ji}^{H}\right|>0\right)+\mathbb{P}\left(\dfrac{1}{E_{x}}\left|\sum_{t=1}^{n_{c}}v_{rt}v_{it}^{H}\right|>0\right)\\ &\approx\lim_{n_{p},n_{c}\rightarrow\infty}\dfrac{J}{n_{p}}+\mathbb{P}\left(\dfrac{n_{c}}{E_{x}}\left|\mathbb{E}\left\{v_{rt}v_{it}^{H}\right\}\right|>0\right)\\ &\approx 0.\end{array}\] Note that, strictly speaking, the uncorrelated QPSK symbol assumption is not accurate for coded systems. Nevertheless, it is useful for obtaining a good approximation of the SINR, as we will show later. **Lemma 4**.: _By applying LS-based SIC, the residual received signal matrices of the pilot and coded parts can be written in terms of the signal and interference-plus-noise terms as_ \[\mathbf{Y}_{p_{j}}^{\prime}\approx \sqrt{P_{p}}\mathbf{h}_{i}\mathbf{b}_{ji}\mathbf{L}_{p_{j}}+\sqrt{P_{p}}\sum_{k\in\tilde{\mathcal{S}}_{s},k\neq i}\mathbf{h}_{k}\mathbf{b}_{jk}\mathbf{L}_{p_{j}}+\mathbf{Z}_{n,p_{j}}, \tag{35}\] \[\mathbf{Y}_{c}^{\prime}\approx\mathbf{h}_{i}\mathbf{v}_{i}\mathbf{L}_{c}+\sum_{k\in\tilde{\mathcal{S}}_{s},k\neq i}\mathbf{h}_{k}\mathbf{v}_{k}\mathbf{L}_{c}+\mathbf{Z}_{n,c}, \tag{36}\] _where \(\mathbf{h}_{i}\in\mathbb{C}^{M\times 1}\) is the channel coefficient vector of the \(i\)th user, \(\mathbf{L}_{p_{j}}=\omega_{p_{s}}\mathbf{I}_{n_{p}}\), \(\mathbf{L}_{c}=\omega_{c_{s}}\mathbf{I}_{n_{c}}\), and the elements of \(\mathbf{Z}_{n,p_{j}}\) and \(\mathbf{Z}_{n,c}\) are drawn from \(\mathcal{CN}\left(0,\omega_{p_{s}}\sigma_{z}^{2}\right)\) and \(\mathcal{CN}\left(0,\omega_{c_{s}}\sigma_{z}^{2}\right)\), respectively, with \(\omega_{p_{s}}\) and \(\omega_{c_{s}}\) as defined in the statement of Theorem 1._ Proof.: Plugging (15) and (16) into (17), we obtain \[\mathbf{Y}^{\prime} =\mathbf{H}_{\mathcal{S}_{s}}\mathbf{X}_{\mathcal{S}_{s}}\mathbf{L}+\mathbf{H}_{\tilde{\mathcal{S}}_{s}}\mathbf{X}_{\tilde{\mathcal{S}}_{s}}\mathbf{L}+\mathbf{Z}_{s}\mathbf{L}\] \[=\mathbf{H}_{\tilde{\mathcal{S}}_{s}}\mathbf{X}_{\tilde{\mathcal{S}}_{s}}\mathbf{L}+\mathbf{Z}_{s}\mathbf{L}\] \[=\mathbf{h}_{i}\mathbf{x}_{i}\mathbf{L}+\sum_{k\in\tilde{\mathcal{S}}_{s},k\neq i}\mathbf{h}_{k}\mathbf{x}_{k}\mathbf{L}+\mathbf{Z}_{n}, \tag{37}\] where the second equality holds since \(\mathbf{X}_{\mathcal{S}_{s}}\mathbf{L}=\mathbf{0}\) by construction of the projector, \(\mathbf{L}=\mathbf{I}_{L}-\mathbf{X}_{\mathcal{S}_{s}}^{H}(\mathbf{X}_{\mathcal{S}_{s}}\mathbf{X}_{\mathcal{S}_{s}}^{H})^{-1}\mathbf{X}_{\mathcal{S}_{s}}\), and \(\mathbf{Z}_{n}=\mathbf{Z}_{s}\mathbf{L}\).
Since \(\mathbf{L}^{H}\mathbf{L}=\mathbf{L}\) and \(\mathbf{Z}_{s}\sim\mathcal{CN}\left(\mathbf{0},\sigma_{z}^{2}\mathbf{I}_{L}\right)\), we have \[\mathbf{Z}_{n}\sim\mathcal{CN}\left(\mathbf{0},\sigma_{z}^{2}\mathbb{E}\{\mathbf{L}\}\right). \tag{38}\] Since the values of \(n_{p}\) and \(n_{c}\) are large, and using (34), we have \(\dfrac{1}{E_{x}}\mathbf{X}_{\mathcal{S}_{s}}\mathbf{X}_{\mathcal{S}_{s}}^{H}\approx\mathbf{I}_{|\mathcal{S}_{s}|}\), where \(E_{x}=Jn_{p}P_{p}+n_{c}P_{c}\). In other words, we can approximate \(\mathbf{L}\) as \[\mathbf{L}\approx\mathbf{I}_{L}-\dfrac{1}{E_{x}}\sum_{r\in\mathcal{S}_{s}}\mathbf{x}_{r}^{H}\mathbf{x}_{r}. \tag{39}\] Using the weak law of large numbers, and assuming the samples of \(\mathbf{x}_{r}\) to be uncorrelated and \(|\mathcal{S}_{s}|\gg 1\), we can rewrite \(\mathbf{L}\) in (39) as \[\mathbf{L}\approx\begin{bmatrix}\mathbf{L}_{p_{1}}&...&\mathbf{0}&\mathbf{0}\\ \vdots&\ddots&\vdots&\vdots\\ \mathbf{0}&...&\mathbf{L}_{p_{J}}&\mathbf{0}\\ \mathbf{0}&...&\mathbf{0}&\mathbf{L}_{c}\end{bmatrix}, \tag{40}\] where \(\mathbf{L}_{p_{j}}=\omega_{p_{s}}\mathbf{I}_{n_{p}}\) and \(\mathbf{L}_{c}=\omega_{c_{s}}\mathbf{I}_{n_{c}}\) with \(\omega_{p_{s}}=\omega_{c_{s}}=1-\dfrac{|\mathcal{S}_{s}|}{L}\) if the transmitted signals are randomly interleaved, and \(\omega_{p_{s}}=1-\dfrac{P_{p}|\mathcal{S}_{s}|}{E_{x}}\), \(\omega_{c_{s}}=1-\dfrac{P_{c}|\mathcal{S}_{s}|}{E_{x}}\), otherwise. Letting \(\mathbf{Z}_{n}=\left[\mathbf{Z}_{n,p_{1}},\ldots,\mathbf{Z}_{n,p_{J}},\mathbf{Z}_{n,c}\right]\), we can infer from (38) and (40) that the elements of \(\mathbf{Z}_{n,p_{j}}\) and \(\mathbf{Z}_{n,c}\) approximately follow \(\mathcal{CN}\left(0,\omega_{p_{s}}\sigma_{z}^{2}\right)\) and \(\mathcal{CN}\left(0,\omega_{c_{s}}\sigma_{z}^{2}\right)\), respectively. Besides, using (40) and the signal structure in (4), we can divide (37) into the pilot and coded parts as in (35) and (36).
Fig. 7: The required \(E_{b}/N_{0}\) of the proposed MS-SRA and MSUG-SRA for the case of a single-antenna receiver.
**Lemma 5**.: _The estimated channel coefficients of a non-colliding user approximately satisfy the following expressions:_ \[\mathbb{E}\{\|\hat{\mathbf{h}}_{i}\|^{2}\} \approx\omega_{p_{s}}^{2}\mathbb{E}\{\|\mathbf{h}_{i}\|^{2}\}+\frac{M\omega_{p_{s}}\sigma_{z}^{2}}{n_{p}P_{p}}, \tag{41a}\] \[\mathbb{E}\{|\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{k}|^{2}\} \approx\omega_{p_{s}}^{2}\mathbb{E}\{\|\mathbf{h}_{i}\|^{2}\}+\frac{M\omega_{p_{s}}\sigma_{z}^{2}}{n_{p}P_{p}},\] (41b) \[\mathbb{E}\{|\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{i}|^{2}\} \approx\omega_{p_{s}}^{2}\mathbb{E}\{\|\mathbf{h}_{i}\|^{4}\}+\frac{\omega_{p_{s}}\sigma_{z}^{2}}{n_{p}P_{p}}\mathbb{E}\{\|\mathbf{h}_{i}\|^{2}\}.
\tag{41c}\] Proof.: Using the approximation of \(\mathbf{Y}_{p_{j}}^{\prime}\) in (35) in (10), the channel coefficient vector of the \(i\)th user can be estimated as \[\hat{\mathbf{h}}_{i} \approx\frac{\omega_{p_{s}}}{n_{p}}\mathbf{h}_{i}\mathbf{b}_{ji}\tilde{\mathbf{b}}_{jk}^{H}+\frac{\omega_{p_{s}}}{n_{p}}\sum_{f\in\tilde{\mathcal{S}}_{s},f\neq i}\mathbf{h}_{f}\mathbf{b}_{jf}\tilde{\mathbf{b}}_{jk}^{H}+\mathbf{z}_{p_{j},n}\] \[\stackrel{{(a)}}{{=}}\omega_{p_{s}}\mathbf{h}_{i}+\mathbf{z}_{p_{j},n}, \tag{42}\] where \(\mathbf{z}_{p_{j},n}=\dfrac{1}{n_{p}\sqrt{P_{p}}}\mathbf{Z}_{n,p_{j}}\tilde{\mathbf{b}}_{jk}^{H}\), and in (a), we use the assumption that the \(i\)th user is non-colliding, hence \(\tilde{\mathbf{b}}_{jk}\) is only selected by the \(i\)th user (\(\mathbf{b}_{ji}=\tilde{\mathbf{b}}_{jk}\) and \(\mathbf{b}_{jf}\neq\tilde{\mathbf{b}}_{jk}\) for \(f\in\tilde{\mathcal{S}}_{s},f\neq i\)). We can argue the approximation \(\mathbf{z}_{p_{j},n}\sim\mathcal{CN}\left(\mathbf{0},\frac{\omega_{p_{s}}\sigma_{z}^{2}}{n_{p}P_{p}}\mathbf{I}_{M}\right)\). Using (42), we can show that \(\mathbb{E}\{\|\hat{\mathbf{h}}_{i}\|^{2}\}\approx\omega_{p_{s}}^{2}\mathbb{E}\{\|\mathbf{h}_{i}\|^{2}\}+\dfrac{M\omega_{p_{s}}\sigma_{z}^{2}}{n_{p}P_{p}}\), \(\mathbb{E}\{|\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{i}|^{2}\}\approx\omega_{p_{s}}^{2}\mathbb{E}\{\|\mathbf{h}_{i}\|^{4}\}+\dfrac{\omega_{p_{s}}\sigma_{z}^{2}}{n_{p}P_{p}}\mathbb{E}\{\|\mathbf{h}_{i}\|^{2}\}\), and \(\mathbb{E}\{|\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{k}|^{2}\}=\mathbb{E}\{\|\hat{\mathbf{h}}_{i}\|^{2}\}\). Plugging (36) into the MRC expression in (11), \(\hat{\mathbf{v}}_{i}\) can be estimated as \[\hat{\mathbf{v}}_{i}\approx\omega_{c_{s}}\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{i}\mathbf{v}_{i}+\mathbf{z}_{in}, \tag{43}\] where the first term on the right-hand side is the signal term, and \(\mathbf{z}_{in}=\sum_{k\in\tilde{\mathcal{S}}_{s},k\neq i}\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{k}\mathbf{v}_{k}\mathbf{L}_{c}+\hat{\mathbf{h}}_{i}^{H}\mathbf{Z}_{n,c}\) is the interference-plus-noise term. Since \(\mathbf{L}^{H}\mathbf{L}=\mathbf{L}\), and using (40), we can show \(\mathbf{L}_{c}^{H}\mathbf{L}_{c}\approx\mathbf{L}_{c}\). Therefore, by employing Lemma 5, we can approximate \(\mathbf{z}_{in}\sim\mathcal{CN}(\mathbf{0},\sigma_{in}^{2}\mathbf{I}_{n_{c}})\), where \[\sigma_{in}^{2}=\omega_{c_{s}}\left(P_{c}(|\tilde{\mathcal{S}}_{s}|-1)+\sigma_{z}^{2}\right)\left(\omega_{p_{s}}^{2}\mathbb{E}\{\|\mathbf{h}_{i}\|^{2}\}+\frac{M\omega_{p_{s}}\sigma_{z}^{2}}{n_{p}P_{p}}\right).\] Besides, the per-symbol power of the signal term can be obtained as \(\sigma_{s}^{2}\approx\omega_{c_{s}}^{2}\mathbb{E}\{|\hat{\mathbf{h}}_{i}^{H}\mathbf{h}_{i}|^{2}\}P_{c}\). Then, using Lemma 5, the SINR of \(\hat{\mathbf{v}}_{i}\) can be calculated as in (18).
## Appendix B Proof of Theorem 2
In the first iteration of the \(s\)th slot, since \(K_{s}\) users have selected one out of \(n_{p}\) pilots randomly, the number of users that select an arbitrary pilot approximately follows a Poisson distribution with the parameter \(K_{s}/n_{p}\). In the \(k\)th iteration of the \(s\)th slot, let \(T_{j,i}^{(k)}\) be the average number of \(i\)-collision pilots (pilots selected by \(i\) different users) in the \(j\)th pilot part. We have \[T_{j,i}^{(1)}\approx n_{p}f_{p}(i;K_{s}/n_{p}), \tag{44}\] where \(f_{p}(i;a)\) denotes the PMF of the Poisson distribution with the parameter \(a\).
The average number of \(i\)-collision users in the \(k\)th iteration of the \(j\)th pilot part is then calculated as \(K_{j,i}^{(k)}\approx iT_{j,i}^{(k)}\). Suppose that in the \(k\)th iteration (using the assumption in (19)) the decoder employs the \(j\)th pilot part for channel estimation. The removed user is then non-colliding (1-collision) in its \(j\)th pilot part (we assume that the decoder can only decode non-colliding users), and it is in \(i\)-collision in its \(j^{\prime}\)th (\(j^{\prime}\neq j\)) pilot part with probability \(p_{i,j^{\prime}}^{(k)}=\dfrac{K_{j^{\prime},i}^{(k)}}{K_{s}-k+1}\). Therefore, removing a user from the \(j\)th pilot part results in * In the \(j\)th pilot part, we have \(T_{j,1}^{(k+1)}=T_{j,1}^{(k)}-1\), and \(T_{j,i}^{(k+1)}=T_{j,i}^{(k)}\) for \(i>1\). * In the \(j^{\prime}\)th pilot part (\(j^{\prime}\neq j\)), we have \(T_{j^{\prime},i}^{(k+1)}=T_{j^{\prime},i}^{(k)}+p_{i+1,j^{\prime}}^{(k)}-p_{i,j^{\prime}}^{(k)}\). The collision probability of the \(j\)th pilot part in the \(t\)th iteration is then obtained as \(P_{col}(j,t)=1-\dfrac{T_{j,1}^{(t)}}{K_{s}-t+1}\). Finally, by approximating \(T_{j,i}^{(t)}\) by its average over the different pilot parts (i.e., \(T_{j,i}^{(t)}\approx N_{i}^{(t)}:=\dfrac{1}{J}\sum_{j=1}^{J}T_{j,i}^{(t)}\)) in the above equations, the results in Theorem 2 are obtained. Note that since all the pilot parts are equally likely in the first iteration, we have \(N_{i}^{(1)}\approx T_{j,i}^{(1)}\approx n_{p}f_{p}(i;K_{s}/n_{p}),\forall j=1,...,J\).
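A quick Monte Carlo sanity check (ours; the scenario values are assumed) of the Poisson initialization (44): with \(K_{s}\) users choosing among \(n_{p}\) pilots uniformly at random, the number of \(i\)-collision pilots should be approximately \(n_{p}f_{p}(i;K_{s}/n_{p})\).

```python
# Sketch: empirical vs. Poisson-approximated i-collision counts, Eq. (44).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
K_s, n_p, trials = 25, 256, 2000       # assumed scenario
counts = np.zeros(K_s + 1)
for _ in range(trials):
    picks = rng.integers(0, n_p, size=K_s)         # uniform pilot choices
    occupancy = np.bincount(picks, minlength=n_p)  # users per pilot
    for i in range(1, K_s + 1):
        counts[i] += np.count_nonzero(occupancy == i)
counts /= trials
for i in range(1, 4):
    print(i, counts[i], n_p * poisson.pmf(i, K_s / n_p))
```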
2305.12602
Rigorous estimates for the quasi-steady state approximation of the Michaelis-Menten reaction mechanism at low enzyme concentrations
There is a vast amount of literature concerning the appropriateness of various perturbation parameters for the standard quasi-steady state approximation in the Michaelis-Menten reaction mechanism, and also concerning the relevance of these parameters for the accuracy of the approximation by the familiar Michaelis-Menten equation. Typically, the arguments in the literature are based on (heuristic) timescale estimates, from which one cannot obtain reliable quantitative estimates for the error of the quasi-steady state approximation. We take a different approach. By combining phase plane analysis with differential inequalities, we derive sharp explicit upper and lower estimates for the duration of the initial transient and substrate depletion during this transitory phase. In addition, we obtain rigorous bounds on the accuracy of the standard quasi-steady state approximation in the slow dynamics regime. Notably, under the assumption that the quasi-steady state approximation is valid over the entire time course of the reaction, our error estimate is of order one in the Segel-Slemrod parameter.
Justin Eilertsen, Santiago Schnell, Sebastian Walcher
2023-05-21T23:34:59Z
http://arxiv.org/abs/2305.12602v2
# Rigorous estimates for the quasi-steady state approximation of the Michaelis-Menten reaction mechanism at low enzyme concentrations
###### Abstract There is a vast amount of literature concerning the appropriateness of various perturbation parameters for the standard quasi-steady state approximation in the Michaelis-Menten reaction mechanism, and also concerning the relevance of these parameters for the accuracy of the approximation by the familiar Michaelis-Menten equation. Typically, the arguments in the literature are based on (heuristic) timescale estimates, from which one cannot obtain reliable quantitative estimates for the error of the quasi-steady state approximation. We take a different approach. By combining phase plane analysis with differential inequalities, we derive sharp explicit upper and lower estimates for the duration of the initial transient and substrate depletion during this transitory phase. In addition, we obtain rigorous bounds on the accuracy of the standard quasi-steady state approximation in the slow dynamics regime. Notably, under the assumption that the quasi-steady state approximation is valid over the entire time course of the reaction, our error estimate is of order one in the Segel-Slemrod parameter.
## 1 Introduction
We consider the classical Michaelis-Menten reaction mechanism for enzyme action. Its time evolution is governed by the ordinary differential equations for the substrate \(s\) and intermediate complex \(c\) concentrations \[\begin{array}{rclcl}\dot{s}&=&-k_{1}e_{0}s&+&(k_{1}s+k_{-1})c,\\ \dot{c}&=&k_{1}e_{0}s&-&(k_{1}s+k_{-1}+k_{2})c\end{array} \tag{1}\] with initial values \(s(0)=s_{0}\), \(c(0)=0\), and conservation laws for the substrate and enzyme [24]. We will focus on the standard quasi-steady state (QSS) [29] approximation with low initial enzyme concentration \(e_{0}\). In this case, the appropriate reduction is given by the Michaelis-Menten equation1 Footnote 1: The common choice for the initial value of the Michaelis–Menten equation (2) is \(s(0)=s_{0}\). This choice is convenient from the experimental point of view, and also compatible with singular perturbation theory, but it needs to be considered critically with regard to parameter identification experiments. We will discuss this point in the course of the paper. \[\dot{s}=-\frac{k_{1}k_{2}e_{0}s}{k_{-1}+k_{2}+k_{1}s}=-\frac{v_{\infty}s}{K_{M}+s}, \tag{2}\] with the Michaelis constant \[K_{M}:=\frac{k_{-1}+k_{2}}{k_{1}} \tag{3}\] and the limiting rate \[v_{\infty}=k_{2}e_{0}. \tag{4}\] For a biochemical definition of the above constants, we invite the readers to consult [8]. If we look geometrically at the ordinary differential equation system (1), the slow manifold (or QSS manifold) is defined by \[c=g(s):=\frac{k_{1}e_{0}s}{k_{-1}+k_{2}+k_{1}s}=\frac{e_{0}s}{K_{M}+s}. \tag{5}\] The accuracy and the range of validity for the QSS reduction are not only of theoretical interest, but also of practical relevance for parameter identification in the laboratory. Ideally, practitioners wish for a suitable "small parameter" that ensures accuracy of the reduction, while measurements are taken in laboratory experiments.2 In his seminal paper, Segel [28] proposed a parameter that is widely accepted. However, the arguments used to derive this parameter, like several variants [29, 3, 24, 32], cannot provide a quantitative estimate for the approximation error. Indeed, there seem to be no rigorous and meaningful quantitative estimates available in the literature (see [11] for a more detailed account).
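For readers who wish to experiment with the two models, the following minimal Python sketch (ours, not from the paper; the rate constants are illustrative assumptions) integrates the full system (1) and the reduction (2) from the same initial substrate concentration and reports the maximal discrepancy in \(s\):

```python
# Sketch: full Michaelis-Menten mechanism (1) vs. the reduced equation (2).
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2, e0, s0 = 1.0, 1.0, 1.0, 0.1, 10.0   # assumed rate constants
KM = (km1 + k2) / k1                             # Eq. (3)
v_inf = k2 * e0                                  # Eq. (4)

def full(t, y):
    s, c = y
    return [-k1 * e0 * s + (k1 * s + km1) * c,   # Eq. (1)
            k1 * e0 * s - (k1 * s + km1 + k2) * c]

def mm(t, y):
    return [-v_inf * y[0] / (KM + y[0])]         # Eq. (2)

T = np.linspace(0.0, 50.0, 200)
sol = solve_ivp(full, (0.0, 50.0), [s0, 0.0], t_eval=T, rtol=1e-9, atol=1e-12)
red = solve_ivp(mm, (0.0, 50.0), [s0], t_eval=T, rtol=1e-9, atol=1e-12)
print(np.max(np.abs(sol.y[0] - red.y[0])))       # discrepancy in s
```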
Moreover, one can estimate \(K_{M}\) and \(v_{\infty}\) by fitting experimental data to the Michaelis-Menten equation (2) under steady-state assay conditions, but obtaining \(k_{1}\) would also be of interest. From a practical perspective, further information is needed about the onset time of the QSS regime, and the substrate depletion in the transitory phase. Overall, one could say that a rigorous mathematical foundation for a very basic approximation, essential for understanding the appropriate usage of the Michaelis-Menten equation in laboratory measurements, is still missing. Footnote 2: The Michaelis–Menten equation is also used for modeling biochemical reactions in signaling, metabolic and pharmacological pathways.
### Goal and results of the present paper
The fundamental goal of the present paper is to provide: (i) reasonably sharp and rigorous estimates for the approximation error, (ii) the determination of lower and upper estimates for the onset time of QSS, and (iii) the substrate loss in the initial transient of the reaction. Our approach is inspired by arguments from singular perturbation theory. However, our methods mostly rely on elementary facts concerning differential equations and differential inequalities. In Section 2, we recall some qualitative features of (1). In particular, we recollect that the time \(t_{\text{cross}}\) when the solution crosses the QSS manifold suffices as an onset time for the slow regime. Section 3 contains the rigorous estimates that comprise the fundamental technical results of this paper. By modifying a Lyapunov function approach, we first obtain upper and lower limits for \(t_{\text{cross}}\), which is of interest in its own right. Using differential inequalities, we then obtain upper and lower limits for the substrate depletion in the transitory phase. In a final step, we derive (in two different ways) rigorous bounds for the approximation error during the QSS regime. Generally, these turn out to be of order \(\varepsilon\log(1/\varepsilon)\), where \(\varepsilon\) here denotes the parameter proposed by Segel [28]. For the special situation corresponding to an initial value \(s_{0}\) for the Michaelis-Menten equation at \(t=0\), we obtain sharper bounds of order \(\varepsilon\) over the whole time range. By nature this is a rather technical section, but the technical expenditure also yields estimates for the reliability of (simpler) asymptotic error bounds. In Section 4, we list and discuss these asymptotic bounds and their relevance in the context of laboratory practice. In a short Appendix, we list the relevant results and parameters for the case of small \(k_{1}\).
## 2 Review of qualitative properties
We first recall some qualitative features and some underlying theory. In later sections, we will focus less on what these results say, but rather go beyond them towards quantitative results.
### The standard quasi-steady-state reduction
The standard quasi-steady-state (sQSS) approximation, as given by (2), is a well-known approximation to (1). It was originally obtained by Briggs & Haldane [4], and put on solid mathematical ground by Heineken, Tsuchiya & Aris [17], who applied the singular perturbation theory developed by Tikhonov [31] and later by Fenichel [12]. By singular perturbation theory, the reduction (2) accounts with high accuracy for the depletion of substrate after a short transitory phase, whenever the initial enzyme concentration, \(e_{0}\), is sufficiently small with respect to the initial substrate concentration, \(s_{0}\).
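As an aside, the reduced equation (2) is separable and can be integrated in closed form; the solution is expressible via the Lambert W function. The short sketch below (ours; the constants are the same illustrative assumptions as above, and the closed form is a standard fact about (2), not a result of this paper) evaluates it directly:

```python
# Sketch: closed-form solution of Eq. (2) via the Lambert W function.
# Along solutions of (2), K_M*ln(s) + s decreases linearly at rate v_inf,
# which gives s(t) = K_M * W((s0/K_M) * exp((s0 - v_inf*t)/K_M)).
import numpy as np
from scipy.special import lambertw

def s_mm(t, s0, KM, v_inf):
    arg = (s0 / KM) * np.exp((s0 - v_inf * t) / KM)
    return KM * np.real(lambertw(arg))  # principal branch, real-valued here

print(s_mm(np.array([0.0, 10.0, 50.0]), s0=10.0, KM=2.0, v_inf=0.1))
```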
The utility of (2) emanates from the fact that the initial enzyme and substrate concentrations are controllable within an experiment. Therefore, it is at least theoretically possible to prepare an experiment in a way that ensures (2) is an appropriate model from which to estimate the kinetic parameters \(K_{M}\) and \(v_{\infty}\). However, the phrasing "sufficiently small \(e_{0}\)" is qualitative, and certainly not sufficient to satisfy a quantitative experimentalist or even a theorist (in certain contexts). Thus, in any practical application of (2), one is forced to ask: How small should \(e_{0}\) be to confidently replace (1) with (2)? Several dimensionless parameters, \(\varepsilon_{X}\), have been introduced in the literature that suggest (at least implicitly) that the error between (2) and (1) is bounded by \(\gamma\cdot\varepsilon_{X}\), where \(\gamma\) is a dimensional constant with units of concentration. From Briggs & Haldane [4], we have \[\varepsilon_{BH}=\frac{e_{0}}{s_{0}}, \tag{6}\] which was also employed by Heineken, Tsuchiya & Aris [17]. Other notable dimensionless parameters include \[\varepsilon_{RS}=\frac{k_{1}e_{0}}{k_{-1}+k_{2}}=\frac{e_{0}}{K_{M}}, \tag{7}\] originally proposed by Reich and Selkov [22], as well as the widely used \[\varepsilon_{SSl}=\frac{k_{1}e_{0}}{k_{-1}+k_{2}+k_{1}s_{0}}=\frac{e_{0}}{K_{M}+s_{0}}, \tag{8}\] which was introduced by Segel [28] and analyzed by Segel & Slemrod [29]. Finally, we mention \[\varepsilon_{MM}=\frac{k_{1}k_{2}e_{0}}{(k_{-1}+k_{2})^{2}}=\frac{e_{0}}{K_{M}}\cdot\frac{k_{2}}{k_{-1}+k_{2}}=\varepsilon_{RS}\cdot\frac{k_{2}}{k_{-1}+k_{2}} \tag{9}\] which reflects the linear timescale ratio at the stationary point as \(e_{0}\to 0\), as follows from [10] Proposition 1 and Remark 2. In particular, see Eq. (9) in [10]. All the parameters \(\varepsilon_{X}\) mentioned above have the following property. If \(\varepsilon_{X}\) approaches zero in a well-defined manner with \(e_{0}\to 0\), while the other reaction parameters are bounded above and below by positive constants, then the \(s\) component of the exact solution will approach the approximate solution with any degree of accuracy. However, contrary to an assumption prevalent in the literature, from these (and other proposed) parameters one cannot obtain quantitative information [10, 11]. Moreover, expressions like \(\varepsilon_{X}\ll 1\) are sometimes used in a literal interpretation (such as "\(10^{-2}\ll 1\)") [25, for an example], which misses the point. Ideally, a dimensionless small parameter \(\varepsilon_{\rm ideal}\) should control the discrepancy between the \(s\) component of the solution of the system (1) with initial value \((s_{0},0)\) and the solution of the approximate equation (2) (with initial value \(s_{0}\)), by an estimate \(\varepsilon_{\rm ideal}\cdot s_{0}\). Obtaining such a parameter is a principal goal of the present paper.
### Demarcating fast and slow dynamics of the reaction
For the initial value \((s,c)(t=0)=(s_{0},0)\) for (1),3 we need to determine a point in time to separate fast and slow dynamics. Singular perturbation theory does not provide a unique choice for such a point in time. But, as noted by Schauer & Heinrich [27], and proven in Noethen & Walcher [20] and Calder & Siegel [5],4 there exists a distinguished time for the governing equations (1) of the Michaelis-Menten reaction mechanism. We recall this fact: Footnote 3: Generally, for any initial value below the graph of the slow manifold.
Footnote 4: Calder and Siegel [5] also proved the existence of a unique distinguished invariant manifold. An extension to the open Michaelis–Menten reaction mechanism was given in [9]. **Lemma 1**.: _The solution of (1) with initial value \((s_{0},0)\) crosses the graph of \(g\) at a unique positive time \(t_{\rm cross}\), and remains above the graph for all \(t>t_{\rm cross}\). One has \(\dot{c}(t)\geq 0\) for all \(t\leq t_{\rm cross}\) and \(\dot{c}(t)\leq 0\) for all \(t\geq t_{\rm cross}\). Moreover, \(\dot{s}(t)<0\) for all \(t\geq 0\)._ Thus, we note a biochemical property of the reaction. The maximal concentration of complex \(c\) is attained at \(t=t_{\rm cross}\). In view of this, it seems natural to consider \(t_{\rm cross}\) as a starting time of the slow phase.5 We furthermore set \(s_{\rm cross}:=s(t_{\rm cross})\) and \(c_{\rm cross}:=c(t_{\rm cross})\). Footnote 5: In view of non-uniqueness, this designation is not meant to imply that the slow dynamics sets in precisely at \(t_{\rm cross}\). We also invite the readers to see the discussion in Remarks 3 and 5. We illustrate the \((s,c)\) phase plane geometry of the Michaelis-Menten reaction mechanism in Figure 1. **Lemma 1** shows that the set above the graph of \(g\) is positively invariant. This result can be sharpened: For \(0\leq\delta\leq 1\) set \[g_{\delta}(s)=\frac{k_{1}e_{0}s}{(1-\delta)k_{2}+k_{-1}+k_{1}s}, \tag{10}\] noting that \(c=g_{\delta}(s)\) defines an isocline of system (1) for each \(\delta\). In particular \(c=g_{0}=g\) defines the \(c\)-isocline, and \(c=g_{1}\) defines the \(s\)-isocline. Now, set \[\delta^{*}:=\frac{k_{1}}{2k_{2}}(K_{M}+e_{0})\left(1-\sqrt{1-\frac{4k_{2}}{k_{1}}\frac{e_{0}}{(K_{M}+e_{0})^{2}}}\right). \tag{11}\] In Noethen & Walcher [20, Props. 5 and 6, with proofs stated for the \((s,p)\)-plane], the following was shown: **Lemma 2**.: _For every \(\delta\geq\delta^{*}\), the subset of \([0,\,s_{0}]\times[0,\,e_{0}]\) which is bounded by the graphs of \(g_{0}\) and \(g_{\delta}\) is positively invariant for the Michaelis-Menten reaction mechanism system._ **Remark 1**.: The expression for \(\delta^{*}\) may look prohibitive, but less complicated estimates are readily obtained. For example, given \(x\leq 0.1\), by the mean value theorem and generous estimates, there exists \(\xi\leq 0.1\) so that \[\sqrt{1-x}-1=-\frac{1}{2\sqrt{1-\xi}}\,x\leq-0.9\,x,\] from which one sees that \[\delta^{*}\leq\frac{10}{9}\frac{e_{0}}{K_{M}+e_{0}}\leq\frac{10}{9}\varepsilon_{RS}\quad\mbox{whenever}\quad\varepsilon_{RS}\leq 0.1. \tag{12}\]
## 3 Critical estimates for the dynamics of the Michaelis-Menten reaction mechanism
One can rewrite system (1) as \[\begin{array}{rcl}\dot{s}&=&-k_{1}e_{0}s+(k_{-1}+k_{1}s)c,\\ \dot{c}&=&-(k_{-1}+k_{2}+k_{1}s)(c-g(s)).\end{array} \tag{13}\] In the above system, \(g(s)\) is given by (5). We are only interested in the solution with initial value \((s_{0},0)\), which starts below the graph of \(g\). Since \(\dot{c}\leq 0\) for \(c\geq g(s)\), we have \[c\leq\widetilde{c}:=\max_{0\leq s\leq s_{0}}g(s)=\frac{e_{0}s_{0}}{K_{M}+s_{0}}=\varepsilon_{SSl}\,s_{0}. \tag{14}\] We will frequently use basic properties of differential inequalities (see, for instance, Walter [34, SS9, Theorem 8]). For later use, we note two estimates for the substrate concentration, \(s\): **Lemma 3**.: _Let \(s(t)\) be the first component of the solution of (13) with initial value \((s_{0},0)\).
Then, for all \(t\geq 0\) one has_ \[s(t)\geq s_{0}\exp(-k_{1}e_{0}t), \tag{15}\] _and_ \[s(t)\leq s_{0}\left(\frac{k_{-1}}{k_{-1}+k_{2}}+\frac{k_{2}}{k_{-1}+k_{2}}\!\exp\bigg{(}-\frac{k_{1}e_{0}K_{M}}{K_{M}+s_{0}}\cdot t\bigg{)}\right). \tag{16}\] Proof.: The first estimate follows readily from \(\dot{s}\geq-k_{1}e_{0}s\), \(s(0)=s_{0}\). As for the second, from the first equation in (1), with (14), one finds \[\dot{s}=-k_{1}(e_{0}-c)s+k_{-1}c\leq-k_{1}(e_{0}-\widetilde{c})s+k_{-1}\widetilde{c}.\] Comparing \(s\) with the solution of the linear differential equation \[\dot{x}=-k_{1}(e_{0}-\widetilde{c})x+k_{-1}\widetilde{c}=-\frac{k_{1}e_{0}K_{M}}{K_{M}+s_{0}}\,x+\frac{k_{-1}e_{0}s_{0}}{K_{M}+s_{0}},\quad x(0)=s_{0}\] shows the assertion.
Figure 1: **The \((s,c)\) phase plane geometry of the Michaelis–Menten reaction mechanism.** The thick red curve, the graph of \(g(s)\), is the QSS variety (i.e., the \(c\)-nullcline) and the thick blue curve, the graph of \(g_{1}(s)\), is the \(s\)-nullcline. The thick black curve that lies in the shaded violet region between \(g(s)\) and \(g_{1}(s)\) is the slow invariant manifold, \(\mathcal{M}\), that connects the stable equilibrium at the origin with a saddle equilibrium at infinity. The vector field in the red shaded region below \(g(s)\) satisfies \(\dot{c}>0\) and \(\dot{s}<0\). On \(g(s)\), \(\dot{c}=0\) and \(\dot{s}<0\). In the violet region that lies above \(g(s)\) and below \(g_{1}(s)\), \(\dot{s}<0\) and \(\dot{c}<0\). On \(g_{1}(s)\), \(\dot{s}=0\) and \(\dot{c}<0\). In the blue shaded region above \(g_{1}(s)\) and below \(c=e_{0}\), \(0<\dot{s}\) and \(\dot{c}<0\). The thin black curve is a sketch of the trajectory that starts on the \(s\)-axis at \((s,c)(0)=(s_{0},0)\). As time evolves forward the trajectory approaches and intercepts \(g(s)\) at \(t=t_{\text{cross}}\). For \(t>t_{\text{cross}}\), the trajectory lies above \(g(s)\), but below \(\mathcal{M}\).
**Remark 2**.: Note that the derivative of the right-hand side of (15) is equal to \(-k_{1}e_{0}s_{0}\) at \(t=0\), which agrees with \(\dot{s}(0)\) in (1), while the right-hand side of (16) has derivative \(-\frac{k_{2}}{k_{1}(K_{M}+s_{0})}\cdot k_{1}e_{0}s_{0}\), which is markedly different. From this perspective, the upper estimate is not optimal. We will proceed in three steps. First, we estimate the distance of the solution to the slow manifold. In a second step, we obtain lower and upper approximations for \(t_{\text{cross}}\), and we compare exact and approximate solutions near the slow manifold in the third step.
### First Step: Approach to the slow manifold
We will employ two variants of a Lyapunov function approach. The first variant is based on an established procedure [2, Section 2.1, for an example]. However, some adjustments are necessary, because system (1) with small parameter \(e_{0}=\varepsilon e_{0}^{*}\) (\(e_{0}^{*}\) some reference value) is not in Tikhonov standard form with separated slow and fast variables. We will restrict attention to the compact positively invariant rectangle defined by \(0\leq s\leq s_{0}\) and \(0\leq c\leq e_{0}^{*}\). By (14), we may further restrict attention to the rectangle defined by \(0\leq s\leq s_{0}\) and \(0\leq c\leq\widetilde{c}\). Consider \[\frac{d}{dt}(c-g(s))^{2}=-2(k_{-1}+k_{2}+k_{1}s)(c-g(s))^{2}-2g^{\prime}(s)\dot{s}(c-g(s)). \tag{17}\] Let \(L:=c-g(s)\).
By invoking \(\dot{s}+\dot{c}=-k_{2}c\), we obtain with repeated use of (13): \[\frac{d}{dt}L^{2} =-2(k_{-1}+k_{2}+k_{1}s)L^{2}-2g^{\prime}(s)\dot{s}(c-g(s)) \tag{18a}\] \[=-2(k_{-1}+k_{2}+k_{1}s)L^{2}+2g^{\prime}(s)\big{[}\dot{c}+k_{2}c\big{]}(c-g(s))\] (18b) \[=-2(k_{-1}+k_{2}+k_{1}s)L^{2}+2g^{\prime}(s)\big{[}-(k_{-1}+k_{2}+k_{1}s)(c-g(s))+k_{2}c\big{]}(c-g(s))\] (18c) \[=-2(k_{-1}+k_{2}+k_{1}s)L^{2}-2(k_{-1}+k_{2}+k_{1}s)g^{\prime}(s)L^{2}+2g^{\prime}(s)k_{2}c(c-g(s))\] (18d) \[\leq-2\bigg{[}\min_{s\in[0,s_{0}]}(k_{-1}+k_{2}+k_{1}s)(1+g^{\prime}(s))\bigg{]}L^{2}+2\max_{s\in[0,s_{0}]}|g^{\prime}(s)|\cdot k_{2}\widetilde{c}\cdot|c-g(s)|. \tag{18e}\] Now \[g^{\prime}(s)=\frac{K_{M}e_{0}}{(K_{M}+s)^{2}}\geq 0;\quad\max_{s\in[0,s_{0}]}|g^{\prime}(s)|=\frac{e_{0}}{K_{M}}, \tag{19}\] therefore \[\min_{s\in[0,s_{0}]}(k_{-1}+k_{2}+k_{1}s)(1+g^{\prime}(s))\geq\min_{s\in[0,s_{0}]}(k_{-1}+k_{2}+k_{1}s)=k_{-1}+k_{2}.\] Altogether we obtain with (14) \[\frac{d}{dt}L^{2}\leq-2(k_{-1}+k_{2})\cdot L^{2}+2\frac{e_{0}}{K_{M}}\cdot\frac{k_{2}e_{0}s_{0}}{K_{M}+s_{0}}\cdot|L|. \tag{20}\] Next, we apply the Cauchy-Schwarz inequality \[2\frac{e_{0}}{K_{M}}\cdot\frac{k_{2}e_{0}s_{0}}{K_{M}+s_{0}}\cdot|L|\leq\sigma L^{2}+\left(2\frac{e_{0}}{K_{M}}\cdot\frac{k_{2}e_{0}s_{0}}{K_{M}+s_{0}}\right)^{2}\cdot\frac{1}{2\sigma},\] which holds for any \(\sigma>0\). For \(\sigma=k_{-1}+k_{2}\), this yields \[\frac{d}{dt}L^{2}\leq-(k_{-1}+k_{2})\cdot L^{2}+\frac{1}{2}\cdot(k_{-1}+k_{2})\cdot(\varepsilon_{SSl}\cdot\varepsilon_{MM})^{2}\cdot s_{0}^{2}. \tag{21}\]
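The differential inequality (20) is easy to test numerically along a computed trajectory. The sketch below (ours; the rate constants are the same illustrative assumptions used earlier) evaluates \(\frac{d}{dt}L^{2}=2L(\dot{c}-g^{\prime}(s)\dot{s})\) along the solution of (1) started at \((s_{0},0)\) and checks it against the right-hand side of (20):

```python
# Sketch: numerical spot check of the differential inequality (20).
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2, e0, s0 = 1.0, 1.0, 1.0, 0.1, 10.0   # assumed rate constants
KM = (km1 + k2) / k1

def rhs(t, y):
    s, c = y
    return [-k1 * e0 * s + (k1 * s + km1) * c,
            k1 * e0 * s - (k1 * s + km1 + k2) * c]

T = np.linspace(0.0, 20.0, 400)
sol = solve_ivp(rhs, (0.0, 20.0), [s0, 0.0], t_eval=T, rtol=1e-10, atol=1e-13)
s, c = sol.y
ds, dc = rhs(None, [s, c])
L = c - e0 * s / (KM + s)                         # L = c - g(s)
dL2 = 2.0 * L * (dc - (KM * e0 / (KM + s) ** 2) * ds)   # d(L^2)/dt along (1)
bound = (-2.0 * (km1 + k2) * L ** 2
         + 2.0 * (e0 / KM) * (k2 * e0 * s0 / (KM + s0)) * np.abs(L))
print(bool(np.all(dL2 <= bound + 1e-12)))         # expect True
```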
To verify the inequality, it suffices to do so for \(t=\widehat{t}\), and this, in turn, follows from the fact that \[\exp(-(k_{-1}+k_{2})t)=\varepsilon_{MM}^{2}\] is solved by \(t=\widehat{t}\). This provides a first estimate for the approach to the slow manifold.

**Remark 4**.: One may consider similar estimates for complex QSS, with no a priori reference to singular perturbations, in the system with substrate inflow: \[\begin{array}{rclrcl}\dot{s}&=&k_{0}-k_{1}e_{0}s&+&(k_{-1}+k_{1}s)c\\ \dot{c}&=&k_{1}e_{0}s&-&(k_{-1}+k_{2}+k_{1}s)c.\end{array}\] Here, it is appropriate to choose initial values \(s(0)=c(0)=0\). The chain of inequalities above works similarly, with the crucial difference that \(\dot{s}=k_{0}-\dot{c}-k_{2}c\). So, the assumption \(e_{0}=\varepsilon e_{0}^{*}\) will no longer result in an order \(\varepsilon^{4}\) term in the analogue of (23) (unless \(k_{0}\) is also of order \(\varepsilon\)); only order \(\varepsilon^{2}\) can be salvaged. For more details, please see the discussion in Eilertsen et al. [9, Subsection 4.4].

For \(t\leq t_{\rm cross}\) an alternative Lyapunov function approach is suggested by **Lemma 1**. We start with a variant of equation (17): \[\frac{d}{dt}(c-g(s))=-(k_{-1}+k_{2}+k_{1}s)(c-g(s))-g^{\prime}(s)\dot{s}. \tag{26}\] Again, let \(L:=c-g(s)\). Similar to the derivation of **Lemma 4**, we find for \(t\leq t_{\rm cross}\) (using \(c-g(s)\leq 0\)): \[\frac{dL}{dt} =-(k_{-1}+k_{2}+k_{1}s)L-g^{\prime}(s)\dot{s} \tag{27a}\] \[=-(k_{-1}+k_{2}+k_{1}s)L+g^{\prime}(s)\left(\dot{c}+k_{2}c\right)\] (27b) \[=-\left((1+g^{\prime}(s))(k_{-1}+k_{2}+k_{1}s)\right)L+k_{2}g^{\prime}(s)c\] (27c) \[=-\left((1+g^{\prime}(s))(k_{-1}+k_{2}+k_{1}s)\right)L+k_{2}g^{\prime}(s)(L+g(s)). \tag{27d}\] So, we have

**Lemma 5**.: _Consider the solution of (1) with initial value \((s_{0},0)\) at \(t=0\). Then, for \(0\leq t\leq t_{\rm cross}\),_ \[\frac{dL}{dt}=-A\,L+B \tag{28}\] _with_ \[\begin{array}{rcl}A&:=&k_{-1}+k_{2}+k_{1}s+g^{\prime}(s)(k_{-1}+k_{1}s),\\ B&:=&k_{2}g^{\prime}(s)g(s).\end{array}\]

Note that when \(g(s)=c\) and \(s>0\), then \(dL/dt=k_{2}g^{\prime}(s)g(s)>0\).

### Second Step: The crossing time

We will use **Lemma 5** to compute upper and lower bounds, \(t_{u}\) and \(t_{\ell}\), such that \(t_{\rm cross}\in[t_{\ell},t_{u}]\). The strategy will be to extract \(t_{u}\) and \(t_{\ell}\) from appropriate differential inequalities. We will express most of our estimates via the Segel-Slemrod parameter \(\varepsilon_{SSl}\). Although - as mentioned in the Introduction - the parameters used by Briggs and Haldane or by Reich and Selkov would be equally applicable in any well-defined limit with \(e_{0}\to 0\) (and all other parameters in a compact subset of the open positive orthant), the Segel-Slemrod parameter turns out to be the most convenient.

We first determine a lower bound \(t_{\ell}\). By (5) and (19), we obtain \[g(s)\leq\frac{e_{0}s_{0}}{K_{M}+s_{0}}\mbox{ and }g^{\prime}(s)\leq\frac{e_{0}}{K_{M}}\mbox{ for }0\leq s\leq s_{0}.\] Now, with the notation of **Lemma 5**, we have \[\begin{array}{rcl}A&=&k_{-1}+k_{2}+k_{1}s+\frac{e_{0}K_{M}}{(K_{M}+s)^{2}}(k_{-1}+k_{1}s)\\ &\leq&k_{1}(K_{M}+s)\left(1+\frac{e_{0}K_{M}}{(K_{M}+s)^{2}}\right)\\ &=&k_{1}(K_{M}+s)+k_{1}e_{0}\frac{K_{M}}{K_{M}+s}\\ &\leq&k_{1}(K_{M}+e_{0}+s_{0})\eqqcolon A^{*},\end{array}\] and furthermore6 \[B=k_{2}g^{\prime}(s)g(s)\leq k_{2}\cdot\frac{e_{0}}{K_{M}}\cdot\frac{e_{0}s_{0}}{K_{M}+s_{0}}\eqqcolon B^{*}.\] Footnote 6: For \(B^{*}\), we simply use \(\max g\cdot\max g^{\prime}\), both on \([0,\,s_{0}]\). The global maximum of \(B\) on \([0,\,\infty)\) equals \(4/27\cdot k_{2}e_{0}^{2}/K_{M}\); for the record, we point out that using this estimate would not make an essential difference.

Altogether, \[\frac{dL}{dt}\leq-A^{*}\,L+B^{*}. \tag{29}\] Thus, defining \(L^{*}\) by \[\frac{dL^{*}}{dt}=-A^{*}L^{*}+B^{*},\quad L^{*}(0)=-g(s_{0}),\] one obtains that \(L(t)\leq L^{*}(t)\) for \(0\leq t\leq t_{\rm cross}\). Explicitly, \[\begin{array}{rcl}L^{*}&=&-\left(\frac{B^{*}}{A^{*}}+g(s_{0})\right)\exp(-A^{*}t)+\frac{B^{*}}{A^{*}}\\ \\ &=&s_{0}\varepsilon_{SSl}\left(-\left(1+\frac{k_{2}}{k_{1}K_{M}(1+\varepsilon_{SSl})}\varepsilon_{SSl}\right)\,\exp\left(-(1+\varepsilon_{SSl})\lambda t\right)+\frac{k_{2}}{k_{1}K_{M}(1+\varepsilon_{SSl})}\varepsilon_{SSl}\right),\end{array} \tag{30}\] where \(\lambda:=k_{1}(K_{M}+s_{0})\). Now define \(t_{\ell}\) by \(L^{*}(t_{\ell})=0\). A straightforward calculation shows \[\begin{array}{rcl}t_{\ell}&=&\frac{1}{k_{1}(K_{M}+s_{0})(1+\varepsilon_{SSl})}\,\log\left(1+\frac{k_{-1}+k_{2}}{k_{2}}(1+\varepsilon_{SSl})\cdot\frac{1}{\varepsilon_{SSl}}\right)\\ \\ &=&\frac{1}{(k_{1}s_{0}+k_{-1}+k_{2})(1+\varepsilon_{SSl})}\,\log\left(1+\frac{k_{-1}+k_{2}}{k_{2}}(1+\varepsilon_{SSl})\cdot\frac{1}{\varepsilon_{SSl}}\right).\end{array} \tag{31}\] This provides a lower estimate:

**Lemma 6**.: _For the solution of (1), with initial value \((s_{0},0)\), one has \(t_{\rm cross}\geq t_{\ell}\)._

Proof.: Assume that \(t_{\rm cross}<t_{\ell}\), then \(L(t_{\ell})>0\). This is a contradiction to \(L(t_{\ell})\leq L^{*}(t_{\ell})=0\). 

Recall that Segel and Slemrod [29] introduced \[t_{SSl}:=\frac{1}{k_{1}(K_{M}+s_{0})}=\frac{1}{k_{-1}+k_{2}+k_{1}s_{0}} \tag{32}\] to estimate the duration of the fast transient. This defines the appropriate timescale at the very start, but as we show below, it cannot reflect the full transient phase. There is a slightly simplified estimate for \(t_{\ell}\): \[\begin{array}{rcl}t_{\ell}&=&t_{SSl}\frac{1}{1+\varepsilon_{SSl}}\log\left(\frac{1}{\varepsilon_{SSl}}\bigg{[}\frac{k_{1}K_{M}}{k_{2}}+\varepsilon_{SSl}\left(\frac{k_{1}K_{M}}{k_{2}}+1\right)\bigg{]}\right)\\ &\geq&t_{SSl}\frac{1}{1+\varepsilon_{SSl}}\log\left(\frac{1}{\varepsilon_{SSl}}\frac{k_{1}K_{M}}{k_{2}}\right)\\ &=&t_{SSl}\frac{1}{1+\varepsilon_{SSl}}\left(\log\left(\frac{1}{\varepsilon_{SSl}}\right)+\log\left(\frac{k_{1}K_{M}}{k_{2}}\right)\right)\\ &\geq&t_{SSl}\left(1-\varepsilon_{SSl}\right)\left(\log\left(\frac{1}{\varepsilon_{SSl}}\right)+\log\left(\frac{k_{1}K_{M}}{k_{2}}\right)\right).\end{array}\] Therefore, we may define \[t_{\ell}^{\dagger}:=t_{SSl}(1-\varepsilon_{SSl})\left(\log\left(\frac{1}{\varepsilon_{SSl}}\right)+\log\left(\frac{k_{-1}+k_{2}}{k_{2}}\right)\right) \tag{33}\] as a lower estimate for the crossing time. An asymptotic expansion of the right-hand side yields \[t_{\ell}^{\dagger}\sim t_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\frac{k_{-1}+k_{2}}{k_{2}}+o(1)\right]. \tag{34}\] For the slow timescale, chosen (in consistency with the choice of the small parameter) as \(\tau=\varepsilon_{SSl}t\), the above observations yield a lower estimate with leading term of order \(\varepsilon_{SSl}\cdot\log(1/\varepsilon_{SSl})\) in the asymptotic expansion.

**Remark 5**.: At this point, it seems appropriate to reconsider the notion "onset of slow dynamics". For system (1), we noted that the distinguished time \(t_{\rm cross}\) (see, **Lemma 1** and the following ones) is a natural choice from a biochemical perspective. But singular perturbation theory does not provide a precisely defined time for the onset of the slow phase.
The following two observations are based on a fundamental criterion for slow dynamics, namely closeness of the solution to the QSS manifold: 1. Equation (30) shows that \(|L^{*}(t_{SSl})/s_{0}|\approx\varepsilon_{SSl}\exp(-1)\). But, since \(|L/s_{0}|\) can always be estimated above by terms of order \(\varepsilon_{SSl}\) [see, (23)], this inequality does not indicate closeness to the QSS manifold. Thus, the onset of slow dynamics cannot be assumed near \(t_{SSl}\), and the Segel-Slemrod time \(t_{SSl}\) seriously underestimates the duration of the transient phase. 2. One may replace the condition \(L^{*}(t)=0\) from (30) by an order \(\varepsilon_{SSl}\) closeness condition, requiring \(L^{*}(t)/s_{0}\geq-M\cdot\varepsilon_{SSl}^{2}\), with some positive constant \(M\), as the defining characteristic of the slow phase. A provisional definition of \(t_{ons}\) by \(L^{*}(t_{ons})/s_{0}=-M\cdot\varepsilon_{SSl}^{2}\) yields \(L^{*}(t)/s_{0}\geq-M\cdot\varepsilon_{SSl}^{2}\) for \(t_{ons}\leq t\leq t_{\rm cross}\). Similar to the derivation of (31), one obtains an estimate \[t_{ons}=t_{SSl}\log(M^{*}/\varepsilon_{SSl})+\cdots\] (35) with some constant \(M^{*}\), and the dots representing higher order terms. Thus, we have the same lowest order asymptotic term \(\log(1/\varepsilon_{SSl})\) as for \(t_{\ell}^{\dagger}\). We proceed to estimate initial substrate depletion: **Proposition 1**.: _One has the inequality_ \[\frac{s(t_{\ell}^{\dagger})}{s_{0}}\leq\frac{k_{-1}}{k_{-1}+k_{2}}+\frac{k_{2}}{k _{-1}+k_{2}}\exp\left(-\frac{\varepsilon_{SSl}K_{M}}{(K_{M}+s_{0})}(1- \varepsilon_{SSI})\cdot\log\left(\frac{k_{1}K_{M}}{\varepsilon_{SSI}k_{2}} \right)\right).\] _Moreover, when_ \[\varepsilon_{SSI}\cdot\log\left(\frac{k_{1}K_{M}}{k_{2}\varepsilon_{SSI}} \right)<1, \tag{36}\] _then_ \[\frac{s_{0}-s_{\rm cross}}{s_{0}}\geq\frac{s_{0}-s(t_{\ell}^{\dagger})}{s_{0}} \geq\frac{k_{2}}{2k_{1}(K_{M}+s_{0})}\varepsilon_{SSI}(1-\varepsilon_{SSI}) \log\left(\frac{k_{1}K_{M}}{\varepsilon_{SSI}k_{2}}\right). \tag{37}\] Proof.: The estimate for \(s(t_{\ell}^{\dagger})\) is obtained by substitution of \(t_{\ell}^{\dagger}\) in (16). As for (37), the first inequality holds because \(s\) is decreasing with \(t\). Then one directly obtains \[\frac{s_{0}-s(t_{\ell}^{\dagger})}{s_{0}}\geq\frac{k_{2}}{k_{-1}+k_{2}}\left( 1-\exp(-\alpha)\right)\] with \[\alpha:=\frac{\varepsilon_{SSI}K_{M}}{K_{M}+s_{0}}(1-\varepsilon_{SSI})\cdot \log\left(\frac{k_{1}K_{M}}{\varepsilon_{SSI}k_{2}}\right)<\varepsilon_{SSI} \cdot\log\left(\frac{k_{1}K_{M}}{\varepsilon_{SSI}k_{2}}\right).\] Condition (36) implies that \(\alpha<1\), and the exponential series and the Leibniz criterion show that \[\exp(-\alpha)\leq 1-\alpha+\alpha^{2}/2\leq 1-\alpha/2.\] This estimate yields the second assertion. **Remark 6**.: The estimate in **Proposition 1** can be improved, subject to more restrictive assumptions on \(\varepsilon_{SSI}\). Replacing (36) by \[\varepsilon_{SSI}\cdot\log\left(\frac{k_{1}K_{M}}{\varepsilon_{SSI}k_{2}} \right)<r \tag{38}\] for some \(r\), \(0<r\leq 1\), it is straightforward to see that \[\frac{s_{0}-s_{\rm cross}}{s_{0}}\geq(1-r/2)\,\frac{k_{2}}{k_{1}}\frac{ \varepsilon_{SSI}}{K_{M}+s_{0}}(1-\varepsilon_{SSI})\log\left(\frac{k_{1}K_{ M}}{\varepsilon_{SSI}k_{2}}\right) \tag{39}\] in this case. 
This suggests a simplified asymptotic estimate for \(s_{\rm cross}\) by setting, for instance, \(r=\sqrt{\varepsilon_{SSl}}\) and keeping only lowest order terms, \[\frac{s_{0}-s_{\rm cross}}{s_{0}}\geq\frac{k_{2}}{k_{1}(K_{M}+s_{0})}\varepsilon_{SSl}\left[\log\left(\frac{1}{\varepsilon_{SSl}}\right)+\log\left(\frac{k_{1}K_{M}}{k_{2}}\right)\right]+\cdots\] In comparison, the asymptotic expansion of the right-hand side of (37) starts with \[\frac{k_{2}}{2k_{1}(K_{M}+s_{0})}\varepsilon_{SSl}(1-\varepsilon_{SSl})\log\left(\frac{k_{1}K_{M}}{\varepsilon_{SSl}k_{2}}\right)\sim\frac{k_{2}}{2k_{1}(K_{M}+s_{0})}\varepsilon_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\frac{k_{1}K_{M}}{k_{2}}\right]+\cdots \tag{40}\] It turns out below that the (removable) factor \(\frac{1}{2}\) is less problematic for estimates than the factor \(\frac{k_{2}}{k_{1}(K_{M}+s_{0})}\).

**Remark 7**.: Thus, for \((s_{0}-s_{\rm cross})/s_{0}\) one has a lower estimate by an expression asymptotic to \(\varepsilon_{SSl}\log(1/\varepsilon_{SSl})\). Notably, this estimate indicates that the widely held assumption in the literature (see, e.g. Segel & Slemrod [29]) about negligibility of the substrate depletion in the pre-QSS phase should be subject to further consideration. Moreover, upon replacing \(t_{\rm cross}\) by a differently chosen onset time \(t_{ons}\) as in (35), the argument in the proof of the Proposition, together with **Lemma 3**, shows that \[\frac{s_{0}-s(t_{ons})}{s_{0}}\geq\frac{k_{2}}{2k_{1}(K_{M}+s_{0})}\varepsilon_{SSl}\log\left(\frac{M^{*}}{\varepsilon_{SSl}}\right)+\cdots \tag{41}\] whenever \(\varepsilon_{SSl}\log\left(\frac{M^{*}}{\varepsilon_{SSl}}\right)<1\). Thus, the lowest order of the asymptotic expansion remains unchanged.

We now turn to upper bounds for \(t_{\rm cross}\), fixing an auxiliary constant \(0<q<1\). Then, \[g(s)\geq\frac{qe_{0}s_{0}}{K_{M}+qs_{0}}\mbox{ and }g^{\prime}(s)\geq\frac{e_{0}K_{M}}{(K_{M}+s_{0})^{2}}\quad\mbox{for}\quad qs_{0}\leq s\leq s_{0}.\] Therefore, \[A\geq k_{1}(qs_{0}+K_{M})=:A_{*},\] and \[B\geq k_{2}\frac{qe_{0}s_{0}}{K_{M}+qs_{0}}\cdot\frac{e_{0}K_{M}}{(K_{M}+s_{0})^{2}}=:B_{*}\] when \(qs_{0}\leq s\leq s_{0}\). Hence, for \(0\leq t\leq t_{\rm cross}\) and \(s(t)\geq qs_{0}\), one has \[\frac{dL}{dt}\geq-A_{*}L+B_{*}, \tag{42}\] and defining \(L_{*}\) by \[\frac{dL_{*}}{dt}=-A_{*}L_{*}+B_{*},\quad L_{*}(0)=-g(s_{0}),\] the usual differential inequality argument shows \(L\geq L_{*}\). Explicitly, \[L_{*}=-\left(\frac{B_{*}}{A_{*}}+g(s_{0})\right)\exp(-A_{*}t)+\frac{B_{*}}{A_{*}}.\] Define \(t_{u}=t_{u}(q)\) by \(L_{*}(t_{u}(q))=0\), thus \[t_{u}(q)=\frac{1}{k_{1}(K_{M}+qs_{0})}\,\log\left(1+C(q)\cdot\frac{1}{\varepsilon_{SSl}}\right),\quad\mbox{with}\quad C=C(q):=\frac{1}{q}\cdot\frac{(k_{-1}+k_{2}+qk_{1}s_{0})^{2}}{k_{2}(k_{-1}+k_{2})}. \tag{43}\] With the inequality \[\frac{1}{q}<C(q)<\frac{C^{*}}{q},\quad C^{*}:=C(1)=\frac{(k_{-1}+k_{2}+k_{1}s_{0})^{2}}{k_{2}(k_{-1}+k_{2})}=\frac{k_{1}(K_{M}+s_{0})^{2}}{k_{2}K_{M}}, \tag{44}\] we obtain a more convenient estimate for \(t_{u}(q)\): \[\begin{array}{rcl}t_{u}(q)&=&t_{SSl}\frac{K_{M}+s_{0}}{K_{M}+qs_{0}}\log\left(1+\frac{C(q)}{\varepsilon_{SSl}}\right)\\ &\leq&t_{SSl}\frac{1}{q}\,\log\left(1+\frac{1}{\varepsilon_{SSl}}\frac{C^{*}}{q}\right).\end{array}\] This gives rise to the upper estimate \[t_{u}^{\dagger}(q):=t_{SSl}\frac{1}{q}\,\log\left(1+\frac{1}{\varepsilon_{SSl}}\frac{C^{*}}{q}\right)\geq t_{u}(q). \tag{45}\] The involvement of the constant \(q\) is somewhat annoying, but it seems unavoidable.
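As a quick numerical illustration of the bracketing \(t_{\ell}\leq t_{\rm cross}\leq t_{u}(q)\) - a sketch that is not part of the analysis above, with illustrative rate constants, and assuming scipy is available - one can integrate (1) directly, detect the crossing \(c=g(s)\), and evaluate (31) and (43):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants and initial data (arbitrary units, assumed)
k1, km1, k2 = 1.0, 100.0, 100.0
e0, s0 = 1.0, 200.0
KM = (km1 + k2) / k1                 # Michaelis constant
eps_ssl = e0 / (KM + s0)             # Segel-Slemrod parameter

def rhs(t, y):                       # mass action system (1)
    s, c = y
    return [-k1*(e0 - c)*s + km1*c,
            k1*(e0 - c)*s - (km1 + k2 + k1*s)*c]

g = lambda s: e0*s/(KM + s)          # QSS variety (c-nullcline)

# Detect t_cross as the time where L = c - g(s) crosses zero from below
event = lambda t, y: y[1] - g(y[0])
event.direction = 1
sol = solve_ivp(rhs, [0.0, 10.0/(k2*eps_ssl)], [s0, 0.0],
                events=event, rtol=1e-10, atol=1e-12)
t_cross = sol.t_events[0][0]

# Lower bound (31) and upper bound (43) with, e.g., q = 0.9
t_ell = np.log(1.0 + (km1 + k2)/k2*(1.0 + eps_ssl)/eps_ssl) \
        / (k1*(KM + s0)*(1.0 + eps_ssl))
q = 0.9
Cq = (km1 + k2 + q*k1*s0)**2 / (q*k2*(km1 + k2))
t_u = np.log(1.0 + Cq/eps_ssl) / (k1*(KM + q*s0))

print(f"t_ell = {t_ell:.4e} <= t_cross = {t_cross:.4e} <= t_u(q) = {t_u:.4e}")
```

Consistency with **Lemma 6** and **Lemma 7** requires, of course, that the hypothesis \(s(t_{u}(q))\geq qs_{0}\) of **Lemma 7** is satisfied for the chosen parameter values.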
For later use, we note \[\log\left(1+\frac{1}{\varepsilon_{SSl}}\frac{C^{*}}{q}\right)=\log\frac{1}{\varepsilon_{SSl}}+\log\frac{C^{*}}{q}+\log\left(1+\frac{q\varepsilon_{SSl}}{C^{*}}\right)\] and obtain the asymptotic expansion \[t_{u}^{\dagger}(q)=t_{SSl}\frac{1}{q}\,\log\left(1+\frac{1}{\varepsilon_{SSl}}\frac{C^{*}}{q}\right)\sim\frac{1}{q}t_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\frac{C^{*}}{q}+o(1)\right]. \tag{46}\] Equation (43) provides an upper estimate for the crossing time, subject to an additional condition:

**Lemma 7**.: _Given \(0<q<1\), assume that the solution of (1), with initial value \((s_{0},0)\), satisfies \(s(t_{u}(q))\geq qs_{0}\). Then, \(t_{\rm cross}\leq t_{u}(q)\leq t_{u}^{\dagger}(q)\)._

Proof.: Assume that \(t_{\rm cross}>t_{u}(q)\), then \(L_{*}(t_{\rm cross})>0\) and consequently \(L(t_{\rm cross})>0\); a contradiction. 

Modulo the hypothesis of **Lemma 7**, we get an upper estimate for \(t_{\rm cross}\) which is asymptotic to \(\log(1/\varepsilon_{SSl})\), and complements the lower estimate \(t_{\ell}\) with the same asymptotics. This clarifies the asymptotic behavior of \(t_{\rm cross}\) as \(\varepsilon_{SSl}\to 0\). Still, criteria are needed to satisfy the hypothesis of the Lemma. The first step to obtain such criteria is to apply **Lemma 3** for \(t=t_{u}^{\dagger}(q)\). By straightforward calculations, one finds the first estimate in the following proposition:

**Proposition 2**.: _One has_ \[\frac{s(t_{u}^{\dagger}(q))}{s_{0}}\geq\exp\left(-\frac{1}{q}\varepsilon_{SSl}\log\left(1+\frac{1}{\varepsilon_{SSl}}\cdot\frac{C^{*}}{q}\right)\right). \tag{47}\] _Moreover, when_ \[\varepsilon_{SSl}\log\left(1+\frac{1}{\varepsilon_{SSl}}\cdot\frac{C^{*}}{q}\right)<q\] _then_ \[\frac{s_{0}-s_{\rm cross}}{s_{0}}\leq\frac{s_{0}-s(t_{u}^{\dagger}(q))}{s_{0}}\leq\frac{1}{q}\varepsilon_{SSl}\log\left(1+\frac{C^{*}}{q}\cdot\frac{1}{\varepsilon_{SSl}}\right). \tag{48}\]

Proof.: There remains estimate (48). The first inequality follows from monotonicity of \(t\mapsto s(t)\). When the stated condition holds then \[\gamma:=\frac{1}{q}\varepsilon_{SSl}\log\left(1+\frac{1}{\varepsilon_{SSl}}\cdot\frac{C^{*}}{q}\right)<1,\] and therefore \(\exp(-\gamma)>1-\gamma\) by the exponential series and the Leibniz criterion. Substitution yields the assertion. 

Analogous to the derivation of expansion (46) one obtains an expansion of the right-hand side of equation (48), up to terms of order \(o(\varepsilon_{SSl})\): \[\frac{1}{q}\varepsilon_{SSl}\log\left(1+\frac{1}{\varepsilon_{SSl}}\cdot\frac{C^{*}}{q}\right)\sim\frac{1}{q}\left[\varepsilon_{SSl}\log\frac{1}{\varepsilon_{SSl}}+\varepsilon_{SSl}\log\frac{C^{*}}{q}+\cdots\right]. \tag{49}\] Equation (47), in view of \(\lim_{x\to 0+}x\log(1/x)=0\), shows that for any fixed \(q\) the condition \(s(t_{u}^{\dagger}(q))\geq qs_{0}\) holds for sufficiently small \(\varepsilon_{SSl}\). There remains to determine usable explicit bounds for \(\varepsilon_{SSl}\) for given \(q\). We aim here at providing simple workable, rather than optimal, conditions:

**Proposition 3**.: _Let \(q\geq\frac{1}{2}\), such that_ \[4q\log(1/q)\cdot\log(4C^{*})<1.\] _Assume that_ \[\varepsilon_{SSl}<\exp(-1)\quad\mbox{and}\quad\varepsilon_{SSl}\leq\frac{9}{16}\left(q\log(1/q)\right)^{2}.\] _Then, \(s(t_{u}^{\dagger}(q))\geq qs_{0}\) and consequently \(t_{\rm cross}\leq t_{u}^{\dagger}(q)\)._

Proof.: By **Lemma 7**, it is sufficient to prove the inequality \(s(t_{u}^{\dagger}(q))\geq qs_{0}\).
By (47), this holds whenever \[\exp\left(-\frac{1}{q}\varepsilon_{SSl}\log\left(1+\frac{C^{*}}{q}\cdot\frac{1}{\varepsilon_{SSl}}\right)\right)\geq q.\] Equivalently, \[\frac{1}{q}\varepsilon_{SSl}\log\left(1+\frac{C^{*}}{q}\cdot\frac{1}{\varepsilon_{SSl}}\right)\leq\log\frac{1}{q},\] \[\varepsilon_{SSl}\log\left(1+\frac{C^{*}}{q}\cdot\frac{1}{\varepsilon_{SSl}}\right)\leq q\log\frac{1}{q}. \tag{50}\] Rewrite the left hand side as \[\begin{array}{rcl}\varepsilon_{SSl}\log\left(1+\frac{C^{*}}{q}\cdot\frac{1}{\varepsilon_{SSl}}\right)&=&\varepsilon_{SSl}\log\left(\frac{1}{\varepsilon_{SSl}}\left(\varepsilon_{SSl}+\frac{C^{*}}{q}\right)\right)\\ &=&\varepsilon_{SSl}\log\left(\frac{1}{\varepsilon_{SSl}}\right)+\varepsilon_{SSl}\log\left(\varepsilon_{SSl}+\frac{C^{*}}{q}\right)\\ &\leq&\varepsilon_{SSl}\log\left(\frac{1}{\varepsilon_{SSl}}\right)+\varepsilon_{SSl}\log\left(2\cdot\frac{C^{*}}{q}\right)\\ &\leq&\varepsilon_{SSl}\log\left(\frac{1}{\varepsilon_{SSl}}\right)+\varepsilon_{SSl}\log(4C^{*}),\end{array}\] where we used \(C^{*}\geq 1>\varepsilon_{SSl}\) and \(\frac{1}{2}\leq q<1\). In view of \(\varepsilon_{SSl}\log(1/\varepsilon_{SSl})\leq\sqrt{\varepsilon_{SSl}}\), the inequality (50) holds whenever \[\sqrt{\varepsilon_{SSl}}+\varepsilon_{SSl}\log(4C^{*})\leq q\log\frac{1}{q}. \tag{51}\] For the remainder of this proof, we abbreviate \(A:=\log(4C^{*})\) and \(B:=q\log\frac{1}{q}\). Let \(\theta\) be the positive number with \(A\theta^{2}+\theta=B\). Then, for any \(\varepsilon_{SSl}\leq\theta^{2}\), the inequality (51) holds. Now, the solution of the above quadratic equation with \(AB<1/4\), Taylor expansion and the Leibniz criterion show \[\theta=\frac{1}{2A}\left(-1+\sqrt{1+4AB}\right)\geq\frac{1}{2A}\left(-1+1+\frac{4AB}{2}-\frac{16A^{2}B^{2}}{8}\right),\] hence \[\theta\geq B(1-AB)\geq\frac{3}{4}B.\] Thus, inequality (51) holds whenever \(\varepsilon_{SSl}\leq(\frac{3}{4}B)^{2}\). 

The role of the constant \(q\) is mostly auxiliary. It serves to ensure the applicability of **Proposition 3**, but actual estimates e.g. of \(s(t_{u}^{\dagger}(q))\) will rely on **Proposition 2**.

**Example 1**.: We consider one particular setting for the purpose of illustration. Assume that \[C^{*}=\frac{(k_{-1}+k_{2}+k_{1}s_{0})^{2}}{k_{2}(k_{-1}+k_{2})}\leq 250.\] This condition covers a wide range of reaction parameters; for instance, it is satisfied whenever \(s_{0}\leq 5K_{M}\) and \(k_{-1}\leq 5k_{2}\). Then, the requirement on \(q\) in **Proposition 3** is satisfied whenever \(q\geq 0.97\). For \(q=0.97\), one finds the condition \(\varepsilon_{SSl}\leq 4.9\cdot 10^{-4}\).

Rather than \(t_{u}^{\dagger}(q)\), one may consider a slightly weaker, but more convenient estimate. Fix \(\varepsilon_{SSl}\) such that \(s(t_{u}^{\dagger}(q))\geq qs_{0}\). We will prove that the relative error upon replacing \(t_{u}^{\dagger}(q)\) by \[t_{u}^{\dagger}(1)=\frac{1}{k_{1}(K_{M}+s_{0})}\log\left(1+\frac{k_{1}(K_{M}+s_{0})^{3}}{k_{2}K_{M}e_{0}}\right)=t_{SSl}\log\left(1+\frac{C^{*}}{\varepsilon_{SSl}}\right) \tag{52}\] is approximately equal to \((1-q)\) when \(q\) approaches \(1\).

**Lemma 8**.: _One has_ \[0\leq\frac{t_{u}^{\dagger}(q)-t_{u}^{\dagger}(1)}{t_{u}^{\dagger}(q)}\leq\frac{1-q}{q}\cdot\frac{1+\log(1+C^{*}/(q\,\varepsilon_{SSl}))}{\log(1+C^{*}/(q\,\varepsilon_{SSl}))}.\]

Proof.: We abbreviate \(A=C^{*}/\varepsilon_{SSl}\) and consider the function \[q\mapsto f(q):=\frac{1}{q}\log(1+A/q),\] noting \(t_{u}^{\dagger}(q)=t_{SSl}f(q)\).
The derivative \[f^{\prime}(q)=-\frac{1}{q^{2}}\left(\log(1+A/q)+\frac{A}{A+q}\right)\] is an increasing function of \(q\). By the mean value theorem, one has \(f(q)-f(1)=(q-1)f^{\prime}(q^{*})\) for some \(q^{*}\) between \(q\) and \(1\). Hence, by monotonicity and with \(A/(A+q)<1\), \[f(q)-f(1)\leq(1-q)\left|f^{\prime}(q)\right|\leq\frac{1-q}{q^{2}}\left(\log(1+A/q)+1\right).\] The assertion follows. 

We also note an asymptotic expansion: \[\begin{array}{rcl}t_{u}^{\dagger}(1)&\sim&t_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\left(\frac{k_{1}K_{M}}{k_{2}}\left(\frac{K_{M}+s_{0}}{K_{M}}\right)^{2}\right)+o(1)\right]\\ &\sim&t_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\left(\frac{k_{-1}+k_{2}}{k_{2}}\left(\frac{k_{-1}+k_{2}+k_{1}s_{0}}{k_{-1}+k_{2}}\right)^{2}\!\!\right)+o(1)\right].\end{array} \tag{53}\] The numerical simulations underlying Figure 2 illustrate that \(t_{u}^{\dagger}(1)\) is quite a good approximation of the crossing time.

**Remark 8**.: Observe that \(t_{u}^{\dagger}(1)\) corresponds to the estimate \(T_{\text{in}}\) from Noethen & Walcher [20, Lemma 4], but with an additional factor \((s_{0}+K_{M})/K_{M}\) nested inside the logarithm. The presence of this term is relevant: the solution slows down significantly - especially in the \(c\)-direction - near the \(c\)-nullcline in regions where \(K_{M}\ll s\). In these regions, the solution will travel nearly horizontally and below the QSS manifold for an extended period of time before finally crossing. Moreover, the vanishing of \(K_{M}\) gives rise to a line of equilibrium points at \(c=e_{0}\). In this limiting case, the crossing time \(t_{\text{cross}}\) tends to infinity for any trajectory for which \(c(0)\neq e_{0}\). This fact is reflected by the term \((K_{M}+s_{0})/K_{M}\) in the expression for \(t_{u}^{\dagger}(1)\).

Finally, it may be appropriate to look at the substrate depletion during the transitory phase from a general perspective: As shown by equations (35) and (45) (setting \(q=1\)), the onset time for the slow dynamics will in any case be of the type \[t_{ons}^{*}=t_{SSl}\log\frac{M}{\varepsilon_{SSl}}+\cdots \tag{54}\] with some positive constant \(M\). With slight modifications of **Propositions 1** and **2** one arrives at \[\widehat{M}_{1}\cdot\varepsilon_{SSl}\,\log\frac{1}{\varepsilon_{SSl}}+\cdots\leq\frac{s_{0}-s(t_{ons}^{*})}{s_{0}}\leq\widehat{M}_{2}\cdot\varepsilon_{SSl}\,\log\frac{1}{\varepsilon_{SSl}}+\cdots, \tag{55}\] with suitable constants \(\widehat{M}_{i}\). Thus we have the asymptotic order \(\varepsilon_{SSl}\,\log(1/\varepsilon_{SSl})\) for the relative initial substrate depletion.

### Third Step: Error estimates for the approximation

We now turn toward global error estimates for the reduction. As in the previous subsection, we will express most estimates in terms of the Segel-Slemrod parameter \(\varepsilon_{SSl}\). For \(t\geq t_{\text{cross}}\), we consider the familiar Michaelis-Menten equation, augmented by an error term. We start from \[\dot{s}=-k_{1}e_{0}s+(k_{-1}+k_{1}s)g(s)+(k_{-1}+k_{1}s)(c-g(s)). \tag{56}\]
**Lemma 9**.: _For all \(t\geq t_{\rm cross}\), the \(s\) entry of the solution of (1) with initial value \((s_{0},0)\) satisfies_ \[\dot{s} \geq -\frac{k_{1}k_{2}e_{0}s}{k_{-1}+k_{2}+k_{1}s}; \tag{57}\] \[\dot{s} \leq -\frac{k_{1}k_{2}e_{0}s}{k_{-1}+k_{2}+k_{1}s}\quad+\quad\tfrac{1}{\sqrt{2}}k_{1}e_{0}s_{0}\cdot\left(\frac{k_{-1}+k_{1}s_{0}}{k_{-1}+k_{2}+k_{1}s_{0}}\right)\cdot\left(\frac{k_{1}k_{2}e_{0}}{(k_{-1}+k_{2})^{2}}\right)\ \ =:\ \ U(s).\]

Proof.: For the first inequality note that \(c-g(s)\geq 0\) for \(t\geq t_{\rm cross}\). For the second inequality, using (21) with \(L(t_{\rm cross})=0\), one obtains \[\frac{L^{2}}{s_{0}^{2}}\leq\frac{1}{2}\,\left(\varepsilon_{MM}\varepsilon_{SSl}\right)^{2}\cdot\left(1-\exp\left[-(k_{-1}+k_{2})(t-t_{\rm cross})\right]\right)\leq\frac{1}{2}\,\left(\varepsilon_{MM}\varepsilon_{SSl}\right)^{2} \tag{58}\] for all \(t\geq t_{\rm cross}\), thus one has \(L/s_{0}\leq\frac{1}{\sqrt{2}}\varepsilon_{SSl}\varepsilon_{MM}\), with \[\dot{s}=-k_{1}e_{0}s+(k_{1}s+k_{-1})g(s)+(k_{1}s+k_{-1})L\leq-\frac{k_{2}e_{0}s}{K_{M}+s}+(k_{-1}+k_{1}s_{0})\cdot\tfrac{1}{\sqrt{2}}\varepsilon_{SSl}\varepsilon_{MM}\,s_{0}=U(s).\] 

Figure 2: **Numerical simulations indicate that \(t_{u}^{\dagger}(1)\), defined in (52), is a reasonable estimation of \(t_{\rm cross}\) when \(e_{0}\ll K_{M}\).** In all panels, \(e_{0}\in[0.025,0.05,0.075,0.1,0.25,0.5,0.75,1.0,2.5,5.0,7.5,10]\), \(k_{1}=1.0\), \(k_{2}=k_{-1}=100\), thus \(K_{M}=200\), and \(\sigma=s_{0}/K_{M}\). The solid black diamonds are the numerically estimated crossing times. The densely dashed line is obtained from (52). The dotted line is obtained from (34). Top Left: \(s_{0}=2\). Top Right: \(s_{0}=20\). Bottom Left: \(s_{0}=200\). Bottom Right: \(s_{0}=2000\). Observe the noticeable difference between (53) and (34) when \(s_{0}\) is much larger than \(K_{M}\). This is due to the difference in the constant terms of the expansions. One also sees that the lower estimate \(t_{\ell}^{\dagger}\) from (34) is worse than the upper estimate; compare Remark 2.

**Lemma 10**.: _Let \(a,c>0\) and \(0\leq b<c\), and consider the solutions of the initial value problems_ \[\dot{x}=-\frac{cx}{x+a}+b,\quad x(0)=x_{0};\qquad\dot{y}=-\frac{cy}{y+a},\quad y(0)=x_{0};\qquad\dot{z}=-\frac{cz}{z+a},\quad z(0)=z_{0}\geq x_{0},\] _with \(x_{0}>x^{*}:=ab/(c-b)\). Then, for all \(t\geq 0\):_

1. _one has_ \(y(t)\leq x(t)\)_;_
2. _one has_ \(x(t)-y(t)\leq\frac{b(x_{0}+a)^{2}}{ac}\left(1-\exp\left(-\frac{ac}{(x_{0}+a)^{2}}\,t\right)\right)\)_;_
3. _one has_ \(0\leq z(t)-y(t)\leq(z_{0}-x_{0})\exp\left(-\frac{ac}{(z_{0}+a)^{2}}\,t\right)\)_._

Proof.: Part (a) follows by a standard comparison argument, since the right-hand sides of the equations for \(x\) and \(y\) differ by the positive constant \(b\). Turning to the proof of part (b), note that \(x^{*}=ab/(c-b)\) is the only stationary point of the differential equation for \(x\). So, the solution with initial value \(x_{0}>x^{*}\) is strictly decreasing and converges to this point. Now, we have \[\begin{array}{rcl}\frac{d}{dt}(x-y)&=&-c\left(\frac{x}{x+a}-\frac{y}{y+a}\right)+b\\ &=&-c\frac{a(x-y)}{(x+a)(y+a)}+b\\ &\leq&-\frac{ac}{(x_{0}+a)^{2}}\left(x-y\right)+b.\end{array}\] Compare this with the solution of the initial value problem \[\dot{v}=-\frac{ac}{(x_{0}+a)^{2}}v+b,\quad v(0)=0\] to obtain the assertion.
As for part (c), the first inequality is immediate, while the second is verified by a variant of the previous argument, with the inequality \[\begin{array}{rcl}\frac{d}{dt}(z-y)&=&-c\left(\frac{z}{z+a}-\frac{y}{y+a}\right)\\ &\leq&-\frac{ac}{(z_{0}+a)^{2}}\left(z-y\right).\end{array}\] 

Evaluating the constant in part (b) of **Lemma 10** with \(a=K_{M}\), \(b=\frac{1}{\sqrt{2}}e_{0}s_{0}\left(\frac{k_{2}e_{0}}{K_{M}^{2}}\right)\frac{K_{S}+s_{0}}{K_{M}+s_{0}}\), \(c=k_{2}e_{0}\) and \(x_{0}=\widetilde{s}\), we obtain \[\frac{1}{\sqrt{2}}s_{0}\cdot\frac{e_{0}}{K_{M}}\cdot\frac{K_{S}+s_{0}}{K_{M}+s_{0}}\cdot\left(\frac{K_{M}+\widetilde{s}}{K_{M}}\right)^{2}\leq\frac{1}{\sqrt{2}}s_{0}\cdot\frac{e_{0}}{K_{M}}\cdot\frac{K_{S}+s_{0}}{K_{M}+s_{0}}\left(\frac{K_{M}+s_{0}}{K_{M}}\right)^{2}=\frac{1}{\sqrt{2}}s_{0}\cdot\varepsilon_{RS}\cdot\frac{(K_{M}+s_{0})(K_{S}+s_{0})}{K_{M}^{2}}.\] Choosing a natural scaling (and omitting the factor \(\frac{1}{\sqrt{2}}\)), the parameter \[\varepsilon_{L}:=\varepsilon_{RS}\frac{(K_{M}+s_{0})(K_{S}+s_{0})}{K_{M}^{2}}=\frac{e_{0}}{K_{M}}\frac{(K_{M}+s_{0})(K_{S}+s_{0})}{K_{M}^{2}}=\frac{k_{1}e_{0}}{k_{-1}+k_{2}}\frac{(k_{-1}+k_{2}+k_{1}s_{0})(k_{-1}+k_{1}s_{0})}{(k_{-1}+k_{2})^{2}} \tag{63}\] provides an upper estimate for the long-term accuracy of the reduction. Note that the index indicates that the parameter was obtained from a linear differential inequality.

#### 3.3.1 Estimates for the slow dynamics: Special case

In the application of the QSSA, it is generally assumed that there is an initial transient during which the substrate concentration remains approximately constant or changes slowly while the complex concentration builds up. This assumption - that the substrate concentration does not change significantly during this initial transient - is known as the reactant stationary approximation (RSA) [16, 23]. The general assumption is that \(s\approx s_{0}\) from \(t=0\) until \(t_{\text{cross}}\). However, this is a qualitative estimate. A more careful analysis is required in order to formulate a quantitative assertion concerning the validity of the RSA. We first determine estimates given the special assumption that the substrate concentration at the start of the slow phase is exactly known. In view of **Lemma 9** and **Lemma 10**, we then obtain

**Proposition 4**.: _Denote by \(s(t)\) the first component of the solution of (1) with initial value \((s_{0},0)\) at \(t=0\). Moreover, let \(\widetilde{t}\geq t_{\text{cross}}\), \(\widetilde{s}:=s(\widetilde{t})\) and define \(\underline{s}\), resp. \(\overline{s}\) by_ \[\begin{array}{rcll}\dot{\underline{s}}&=&-\frac{k_{2}e_{0}\underline{s}}{K_{M}+\underline{s}}&,\quad\underline{s}(\widetilde{t})=\widetilde{s};\\ \dot{\overline{s}}&=&-\frac{k_{2}e_{0}\overline{s}}{K_{M}+\overline{s}}+\tfrac{1}{\sqrt{2}}k_{1}e_{0}s_{0}\cdot\left(\frac{k_{-1}+k_{1}s_{0}}{k_{-1}+k_{2}+k_{1}s_{0}}\right)\cdot\left(\frac{k_{1}k_{2}e_{0}}{(k_{-1}+k_{2})^{2}}\right)&,\quad\overline{s}(\widetilde{t})=\widetilde{s}.\end{array} \tag{64}\] _Then, for all \(t\geq\widetilde{t}\), we have_ \[\underline{s}(t)\leq s(t)\leq\overline{s}(t) \tag{65}\] _and_ \[\overline{s}(t)-s(t)\leq s_{0}\cdot\varepsilon_{L};\quad s(t)-\underline{s}(t)\leq s_{0}\cdot\varepsilon_{L}. \tag{66}\]

Proof.: To prove (65), use **Lemma 9**.
Moreover, parts (a) and (b) of **Lemma 10** show that \[\overline{s}(t)-\underline{s}(t)\leq\frac{1}{\sqrt{2}}s_{0}\varepsilon_{RS}\cdot\frac{K_{S}+s_{0}}{K_{M}+s_{0}}\cdot\left(\frac{K_{M}+\widetilde{s}}{K_{M}}\right)^{2}\leq\frac{1}{\sqrt{2}}s_{0}\varepsilon_{RS}\cdot\frac{(K_{M}+s_{0})(K_{S}+s_{0})}{K_{M}^{2}}<s_{0}\,\varepsilon_{L},\] which in combination with (65) proves (66). 

There is a different approach to upper and lower estimates for \(s\) in the slow regime, based on **Lemma 2**, with the parameter \(\delta^{*}\) defined in (11). We also utilize the explicit solution of the Michaelis-Menten equation via the Lambert W function, as obtained in Schnell & Mendoza [26].

**Proposition 5**.: _Denote by \(s(t)\) the first component of the solution of (1) with initial value \((s_{0},0)\) at \(t=0\). Moreover, let \(\widetilde{t}\geq t_{\mathrm{cross}}\), \(\widetilde{s}:=s(\widetilde{t})\), and \(1>\delta\geq\delta^{*}\)._

1. _Define_ \(\underline{s}\)_, resp._ \(\overline{s}\) _by_ \[\begin{array}{lll}\dot{\underline{s}}&=&-\frac{k_{2}e_{0}\underline{s}}{K_{M}+\underline{s}}&,&\underline{s}(\widetilde{t})=\widetilde{s};\\ \dot{\overline{s}}&=&-(1-\delta)\frac{k_{2}e_{0}\overline{s}}{K_{M}+\overline{s}}&,&\overline{s}(\widetilde{t})=\widetilde{s}.\end{array}\] (67) _Then, for all_ \(t\geq\widetilde{t}\)_, we have_ \[\underline{s}(t)\leq s(t)\leq\overline{s}(t).\] (68)
2. _Explicitly, setting_ \[A:=\frac{\widetilde{s}}{K_{M}}\exp\frac{\widetilde{s}}{K_{M}},\quad T:=\frac{k_{2}e_{0}(t-\widetilde{t})}{K_{M}}\] _we obtain_ \[\begin{array}{lll}\underline{s}(t)&=&K_{M}\,W(A\exp(-T))\\ \overline{s}(t)&=&K_{M}\,W(A\exp(-T)\exp(\delta T)).\end{array}\] (69)

We turn to estimating \(\overline{s}-\underline{s}\), using basic properties of the Lambert \(W\) function that can for instance be found in Mezo [18, Section 1].

**Lemma 11**.: _With the notation introduced in_ **Proposition 5**_, one has_ \[\begin{array}{lllll}\overline{s}-\underline{s}&\leq&K_{M}\log\left(1+W(Ae^{-T})(e^{\delta T}-1)\right)&\leq&K_{M}W(Ae^{-T})(e^{\delta T}-1)\\ \overline{s}-\underline{s}&\leq&K_{M}\log\left(1+Ae^{-T}(e^{\delta T}-1)\right)&\leq&K_{M}Ae^{-T}(e^{\delta T}-1)\end{array} \tag{70}\] _for all \(t\geq\widetilde{t}\)._

Proof.: Let us abbreviate \(\alpha:=Ae^{-T}\) and \(\beta:=\alpha e^{\delta T}\). Then, with a known identity for \(W^{\prime}\) and monotonicity of \(W\), one sees \[\begin{array}{rcl}K_{M}^{-1}\left(\overline{s}-\underline{s}\right)&=&\int_{\alpha}^{\beta}W^{\prime}(x)\,\mathrm{d}x=\int_{\alpha}^{\beta}\frac{\mathrm{d}x}{x+\exp(W(x))}\\ &\leq&\int_{\alpha}^{\beta}\frac{\mathrm{d}x}{x+\exp(W(\alpha))}=\log\left(x+\exp(W(\alpha))\right)\Big{|}_{\alpha}^{\beta}\\ &=&\log\left(\frac{1+\beta e^{-W(\alpha)}}{1+\alpha e^{-W(\alpha)}}\right)=\log\left(1+\frac{\alpha e^{-W(\alpha)}(e^{\delta T}-1)}{1+\alpha e^{-W(\alpha)}}\right)\\ &\leq&\log\left(1+\alpha e^{-W(\alpha)}(e^{\delta T}-1)\right)\\ &=&\log\left(1+W(\alpha)(e^{\delta T}-1)\right),\end{array}\] where we have used the defining identity for \(W\) in the last step. This shows the first inequality, and the remaining ones follow from \(0\leq W(x)\leq x\) and \(\log(1+x)\leq x\) when \(x\geq 0\). 

Presently, we will use only the last inequality from (70) to obtain a global error estimate.
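Since (69) expresses \(\underline{s}\) and \(\overline{s}\) through the Lambert \(W\) function, both are easy to evaluate numerically; scipy's `lambertw` implements the principal branch, which is the relevant one here. The following sketch (with illustrative rate constants, and with \(\widetilde{s}\) and \(\delta\) simply assumed rather than computed from (11)) evaluates the bracketing solutions and the last bound in (70):

```python
import numpy as np
from scipy.special import lambertw

# Illustrative constants (arbitrary units); s_tilde and delta are assumptions
k1, km1, k2 = 1.0, 100.0, 100.0
e0, s0 = 1.0, 100.0
KM = (km1 + k2) / k1
s_tilde = 0.9 * s0        # assumed value of s at t_tilde >= t_cross
delta = 0.05              # assumed admissible delta >= delta*

A = (s_tilde / KM) * np.exp(s_tilde / KM)

for t in [0.0, 0.5, 1.0, 2.0, 5.0]:                  # elapsed time t - t_tilde
    T = k2 * e0 * t / KM
    s_lo = KM * lambertw(A * np.exp(-T)).real                      # (69), lower
    s_hi = KM * lambertw(A * np.exp(-T) * np.exp(delta * T)).real  # (69), upper
    gap_bound = KM * A * np.exp(-T) * (np.exp(delta * T) - 1.0)    # last bound in (70)
    print(f"t={t:4.1f}: lower={s_lo:8.4f}  upper={s_hi:8.4f}  "
          f"gap={s_hi - s_lo:.5f} <= {gap_bound:.5f}")
```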
**Proposition 6**.: _With the assumptions and notation from Proposition 5, for all \(t\geq\widetilde{t}\) the following inequalities hold:_ \[0\leq s-\underline{s}\leq\overline{s}-\underline{s}\leq s_{0}\,\exp\left(\frac{s_{0}}{K_{M}}-1\right)\cdot\frac{\delta}{1-\delta}\leq s_{0}\,\exp\left(\frac{s_{0}}{K_{M}}-1\right)\cdot\frac{\delta^{*}}{1-\delta^{*}}=:s_{0}\cdot\varepsilon_{W}. \tag{71}\]

Proof.: By elementary arguments, the function \(T\mapsto e^{-T}(e^{\delta T}-1)\), with derivative \(T\mapsto e^{-T}\left(1-(1-\delta)e^{\delta T}\right)\), attains its maximum at \(T^{*}=-\log(1-\delta)/\delta\geq 1\), with value \[\widetilde{s}\exp(\widetilde{s}/K_{M})\exp(-T^{*})\cdot\frac{\delta}{1-\delta}\leq s_{0}\exp(s_{0}/K_{M})\exp(-1)\cdot\frac{\delta}{1-\delta}.\] The assertion follows. 

**Remark 10**.: The index in \(\varepsilon_{W}\) is a reminder of its derivation via the Lambert \(W\) function. This may not be a particularly user-friendly parameter, but one can replace it by more convenient estimates. For instance, in case \(\varepsilon_{RS}\leq 0.1\), by (12) one may choose \(\delta\leq\frac{10}{9}\varepsilon_{RS}\), and proceed to obtain the estimate \[\varepsilon_{W}\leq\frac{5}{4}\exp\left(\frac{s_{0}}{K_{M}}-1\right)\cdot\varepsilon_{RS}.\]

**Remark 11**.: For all \(t\geq\widetilde{t}\), we thus obtained the estimates \(|s-\underline{s}|\leq s_{0}\varepsilon_{W}\) and \(|s-\underline{s}|\leq s_{0}\varepsilon_{L}\). Either of these may be better, given the circumstances. Both estimates are rigorous. However, we have to note that their derivation involves some simplified estimates, so they may not be optimal. Indeed, extensive numerical experiments point to an upper estimate \[\varepsilon_{opt}:=\frac{K_{S}+s_{0}}{K_{M}+s_{0}}\varepsilon_{SSl}\leq\varepsilon_{SSl}, \tag{72}\] but with our toolbox a rigorous proof for this conjecture does not seem possible (see, Figure 3).

#### 3.3.2 Estimates for the slow dynamics: General case

Under the hypothesis that \(\widetilde{t}\) and \(\widetilde{s}=s(\widetilde{t})\) are known exactly, we obtained upper estimates for the approximation error. However, this idealizing assumption does not reflect the real-life setting of parameter identification for the RSA. Due to lack of complete information, experimental scientists effectively apply the Michaelis-Menten equation with some estimate \(s^{*}\) for \(s(\widetilde{t})\) valid under the RSA conditions [16]. This discrepancy must be accounted for by an additional term in the error estimate. Define \(\xi\) by \[\dot{\xi}=-\frac{k_{1}k_{2}e_{0}\xi}{k_{-1}+k_{2}+k_{1}\xi}=-\frac{k_{2}e_{0}\xi}{K_{M}+\xi},\quad\xi(\widetilde{t})=s^{*}\in(0,\,s_{0}]. \tag{73}\]

**Proposition 7**.: _Denote by \(s(t)\) the first component of the solution of (1) with initial value \((s_{0},0)\) at \(t=0\). Moreover, let \(\widetilde{t}\geq t_{\text{cross}}\), \(\widetilde{s}:=s(\widetilde{t})\). Then, with \(\underline{s}\) from (64) [or from (67)], for all \(t\geq\widetilde{t}\), we have_ \[|\xi-\underline{s}|\leq|s^{*}-\widetilde{s}| \tag{74}\] _and_ \[|\xi-s|\leq|s^{*}-\widetilde{s}|+s_{0}\cdot\varepsilon_{L}, \tag{75}\] _as well as_ \[|\xi-s|\leq|s^{*}-\widetilde{s}|+s_{0}\cdot\varepsilon_{W}. \tag{76}\]

Proof.: For the first inequality use **Lemma 10(c)**. For the second, note that \[|\xi-s|\leq|\xi-\underline{s}|+|\underline{s}-s|\leq|s^{*}-\widetilde{s}|+|\underline{s}-s|\] and use **Propositions 4** and 6, respectively. 

**Remark 12**.: We should make the following observations for the above proposition:
1. **Lemma 10** includes an exponentially decaying factor for the first term in the estimate. For practical experimental applications, this might be of little relevance for some enzyme catalyzed reactions, since in the scenario under consideration here this exponential decay will be slow and the initial transient will be fast.
2. The special case of (73) with \(s^{*}=s_{0}\) seems to reflect the implicit assumption underlying many experiments, i.e., that there is no discernible loss in the transitory phase before the starting time \(\widetilde{t}\) for measurements.

Figure 3: **Numerical simulations suggest that (72) provides an upper bound on the normalized error between the \(s\)-component of the mass action equations and the sQSSA for the complete time course when initial conditions lie on the QSS variety, \(c=g(s)\).** In both panels, the black curve is the numerically-estimated normalized absolute error, \(|\xi-s|/s_{0}\). The red line is \(\varepsilon_{\rm opt}\). On the \(x\)-axis, \(t\) has been mapped to \(t_{\infty}=1-1/\log(t+e)\), and initial conditions for the mass action equations and the sQSSA satisfy \((s,c)(0)=(s,c)(t_{\rm cross})\) and \(\xi(0)=s(t_{\rm cross})\), respectively (\(t_{\rm cross}\) is estimated numerically). Top: The parameters used in the simulation are (in arbitrary units): \(s_{0}=10.0\), \(e_{0}=10.0\), \(k_{1}=2.0\), \(k_{2}=100.0\) and \(k_{-1}=100.0\). Bottom: The parameters used in the simulation are (in arbitrary units): \(s_{0}=100.0\), \(e_{0}=1.0\), \(k_{1}=2.0\), \(k_{2}=100.0\) and \(k_{-1}=100.0\).

With the obvious (and to some extent controllable) choice \(\widetilde{t}=t_{u}^{\dagger}(q)\), we obtain with **Proposition 2**:

**Corollary 1**.: _Let \(0<q<1\) and let \(\varepsilon_{SSl}\) satisfy the hypotheses of_ **Proposition 3**_. Then, for all \(t\geq t_{u}^{\dagger}(q)\), one has_ \[\begin{array}{rcl}\frac{|\xi-s|}{s_{0}}&\leq&\varepsilon_{SSl}\cdot\frac{1}{q}\log\left(1+\frac{1}{q}\frac{k_{1}(K_{M}+s_{0})^{2}}{k_{2}K_{M}}\frac{1}{\varepsilon_{SSl}}\right)+\varepsilon_{L};\\ \frac{|\xi-s|}{s_{0}}&\leq&\varepsilon_{SSl}\cdot\frac{1}{q}\log\left(1+\frac{1}{q}\frac{k_{1}(K_{M}+s_{0})^{2}}{k_{2}K_{M}}\frac{1}{\varepsilon_{SSl}}\right)+\varepsilon_{W}.\end{array} \tag{77}\]

#### 3.3.3 Assuming the standard quasi-steady-state approximation starts at \(t=0\)

In experiments, it is generally assumed that the substrate concentration does not change during the initial fast transient. Here we consider a different scenario. We assume that the sQSSA is applicable from \(t=0\). This reflects a widely used scenario in the literature, where one considers the reduced Michaelis-Menten equation with initial value \(s_{0}\) at \(t=0\) (see the usual choice of initial value for (2) in the literature), and compares its solution to the true solution. This choice is compatible with the perspective of singular perturbation theory, because the relevant solution of (1) starts on the critical manifold with \(c=0\). Experimentally it is not an unreasonable approximation, particularly for fast-acting enzymes, like carbonic anhydrase. We will show that for this scenario the approximation error is bounded by a term of order \(\varepsilon_{SSl}\). More precisely:

**Proposition 8**.: _Let \(z(t)\) satisfy_ \[\dot{z}=-\frac{k_{2}e_{0}z}{K_{M}+z},\quad z(0)=s_{0},\] _and denote by \(s(t)\) the first component of the solution of (1) with initial value \((s_{0},0)\) at \(t=0\)._
1. _Then, for all \(t\) with \(0\leq t\leq t_{\mathrm{cross}}\), one has \(z(t)\geq s(t)\) and_ \[\frac{z-s}{s_{0}}\leq\varepsilon_{SSl}\cdot\left(\frac{s_{0}+K_{S}}{s_{0}+K_{M}}\right)\exp\left(k_{1}s_{0}\varepsilon_{SSl}t_{\mathrm{cross}}\right).\] (78)
2. _Let \(0<q<1\) and \(\varepsilon_{SSl}\) satisfy the hypotheses of_ **Proposition 3**_. Then, for all \(t\) with \(0\leq t\leq t_{\mathrm{cross}}\), one has_ \[\frac{z-s}{s_{0}}\leq\varepsilon_{SSl}\cdot\frac{1}{q}\!\left(\frac{s_{0}+K_{S}}{s_{0}+K_{M}}\right)\exp\left(\frac{1}{q}k_{1}s_{0}t_{SSl}\cdot\varepsilon_{SSl}\log\left(1+\frac{1}{\varepsilon_{SSl}}\frac{C^{*}}{q}\right)\right),\] (79) _with \(C^{*}\) from equation (44)._

Proof.: Let \[f(s,c):=-k_{1}(e_{0}-c)s+k_{-1}c,\] and recall that \[f(s,\,g(s))=-\frac{k_{2}e_{0}s}{K_{M}+s},\] with \(g(s)=e_{0}s/(K_{M}+s)\) defined in (5). As in the proof of **Lemma 10**, one finds \[g(z)-g(s)=e_{0}\!\left(\frac{z}{K_{M}+z}-\frac{s}{K_{M}+s}\right)=e_{0}K_{M}\cdot\frac{z-s}{(K_{M}+s)(K_{M}+z)}.\] Now, limit the temporal domain to \(t\in[0,t_{\rm cross}]\), which implies \(L:=c-g(s)\leq 0\) by **Lemma 1**, and furthermore \[\dot{s}=f(s,c)=-k_{1}e_{0}s+(k_{-1}+k_{1}s)c\leq-k_{1}e_{0}s+(k_{-1}+k_{1}s)g(s)=-\frac{k_{2}e_{0}s}{K_{M}+s}.\] Therefore, \(z\geq s\) for \(t\leq t_{\rm cross}\) by the usual differential inequality argument. With \(c=g(s)+L\) one now has \[\begin{array}{rcl}\frac{d}{dt}(z-s)&=&f(z,g(z))-f(s,g(s)+L)\\ &=&f(z,g(z))-f(s,g(z))+f(s,g(z))-f(s,g(s)+L)\\ &=&-k_{1}(e_{0}-g(z))(z-s)+k_{1}(K_{S}+s)(g(z)-g(s)-L)\\ &\leq&-k_{1}(e_{0}-g(z))(z-s)+k_{1}(K_{S}+z)(g(z)-g(s)-L)\\ &=&-k_{1}(e_{0}-g(z))(z-s)-k_{1}(K_{S}+z)L+k_{1}(K_{S}+z)\cdot\frac{e_{0}K_{M}\cdot(z-s)}{(K_{M}+s)(K_{M}+z)}\\ &\leq&-k_{1}(e_{0}-g(z))(z-s)-k_{1}(K_{S}+z)L+k_{1}e_{0}(z-s),\end{array}\] which leaves us with: \[\frac{d}{dt}(z-s)\leq k_{1}g(z)(z-s)-k_{1}(K_{S}+s_{0})L,\quad\text{for all $t\leq t_{\rm cross}$}. \tag{83}\] Since \(g(z)\leq g(s_{0})=s_{0}\varepsilon_{SSl}\) by (14), this ultimately implies \[\frac{d}{dt}(z-s)\leq\varepsilon_{SSl}k_{1}s_{0}(z-s)-k_{1}(K_{S}+s_{0})L,\quad\text{for all $t\leq t_{\rm cross}$}. \tag{84}\] Now, for \(t\leq t_{\rm cross}\), one has with \(f(s,c)\leq 0\) and \(g^{\prime}(s)\geq 0\) and (13): \[\frac{dL}{dt}=-k_{1}(K_{M}+s)L-g^{\prime}(s)\cdot f(s,c)\geq-k_{1}(K_{M}+s)L\geq-k_{1}(K_{M}+s_{\rm cross})L,\] and therefore \[L\geq-\frac{e_{0}s_{0}}{K_{M}+s_{0}}\,\exp\left(-k_{1}(K_{M}+s_{\rm cross})t\right).\] Consequently, we obtain: \[\frac{d}{dt}(z-s)\leq\varepsilon_{SSl}k_{1}s_{0}\bigg{[}(z-s)+(K_{S}+s_{0})\exp(-k_{1}(K_{M}+s_{\rm cross})t)\bigg{]},\quad\text{for all $t\leq t_{\rm cross}$}. \tag{85}\] Solving the corresponding linear differential equation yields for all \(t\leq t_{\rm cross}\): \[\begin{array}{rcl}\frac{z-s}{s_{0}}&\leq&\varepsilon_{SSl}\cdot\bigg{(}\frac{K_{S}+s_{0}}{K_{M}+s_{\rm cross}+\varepsilon_{SSl}s_{0}}\bigg{)}\bigg{(}\exp(k_{1}\varepsilon_{SSl}s_{0}t)-\exp(-k_{1}(K_{M}+s_{\rm cross})t)\bigg{)}\\ &\leq&\varepsilon_{SSl}\cdot\frac{K_{S}+s_{0}}{K_{M}+s_{\rm cross}}\exp(k_{1}\varepsilon_{SSl}s_{0}t)\\ &\leq&\varepsilon_{SSl}\cdot\frac{K_{S}+s_{0}}{K_{M}+s_{\rm cross}}\exp(k_{1}\varepsilon_{SSl}s_{0}t_{\rm cross}),\end{array} \tag{86}\] which finishes the proof of part (a). For part (b), we use \(t_{\rm cross}\leq t_{u}^{\dagger}(q)\) [see, (45)], as well as \(s_{\rm cross}\geq qs_{0}\) due to **Lemma 7**. 

Numerical results confirm that (86) yields a rather sharp bound on the normalized error accumulated as the phase-plane trajectory approaches the QSS manifold (see, Figure 4).
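As a numerical illustration of this proposition - a sketch, not the code underlying Figure 4, with parameter values chosen to match the bottom panels of Figures 4 and 7 - one can integrate (1) and the reduced equation from the same initial substrate value and compare the normalized error with \(\varepsilon_{opt}\) from (72):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters matching the bottom panels of Figures 4 and 7 (arbitrary units)
k1, km1, k2 = 2.0, 100.0, 100.0
e0, s0 = 1.0, 100.0
KM = (km1 + k2) / k1
KS = km1 / k1
eps_ssl = e0 / (KM + s0)

def mass_action(t, y):               # system (1)
    s, c = y
    return [-k1*(e0 - c)*s + km1*c,
            k1*(e0 - c)*s - (km1 + k2 + k1*s)*c]

def sqssa(t, z):                     # reduced Michaelis-Menten equation
    return [-k2*e0*z[0]/(KM + z[0])]

t_end = 5.0*(KM + s0)/(k2*e0)        # a few slow time units
t_eval = np.linspace(0.0, t_end, 2000)
full = solve_ivp(mass_action, [0.0, t_end], [s0, 0.0],
                 t_eval=t_eval, rtol=1e-10, atol=1e-12)
red = solve_ivp(sqssa, [0.0, t_end], [s0],
                t_eval=t_eval, rtol=1e-10, atol=1e-12)

err = np.max(np.abs(red.y[0] - full.y[0]))/s0
eps_opt = eps_ssl*(KS + s0)/(KM + s0)     # conjectured bound (72)
print(f"max |z - s|/s0 = {err:.3e},  eps_opt = {eps_opt:.3e}")
```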
**Remark 13**.: The following observations should be made about our results:

1. Keeping only the lowest order term in (79), one has for \(0\leq t\leq t_{\mathrm{cross}}\) that \[\frac{z-s}{s_{0}}\sim\varepsilon_{SSl}\cdot\frac{1}{q}\bigg{(}\frac{K_{S}+s_{0}}{K_{M}+s_{0}}\bigg{)}+o(\varepsilon_{SSl})=:\frac{1}{q}\varepsilon_{opt}+o(\varepsilon_{SSl})\] (87) with \(\varepsilon_{opt}\) defined in (72).
2. Numerical simulations confirm that (87) is a reliable estimation of the normalized error between \(z\) and \(s\) when \(t\leq t_{\mathrm{cross}}\) (see, Figure 5).
3. With **Propositions 5**, 6 and 7 one sees that \(|z-s|/s_{0}\) is of order \(\varepsilon_{SSl}\) over the whole time range. Numerical simulations suggest that \(\varepsilon_{opt}\) is a global upper bound (see, Figures 5 and 7).

**Remark 14**.: The distinguishing difference between \(\varepsilon_{SSl}\) and (87) is the appearance of the dimensionless factor \[\eta:=\bigg{(}\frac{K_{S}+s_{0}}{K_{M}+s_{0}}\bigg{)}.\] Recall that the specificity constant [14], \(\Theta\), is defined as \[\Theta:=\frac{k_{2}}{K_{M}}\leq k_{1}. \tag{88}\] From (88) we can define the _normalized_ specificity constant, \(\bar{\Theta}:=\Theta/k_{1}\). Expressing \(\eta\) in terms of \(\bar{\Theta}\) and setting \(\sigma:=s_{0}/K_{M}\) yields \[\eta=\frac{1+\sigma-\bar{\Theta}}{1+\sigma}, \tag{89}\] and we conclude that (87) will be much smaller than \(\varepsilon_{SSl}\) whenever \(\bar{\Theta}\approx 1+\sigma\), which implies that \(\sigma\ll 1\) and \(\bar{\Theta}\) is close to \(1\). This scenario can be useful in the study of functional effects of enzyme mutations [15]. Numerical simulations confirm that the normalized error may be far less than \(\varepsilon_{SSl}\) when \(\eta\ll 1\) (see, Figure 6).

### About the long-time quality of the approximation

The goal of the present work was to obtain workable upper estimates for the relative approximation error, \(|(*-s)/s_{0}|\), where \(*\) symbolizes the solution of some reduced equation, that are valid over the whole range of the slow dynamics. In their derivation, we deliberately chose simplified estimates which do not reflect that substrate concentration approaches \(0\) as \(t\to\infty\). Notably, in **Lemma 10** and **Lemma 11**, we eventually disregarded slowly decaying terms which would imply convergence to zero for the approximations. So in these estimates the dynamics is not reflected well for very long times. (We recall that the parameters \(\varepsilon_{MM}\) and \(\varepsilon_{RS}\) govern the accuracy of the approximation for very long times; see Eilertsen et al. [10].) Our simplifications are justified since for the intended application - parameter identification - the time range directly after the onset of the slow dynamics is relevant, while the behavior as \(t\to\infty\) is of less interest in the experimental setting.

## 4 Discussion: A view toward applications

Experimental enzymologists, biochemists, and analytical chemists may be less interested in mathematical technicalities and wish to focus on the essential results. Therefore, we will here summarize some essential application-relevant consequences from our theoretical considerations. These takeaways will remain technical for experimental scientists, but they are accessible to mathematical biologists and chemists, who work in close collaboration with experimental scientists. These sections provide quantitative error estimates, which may be relevant for a detailed study in application scenarios.
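Before turning to the summary, the factor \(\eta\) of Remark 14 and the identity (89) are easily checked numerically; a minimal sketch with illustrative rate constants (assumed values, chosen with \(k_{2}\gg k_{-1}\), the regime in which \(\eta\) becomes small):

```python
# Illustrative check of Remark 14 (values are assumptions, arbitrary units)
k1, km1, k2 = 2.0, 1.0, 100.0        # fast catalysis: k2 >> k_{-1}
e0, s0 = 1.0, 10.0
KM = (km1 + k2)/k1                   # Michaelis constant
KS = km1/k1                          # dissociation constant K_S
eps_ssl = e0/(KM + s0)

eta = (KS + s0)/(KM + s0)
theta_bar = (k2/KM)/k1               # normalized specificity constant
sigma = s0/KM
# Identity (89): eta = (1 + sigma - theta_bar)/(1 + sigma)
print(f"eta = {eta:.4f}, via (89): {(1 + sigma - theta_bar)/(1 + sigma):.4f}")
print(f"eps_ssl = {eps_ssl:.4f}, eps_opt = eta*eps_ssl = {eta*eps_ssl:.4f}")
```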
In order to present the results without recourse to the technical sections, we will accept some redundancy. As a distinguished perturbation parameter we choose \[\varepsilon_{SSl}=\frac{e_{0}}{K_{M}+s_{0}},\] as proposed by Segel and Slemrod [29].7 We will only exhibit the two lowest-order terms in the asymptotic expansions with respect to \(\varepsilon_{SSl}\), since these are dominant for sufficiently small initial enzyme concentration, and in some instances we will proceed to simplify these terms. The accuracy of approximation can in any case be gauged by a full analysis of the results in Section 3.

Figure 4: **Numerical simulations confirm that (86) provides a sharp bound on the normalized error between the \(s\)-component of the mass action equations and the sQSSA.** In both panels, the black curve is the numerical solution of the mass action equations. The dashed/dotted curve is the numerical solution to the sQSSA. The shaded region demarcates the error bound generated by the right-hand side of (86). The blue line is the (normalized) numerical solution to the right-hand side of (86) with numerically-estimated (a priori) \(q\) that corresponds to the upper boundary of (86). Time has been rescaled by \(\tau=t/t_{\text{cross}}\), where \(t_{\text{cross}}\) has been numerically estimated. Top: The parameters used in the simulation are (in arbitrary units): \(s_{0}=10.0\), \(e_{0}=10.0\), \(k_{1}=2.0\), \(k_{2}=100.0\) and \(k_{-1}=100.0\). Bottom: The parameters used in the simulation are (in arbitrary units): \(s_{0}=100.0\), \(e_{0}=1.0\), \(k_{1}=2.0\), \(k_{2}=100.0\) and \(k_{-1}=100.0\).

### Onset of the slow dynamics

Generally, via singular perturbation theory one cannot define a fixed time for the end of the transitory phase. There always remains some freedom of choice when implementing a scale. As a definitive (biochemically relevant) time for the onset of the slow dynamics of the Michaelis-Menten reaction mechanism, following precedent, we chose the crossing time \(t_{\text{cross}}\), at which complex concentration is maximal. As noted in **Remark 5**, the familiar time \[t_{SSl}=\frac{1}{k_{1}(s_{0}+K_{M})},\] which seems to be suggested by Segel and Slemrod [29, equation (12)c] for the duration of the transient phase, leads to an underestimate for the asymptotics. In this paper, we found:

* According to (33) and (34), an asymptotic lower estimate for the crossing time is given by \[t_{\ell}^{*}=t_{SSl}\,\left[\log\left(\frac{1}{\varepsilon_{SSl}}\right)+\log\frac{k_{-1}+k_{2}}{k_{2}}+\cdots\right]\]
* According to (45) and (46), an asymptotic upper estimate for the crossing time is given by \[t_{u}^{*}(q)=\frac{1}{q}t_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\frac{k_{1}(s_{0}+K_{M})^{2}}{qk_{2}K_{M}}+\cdots\right],\] where \(q<1\) is fixed but may be taken arbitrarily close to \(1\).

Figure 6: **Numerical simulations confirm that (72) provides a sharp bound on the normalized error at \(t=t_{\text{cross}}\) between the \(s\)-component of the mass action equations and the sQSSA.** In both panels the black line corresponds to \(\log(\text{Error})=\log(\varepsilon_{SSl})\) and \(e_{0}\in[0.025,0.05,0.075,0.1,0.25,0.5,0.75,1.0,2.5,5.0,7.5,10]\). The orange diamonds correspond to (87) and the black crosses are the numerically-estimated normalized error between \(s\) and \(z\) at the numerically-estimated crossing time. Top: The parameters used in the simulation are (in arbitrary units): \(s_{0}=10.0\), \(e_{0}=1.0\), \(k_{1}=2.0\), \(k_{2}=1.0\) and \(k_{-1}=100.0\). Bottom: The parameters used in the simulation are (in arbitrary units): \(s_{0}=100.0\), \(e_{0}=1.0\), \(k_{1}=2.0\), \(k_{2}=100.0\), and \(k_{-1}=1.0\). Note that \(\sigma\approx 0.1\) in both simulations. In the top panel, \(\eta\approx 0.99\) and therefore the normalized error is approximately \(\varepsilon_{SSl}\). On the other hand, \(\eta\approx 0.1\) in the bottom panel, and therefore the normalized error at the crossing time is roughly one order of magnitude less than \(\varepsilon_{SSl}\).

To simplify this, we may approximate the upper estimate by \[t_{u}^{*}(1)=t_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\frac{k_{1}(s_{0}+K_{M})^{2}}{k_{2}K_{M}}+\cdots\right],\] since by **Lemma 8**, we may control the relative error by \(1-q\) times a factor close to \(1\). These considerations show that a lowest order approximation of the crossing time, hence of the onset time of slow dynamics, is given by \[t_{\text{cross}}\approx t^{*}:=t_{SSl}\,\log\left(\frac{1}{\varepsilon_{SSl}}\right). \tag{90}\]

Figure 7: **Assuming RSA from \(t=0\): Numerical simulations suggest that (72) provides a sharp bound on the normalized error between the \(s\)-component of the mass action equations and the sQSSA for the complete time course.** In both panels, the black curve is the normalized absolute error \(|s-z|/s_{0}\). The red line is \(\varepsilon_{\rm opt}\). The initial conditions are \((s,c)(0)=(s_{0},0)\) and \(z(0)=s_{0}\) and thus correspond to the assumption that the RSA is valid at the start of the reaction outlined in Section 3.3.3. The dotted vertical line demarcates the numerically-computed \(t_{\rm cross}\). Note that \(t\) has been mapped to \(t_{\infty}=1-1/\log(t+e)\). Top: The parameters used in the simulation are (in arbitrary units): \(s_{0}=10.0\), \(e_{0}=10.0\), \(k_{1}=2.0\), \(k_{2}=100.0\) and \(k_{-1}=100.0\). Bottom: The parameters used in the simulation are (in arbitrary units): \(s_{0}=100.0\), \(e_{0}=1.0\), \(k_{1}=2.0\), \(k_{2}=100.0\) and \(k_{-1}=100.0\).

### Substrate depletion in the transient phase

Now, we estimate the relative substrate loss at \(t_{\text{cross}}\), thus \(\Delta:=\frac{s_{0}-s_{\text{cross}}}{s_{0}}\), to estimate the validity of the RSA. In this paper, we found:

* From (37) and (40), we get an asymptotic lower estimate \[\Delta\geq\frac{1}{2}k_{2}t_{SSl}\varepsilon_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\frac{k_{1}K_{M}}{k_{2}}+\cdots\right],\] noting (with **Remark 6**) that the factor \(\frac{1}{2}\) could be discarded under slightly stricter hypotheses. The factor \(k_{2}t_{SSl}\) stems from the estimate (16) in **Lemma 3**. As noted in **Remark 2**, the latter is not optimal.
* From (48) and (49), we obtain an asymptotic upper estimate \[\Delta\leq\frac{1}{q}\varepsilon_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\frac{k_{1}(s_{0}+K_{M})^{2}}{qk_{2}K_{M}}+\cdots\right],\] with \(q<1\) but arbitrarily close to \(1\).

With similar arguments as those in **Lemma 8**, one sees that replacing \(\Delta\) by \[\frac{s_{0}-s_{\text{cross}}}{s_{0}}\approx\Delta^{*}:=\varepsilon_{SSl}\left[\log\frac{1}{\varepsilon_{SSl}}+\log\frac{k_{1}(s_{0}+K_{M})^{2}}{k_{2}K_{M}}+\cdots\right] \tag{91}\] involves a relative error equal to \(1-q\) times a factor close to \(1\). From the derivations via differential inequalities, there remains a gap between \(\Delta^{*}\) and the lower estimate \(k_{2}t_{SSl}\Delta^{*}\). Here, we resort to a heuristic argument.
Given the accuracy of lower and upper estimates in **Lemma 3** [note **Remark 2**], it seems preferable to choose \(\Delta^{*}\) as the appropriate approximation. Keeping only the lowest order term, we note the approximation \[\frac{s_{0}-s_{\text{cross}}}{s_{0}}\approx\Delta^{**}:=\varepsilon_{SSI}\log \frac{1}{\varepsilon_{SSI}} \tag{92}\] which depends only on the Segel-Slemrod parameter. ### The approximation error assuming no transient substrate loss Here, we discuss the scenario assuming no loss of substrate in the transient phase: \(s(0\leq t\leq t_{\text{cross}})=s_{0}\). This is considered the standard RSA scenario in enzyme kinetics. Allowing for somewhat weaker estimates by taking the limiting case with \(q=1\) and discarding higher order terms as \(e_{0}\to 0\), we arrive at "ultimate small parameters" for estimating the approximation error. The first step yields, depending on **Proposition 4** or **Proposition 6**, respectively: \[\varepsilon_{L}^{\dagger}:=\varepsilon_{SSI}\left(\log\left( \frac{k_{1}(K_{M}+s_{0})^{2}}{k_{2}K_{M}}\frac{1}{\varepsilon_{SSI}}\right)+ \frac{(K_{M}+s_{0})^{2}(K_{S}+s_{0})}{K_{M}^{3}}\right),\] (93) or \[\varepsilon_{M}^{\dagger}:=\varepsilon_{SSI}\left(\log\left( \frac{k_{1}(K_{M}+s_{0})^{2}}{k_{2}K_{M}}\frac{1}{\varepsilon_{SSI}}\right)+ \exp\left(\frac{s_{0}}{K_{M}}-1\right)\frac{(K_{M}+s_{0})}{K_{M}}\right). \tag{94}\] In a second step, we keep only lowest order terms in the asymptotics of \(\varepsilon^{\dagger}\). With \[\log\left(\frac{k_{1}(K_{M}+s_{0})^{2}}{k_{2}K_{M}}\frac{1}{\varepsilon_{SSI}} \right)=\log\left(\frac{k_{1}(K_{M}+s_{0})^{2}}{k_{2}K_{M}}\right)+\log\left( \frac{1}{\varepsilon_{SSI}}\right)\] we ultimately obtain \[\varepsilon^{\ddagger}=\varepsilon_{SSI}\log\left(\frac{1}{\varepsilon_{SSI}} \right). \tag{95}\] Remarkably, in the asymptotic limit the error due to substrate depletion in the transitory phase (which is responsible for the logarithmic term) is dominant. ### The approximation error assuming standard quasi-steady-state approximation starts at \(t=0\) In contrast to the general setting, we obtained a sharper error estimate (87). Combining this with **Proposition 4**, resp. **Proposition 6**, and keeping only lowest order terms, we obtain \[\varepsilon_{L}^{\lx@sectionsign} :=\varepsilon_{SSI}\left(\frac{K_{S}+s_{0}}{K_{M}+s_{0}}+\frac{( K_{M}+s_{0})^{2}(K_{S}+s_{0})}{K_{M}^{3}}\right),\] (96) or \[\varepsilon_{M}^{\lx@sectionsign} :=\varepsilon_{SSI}\left(\frac{K_{S}+s_{0}}{K_{M}+s_{0}}+\exp \left(\frac{s_{0}}{K_{M}}-1\right)\frac{(K_{M}+s_{0})}{K_{M}}\right). \tag{97}\] Beyond these rigorously proven asymptotic estimates, numerical simulations suggest sharper bound \[\varepsilon_{\text{opt}}=\varepsilon_{SSI}\frac{K_{S}+s_{0}}{K_{M}+s_{0}}.\] ### Open challenges within the laboratory setting The Michaelis-Menten equation, \[\dot{s}=-\dot{p}=-\frac{k_{2}e_{0}s}{K_{M}+s},\] involves two parameters, the Michaelis constant (\(K_{M}\)) and catalytic constant (\(k_{2}\)) when the initial enzyme concentration (\(e_{0}\)) and initial substrate concentration (\(s_{0}\)) are known and can be controlled. In principle, experimental scientists can estimate \(k_{2}\) and \(K_{M}\) via steady-state initial rate experiments with the Michaelis-Menten equation, or steady-state progress curve experiments with the Schnell-Mendoza equation [26]. However, there is a fundamental problem with those parameter estimations. 
It requires to have _prior_ knowledge of the duration of transient \(t_{\text{cross}}\) and substrate depletion in the transient phase \(s(t=t_{\text{cross}})\), assuming sufficiently small \(\varepsilon_{SSI}\). The fundamental goal of the present paper is to provide rigorous estimates for \(t_{\text{cross}}\), \(s(t=t_{\text{cross}})\) as well as \(\varepsilon_{SSI}\) from a mathematical perspective. Generally, the role of our theoretical results is to provide consistency checks for experimental conclusions. Our estimates for the crossing times involve only parameters that are controllable or amenable to determination by experiments, though challenges remains in the unique estimation of \(k_{1}\). In this respect, our mathematical results remain to be explored in the experimental laboratory setting. By assuming sufficiently small \(\varepsilon_{SSI}\), our theoretical results might make possible to obtain an educated guess for \(s_{\text{cross}}\) by (92) in enzyme assays. By identifying the time when the guess for \(s_{\text{cross}}\) is attained, we could obtain an estimate for \(t_{\text{cross}}\), which in turn, with known \(s_{0}\) and \(K_{M}\), and equation (90), could provide an estimate for \(k_{1}\). Our results could also be used to check the consistency of experimental results by measuring the end of the transition time, or the substrate depletion during the transient phase in steady-state experiments. Interestingly, the same problem already was present with the Segel-Slemrod timescale \((k_{1}(s_{0}+K_{M}))^{-1}\), and it is actually an inherent feature of any parameter estimation that is solely based on the Michaelis-Menten equation (2). The essential new aspect of our work is that we obtained rigorous asymptotic expressions for the substrate loss in the transient phase, as well as for the approximation error, that only involve \(\varepsilon_{SSI}=e_{0}/(K_{M}+s_{0})\). Our expressions involve only quantities that are controllable or measurable. This represents significant progress, and needs to be tested and validated experimentally in the future. But rigorous experimental protocols require further quantitative information - e.g. about the onset of the slow time regime - that is not readily available. This remains an open problem for exploration in future work. Appendix: The Michaelis-Menten reaction mechanism with a low enzyme and substrate binding rate constant (\(k_{1}\to 0\)) The case of low enzyme concentration in the Michaelis-Menten reaction mechanism is not the only one which leads to a singular perturbation reduction via Tikhonov and Fenichel. We can also obtains reductions in the limit \(k_{2}\to 0\) (which will be discussed in future work) and in the limit \(k_{1}\to 0\).8 Footnote 8: It is known from [13] that these are all the possible “small parameters” for singular perturbation scenarios. The case of low enzyme and substrate binding \(k_{1}\to 0\) is of some interest since it represents the commonly expressed setting "\(s_{0}\ll K_{M}\)" in terms of singular perturbations (while letting \(s_{0}\to 0\) does not). The arguments so far were motivated by the scenario with \(e_{0}\to 0\), but all the estimates obtained in Sections 2 and 3 do hold, possibly upon rewriting some expressions involving \(K_{M}\), without any restriction on the reaction rates and concentrations involved. So here, we briefly summarize the pertinent results when \(k_{1}\to 0\), while \(e_{0}\) is bounded below.9 Thus, \(k_{1}=\varepsilon k_{1}^{*}\) with \(\varepsilon\to 0\). 
The "crossing Lemma", **Lemma 1** holds for all Michaelis-Menten type reaction mechanisms, so one may still employ the sQSS manifold given by \(c=g(s)\) in (5) for the analysis of the system. Note that the first order approximation of the slow manifold is given by Footnote 9: Letting both \(k_{1}\) and \(e_{0}\) tend to zero leads to a degenerate Tikhonov-Fenichel reduction with trivial right hand side, which is of little interest. \[c=\widehat{g}(s):=\frac{k_{1}e_{0}s}{k_{-1}+k_{2}},\] but the discrepancy between \(g\) and \(\widehat{g}\) is of order \(\varepsilon^{2}\), and the distinguished role of the sQSS manifold [defined by (5)] for the time course of complex concentration remains convenient in the analysis. One may also keep the Michaelis-Menten equation, in the version \[\dot{s}=-\frac{k_{1}k_{2}e_{0}s}{k_{-1}+k_{2}+k_{1}s},\] noting that the standard reduction procedure yields the right-hand side \[-\frac{k_{1}k_{2}e_{0}s}{k_{-1}+k_{2}}=-\frac{k_{1}k_{2}e_{0}s}{k_{-1}+k_{2}+ k_{1}s}+o(\varepsilon),\] and for Tikhonov's theorem higher-order terms on the right-hand side are irrelevant. It seems appropriate to take \(\varepsilon_{RS}=\frac{k_{1}e_{0}}{k_{-1}+k_{2}}\) as a benchmark here. As noted, the relevant expressions obtained in Section 3 remain unchanged, but we record some asymptotics with the dots denoting higher order terms with respect to \(\varepsilon_{RS}\): \[\varepsilon_{SSl} = \varepsilon_{RS}+\cdots\] \[t_{SSl} = \frac{1}{k_{-1}+k_{2}}+\cdots\] \[t_{\ell}^{\dagger} = \frac{1}{k_{-1}+k_{2}}\left(\log\frac{1}{\varepsilon_{RS}}+\log \frac{k_{-1}+k_{2}}{k_{2}}+\cdots\right)\] \[C^{*} = \frac{k_{-1}+k_{2}}{k_{2}}+\cdots\] \[t_{u}^{\dagger}(1) = \frac{1}{k_{-1}+k_{2}}\left(\log\frac{1}{\varepsilon_{RS}}+\log \frac{k_{-1}+k_{2}}{k_{2}}+\cdots\right)\] \[\varepsilon_{\infty} = \varepsilon_{RS}\cdot\frac{k_{-1}}{k_{-1}+k_{2}}+\cdots\] For the substrate depletion during the transient phase, one gets from **Proposition 2** and **Lemma 8**: \[\frac{s_{0}-s_{\text{cross}}}{s_{0}}\lesssim\varepsilon_{RS}\left(\log\frac{1} {\varepsilon_{RS}}+\log\frac{k_{-1}+k_{2}}{k_{2}}\right)+\cdots\] We may summarize this by stating that in lowest order the dynamics is unaffected by initial substrate, in marked contrast to the low enzyme case.
2301.11824
PECAN: A Deterministic Certified Defense Against Backdoor Attacks
Neural networks are vulnerable to backdoor poisoning attacks, where the attackers maliciously poison the training set and insert triggers into the test input to change the prediction of the victim model. Existing defenses for backdoor attacks either provide no formal guarantees or come with expensive-to-compute and ineffective probabilistic guarantees. We present PECAN, an efficient and certified approach for defending against backdoor attacks. The key insight powering PECAN is to apply off-the-shelf test-time evasion certification techniques on a set of neural networks trained on disjoint partitions of the data. We evaluate PECAN on image classification and malware detection datasets. Our results demonstrate that PECAN can (1) significantly outperform the state-of-the-art certified backdoor defense, both in defense strength and efficiency, and (2) on real back-door attacks, PECAN can reduce attack success rate by order of magnitude when compared to a range of baselines from the literature.
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
2023-01-27T16:25:43Z
http://arxiv.org/abs/2301.11824v4
# PECAN: A Deterministic Certified Defense Against Backdoor Attacks ###### Abstract Neural networks are vulnerable to backdoor poisoning attacks, where the attackers maliciously _poison_ the training set and insert _triggers_ into the test input to change the prediction of the victim model. Existing defenses for backdoor attacks either provide no formal guarantees or come with expensive-to-compute and ineffective probabilistic guarantees. We present PECAN, an efficient and certified approach for defending against backdoor attacks. The key insight powering PECAN is to apply off-the-shelf test-time evasion certification techniques on a set of neural networks trained on disjoint _partitions_ of the data. We evaluate PECAN on image classification and malware detection datasets. Our results demonstrate that PECAN can (1) significantly outperform the state-of-the-art certified backdoor defense, both in defense strength and efficiency, and (2) on real backdoor attacks, PECAN can reduce attack success rate by order of magnitude when compared to a range of baselines from the literature. Machine Learning, Backdoor Attacks, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep 
Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep 
Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep 
Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, 
Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning retrain thousands of models when performing predictions for a single test input. Retraining can be mitigated by Bonferroni correction, which allows reusing the trained models for a fixed number of predictions. 
However, retraining is still necessary after a short period, making it hard to deploy these defenses in practice. On the other hand, deterministic defenses (Levine and Feizi, 2021; Wang et al., 2022) can reuse the trained models an arbitrary number of times when producing certificates for different test inputs. Furthermore, probabilistic defenses for backdoor attacks, e.g., BagFlip (Zhang et al., 2022), need to add noise to the training data, resulting in low accuracy for datasets that cannot tolerate too much noise when training (Section 5.2). PECANIn this paper, we propose PECAN (**P**artitioning data and **E**nsembling of **C**ertified near**A**l **N**etworks), a deterministic certified defense against backdoor attacks for neural networks. The key insight underlying PECAN is that we can take _any_ off-the-shelf technique for evasion certification and use it to construct a certified backdoor defense. This insight results in a simple implementation and allows us to seamlessly leverage future advances in evasion certification algorithms. Specifically, PECAN trains a set of neural networks on disjoint partitions of the dataset, and then applies evasion certification to the neural networks. By partitioning the dataset, we analytically bound the number of poisoned data seen per neural network; by employing evasion certification, we bound the number of neural networks that are robust in the face of triggers. Using this information, we efficiently derive a backdoor-robustness guarantee. Figure 1 illustrates the workflow of PECAN. In Step 1, inspired by _deep partition aggregation_(Levine and Feizi, 2021), PECAN deterministically partitions a dataset into multiple disjoint subsets. This step ensures that a poisoned data item only affects a single partition. In Step 2, PECAN trains an ensemble of neural networks, one on each partition. At test time, PECAN performs evasion certification to check which neural networks are immune to triggers; those that are not immune (or that cannot be proven immune) abstain from performing a prediction. Finally, in Step 3, PECAN aggregates the results of the ensemble and produces a prediction together with a robustness certificate: the percentage of the poisoned data in the training set that the training process can tolerate, the _certified radius_. We evaluate PECAN on two three datasets, MNIST, CIFAR10, and EMBER. First, we show that PECAN outperforms or competes with BagFlip, the state-of-the-art probabilistic certified defense against backdoor attacks. Furthermore, BagFlip takes hours to compute the certificate, while PECAN only takes a few seconds. Second, when we evaluate PECAN against a concrete known backdoor attack (Severi et al., 2021), PECAN reduces the attack success rate to 1.85%, while DPA and CROWN-IBP fail to defend against the backdoor attack on 18.05% and 15.24% of the cases, respectively. The results show that PECAN can defend against a known backdoor attack while other baselines, such as DPA and CROWN-IBP, cannot. ## 2 Related Work Deep learning models are vulnerable to backdoor attacks (Saha et al., 2020; Turner et al., 2019). Although many empirical defenses (Geiping et al., 2021; Liu et al., 2018) have been proposed, recent works (Wang et al., 2020; Koh et al., 2022) show that new attacks can break these empirical defenses. Therefore, certified defense is crucial for defending against backdoor attacks. 
Certified defenses against backdoor attacksExisting certification approaches provide probabilistic certificates by extending randomized smoothing (Cohen et al., 2019; Dvijotham et al., 2020; Lee et al., 2019), originally proposed to defend against adversarial evasion attacks, to defend against backdoor attacks. BagFlip (Zhang et al., 2022) is the state-of-the-art model-agnostic probabilistic defense against feature-flipping backdoor attacks. Wang et al. (2020); Weber et al. (2020) proposed backdoor-attack defenses that are also model-agnostic, but are less effective than BagFlip. PECAN is deterministic and therefore less expensive and more effective than these defenses. Probabilistic defenses are model-agnostic; while PECAN is evaluated on neural networks, it can work for any machine learning model as long as a deterministic evasion certification approach of the model is available. Weber et al. (2020) proposed a determin Figure 1: An overview of our approach PECAN. istic de-randomized smoothing approach for kNN classifiers. Their approach computes the certificates using an expensive dynamic programming algorithm, whereas PECAN's certification algorithm has constant time complexity. Certified defenses against trigger-less attacksMany approaches provide certificates for trigger-less attacks. Jia et al. (2021) use bootstrap aggregating (Bagging). Chen et al. (2020) extended Bagging with new selection strategies. Rosenfeld et al. (2020) defend against label-flipping attacks on linear classifiers. Differential privacy (Ma et al., 2019) can also provide probabilistic certificates for trigger-less attacks. DPA (Levine and Feizi, 2021) is a deterministic defense that partitions the training set and ensembles the trained classifiers. Wang et al. (2022) proposed FA, an extension of DPA, by introducing a spread stage. A conjecture proposed by Wang et al. (2022) implies that DPA and FA are asymptotically optimal defenses against trigger-less attacks. Chen et al. (2022) proposed to compute collective certificates, while PECAN computes sample-wise certificates. Jia et al. (2020); Meyer et al. (2021); Drews et al. (2020) provide certificates for nearest neighborhood classifiers and decision trees. The approaches listed above only defend against _trigger-less_ attacks, while PECAN is a deterministic approach for _backdoor_ attacks. Certified defenses against evasion attacksThere are two lines of certified defense against evasion attacks: complete certification (Wang et al., 2021; Zhang et al., 2022; Katz et al., 2019) and incomplete certification (Xu et al., 2020; Zhang et al., 2021; Singh et al., 2019). The complete certified defenses either find an adversarial example or generate proof that all inputs in the given perturbation space will be correctly classified. Compared to the complete certified defenses, the incomplete ones will abstain from predicting if they cannot prove the correctness of the prediction because their techniques will introduce over-approximation. The complete approaches do not have over-approximation issues but require expensive verification algorithms such as branch and bound. Our implementation of PECAN uses an incomplete certified approach CROWN-IBP (Zhang et al., 2020) because it is the best incomplete approach, trading off between efficiency and the degree of over-approximation. 
## 3 Problem Definition Given a dataset \(D=\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\}\), a (test) input \(\mathbf{x}\), and a machine learning algorithm \(A\), we write \(A_{D}\) to denote the machine learning model learned on dataset \(D\) by the algorithm \(A\), and \(A_{D}(\mathbf{x})\) to denote the output label predicted by the model \(A_{D}\) on input \(\mathbf{x}\). We assume the algorithm will behave the same if trained on the same dataset across multiple runs. This assumption can be guaranteed by fixing the random seeds during training. We are interested in certifying that if an attacker has poisoned the dataset, the model we have trained on the dataset will still behave "well" on the test input with maliciously added triggers. Before describing what "well" means, we need to define the _perturbation spaces_ of the dataset and the test input, i.e., what possible changes the attacker could make to the dataset and the test input. Perturbation space of the datasetFollowing Levine and Feizi (2021), we define a _general_ perturbation space over the dataset, allowing attackers to delete, insert, or modify training examples in the dataset. Given a dataset \(D\) and a _radius_\(r\geq 0\), we define the _perturbation space_ as the set of datasets that can be obtained by deleting or inserting up to \(r\) examples in \(D\): \[S_{r}(D)=\left\{\widetilde{D}\mid|D\ominus\widetilde{D}|\leq r\right\},\] where \(A\ominus B\) is the symmetric difference of sets \(A\) and \(B\). Intuitively, \(r\) quantifies how many examples need to be deleted or inserted to transform from \(D\) to \(\widetilde{D}\). **Example 3.1**.: _If the attacker modifies one training example \(\mathbf{x}\in D\) to another training example \(\widetilde{\mathbf{x}}\) to form a poisoned dataset \(\widetilde{D}=(D\setminus\{\mathbf{x}\})\cup\{\widetilde{\mathbf{x}}\}\). Then \(\widetilde{D}\in S_{2}(D)\) but \(\widetilde{D}\notin S_{1}(D)\) because \(S_{r}(D)\) considers one modification as one deletion and one insertion._ Note that we assume a more general perturbation space of the training set than the one considered by Zhang et al. (2022); Weber et al. (2020); Wang et al. (2020); our work allows inserting and deleting examples instead of just modifying existing training examples. Perturbation space of the test inputWe write \(\pi(\mathbf{x})\) to denote the set of perturbed examples that an attacker can transform the example \(\mathbf{x}\) into. Formally, the perturbation space \(\pi(\mathbf{x})\) can be defined as the \(l_{p}\) norm ball with radius \(s\) around the test input \(\mathbf{x}\), \[\pi(\mathbf{x})=\{\widetilde{\mathbf{x}}\mid\|\mathbf{x}-\widetilde{\mathbf{ x}}\|_{p}\leq s\}\] **Example 3.2**.: _BagFlip (Zhang et al., 2022) considers the \(l_{0}\) feature-flip perturbation \(\mathtt{f}_{s}(\mathbf{x})\), which allows the attacker to modify up to \(s\) features in an input \(\mathbf{x}\),_ \[\mathtt{f}_{s}(\mathbf{x})=\{\widetilde{\mathbf{x}}\mid\|\mathbf{x}- \widetilde{\mathbf{x}}\|_{0}\leq s\}\] Threat modelsNext, we define what type of guarantees we are interested in our learning algorithm and model. We consider backdoor attacks, where the attacker can perturb both the training set and the test input. For the training set, we assume we are given a perturbation space \(S_{r}(D)\) of the training set \(D\) with a radius \(r\geq 0\). For the test input, we assume a perturbation space \(\pi(\mathbf{x})\) of the test input \(\mathbf{x}\) with a given \(l_{p}\) norm and the radius \(s\). 
We say that an algorithm \(A\) is robust to a **backdoor attack** on a backdoored test input \(\widetilde{\mathbf{x}}\) if the algorithm trained on any perturbed dataset \(\widetilde{D}\) would predict the backdoored input \(\widetilde{\mathbf{x}}\) the same as \(A_{D}(\mathbf{x})\). Formally, \[\forall\widetilde{D}\in S_{r}(D),\;\widetilde{\mathbf{x}}\in\pi(\mathbf{x}).\; A_{\widetilde{D}}(\widetilde{\mathbf{x}})=A_{D}(\mathbf{x}) \tag{1}\] _When \(r=0\), Eq 1 degenerates to evasion robustness, i.e., \(\forall\widetilde{\mathbf{x}}\in\pi(\mathbf{x})\). \(A_{D}(\widetilde{\mathbf{x}})=A_{D}(\mathbf{x})\), because \(S_{0}(D)=\{D\}\)._ Given a large enough radius \(r\), an attacker can always change enough inputs and succeed at breaking robustness. Therefore, we will typically focus on computing the maximal radius \(r\) for which we can prove that Eq 1 for given perturbation spaces \(S_{r}(D)\) and \(\pi(\mathbf{x})\). We refer to this quantity as the _certified radius_. Certified guaranteesThis paper aims to design a certifiable algorithm \(A\), which can defend against backdoor attacks, and to compute the certified radius of \(A\). In our experiments (Section 5.2), we suppose a given benign dataset \(D\) and a benign test input \(\mathbf{x}\), and we certifiably quantify the robustness of the algorithm \(A\) against backdoor attacks by computing the certified radius. In Section 5.3, we also experiment with how the certifiable algorithm \(A\) defends the backdoor attacks if a poisoned dataset \(\widetilde{D}\) and a test input \(\widetilde{\mathbf{x}}\) with malicious triggers are given, but the clean data is unknown. We theoretically show that we can still compute the certified radius if the clean data \(D\) and \(\mathbf{x}\) are unknown in Section 4.3. ## 4 The PECAN Certification Technique Our approach, which we call PECAN (**P**artitioning data and **E**nsembling of **C**ertified neurAl **N**etworks), is a deterministic certification technique that defends against backdoor attacks. Given a learning algorithm \(A\), we show how to automatically construct a new learning algorithm \(\bar{A}\) with certified backdoor-robustness guarantees (Equation (1)) in Section 4.1. In Section 4.2, we prove the certified backdoor-robustness guarantees (Equation (1)) provided by \(\bar{A}\). We further discuss how \(\bar{A}\) can defend against a backdoored dataset and formally justify our discussion in Section 4.3. ### Constructing Certifiable Algorithm \(\bar{A}\) The key idea of PECAN is that we can take any off-the-shelf technique for evasion certification and use it to construct a certified backdoor defense. Intuitively, PECAN uses the evasion certification to defend against the possible triggers at test time, and it encapsulates the evasion certification in deep partition aggregation (DPA) (Levine and Feizi, 2021) to defend against training set poisoning. Given a dataset \(D\), a test input \(\mathbf{x}\), and a machine learning algorithm \(A\), PECAN produce a new learning algorithm \(\bar{A}\) as described in the following steps (shown in Figure 1), Dataset PartitioningWe partition the dataset \(D\) into \(n\) disjoint sub-datasets, denoted as \(D_{1},\ldots,D_{n}\), using a hash function that deterministically maps each training example into a sub-dataset \(D_{i}\). Train \(n\) classifiers \(A_{D_{1}},\ldots,A_{D_{n}}\) on these sub-datasets. 
Evasion CertificationWe certify whether the prediction of each classifier \(A_{D_{i}}\) is robust under the perturbation space \(\pi(\mathbf{x})\) by any evasion certification approach for the learning algorithm, e.g., CROWN-IBP for neural networks (Xu et al., 2020). Formally, the certification approach determines whether the following equation holds, \[\forall\widetilde{\mathbf{x}}\in\pi(\mathbf{x}).\;A_{D_{i}}(\mathbf{x})=A_{D_{ i}}(\widetilde{\mathbf{x}}) \tag{2}\] We denote the output of each certification as \(A_{D_{i}}^{\pi}(\mathbf{x})\), which can either be \(A_{D_{i}}^{\pi}(\mathbf{x})=\mathrm{cert}\), meaning Eq 2 is certified. Otherwise, \(A_{D_{i}}^{\pi}(\mathbf{x})=\mathrm{abstain}\), meaning the certification approach cannot certify Eq 2. AggregationWe compute the top label \(y^{*}\) by aggregating all predictions from \(A_{D_{i}}(\mathbf{x})\). Concretely, \(y^{*}\triangleq\operatorname*{argmax}_{y\in\mathcal{C}}\sum_{i=1}^{n}\mathds{1 }_{A_{D_{i}}(\mathbf{x})=y}\), where \(\mathcal{C}=\{0,1,\ldots\}\) is the set of possible labels. Note that if a tie happens when taking the argmax, we break ties deterministically by setting the smaller label index as \(y^{*}\). We denote the runner-up label as \(y^{\prime}\) as \(\operatorname*{argmax}_{y\in\mathcal{C}\wedge y\neq y^{*}}\sum_{i=1}^{n} \mathds{1}_{A_{D_{i}}(\mathbf{x})=y}\). We count the number of certified predictions equal to \(y^{*}\) as \(N_{1}\), the number of certified predictions equal to \(y^{\prime}\) as \(N_{2}\), and the number of abstentions as \(N_{3}\), \[N_{1} =\sum_{i=1}^{n}\mathds{1}_{A_{D_{i}}(\mathbf{x})=y^{*}\wedge A_{ D_{i}}^{\pi}(\mathbf{x})=\mathrm{cert}},\] \[N_{2} =\sum_{i=1}^{n}\mathds{1}_{A_{D_{i}}(\mathbf{x})=y^{\prime} \wedge A_{D_{i}}^{\pi}(\mathbf{x})=\mathrm{cert}},\] \[N_{3} =\sum_{i=1}^{n}\mathds{1}_{A_{D_{i}}^{\pi}(\mathbf{x})=\mathrm{ abstain}}.\] We set the prediction \(\bar{A}_{D}(\mathbf{x})\) as \(y^{*}\). We compute the certified radius \(r\) in the following two cases. If \(N_{1}-N_{2}-N_{3}-\mathds{1}_{y^{*}>y^{\prime}}<0\), we set \(r\) as \(\diamond\), i.e., a value denoting no certification. In this case, PECAN cannot certify that \(\bar{A}\) is robust to evasion attacks even if the dataset is not poisoned. Otherwise, we compute \(r\) as \(\lfloor\frac{N_{1}-N_{2}-N_{3}-\mathds{1}_{y^{*}}>x^{\prime}}{2}\rfloor\). A special case is \(r=0\), when PECAN can certify \(\bar{A}\) is robust to evasion attacks, but cannot certify that it is robust if the dataset is poisoned. We note that the computation of the certified radius is equivalent to DPA when no classifier abstains, i.e., \(N_{3}=0\), ### Proving the Soundness of PECAN In this section, we show that the prediction \(\bar{A}_{D}(\mathbf{x})\) and the certified radius \(r\) satisfy the certified backdoor-robustness guarantees (Equation (1)) by proving the following theorem. **Theorem 4.1** (Soundness of PECAN).: _Given a dataset \(D\) and a test input \(\mathbf{x}\), PECAN computes the prediction \(\bar{A}_{D}(\mathbf{x})\) and the certified radius as \(r\). Then, either \(r=\diamond\) or_ \[\forall\widetilde{D}\in S_{r}(D),\;\widetilde{\mathbf{x}}\in\pi(\mathbf{x}). \;\bar{A}_{\widetilde{D}}(\widetilde{\mathbf{x}})=\bar{A}_{D}(\mathbf{x}) \tag{3}\] Proof.: For any poisoned dataset \(\widetilde{D}\), we partition \(\widetilde{D}\) into \(n\) sub-datasets \(\{\widetilde{D}_{1},\ldots,\widetilde{D}_{n}\}\) according to \(\{D_{1},\ldots,D_{n}\}\) from the clean dataset \(D\). 
Note that we can determine such a correspondence between \(D_{i}\) and \(\widetilde{D}_{i}\) because our hash function is deterministic and only depends on each training example. We further divide \(\{D_{1},\ldots,D_{n}\}\) into three disjoint parts \(D_{\mathrm{abs}}\), \(D_{\mathrm{bd}}\), and \(D_{\mathrm{safe}}\) in the following way, * \(D_{\mathrm{abs}}=\{D_{i}\mid A_{D_{i}}^{\pi}(\mathbf{x})=\mathrm{abstain}\}\) are the sub-datasets, on which \(A\) abstains from making the prediction on \(\mathbf{x}\). From the definition of \(N_{3}\), we have \(|D_{\mathrm{abs}}|=N_{3}\). Intuitively, \(D_{\mathrm{abs}}\) contains the sub-datasets that can possibly be attacked by the test input \(\widetilde{\mathbf{x}}\) with malicious triggers. * \(D_{\mathrm{bd}}\) are the sub-datasets on which \(A\) does not abstain and are also poisoned, i.e., each of them has at least one training example removed or inserted. Even though we do not know the exact sub-datasets in \(D_{\mathrm{bd}}\), we know \(|D_{\mathrm{bd}}|\leq r\) because \(\widetilde{D}\in S_{r}(D)\) constrains that there are at most \(r\) such poisoned sub-datasets. * \(D_{\mathrm{safe}}=\{D_{i}\mid D_{i}=\widetilde{D}_{i}\wedge A_{\widetilde{D}_ {i}}^{\pi}(\mathbf{x})=\mathrm{cert}\}\) contains the clean sub-datasets, on which \(A\) does not abstain. We denote the numbers of the original top prediction \(y^{\star}\) and the original runner-up prediction \(y^{\prime}\) on the backdoored data \(\widetilde{D}\) and \(\widetilde{\mathbf{x}}\) as \(\widetilde{N}_{y^{\star}}\) and \(\widetilde{N}_{y^{\prime}}\), respectively. Formally, \[\widetilde{N}_{y^{\star}}=\sum_{i=1}^{n}\mathds{1}_{A_{\widetilde{D}_{i}}( \widetilde{\mathbf{x}})=y^{\star}},\quad\widetilde{N}_{y^{\prime}}=\sum_{i=1}^ {n}\mathds{1}_{A_{\widetilde{D}_{i}}(\widetilde{\mathbf{x}})=y^{\prime}}\] Next, we prove Eq 3 for any backdoored data \(\widetilde{D}\) and \(\widetilde{\mathbf{x}}\) by showing that \[\widetilde{N}_{y^{\star}}\geq\widetilde{N}_{y^{\prime}}+\mathds{1}_{y^{\star }>y^{\prime}} \tag{4}\] We prove Eq 4 by showing a lower bound of \(\widetilde{N}_{y^{\star}}\) is \(N_{1}-r\) and an upper bound of \(\widetilde{N}_{y^{\prime}}\) is \(N_{2}+r+N_{3}\). Together with the definition of \(r\), we can prove Eq 4 because we have, \[\widetilde{N}_{y^{\star}}-\widetilde{N}_{y^{\prime}}-\mathds{1}_ {y^{\star}>y^{\prime}}\] \[\geq N_{1}-r-(N_{2}+r+N_{3})-\mathds{1}_{y^{\star}>y^{\prime}}\] \[= N_{1}-N_{2}-2r-N_{3}-\mathds{1}_{y^{\star}>y^{\prime}}\] \[= N_{1}-N_{2}-2\lfloor\frac{N_{1}-N_{2}-N_{3}-\mathds{1}_{y^{\star} >y^{\prime}}}{2}\rfloor-N_{3}-\mathds{1}_{y^{\star}>y^{\prime}}\] \[\geq N_{1}-N_{2}-(N_{1}-N_{2}-N_{3}-\mathds{1}_{y^{\star}>y^{\prime}})-N _{3}-\mathds{1}_{y^{\star}>y^{\prime}}\] \[= 0.\] Note that the second last line holds iff \(N_{1}-N_{2}-N_{3}-\mathds{1}_{y^{\star}>y^{\prime}}\geq 0\). Otherwise, we have \(r=\diamond\). 
As shown in Figure 2, the lower bound of \(\widetilde{N}_{y^{\star}}\) can be computed by noticing that 1) the attacker can change any prediction in \(D_{\mathrm{bd}}\) from \(y^{\star}\) to another label because these datasets are poisoned, 2) the attacker can change any prediction in \(D_{\mathrm{abs}}\) to another label because CROWN-IBP cannot certify the prediction under the evasion attacks, and 3) the attacker cannot change anything in \(D_{\mathrm{safe}}\) because of the guarantee of CROWN-IBP and \(D_{\mathrm{safe}}\) is not poisoned, \[\forall D_{i}\in D_{\mathrm{safe}},\widetilde{\mathbf{x}}\in\pi(\mathbf{x}).\;A_{D_{i}}(\mathbf{x})=A_{D_{i}}(\widetilde{\mathbf{x}})=A_{\widetilde{D}_ {i}}(\widetilde{\mathbf{x}})\] The upper bound of \(\widetilde{N}_{y^{\prime}}\) can be computed by noticing that 1) the attacker can change any prediction in \(D_{\mathrm{bd}}\) to \(y^{\prime}\), 2) the attacker can change any prediction in \(D_{\mathrm{abs}}\) to \(y^{\prime}\), and 3) the attacker cannot change anything in \(D_{\mathrm{safe}}\). We complete the proof by showing that the best attack strategy of the attacker is to change the prediction of \(\bar{A}\) to the runner-up label \(y^{\prime}\). If the attacker chooses to change the prediction of \(\bar{A}\) to another label \(y^{\prime\prime}\), denoted the counts as \(\widetilde{N}_{y^{\prime\prime}}\), then the upper bound of \(\widetilde{N}_{y^{\prime\prime}}\) will be always smaller or equal to \(\widetilde{N}_{y^{\prime}}\). ### PECAN under the Backdoored Data The above algorithm and proof of PECAN assume that a clean dataset \(D\) and a clean test example \(\mathbf{x}\) are already given. However, we may be interested in another scenario where the poisoned dataset \(\widetilde{D}\in S_{r}(D)\) and the input example Figure 2: An illustration of the proof of Theorem 4.1. It shows the worst case for PECAN, where the attacker can change all predictions in \(\overline{D_{\mathrm{abs}}}\) and \(\overline{D_{\mathrm{bd}}}\) to the runner-up label \(y^{\prime}\). Note that we group \(D_{\mathrm{abs}}\), \(D_{\mathrm{bd}}\), and \(D_{\mathrm{safe}}\) together to ease illustration. \(\widetilde{\mathbf{x}}\in\pi(\mathbf{x})\) with malicious triggers are given, and the clean data \(D\) and \(\mathbf{x}\) are unknown. In other words, we want to find the maximal radius \(r\) such that \(\bar{A}_{\widetilde{D}}(\widetilde{\mathbf{x}})=\bar{A}_{D}(\mathbf{x})\) for any \(D\) and \(\mathbf{x}\) that can be perturbed to \(\widetilde{D}\) and \(\widetilde{\mathbf{x}}\) by the perturbation \(S_{r}\) and \(\pi\), respectively. Formally, \[\forall D,\mathbf{x}.\ \widetilde{D}\in S_{r}(D)\wedge\widetilde{\mathbf{x}}\in \pi(\mathbf{x})\implies\bar{A}_{\widetilde{D}}(\widetilde{\mathbf{x}})=\bar{ A}_{D}(\mathbf{x}) \tag{5}\] Intuitively, Eq 5 is the symmetrical version of Eq 1. Owing to the symmetrical definition of \(S_{r}\) and \(\pi\), if we apply PECAN to the given poisoned data \(\widetilde{D},\widetilde{\mathbf{x}}\), then the prediction \(\bar{A}_{\widetilde{D}}(\widetilde{\mathbf{x}})\) and the certified radius \(r\) satisfy the certified backdoor-robustness guarantee (Eq 5). The following theorem formally states the soundness of PECAN under the backdoored data. We prove Theorem 4.2 in Appendix A. 
**Theorem 4.2** (Soundness of PECAN under the backdoored data).: _Given a dataset \(\widetilde{D}\) and a test input \(\widetilde{\mathbf{x}}\), PECAN computes the prediction \(\bar{A}_{\widetilde{D}}(\widetilde{\mathbf{x}})\) and the certified radius as \(r\). Then, either \(r=\circ\) or Eq 5 holds._ ## 5 Experiments We implemented PECAN in Python and provided the implementation in the supplementary materials. In our evaluation, we use CROWN-IBP, implemented in auto-LiRPA (Xu et al., 2020), as the evasion defense approach for neural networks. We also use CROWN-IBP to train the classifiers in the dataset partitioning step since the classifiers trained by CROWN-IBP can improve the certification rate in the evasion certification step. In Section 5.2, we evaluate the effectiveness and efficiency of PECAN by comparing it to BagFlip (Zhang et al., 2022), the state-of-the-art probabilistic certified defense against backdoor attacks. In Section 5.3, we evaluate the effectiveness of PECAN under the backdoor attack (Severi et al., 2021) for malware detection and compare PECAN to other baselines, DPA and CROWN-IBP. ### Experimental Setup DatasetsWe conduct experiments on MNIST, CIFAR10, and EMBER (Anderson and Roth, 2018) datasets. MNIST is an image classification dataset containing 60,000 training and 10,000 test examples. CIFAR10 is an image classification dataset containing 50,000 training and 10,000 test examples. EMBER is a malware detection dataset containing 600,000 training and 200,000 test examples. Each example is a vector containing 2,351 features of the software. ModelsFor image classification datasets MNIST and CIFAR10, we train fully-connected neural networks with four layers for PECAN, while BagFlip uses CNN and ResNet for MNIST and CIFAR10, respectively. We do not use CNN and ResNet because CROWN-IBP used in PECAN has a higher abstention rate for deeper and more complex neural network structures. We use the same fully-connected neural network for EMBER as in related works (Zhang et al., 2022; Severi et al., 2021). We use the same data augmentation for PECAN and other baselines. MetricsFor each test input \(\mathbf{x}_{i},y_{i}\), the algorithm \(\bar{A}\) will predict a label and the certified radius \(r_{i}\). In this section, we assume that the attacker had _modified_\(R\%\) examples in the training set. We denote \(R\) as the _modification amount_. We summarize all the metrics used as follows, _Certified Accuracy_ denotes the percentage of test examples that are correctly classified and whose certified radii are no less than \(R\), i.e., \(\frac{1}{m}\sum_{i=1}^{m}\mathds{1}_{\bar{A}_{D}(\mathbf{x}_{i})=y_{i}\wedge \frac{r_{i}}{D|\geq 2R\%}}\), where \(m\) and \(|D|\) are the sizes of test set and training set, respectively. Notice that there is a factor of \(2\) on the modification amount \(R\) because \(S_{r}(D)\) considers one modification as one insertion and one deletion, as illustrated in Example 3.1. _Normal Accuracy_ denotes the percentage of test examples that are correctly classified by the algorithm _without_ certification, i.e., \(\frac{1}{m}\sum_{i=1}^{m}\mathds{1}_{\bar{A}_{D}(\mathbf{x}_{i})=y_{i}}\). _Attack Success Rate (ASR)_. In Section 5.3, we are interested in how many test examples are certified but wrongly classified by the classifier, i.e., \(\frac{1}{m}\sum_{i=1}^{m}\mathds{1}_{\bar{A}_{D}(\mathbf{x}_{i})y_{i}\wedge r _{i}\frac{r_{i}}{|D|}\geq 2R\%}\). We denote the above quantity as the attack success rate. 
We note that a prediction can still be incorrect even if it is certified by PECAN because the classifier can have incorrect predictions even when the data is clean. _Abstention Rate_ is computed as \(\frac{1}{m}\sum_{i=1}^{m}\mathds{1}_{\frac{r_{i}}{|D|}<2R\%}\). ### Effectiveness and Efficiency of PECAN We evaluate the effectiveness and efficiency of PECAN on MNIST, CIFAR10, and EMBER under the backdoor attack with the \(l_{0}\) feature-flip perturbation F\({}_{1}\), which allows the attacker to modify up to one feature in an example. We compare PECAN to BagFlip, the state-of-the-art probabilistic certified defense against \(l_{0}\) feature-flip backdoor attacks. Moreover, we note that PECAN needs to construct harder proofs than BagFlip because their definitions of perturbation space are different, as discussed in Appendix B.1. In Appendix B.2, we evaluate the effectiveness of PECAN against the perturbation space with the \(l_{\infty}\) norm. Summary of the resultsPECAN achieves significantly higher certified accuracy than BagFlip on CIFAR10 and EMBER. PECAN achieves competitive results on MNIST compared to BagFlip. PECAN has similar normal accuracy as BagFlip for all datasets. PECAN is more efficient than BagFlip at computing the certified radius. SetupWe use the same hyper-parameters for BagFlip as reported in their paper for all datasets. For PECAN, we vary \(n\), the number of partitions, to ensure a fair comparison between BagFlip. Appendix B.1 presents a detailed discussion of hyper-parameter settings for BagFlip and PECAN. We denote PECAN with different settings of \(n\) as PECAN-\(n\). than the ASR on non-malware, because the former shows how many pieces of malware can bypass the classifier. For PECAN and DPA, we show their results at modification amount \(R=0.1\%\). We show CROWN-IBP results against the perturbations \(\textsc{f}_{1}\), \(\textsc{f}_{2}\), and \(\textsc{f}_{3}\) regardless of \(R\) because CROWN-IBP does not consider \(R\). ResultsFigures 5 and 6 show the ASR, accuracy, and abstention rate of all the approaches on the malware test set with and without triggers, respectively. Table 1 in the appendix shows the detailed numbers. **Note that PECAN is the only certified approach for backdoor attacks. The results of other baselines can be seen as empirical because DPA and CROWN-IBP certify a different goal, and NoDef has no defense.** **PECAN can defend against the backdoor attack on the EMBER dataset.** Figures 5 and 6 show that PECAN has the lowest ASR \(1.85\%\) and \(1.03\%\) on both malware test sets with and without triggers on average, compared to DPA (\(18.05\%\), \(1.98\%\)), CROWN-IBP (\(15.24\%\), \(6.82\%\)), and NoDef (\(41.33\%\), \(2.12\%\)). **DPA and CROWN-IBP fail to defend against the backdoor attack.** The average ASR of DPA and CROWN-IBP on the malware test set with triggers are \(18.05\%\) and \(15.24\%\) in Figure 5, respectively, meaning that many malware with triggers can bypass their defenses. The average ASR of DPA on the malware test set without triggers, \(1.98\%\), is much lower than its ASR on the one with triggers, \(18.05\%\), which shows that DPA successfully defends against triggerless attacks when the test input does not have any trigger. CROWN-IBP has high ASR on both the malware test sets with and without triggers, as CROWN-IBP cannot defend against the poison in the training sets. 
**PECAN has higher abstention rates than other approaches.** On average, PECAN abstains from \(50.41\%\) predictions compared to DPA (\(34.73\%\)) and CROWN-IBP (\(26.44\%\)). We further compare the accuracy, ASR, and abstention rate of PECAN and DPA across all modification amount \(R\) when trained on \(\widetilde{D}_{3}\) in Figure 7. The results on \(\widetilde{D}_{1}\) and \(\widetilde{D}_{2}\) are shown in Appendix B.5. We can observe that PECAN has a much lower ASR than DPA across all modification amounts. Meanwhile, Figure 7 shows that the certification of PECAN might be over-conservative because the ASR is low (\(3.17\%\)) even when we regard \(\widetilde{D}_{3}\) as non-poisoned (when \(R=0\)), yet \(\widetilde{D}_{3}\) is actually poisoned. ## 6 Conclusion, Limitations, and Future Work We presented PECAN, a deterministic certified approach to effectively and efficiently defend against backdoor attacks. We foresee many future improvements to PECAN. First, we implemented PECAN as a certified defense specialized for neural networks because the evasion certification step, CROWN-IBP, is limited to neural networks. However, we can replace CROWN-IBP with an evasion certification approach for another machine learning model to get a corresponding backdoor defense for that model. Second, we adopt the idea of deep partition aggregation (DPA) to design the partition and aggregation steps in PECAN. We can improve these steps by using finite aggregation (FA) (Wang et al., 2022b), which extends DPA and gives higher certified accuracy. Third, during the certification of evasion attacks, we need to propagate the abstraction of the same test input Figure 5: Results of PECAN, DPA, CROWN-IBP (C-IBP), and vanilla model without defense (NoDef) trained on three poisoned EMBER datasets when evaluated on the malware test set with malicious triggers. We note that NoDef does not have abstention rates because it does not use any defense. Figure 6: Results of PECAN, DPA, C-IBP, and NoDef when evaluated on the (original) malware test set without malicious triggers. Figure 7: Comparison between PECAN and DPA trained on \(\widetilde{D}_{3}\) across all modification amount \(R\) when evaluated on the malware test set with triggers. through thousands of neural networks that have different weights but the same architecture. Sharing the propagation results among different neural networks (Fischer et al., 2022) can greatly improve the efficiency of PECAN and may enable using complete certification methods.
2307.04619
Learning Fine Pinch-Grasp Skills using Tactile Sensing from A Few Real-world Demonstrations
Imitation learning for robot dexterous manipulation, especially with a real robot setup, typically requires a large number of demonstrations. In this paper, we present a data-efficient learning from demonstration framework which exploits the use of rich tactile sensing data and achieves fine bimanual pinch grasping. Specifically, we employ a convolutional autoencoder network that can effectively extract and encode high-dimensional tactile information. Further, we develop a framework that achieves efficient multi-sensor fusion for imitation learning, allowing the robot to learn contact-aware sensorimotor skills from demonstrations. Our comparison study against the framework without using encoded tactile features highlighted the effectiveness of incorporating rich contact information, which enabled dexterous bimanual grasping with active contact searching. Extensive experiments demonstrated the robustness of the fine pinch grasp policy directly learned from few-shot demonstrations, including grasping of the same object with different initial poses, generalizing to ten unseen new objects, robust and firm grasping against external pushes, as well as contact-aware and reactive re-grasping in case of dropping objects under very large perturbations. Furthermore, the saliency map analysis method is used to describe the weight distribution across various modalities during pinch grasping, confirming the effectiveness of our framework at leveraging multimodal information.
Xiaofeng Mao, Yucheng Xu, Ruoshi Wen, Mohammadreza Kasaei, Wanming Yu, Efi Psomopoulou, Nathan F. Lepora, Zhibin Li
2023-07-10T15:07:29Z
http://arxiv.org/abs/2307.04619v2
# Learning Fine Pinch-Grasp Skills using Tactile Sensing from Real Demonstration Data ###### Abstract This work develops a data-efficient learning from demonstration framework which exploits the use of rich tactile sensing and achieves fine dexterous bimanual manipulation. Specifically, we formulated a convolutional autoencoder network that can effectively extract and encode high-dimensional tactile information. Further, we developed a behaviour cloning network that can learn human-like sensorimotor skills demonstrated directly on the robot hardware in the task space by fusing both proprioceptive and tactile feedback. Our comparison study with the baseline method revealed the effectiveness of the contact information, which enabled successful extraction and replication of the demonstrated motor skills. Extensive experiments on real dual-arm robots demonstrated the robustness and effectiveness of the fine pinch grasp policy directly learned from one-shot demonstration, including grasping of the same object with different initial poses, generalizing to ten unseen new objects, robust and firm grasping against external pushes, as well as contact-aware and reactive re-grasping in case of dropping objects under very large perturbations. Moreover, the saliency map method is employed to describe the weight distribution across various modalities during pinch grasping. The video is available online at: [https://youtu.be/4Pg29bUBKqs](https://youtu.be/4Pg29bUBKqs). ## I Introduction Dexterous robot manipulation has the capability to work across a range of tasks and environments. However, enabling dexterous manipulation in robots, particularly in a manner that is comparable to human capabilities, remains an unsolved challenge. Currently, numerous studies utilize visual feedback to enable robots to perform dexterous manipulation tasks such as box flipping [1], object rotating [2], and door opening [3]. However, these visual-based methods have limitations, as the visual data could be influenced by occlusion and lighting variations. Consequently, it is very important to investigate how to incorporate tactile information for the enhancement of dexterous manipulation in robotic systems. Tactile sensing plays a vital role in capturing detailed information about contact surfaces, including the distribution of contact forces and their variations during force-sensitive tasks - which is indispensable for achieving dexterous handling of lightweight objects with irregular surfaces, shapes, and deformable properties. Especially during close-range interaction between hands and objects, visual occlusion restricts the ability to perceive detailed information of the contact surfaces, during which tactile sensors become valuable for providing essential information of these unseeable surfaces. Integrating tactile sensing into motor learning of dexterous grasping can enhance the rich and precise sensing of surface contacts and interaction dynamics, provide irreplaceable and direct feedback when manipulating objects [4], and enable more robust and precise manipulation tasks. It is crucial to explore how robots can leverage this information to achieve human-comparable dexterous manipulation abilities. The canonical hardware for robot manipulation incorporates Force/Torque sensors that can only measure the 6-degree-of-freedom (DoF) wrench at each end-effector. 
Soft optical-based tactile sensors can provide abundant and discriminative contact information by quantifying the deformation of the soft materials using a camera system [5]. Currently, several soft tactile sensors have been developed, including TacTip [6], DigiTac [7], Gelsight [8], and DIGIT [9]. However, how to use high-dimensional data from tactile sensors for robot dexterous grasping remains an open research question. The complex and non-trivial deformation of soft tactile sensors during dexterous grasping tasks presents a considerable challenge. Humans can deal with soft contacts, quickly adapt to new tasks, and coordinate both arms to manipulate objects. Learning from Demonstration (LfD) offers an intuitive, efficient method for acquiring human skills through synchronized tactile information, encoding rich state-action mappings and enabling robots to learn human sensorimotor skills while responding to tactile and proprioceptive feedback. In addition, the common issue of accumulating compounding errors during dexterous manipulation task execution in LfD can be mitigated by utilizing rich tactile information as feedback. Fig. 1: Autonomous dexterous grasping with soft tactile sensors, including pre-grasp, press, roll-lift, and firm grasp. The challenge involves effectively extracting features from sensory data and integrating them with proprioceptive states for sample-efficient learning of human dexterous manipulation behavior. This work is motivated to develop an effective LfD framework that leverages rich tactile sensing to learn dexterous sensorimotor skills. Our approach focuses on achieving one-shot LfD of _fine pinch grasp_, using high-dimensional contact information from tactile sensors and a limited amount of _real data_. The contributions are summarized as follows: * A novel feature extraction approach to encapsulate essential features from tactile sensing data, which are then fused with robot proprioceptive states and the tactile image difference, thus resulting in a low-dimensional latent space representation that significantly enhances the learning process of fine grasping skills. * An effective LfD framework that integrates tactile sensory input and the robot proprioceptive state, which enables the robot to efficiently acquire feedback-driven dexterous grasping skills through a single demonstration. The proposed framework is validated by pinch grasp tasks on a dual-arm setup equipped with TacTip sensors [6] and has achieved the successful retrieval of a small, cylindrical object on a table using a one-shot demonstration. Our experimental results show that the policy, learned from one-shot human demonstration data, can achieve stable grasping of unseen objects with different diameters, masses, and materials. Furthermore, the robustness of the framework against external disturbances has been validated, with the learned policy demonstrating stable grasping under external disturbance, as well as the capacity to autonomously execute successful re-grasping in case of a large external force that pushes off the object. We applied saliency map analysis [10] and revealed how the learned policy uses different sensory modalities in a variable way throughout the dexterous pinch grasp process, and demonstrated the capability and effectiveness of our proposed network to efficiently learn features of the high-dimensional data and autonomously segment the long-horizon data into several distinct fine skills for execution.
## II Related Works During robotic grasping, tactile sensors can provide rich contact information which is not easily accessed via visual information, thereby playing a crucial role in enhancing dexterous grasping capabilities [11]. Prior research on robotic pinch grasp has primarily focused on either force analysis and planning to achieve force closure [12] or the development of specialized grippers [13]. Soft deformable tactile sensors have the ability to perform contact-rich interactions with the environment and manipulate delicate objects safely [14]. With optical-based tactile sensors, the orientation of the contact surface can be inferred from the tactile image, enabling stabilization of the pinch grasp by rolling the sensor on the contact surface and applying desired grasping forces [15]. The study in [16] proposed a novel tactile sensor capable of measuring and localizing distributed forces that enables the robot hand to grasp deformable soft objects. One open question is how to extract useful information from high-dimensional tactile images. The works in [17] estimate 6D contact wrenches from tactile images, and the estimated wrenches can be used as feedback to grasping controllers within classical control theory. Deep neural networks can also be used to process tactile images. The works in [18] show that contact poses can also be detected from tactile images, which was then combined with goal-driven methods to achieve stable non-prehensile object pushing. The works in [19] introduce Autoencoder networks [20] to compress the high-dimensional tactile images into low-dimensional latent vectors which can be used for several downstream tasks, such as object classification. Moreover, although deformable tactile sensors facilitate area contact, potentially improving grasp stability and protecting delicate objects, the dynamics of the deformable sensor cannot be neglected. The work proposed in [14] combines the 3D geometry of the tip of a deformable tactile sensor with robot proprioceptive action to learn the tactile sensor membrane dynamics and predict the deformation conditioned on robot action. Data-driven methods can be used to learn the dynamics and can be combined with Model Predictive Control (MPC) methods to achieve tactile servoing [21]. Insights from human intrinsic understanding may prove valuable in leveraging deformable sensors to achieve dexterous dual-arm manipulation tasks. LfD is an intuitive and effective way to learn human skills from collected demonstrations, which is very helpful for tasks requiring high-level skills, such as intricate coordination between two arms. By segmenting the collected motion data, the work proposed in [22] generates a set of motion primitives to complete tasks. Additionally, humans use multiple senses together to accomplish different tasks, which motivates investigating how multi-sensory data can be fused to help with manipulation tasks [23]. ## III Methods We propose an LfD framework that uses a behavior cloning (BC) network to learn the policy of dexterous dual-arm grasping with tactile sensing from human demonstrations. ### _Demonstration Dataset of Bimanual Manipulation_ In our implementation, the haptic devices allow operators to adjust the 6D pose simultaneously, providing an intuitive way to demonstrate bimanual grasping skills on a dual-arm robot. During the demonstration, a human operator teleoperates the dual-arm robot to complete the grasping task by sending Cartesian commands to the two end-effectors via two haptic devices.
The human demonstration data are recorded automatically during the entire grasping. ### _Tactile Feature Extraction_ The TacTip used in this work is an optical tactile sensor with a soft hemispherical tip, which was 3D-printed in one piece combining an elastic skin with 330 rigid white markers (pins) [6]. When the soft tip deforms during contact with objects, the white pins start to move away from their initial positions. The displacement of these pins reflects the complex deformation of the soft surface. An inner camera captures and projects the displacement to an array of white pins on a black background in the image plane, as shown in Fig. 2(a). Raw tactile RGB images are first resized to 256\(\times\)256 pixels using linear interpolation and converted to grayscale images, which are then cropped using a circle mask and converted to binary images by thresholding. A median filter is applied to denoise the binary images. We propose to use a self-supervised learning method, a convolutional autoencoder (CAE) network, to extract robust features that represent the contact properties from the preprocessed tactile images. Eight convolutional layers are used in the CAE network to extract the spatial information represented by the displacement of the pins. The structure of the CAE network is shown in Fig. 2(a). The CAE network consists of an encoder and a decoder, formulated as follows: \[\begin{array}{c}g_{\Theta}(\cdot):\mathcal{X}\rightarrow\mathcal{H}\\ f_{\Phi}(\cdot):\mathcal{H}\rightarrow\hat{\mathcal{X}}\end{array}. \tag{1}\] The encoder \(g_{\Theta}(\cdot)\) projects each tactile image \(\gamma_{t}\) in the high-dimensional input space \(\mathcal{X}\) (256\(\times\)256) to 16 feature maps \(\gamma_{l}\) in the low-dimensional latent space \(\mathcal{H}\) (16\(\times\)16), then the decoder \(f_{\Phi}(\cdot)\) reconstructs that image from the same feature maps to the output space \(\hat{\mathcal{X}}\) (256\(\times\)256). Fig. 2: Architecture detailing the teleoperation system for demonstrations and the LfD framework. Fig. 3: The structure of the Convolutional Autoencoder network and the Behavior Cloning network. The binary cross-entropy loss function is used as the reconstruction loss between the input images \(\mathcal{X}\) and the reconstructed images \(\hat{\mathcal{X}}\) to update the network parameters via back-propagation: \[L_{CAE}(\gamma_{t},\gamma_{p})=-\left[\gamma_{t}\log\gamma_{p}+(1-\gamma_{t})\log(1-\gamma_{p})\right],\\ \gamma_{l}=g_{\Theta}(\gamma_{t}),\quad\gamma_{p}=f_{\Phi}(\gamma_{l}) \tag{2}\] where \(\gamma_{p}\) is the reconstructed image by the decoder network. ### _Behavior Cloning Network_ We propose and design a novel BC network to learn coordinated bimanual grasping skills from human demonstration data. Dexterous bimanual grasping skills can be divided into two categories: (1) adaptive interaction with different objects, and (2) dual-arm motion coordination. To capture these skills, we have designed the input to our network to include encoded tactile feature maps, tactile image differences, and the robot's proprioceptive state. The encoded feature maps and tactile image differences capture the human-object interaction skills. The robot's proprioceptive state, on the other hand, offers insights into the coordination of movements between both arms. These inputs collectively serve to reflect the complexity and adaptability of dexterous grasping skills.
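For concreteness, a minimal PyTorch sketch of the CAE of Eqs. (1)-(2); the kernel sizes, strides, and channel widths below are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TactileCAE(nn.Module):
    """1x256x256 binary tactile image -> 16 latent maps of 16x16 -> reconstruction."""
    def __init__(self):
        super().__init__()
        chans = [1, 8, 16, 32, 16]
        enc = []
        for cin, cout in zip(chans[:-1], chans[1:]):      # 256 -> 16 via stride-2 convs
            enc += [nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.ReLU()]
        self.encoder = nn.Sequential(*enc)
        dec = []
        for cin, cout in zip(chans[::-1][:-1], chans[::-1][1:]):  # 16 -> 256
            dec += [nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1), nn.ReLU()]
        dec[-1] = nn.Sigmoid()                            # pixel probabilities for BCE
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)          # latent feature maps, i.e. gamma_l
        return self.decoder(z), z

# One training step with the binary cross-entropy reconstruction loss of Eq. (2).
model, bce = TactileCAE(), nn.BCELoss()
x = torch.rand(4, 1, 256, 256)       # a dummy batch of preprocessed tactile images
recon, latent = model(x)
bce(recon, x).backward()
```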
Following this idea, we use the encoded tactile feature maps \(l_{t}\), the proprioceptive state \(\phi_{t}\), and the tactile image difference \(e_{t}\) as input to the BC network to represent and learn fine human skills. The discrete-time state-action pair set \(G=\{(s_{0},a_{0}),(s_{1},a_{1}),...,(s_{t},a_{t}),...\}\) is created to train the BC network, where \(s_{t}=(l_{t},\phi_{t},e_{t})\) denotes the robot state and \(a_{t}\) denotes the Cartesian commands of the two arms at time \(t\). Using such multimodal data as input to train a network requires a well-crafted embedding structure. A common way of fusing a 2D feature map and a 1D feature vector is to flatten the 2D feature map into a 1D vector and concatenate the flattened vector with the 1D feature vector. However, we found that this flattening projection results in the _loss of spatial correlation of tactile information_. In this work, we specifically tile the proprioceptive state of the robot and the tactile image difference to _match_ the dimension of the tactile feature maps, so as to keep the spatial information of the encoded tactile feature maps. We then concatenate the tactile feature maps, the tiled proprioceptive state maps, and the tactile image difference on each feature channel, as shown in Fig. 2(b). The convolutional layers in the BC network first filter the input feature maps (46\(\times\)16\(\times\)16) to a feature map (1\(\times\)8\(\times\)8), which is then flattened and fed into a fully connected network (FCN). The FCN network outputs a vector \(\hat{\mathbf{a}}\in\mathbb{R}^{12}\) as the predicted Cartesian pose commands of the two arms, including 3D position and 3D orientation for each arm. The loss function used to train the BC network consists of two parts, which are formulated as: \[L_{BC}(\mathbf{a},\hat{\mathbf{a}})=\left\|\mathbf{a}-\hat{\mathbf{a}}\right\|^{2}+\left\|\mathbf{d}-\hat{\mathbf{d}}\right\|^{2} \tag{3}\] \[\hat{\mathbf{a}}=\psi(l,\phi,e;\Phi_{conv},\Phi_{fcn})\] where \(\mathbf{a}\in\mathbb{R}^{12}\) denotes the Cartesian pose commands of the two arms from the human demonstration dataset, and \(\hat{\mathbf{a}}\in\mathbb{R}^{12}\) is the predicted Cartesian pose command by the BC network \(\psi(\cdot;\Phi_{conv},\Phi_{fcn})\), parameterized by \(\Phi_{conv}\) and \(\Phi_{fcn}\); \(l\), \(\phi\) and \(e\) denote the tactile feature maps, the proprioceptive state maps and the tactile image difference, respectively. The second term \(\left\|\mathbf{d}-\hat{\mathbf{d}}\right\|^{2}\) is added to learn the dual-arm coordination skills from human demonstrations, where \(\mathbf{d}\in\mathbb{R}^{3}\) is the relative position between the two end-effectors, and \(\hat{\mathbf{d}}\in\mathbb{R}^{3}\) is the predicted relative position between the two end-effectors by the BC network. ## IV Experiments and Results ### _Experimental Setup and Data Collection_ We validate the performance of LfD with tactile sensing for robot dexterous manipulation on a challenging task: object retrieval from the desk using a dual-arm pinch grasp. We have designed a comparative study involving two different configurations to show that our method can outperform the vanilla BC baseline qualitatively and quantitatively. During dexterous grasping, external vision can be easily occluded by the end-effector, potentially leading to inaccurate object estimation. Therefore, our experiments operate without using external visual sensors.
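Returning to the fusion scheme of the BC network above, a minimal PyTorch sketch; the split of the 46 input channels into 32 tactile, 12 proprioceptive, and 2 image-difference channels, and the layout of the 12-D command as [left position, left orientation, right position, right orientation], are our assumptions.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Tile 1D inputs to 16x16 maps, concatenate with tactile feature maps,
    and regress the 12-D Cartesian command of both arms."""
    def __init__(self, tactile_ch=32, propr_dim=12, diff_dim=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(tactile_ch + propr_dim + diff_dim, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 8x8
        )
        self.fcn = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 12))

    def forward(self, tactile_maps, propr, img_diff):
        b, _, h, w = tactile_maps.shape
        # Tile each scalar to an h x w map so spatial structure is preserved.
        propr_maps = propr[:, :, None, None].expand(b, propr.shape[1], h, w)
        diff_maps = img_diff[:, :, None, None].expand(b, img_diff.shape[1], h, w)
        x = torch.cat([tactile_maps, propr_maps, diff_maps], dim=1)   # 46x16x16
        return self.fcn(self.conv(x).flatten(1))

def bc_loss(a_pred, a_demo):
    """Eq. (3): command error plus the relative end-effector position term."""
    d_pred = a_pred[:, :3] - a_pred[:, 6:9]
    d_demo = a_demo[:, :3] - a_demo[:, 6:9]
    return ((a_pred - a_demo) ** 2).sum(1).mean() + ((d_pred - d_demo) ** 2).sum(1).mean()

a = BCPolicy()(torch.rand(4, 32, 16, 16), torch.rand(4, 12), torch.rand(4, 2))
```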
By default, the starting position of the object lies between the two robot hands, and the whole demonstration is represented in the task space. This assumption allows us to have an interface to connect with a 6D pose estimation from an external camera (e.g., as in [27]) and then position the initial poses of the two hands around the target object before starting our control policy. We collected two demonstrations for each task. The human demonstration dataset collected in the grasping task includes three main components: the Cartesian commands, the proprioceptive states, and the tactile feedback (i.e., tactile images provided by the TacTip sensors). The Cartesian commands and the proprioceptive states of the two arms are collected at a frequency of 1000 Hz. Two TacTips record the tactile image pairs at a frequency of 60 Hz. For each demonstration, about 1000 tactile images are recorded. Before using the collected dataset to train the networks, several pre-processing methods are used to process the raw data. The proprioceptive states of the two arms and the tactile images, collected at different sampling rates, are synchronized using a linear interpolation method to align their timestamps. A median filter is then applied to smooth the Cartesian commands \(a_{t}\), i.e., the 6D poses of the two end-effectors. For raw tactile images, the structural similarity index measure (SSIM) [28] is used to quantify the difference between the current frame and the original frame. ### _Design of Validation Tasks_ #### Iv-B1 Learning to grasp a vial The human demonstrator performs teleoperation of the dual-arm robot to grasp a plastic vial (a test tube with \(\Phi=15.65\)mm) that is horizontally placed on the table. A Behaviour Cloning (BC) network is trained using the gathered demonstration data, and the trained policy is tested on the dual-arm robot to validate its generalisation to unseen initial poses. During the evaluative phase, we positioned the test tube between the end-effectors to evaluate the performance of the learned policy given variations in the starting position, specifically alterations of up to \(\pm 20\) degrees and displacements of up to \(\pm 2\) centimetres in the objects' locations. #### Iv-B2 Generalization to unseen objects To evaluate the generalisability of the trained policy to unseen objects with a variation of radius, weight, or even material (e.g., soft and fragile objects), a set of test experiments has been conducted using multiple objects of different radii ranging from \(11.7\)mm to \(28.6\)mm. #### Iv-B3 Robustness against external disturbance We also validate the robustness of the trained policy against external disturbances. We applied random external pushes from the left, right, up, and down directions on the grasped object to test if the two arms can coordinate their end-effectors' poses to ensure the balance of the object. #### Iv-B4 Re-grasping capability The re-grasping experiments are conducted to test if the trained controller is contact-aware and can perceive the loss of contact with the object, make necessary adjustments according to the tactile feedback, and react to grasping failures. After a successful normal grasp, we severely pushed the object away to break its static equilibrium, so that the object dropped down between the two end-effectors again. ### _Network Evaluation_ Our proposed model is developed using PyTorch [29].
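As a concrete illustration of the preprocessing just described, a short sketch of timestamp alignment and the SSIM-based tactile image difference; using \(1-\mathrm{SSIM}\) as the difference signal and the exact array shapes are our assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def align_streams(t_tactile, t_propr, propr):
    """Resample the 1 kHz proprioceptive states onto the 60 Hz tactile
    timestamps by per-dimension linear interpolation."""
    return np.stack([np.interp(t_tactile, t_propr, propr[:, k])
                     for k in range(propr.shape[1])], axis=1)

def tactile_difference(frame, reference):
    """Scalar difference of the current frame against the initial
    (no-contact) frame: 0 means identical to the rest state."""
    return 1.0 - structural_similarity(frame, reference, data_range=1.0)
```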
For the training of the Convolutional AutoEncoder (CAE), a dataset comprising 1500 tactile images obtained from the TacTip sensor during the demonstration was utilized. A representative reconstruction on the validation set is shown in Fig. 2(a). The trained CAE exhibits a satisfactory reconstruction quality, with a Mean Squared Error (MSE) loss of 0.015 and a Structural Similarity Index Measure (SSIM) of 0.934. The model training process, which involved 100 iterations, was completed in approximately two hours using an NVIDIA 1080 GPU. In the case of the Behaviour Cloning (BC) network, the model was trained for 1000 iterations and the training process took approximately 5 minutes. ### _Results of Grasping Tasks_ The BC network trained on human demonstration data is deployed on a real dual-arm robot to verify its performance on the designed tasks. A set of snapshots showing grasping a tube from the tube holder and the table is shown in Fig. 5(a). In both grasping tasks, the learned control policy achieved a \(100\%\) success rate, even when the initial poses of the tube are different from their original pose in the demonstration. Fig. 4: Generalization of the learned policy and its robustness to external disturbance. The dual-arm robot can make prompt adjustments and enable stable dexterous grasping by learning from only one demonstration. In the process of lifting the object, the dual-arm robot achieves stable grasping by constantly twiddling the "fingertips" (tips of TacTip sensors) and adjusting the object to the central position. The process of retrieving an object from the table and adjusting its pose to maintain balance requires very fine movements and interactions supported by rich tactile information; 6-axis force/torque information is not sufficient to discern the different contact situations in this scenario. We evaluate the robustness of the learned policy against external disturbances. It can be seen from Fig. 3(a) that the dual-arm robot can make a proper adjustment to adapt to pushes. Although the pose of the two-arm robot in contact with the object was changed each time while being pushed, the dual-arm robot can always fine-adjust the object reactively to the center of the fingertips (TacTip sensors), and roll and move the object to the desired position. Compared with manually programmed behavior, this serves as a feedback policy that has been successfully acquired from human dexterity skills, which enables the dual-arm robot to autonomously adjust the posture and ensure a stable grasp quickly. It is noteworthy that such active rolling adjustment has not been explicitly demonstrated by any separate trials; rather, this behavior was successfully captured by the rich tactile data during the one-shot demonstration of pick-lift grasping. To examine the reaction in the presence of an unknown situation, i.e., grasping failures, the learned policy demonstrated contact-awareness of the falling object, i.e., loss of contact according to the tactile feedback, and thus controlled the robot to restart the grasping process, which was not explicitly programmed or demonstrated by the prior LfD data. The result of the re-grasping experiments in Fig. 3(b) shows that the tactile-based control learned from human demonstrations is very effective in performing robotic dexterous bimanual manipulation tasks autonomously and quickly without the need for explicit manual programming or complex planning.
The policy also achieves successful grasping of previously unseen objects, as shown in Fig. 3(c). Although the test objects have a variety of sizes and weights compared with the object used in the demonstration, the policy can still perform stable grasping. The experiment results show that the trained policy can generalize to unseen objects with similar cylindrical shapes but with different sizes and weights. ### _Comparison Study_ We conducted a comparison study to validate that successful grasping is achieved by the active use of tactile sensing. Besides training a BC network using the structure shown in Fig. 2(b), we also train two different BC networks for comparison. The first one has exactly the same BC network structure but with frozen input of the tactile images, meaning that the image input stays unchanged during both training and testing. The second one has an FCN structure and uses the poses of the two end-effectors (both positions and orientations) as the input to train the network. The proposed BC network demonstrates convergence to a loss of 0.04 on the testing set. In contrast, the network employing frozen tactile information achieves convergence with a loss of 0.5, while the FCN converges to a loss value of 1. These results show that the effective integration of tactile information significantly enhances the convergence rate and leads to a reduced loss value in the final model. We also compared all the grasping performances on the real dual-arm robot. As shown in Fig. 5, both BC network structures without the tactile information failed in grasping the tube: the robot arms failed to approach the object from its initial pose, and instead bypassed the object and moved towards the desired end-poses, showing no contact-awareness. The experimental results indicate that tactile feedback plays an essential role in providing contact information for initiating contacts, generating appropriate adjustments, and lifting and retrieving to the desired target locations, enabling the dual-arm robot to perform very fine and dexterous contact-rich skills. Fig. 5: Comparison study using the baseline method (failed) in real-world robots. ### _Interpretability_ To explicitly show how much different modalities influence the entire operation, we use the saliency map method for calculating the weight distribution. The procedure for calculating this distribution is formulated as follows: \[W_{i}=\frac{N(I)}{N(I)+N(J)},\quad W_{j}=\frac{N(J)}{N(I)+N(J)}\enspace, \tag{4}\] where \(W_{i}\) and \(W_{j}\) are the weight distributions of each modality and \(N(\cdot)\) represents the normalization process. \(I\) is the importance of the tactile information, calculated by summing the absolute values of the weights that the learned policy assigns to tactile features. \(J\) is calculated in the same way by summing the absolute values of the weights assigned to robot proprioceptive state features. The comprehensive process of dexterous pinch grasping can be subdivided into four primary stages: pre-grasp, pressing, rolling and lifting, and stabilization. Each of these stages utilizes tactile feedback in a distinct manner. In Fig. 6(b), the weight changes during the complete dexterous pinch grasping process are depicted. Initially, as the end-effector moves toward the objects without any contact deformation on the tactile sensor, the weight of the robot's proprioceptive state exceeds that of the tactile information.
When the tactile sensor comes into contact with the desk and is prepared for a pre-grasp pose, the weight of the tactile information increases (stage A). As the end-effector advances towards the object and initiates contact, the weight attributed to the tactile information increases, exceeding that of the proprioceptive state (stage B). During the roll and lift phase, the weight of the tactile information initially decreases, subsequently achieving equilibrium with the proprioceptive state (stage C). This indicates that during the lifting phase, the learned policy necessitates tactile information for successful in-hand manipulation and proprioceptive information for effective dual-arm coordination. Finally, upon successfully lifting the tube, the weight reverts to the tactile information, facilitating the stabilization of the tube (stage D). Fig. 6: The output of the learned policy and the weight changes during the grasping. ## V Conclusion And Future Work The presented Learning from Demonstration (LfD) framework showed successful skill transfer from humans to robots with minimal data, a single trial of real robot data, with the use of rich tactile sensing at the robot's fingertips. Through our journey of exploring how to best utilize the new generation of compliant tactile sensors, we have developed the presented encoding methods that can effectively extract and capture high-dimensional contact sensing from soft tactile sensors, together with the fusion with proprioceptive feedback. An interesting outcome is the confirmation that it is possible to learn directly from real robot data, without the need for heavy computation or big data, if the right data is used. Our comparison studies showed that without the use of tactile sensing, dexterous motor skills cannot be learned from one-shot demonstrations with traditional robot sensing, which is rather limited. Our proposed approach overcomes the traditional limitations of one-shot learning methods through the use of tactile and proprioceptive information for extracting useful information and mapping it into fine-grained motor skills. This approach is shown to be robust in the presence of external pushes and is able to re-grasp the object if it drops, which was not shown in the one-shot demonstration and emerges as a natural outcome of sensorimotor skills through state-action mapping. The ability to learn from real data/hardware and a single demonstration is very attractive for bringing a wider range of machine learning approaches into the real world, where tasks can hardly be simulated and only a small amount of data is available. Meanwhile, one apparent limitation is that one-shot learning is a priori trained on a specific task and object, and it can be generalised and robust only in neighbouring situations within a category of similar tasks: generalization applies to new/unseen objects that are similar to the demonstrated object within certain variations. The advantage of having only one demonstration comes with the trade-off that when a very different object needs to be grasped, at least one new demonstration is needed. Another limitation is that the robot's performance is based on blind grasping and re-grasping, and has not yet utilised external visual perception. In the future, integration of the current framework with stereo vision could extend the versatility and dexterity of object manipulation.
Overall, our proposed LfD framework provides an attractive solution for learning from one demonstration with tactile sensing and supports broad real-world applications in robotics with data scarcity.
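As a closing illustration of the interpretability analysis, a minimal sketch of the modality weighting of Eq. (4), treating absolute input gradients as saliency; the paper does not spell out the normalization \(N(\cdot)\), so the plain ratio below is our simplification.

```python
import torch
import torch.nn as nn

def modality_weights(policy, tactile, propr):
    """Share of absolute input-gradient saliency per modality, cf. Eq. (4)."""
    tactile = tactile.clone().requires_grad_(True)
    propr = propr.clone().requires_grad_(True)
    policy(tactile, propr).sum().backward()
    I, J = tactile.grad.abs().sum(), propr.grad.abs().sum()
    return (I / (I + J)).item(), (J / (I + J)).item()

# Toy two-modality policy standing in for the trained BC network.
lin_t, lin_p = nn.Linear(16, 12), nn.Linear(12, 12)
w_tactile, w_propr = modality_weights(lambda t, p: lin_t(t) + lin_p(p),
                                      torch.rand(1, 16), torch.rand(1, 12))
```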
2310.07907
The Pristine survey -- XXII. A serendipitous discovery of an extremely Li-rich very metal-poor giant and a new method of $^6$Li/$^7$Li isotope measurement
We report the serendipitous discovery of a very metal-poor (VMP) Li-rich giant star ($T_{\rm eff}$ = 4690$\pm$80 K, log g = 1.34$\pm$0.13, [Fe/H] = $-2.43\pm$0.07). We analyse the Li I 6103 and 6707 \r{A} lines accounting for departures from local thermodynamic equilibrium (NLTE) and correcting for 3D effects using literature data, which yields a lithium abundance $\log\varepsilon_{Li} = 3.42\pm0.07$. Comparing lithium abundances from the two lines, in 1D NLTE we measure the isotope ratio $^6$Li/$^7$Li = 1.64$^{+1.49}_{-1.08}$ %. When correcting for 3D effects, we detect the fragile $^6$Li isotope at $2$-sigma level and the ratio $^6$Li/$^7$Li = 5.65$^{+5.05}_{-2.51}$ %. To our knowledge, this is the first $^6$Li/$^7$Li measurement in an extremely Li-rich VMP star. The Cameron-Fowler mechanism, which is proposed to produce Li-rich stars, does not imply $^6$Li production and is therefore inconsistent with our measurement when applying 3D corrections. We also derive NLTE abundances for 16 elements, most of which show similar abundances to those found in VMP stars. Sodium is an exception: [Na/Fe]$_{\rm NLTE, 1D}$ = 0.07 $\pm 0.03$, which is 0.5 dex higher than what is typical for VMP stars. This star joins the sample of rare Li-rich VMP stars, and we offer a novel way to constrain the source of lithium in such stars through isotope ratio measurements.
T. M. Sitnova, T. Matsuno, Z. Yuan, N. F. Martin, P. Banerjee, F. Sestito, K. A. Venn, J. I. González Hernández
2023-10-11T21:24:59Z
http://arxiv.org/abs/2310.07907v2
The Pristine survey - XXII. A serendipitous discovery of an extremely Li-rich very metal-poor giant and a new method of \({}^{6}\)Li/\({}^{7}\)Li isotope measurement+ ###### Abstract We report the serendipitous discovery of a very metal-poor (VMP) Li-rich giant star (\(T_{\rm eff}=4690\pm 80\) K, log g = 1.34\(\pm 0.13\), [Fe/H] = \(-2.43\pm 0.07\)). We analyse the Li i 6103 and 6707 A lines accounting for departures from local thermodynamic equilibrium (NLTE) and correcting for 3D effects using literature data, which yields a lithium abundance \(\log\varepsilon_{Li}=3.42\pm 0.07\). Comparing lithium abundances from the two lines, in 1D NLTE we measure the isotope ratio \({}^{6}\)Li/\({}^{7}\)Li = \(1.64^{+1.49}_{-1.08}\) %. When correcting for 3D effects, we detect the fragile \({}^{6}\)Li isotope at the 2-sigma level and the ratio \({}^{6}\)Li/\({}^{7}\)Li = \(5.65^{+5.05}_{-2.51}\) %. To our knowledge, this is the first \({}^{6}\)Li/\({}^{7}\)Li measurement in an extremely Li-rich VMP star. The Cameron-Fowler mechanism, which is proposed to produce Li-rich stars, does not imply \({}^{6}\)Li production and is therefore inconsistent with our measurement when applying 3D corrections. We also derive NLTE abundances for 16 elements, most of which show similar abundances to those found in VMP stars. Sodium is an exception: [Na/Fe]\({}_{\rm NLTE,1D}\) = 0.07 \(\pm 0.03\), which is 0.5 dex higher than what is typical for VMP stars. This star joins the sample of rare Li-rich VMP stars, and we offer a novel way to constrain the source of lithium in such stars through isotope ratio measurements. keywords: stars: abundances -- line: formation -- stars: atmospheres -- stars: fundamental parameters ## 1 Introduction Lithium is an element with a complex astrophysical origin and chemical evolution. It is one of the primordial elements produced in the Big Bang and can also be produced through further processes, such as the spallation process and stellar nucleosynthesis. A number of processes have been suggested as sources of lithium, but so far its exact origins and production mechanisms remain unclear (see, for example, Prantzos 2012; Magrini et al. 2021; Romano et al. 2021). Lithium has two stable isotopes: \({}^{7}\)Li and the less abundant and more fragile \({}^{6}\)Li. The two isotopes have different origins: \({}^{7}\)Li is produced in Big Bang nucleosynthesis, inside stars, and by cosmic rays via spallation, while \({}^{6}\)Li can only be produced by cosmic rays via spallation (Prantzos 2012). As a result, the ratio between \({}^{7}\)Li and \({}^{6}\)Li varies among different astrophysical sites. For example, in Solar system meteorites, the ratio is measured to be \({}^{6}\)Li/\({}^{7}\)Li = 8.11 % (McDonough et al. 2003), while Mott et al. (2017) found the same value in a metal-rich magnetically active giant star, and Ritzenhoff et al. (1997) found \({}^{6}\)Li/\({}^{7}\)Li = 3 % and 8 % in solar spots and in active late-type dwarf stars, respectively. The most up-to-date NLTE calculations of the Li i 6707 A line in 3D model atmospheres indicate the absence of \({}^{6}\)Li in the Sun (Strassmeier et al. 2018) and solar-type stars (Harutyunyan et al. 2018). In old, very metal-poor (VMP, [Fe/H]\({}^{1}\) \(<-2\)) stars with normal lithium abundance, the presence of the \({}^{6}\)Li isotope is unlikely.
Some of the early studies (Smith et al., 1993, 1998; Asplund et al., 2006) report on \({}^{6}\)Li detections in unevolved VMP stars, while others (for example, Garcia Perez et al., 2009) conclude that the detection of \({}^{6}\)Li cannot be safely claimed. The above studies employ classic 1D model atmospheres, which neglect convection. Cayrel et al. (2007) and Steffen et al. (2012) showed that the convective asymmetry generates an excess absorption in the red wing of the resonance line that mimics the presence of \({}^{6}\)Li, and that the measured ratios for unevolved stars should be considered upper limits. Lind et al. (2013), Gonzalez Hernandez et al. (2019), and Wang et al. (2022) account for deviations from local thermodynamic equilibrium (LTE) and hydrodynamic (3D) effects and confirm non-detections of \({}^{6}\)Li in these stars. Footnote 1: We use a standard designation, [X/Y] = log(N\({}_{\rm X}\)/N\({}_{\rm Y}\))\({}_{*}-\) log(N\({}_{\rm X}\)/N\({}_{\rm Y}\))\({}_{\odot}\), where N\({}_{\rm X}\) and N\({}_{\rm Y}\) are total number densities of element X and Y, respectively. The majority of unevolved VMP stars have a normal lithium abundance log \(\varepsilon_{Li}\)\({}^{2}\) = 2.25, known as the lithium plateau or the "Spite plateau" (Spite & Spite, 1982). This value is treated as an upper boundary for lithium abundances in VMP unevolved stars, which decreases with stellar evolution (see, for example, Lind et al., 2009). However, large spectroscopic surveys uncovered a number of very metal-poor stars with lithium abundances exceeding the Spite plateau. For example, Li et al. (2018) found 12 VMP stars, including subgiants, with log \(\varepsilon_{Li}\) up to 4.53. Nine of them show sodium enhancement, while other measured elements (carbon, magnesium, barium) show values close to those measured in normal stars with similar metallicity. Mucciarelli et al. (2019, 2021) discovered two Li-rich giant stars in \(\omega\) Cen. The most Li-rich of these two stars is strongly enhanced in sodium with [Na/Fe]\({}_{\rm NLTE}\) = 1.01, while another one has [Na/Fe]\({}_{\rm NLTE}\) = 0.14 in line with other \(\omega\) Cen stars. The most lithium-rich star known to date (Kowkabany et al., 2022) has log \(\varepsilon_{Li}\) = 5.62, [Fe/H] = -2.43 and also shows sodium enhancement with [Na/Fe] = 1.10. Monaco et al. (2012) found a Li- and Na-rich star in the globular cluster M4 and suggested that lithium is produced in parallel to sodium. However, it is worth noting that stars with high sodium abundances are common and only a small fraction of them are enriched in lithium. Footnote 2: Here, log \(\varepsilon\) = log N\({}_{\rm El}\)/N\({}_{\rm H}\), where N\({}_{\rm El}\) and N\({}_{\rm H}\) are number densities of a given chemical element and hydrogen, respectively. The mechanism of Li-enhancement in unevolved stars is unclear, while in stars at advanced evolutionary stages, \({}^{7}\)Li can be produced in the Cameron-Fowler mechanism (CF mechanism; Cameron & Fowler, 1971) at the asymptotic giant branch (AGB) and red giant branch (RGB) stages as proposed by Cameron & Fowler (1971) and Sackmann & Boothroyd (1999), respectively. In addition to high Li abundance, Cameron & Fowler (1971) predict in some cases high abundances of slow neutron capture (s-process) elements. The most lithium-rich star known to date (Kowkabany et al., 2022) can be considered an example of a star that has undergone the CF mechanism.
However, observations show that this is not the only way to produce an excess of lithium in a star (Tsantaki et al., 2023). For example, Mott et al. (2017) found a considerable amount of the \({}^{6}\)Li isotope in a magnetically active metal-rich giant, which suggests that a mechanism other than CF is responsible for its production. It is important to emphasize that the interpretation of lithium-rich stars varies between those that are metal-poor and those that are metal-rich. This distinction arises because metal-rich stars possess more substantial convective envelopes and experience distinct levels of mixing during their evolution compared to metal-poor stars. Another reason for separating the discussion of lithium enhancement in metal-rich and metal-poor stars is the impact of chemical evolution. This is because the contribution of lithium production in novae becomes significant at higher metallicities. Li-rich stars are ubiquitous, and they are found in different Galactic populations, such as globular clusters (see, for example, Sanna et al., 2020, and references therein), open clusters (Romano et al., 2021), and dwarf spheroidal galaxies (Kirby et al., 2012). Over the last few years, continuous efforts have been made on observations of Li-rich stars (see, for example, Casey et al., 2016; Gao et al., 2019; Deepak & Reddy, 2019; Sanna et al., 2020; Yan et al., 2021; Martell et al., 2021; Yan et al., 2022; Shahbaz et al., 2022; Nepal et al., 2023). These observations challenge our current theoretical understanding of the origin of lithium and its chemical evolution. To make progress in nucleosynthesis modeling, observational constraints are required not only on individual chemical elements but also on comprehensive element abundance patterns and isotopic ratios. In this study, we present the discovery of a Li-rich star and perform a careful stellar parameter and chemical composition determination, including for the \({}^{6}\)Li/\({}^{7}\)Li isotopic ratio. We present a new method to determine the lithium isotopic ratio. It is based on a comparison of abundances from the resonance line, which is sensitive to the \({}^{6}\)Li/\({}^{7}\)Li ratio, and the subordinate line, which is not affected by the \({}^{6}\)Li/\({}^{7}\)Li ratio. Our method differs from that used before in the literature since, until now, \({}^{6}\)Li/\({}^{7}\)Li ratios were determined by fitting the profile of the resonance line only. It is worth noting that a method similar to ours is used in the literature to derive the ratio of odd to even barium isotopes: the total barium abundance is determined from the weak subordinate lines and then the isotope ratio is varied until the same abundance is achieved from the saturated resonance lines. The idea was proposed by Magain & Zhao (1993) and applied by Magain (1995); Mashonkina & Zhao (2006); Mashonkina & Belyaev (2019). We describe the observations and stellar atmosphere parameters in Sec. 2. The abundance determination method is presented in Sec. 3. The derived chemical element abundances and the \({}^{6}\)Li/\({}^{7}\)Li isotopic ratio are presented in Sec. 4 and Sec. 5, respectively. In Sec. 6, we consider potential scenarios for the high lithium abundance origin in the star of interest. Our conclusions are given in Sec. 7.
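The line-comparison idea can be sketched in a few lines: given the ratio-insensitive abundance from the subordinate line and the resonance-line abundance expressed as a function of the assumed isotope ratio, one solves for the ratio at which the two agree. The linear toy response below is purely hypothetical and only illustrates the root-finding step.

```python
from scipy.optimize import brentq

def isotope_ratio(abund_subordinate, abund_resonance_of_ratio):
    """Find the 6Li/7Li fraction where the resonance-line abundance
    (a function of the assumed ratio) matches the subordinate-line one."""
    f = lambda r: abund_resonance_of_ratio(r) - abund_subordinate
    return brentq(f, 0.0, 0.2)          # search 0-20 per cent

# Hypothetical response: more 6Li broadens the resonance line, so fitting a
# fixed observed profile requires a lower abundance as the ratio increases.
toy_resonance = lambda r: 3.50 - 3.0 * r
print(isotope_ratio(3.44, toy_resonance))   # -> 0.02, i.e. 6Li/7Li ~ 2 per cent
```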
## 2 Observations and Stellar Parameters The star of interest (Gaia DR3 ID = 1918529631627603072, RA = 348.71256241851\({}^{\circ}\), DEC = +41.58961513403\({}^{\circ}\), \(G\) = 13.603 \(\pm\) 0.003) is selected from the Pristine-Gaia synthetic catalog (Martin et al., 2023) and has a photometric metallicity [Fe/H] = \(-\)2.8\({}^{+0.1}_{-0.2}\). It was observed as a backup target of program S22B-094 (PI: Yuan) on the Subaru telescope in September 2022 and was selected as a bright extremely metal-poor ([Fe/H] \(<\) \(-\)3) giant candidate. We obtained its high-resolution spectrum with the High Dispersion Spectrograph (HDS, Noguchi et al., 2002), using the standard StdY4 setup, which provides a wavelength coverage of 4000 - 6800A, R = \(\lambda/\Delta\lambda\) = 45 000, and signal-to-noise ratio S/N = 45 around the Li i lines for our 600-second exposure. The data are reduced using the IRAF3 script hdsql4 that includes CCD linearity correction, scattered light subtraction, aperture extraction, flat-fielding, wavelength calibration, and heliocentric velocity correction. Footnote 3: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation The reduced spectrum does not reveal any peculiarities, such as, for example, emission lines. The observed line profiles are narrow and they are not distorted by rapid rotation. From the available spectrum, we do not see signatures of stellar activity. Sneden et al. (2022) found that half of the Li-rich stars have a strong He i 10830 A absorption line, which is an indicator of chromospheric activity and/or mass loss in red giants. While our spectrum does not cover this line, another He i line at \(\lambda\) = 5876 A is not detected in the available spectrum. In this regard, a high resolution spectrum covering the He i 10830 A and the Ca ii H and K lines would be required to draw a definite conclusion. It is worth noting that the Balmer H\({}_{\alpha}\) line is slightly asymmetric and the line center is shifted towards the blue, as can be seen in Figure 1 that shows the observed H\({}_{\alpha}\) profile in the star of interest and another star with nearly the same stellar parameters \(T_{\rm eff}\)/log g/[Fe/H] = 4650\(\pm\)80/1.34\(\pm\)0.24/-2.09\(\pm\)0.12 (Sitnova et al., in prep.) for comparison. Both spectra were obtained with the same instrument. From the observed spectrum, we measure a radial velocity V\({}_{\rm r}\) = -275.1 \(\pm\) 0.8 km s\({}^{-1}\), which is in line with the Gaia measurement V\({}_{\rm r,~{}Gaia}\) = -274.45 \(\pm\) 1.80 km s\({}^{-1}\). Gaia DR3 also provides the renormalised unit weight error RUWE = 0.999. Taking into account these data, we assume that this star is a single star. We calculate an effective temperature \(T_{\rm eff}\) = 4690 \(\pm\) 80 K, a surface gravity log g = 1.34 \(\pm\) 0.13, a metallicity [Fe/H] = \(-\)2.43 \(\pm\) 0.07, and a microturbulent velocity \(\xi_{\rm t}\)= 1.8 \(\pm\) 0.2 km s\({}^{-1}\). We determine \(T_{\rm eff}\) from Gaia \(BP-G\), \(G-RP\), \(BP-RP\) dereddened colors and the calibration of Mucciarelli et al. (2021). The extinction E(B-V) = 0.12 was adopted from Schlafly & Finkbeiner (2011) and the colours are corrected according to Casagrande & VandenBerg (2018). Using different colors yields effective temperatures that are consistent within 12 K.
The uncertainty on \(T_{\rm eff}\) is therefore mainly the uncertainty of 80 K on the calibration, as given by Mucciarelli et al. (2021). For the distance, we calculate \(d=~{}10.1\pm 1.5\) kpc using the Gaia parallax, corrected according to Lindegren et al. (2021), and following the method of Bailer-Jones (2015). Given that the uncertainty on the parallax is not large (the ratio between the parallax error and the parallax is 15%), we determine the distance from the maximum of the distance distribution without invoking a prior. With the distance, the effective temperature, the bolometric corrections of Casagrande & VandenBerg (2018), and a mass of 0.8 solar masses, we calculate the surface gravity \(\log g\) = 4.44 + log(\(m/m_{\odot}\)) + 0.4(\(M_{\rm bol}\) - 4.75) + 4 log(\(T_{\rm eff}\)/5780.0), where \(m_{\odot}\) is the solar mass and \(M_{\rm bol}\) is the absolute bolometric magnitude. The microturbulent velocity is derived from the lines of Fe i and Fe ii. The derived stellar atmosphere parameters lead to consistent abundances within 0.02 dex from Fe i and Fe ii lines in the non-LTE analysis. Using the derived stellar parameters we compared the position of the star of interest on the \(T_{\rm eff}\)-log g diagram with the corresponding evolution track from the Dotter (2016) grid (Fig. 2). The star sits well on the red giant branch and its parameters correspond to an age of 12.2 Gyr. Considering the uncertainties in \(T_{\rm eff}\) and log g, we cannot exclude that the star may belong to a more advanced evolutionary stage. Figure 1: Observed H\({}_{\alpha}\) line profile in the star of interest (solid curve). For comparison, we show the H\({}_{\alpha}\) line profile of another star with exactly the same stellar parameters, taken with the same instrument (dots). Figure 2: The position of the star of interest (red square) on the \(T_{\rm eff}\)-log g diagram and the evolutionary track with the corresponding parameters from the Dotter (2016) grid. ## 3 Abundance analysis ### Codes and model atmospheres We use classical 1D model atmospheres from the marcs model grid (Gustafsson et al., 2008), interpolated for the given \(T_{\rm eff}\), log g, and [Fe/H] of the star. We solve the coupled radiative transfer and statistical equilibrium equations with the detail code (Butler & Giddings, 1985), using the updated opacity package as presented by Mashonkina et al. (2011). For synthetic spectra calculations, we use the synthV_NLTE code (Tsymbal et al., 2019) attached to the idl binmag code (Kochukhov, 2018). This technique allows us to obtain the best fit to the observed line profiles with the non-LTE effects taken into account via pre-calculated departure coefficients (the ratio between non-LTE and LTE atomic level populations) for a given model atmosphere. When fitting the line profiles, the abundance of the element of interest is varied together with the macroturbulent velocity (v\({}_{\rm mac}\)) and the radial velocity (v\({}_{\rm r}\)). For the star of interest, the typical uncertainties in these parameters caused by the fitting procedure are 0.03 dex, 0.5 km s\({}^{-1}\), and 0.1 km s\({}^{-1}\), respectively. These correspond to uncertainties on individual lines, while the real uncertainties caused by a scatter between different lines are larger. Using our linelist, we calculate uncertainties of 2.5 km s\({}^{-1}\) and 0.8 km s\({}^{-1}\) in v\({}_{\rm mac}\) and v\({}_{\rm r}\), respectively.
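For reference, the surface-gravity relation quoted in Sec. 2 as a short sketch; the bolometric magnitude used in the example is back-calculated to reproduce the paper's log g and is not a value quoted there.

```python
import numpy as np

def log_g(teff, m_bol, mass=0.8):
    """log g = 4.44 + log(m/m_sun) + 0.4*(M_bol - 4.75) + 4*log(Teff/5780)."""
    return (4.44 + np.log10(mass) + 0.4 * (m_bol - 4.75)
            + 4.0 * np.log10(teff / 5780.0))

# With Teff = 4690 K, m = 0.8 m_sun, and an illustrative M_bol of -1.85,
# the relation returns log g = 1.34, matching the adopted value.
print(round(log_g(4690.0, -1.85), 2))
```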
The line list for spectral synthesis is extracted from a recent version of the Vienna Atomic Line Database (VALD, Pakhomov et al., 2019; Ryabchikova et al., 2015) that provides isotopic and hyperfine structure components of the spectral lines for a number of chemical elements. For lithium, the data on the fine and hyperfine structures and isotope shifts originate from Radziemski et al. (1995). VALD provides a linelist computed for solar isotopic ratios. To determine the \({}^{6}\)Li/\({}^{7}\)Li isotopic ratio, we rescale the original data adopting different isotopic ratios. ### Non-LTE effects We take into account the departure from LTE for a number of chemical elements (Li, Na, Mg, Ca, Ti, Cr, Mn, Fe, Zn, Sr, Ba). We refer the reader to the papers listed in Table 1 for the description of the model atoms and the mechanism of the non-LTE effects. For most chemical elements, we perform non-LTE calculations with the specific model atmosphere, while for Na i, Cr i, Mn i, and Zn i, we interpolate the non-LTE corrections (\(\Delta_{\rm NLTE}=\log\varepsilon_{\rm NLTE}-\log\varepsilon_{\rm LTE}\)) in the pre-calculated grids available in the literature. Our manganese abundance relies on the Mn i 4783 A line only. For this line, the updated grid of non-LTE corrections on the MPIA webpage6 does not cover our stellar parameters. From the previous version of the grid, we derive \(\Delta_{\rm NLTE}\)= 0.52 and 0.58 dex for the Mn i 4783 and 4823 A lines, respectively. The updated grid provides \(\Delta_{\rm NLTE}\)= 0.34 dex for the Mn i 4823 A line. We assume the non-LTE corrections are similar for these two lines and adopt \(\Delta_{\rm NLTE}\)= 0.34 dex for the Mn i 4783 A line. Footnote 6: [https://nlte.mpia.de/gui-siuAC_secE.php](https://nlte.mpia.de/gui-siuAC_secE.php) For the remaining chemical elements (Si, Sc, Ni, Y), the non-LTE effects are either small or unavailable in the literature for the stellar parameters investigated in this study. For Si i, non-LTE effects are minor and can be neglected even in metal-poor stars (Mashonkina et al., 2016). For Sc ii, Mashonkina & Romanovskaya (2022) investigated the departures from LTE and found positive non-LTE abundance corrections in metal-poor dwarfs. However, we cannot apply their results obtained for dwarfs to our giant star. The departures from LTE for Ni i were investigated in the solar atmosphere by Bruls (1993); Vieytes & Fontenla (2013); Bergemann et al. (2021); Magg et al. (2022) and in FGK stars by Eitner et al. (2022b). For Ni i lines in the visible range, Eitner et al. (2022b) predict positive non-LTE abundance corrections, which increase towards higher \(T_{\rm eff}\) and lower log g. For example, they found \(\Delta\)NLTE = 0.2 dex in a model atmosphere with \(T_{\rm eff}\)/log g/[Fe/H] = 5000/3/-2.5. For Y ii 4883 and 5205 A lines, we applied non-LTE abundance corrections of \(\Delta_{\rm NLTE}\) = 0.14 dex computed by Alexeeva et al. (2023) for a model atmosphere with \(T_{\rm eff}\)/log g/[Fe/H] = 5000/2.0/-2.5. \begin{table} \begin{tabular}{l l} \hline Species & Reference \\ \hline Li i & this study \\ Na i & Lind et al. (2022) \\ Mg i & Mashonkina (2013) \\ Ca i & Mashonkina et al. (2017) \\ Ti i & Sitnova et al. (2020) \\ Cr i & Bergemann \& Cescutti (2010) \\ Mn i & Bergemann et al. (2019) \\ Fe i - ii & Mashonkina et al. (2011) \\ Zn i & Sitnova et al. (2022) \\ Sr ii & Yakoveva et al. (2022) \\ Y i – ii & Alexeeva et al. 
(2023) \\ Ba ii & Mashonkina \& Belyaev (2019) \\ \hline \end{tabular} \end{table} Table 1: References for the non-LTE methods used in this study \begin{table} \begin{tabular}{l l l l l l l} \hline Species & \(\log\varepsilon_{\odot}\) & [X/H] & [X/FeII] & [X/H] & [X/FeII] & N \\ & & LTE & LTE & NLTE & NLTE & \\ \hline Li i & 1.05 & 2.23 & 4.66 & 2.39 & 4.82 & 1 \\ CH & 8.39 & –2.83 \(\pm\) 0.12 & –0.40 & & & 1 \\ O i & 8.73 & \(<\) –1.93 & \(<\) 0.50 & \(<\) –1.93 & \(<\) 0.50 & 1 \\ Na i & 6.29 & –2.01 \(\pm\) 0.01 & 0.42 & –2.36 \(\pm\) 0.03 & 0.07 & 2 \\ Mg i & 7.54 & –2.15 \(\pm\) 0.24 & 0.28 & –2.15 \(\pm\) 0.19 & 0.29 & 2 \\ Si i & 7.53 & –2.13 & 0.30 & & & 1 \\ Ca i & 6.31 & –2.15 \(\pm\) 0.14 & 0.28 & –2.09 \(\pm\) 0.13 & 0.35 & 7 \\ Sc ii & 3.07 & –2.42 \(\pm\) 0.02 & 0.02 & & & 3 \\ Ti ii & 4.93 & –2.11 \(\pm\) 0.11 & 0.33 & –2.08 \(\pm\) 0.11 & 0.35 & 6 \\ Cr i & 5.65 & –2.67 & –0.24 & –2.25 \(\pm\) 0.07 & 0.18 & 1 \\ Mn i & 5.50 & –3.15 & –0.72 & –2.81 \(\pm\) 0.07 & –0.38 & 1 \\ Fe i & 7.46 & –2.50 \(\pm\) 0.16 & –0.06 & –2.45 \(\pm\) 0.16 & –0.02 & 57 \\ Fe ii & 7.46 & –2.43 \(\pm\) 0.07 & 0.00 & –2.43 \(\pm\) 0.07 & 0.00 & 6 \\ Ni i & 6.22 & –2.52 \(\pm\) 0.01 & –0.09 & & & 2 \\ Zn i & 4.65 & –2.36 & 0.07 & –2.21 \(\pm\) 0.07 & 0.22 & 1 \\ Sr ii & 2.90 & –2.54 \(\pm\) 0.00 & –0.11 & –2.52 \(\pm\) 0.01 & –0.09 & 2 \\ Y ii & 2.20 & –2.73 \(\pm\) 0.02 & –0.29 & –2.59 \(\pm\) 0.02 & –0.15 & 2 \\ Ba ii & 2.18 & –3.06 \(\pm\) 0.15 & –0.62 & –3.05 \(\pm\) 0.18 & –0.62 & 4 \\ \hline \end{tabular} \end{table} Table 2: Non-LTE and LTE abundance ratios ### Lithium abundance determination #### 3.3.1 Li i model atom In the studied star, the Li i 6707 A resonance line is strong, with EW = 426 mÅ, and its profile cannot be fitted in LTE with any abundance and macroturbulent velocity. Non-LTE leads to a strengthened core of the Li i 6707 A line and thus allows us to fit the observed profile (see Fig. 3). To account for the non-LTE effects, we construct a Li i model atom. It includes 21 levels of Li i and the ground state of Li ii. The list of energy levels and transitions is taken from R. Kurucz's webpage6. We included all levels in the model atom, up to the ionization threshold of 5.39 eV. Levels with an excitation energy larger than 5 eV are combined into six superlevels according to their parity. In the statistical equilibrium calculations, we neglect the fine structure and the \({}^{6}\)Li isotope. We adopt photoionization cross-sections from the R-matrix calculations of Peach et al. (1988), available in the TOPbase7. Inelastic collisions with hydrogen atoms are taken from Barklem et al. (2003). Electron impact excitation rates were taken from the quantum-mechanical calculations of Osorio et al. (2012), where available. For the remaining radiatively allowed and forbidden transitions, electronic collision rates are calculated with the approximate formulae from van Regemorter (1962) and Woolley & Allen (1948), respectively. Electron impact ionisation rates are calculated with the Seaton (1962) formula. Footnote 6: [http://kurucz.harvard.edu/atoms.html](http://kurucz.harvard.edu/atoms.html) Footnote 7: [http://cdsweb.u-strasbg.fr/topbase/topbase.html](http://cdsweb.u-strasbg.fr/topbase/topbase.html) As a sanity check of our model atom, we compare our non-LTE results with those calculated by Shi et al. (2007) and Lind et al. (2009) with their original model atoms.
We compute the non-LTE abundance corrections for the Li i 6707 Å line in a model atmosphere with \(T_{\rm eff}\)/log g/[Fe/H]/\(\xi_{\rm t}\) = 5806/3.69/\(-\)2.42/1.5 and log \(\varepsilon_{\rm Li}\) = 2.2. For this model, Shi et al. (2007) and Lind et al. (2009) provide \(\Delta_{\rm NLTE}\) = 0.05 and \(-\)0.05 dex, respectively. Our non-LTE calculations agree with Shi et al. (2007), and we find the same \(\Delta_{\rm NLTE}\) = 0.05. Here, the \(\Delta_{\rm NLTE}\) of Lind et al. (2009) was derived by interpolation on a grid of non-LTE corrections. If we interpolate \(\Delta_{\rm NLTE}\) for the resonance line for the solar model atmosphere and a metal-poor giant with 4500/1.5/\(-\)2.0/2.0 and [Li/H] = 0, we find \(\Delta_{\rm NLTE}\) = 0.05 and 0.15 dex, respectively. Our non-LTE corrections are 0.1 dex smaller for both model atmospheres: \(\Delta_{\rm NLTE}\) = \(-\)0.05 and 0.05 dex. This discrepancy can be explained by the recent findings of Wang et al. (2021) that non-LTE corrections are "up to 0.15 dex more negative than in previous work" due to the incorrect accounting of the Li i ultraviolet lines in Lind et al. (2009).

#### 3.3.2 3D effects

To estimate the impact of the 3D effects on the Li i 6103 and 6707 Å lines, we adopt the synthetic spectra grid computed by Wang et al. (2021) and the BREIDABLIK package9. Fig. 4 shows the Li i 6103 and 6707 Å line profiles extracted for different line formation scenarios (1D LTE, 3D LTE, 1D NLTE, 3D NLTE) in a model atmosphere with 4500/1.5/\(-\)2.0 and log \(\varepsilon_{\rm Li}\) = 3.0. For the subordinate Li i 6103 Å line, the 3D NLTE profile is slightly stronger than the 1D NLTE one, which translates to a \(-\)0.04 dex abundance difference. For the resonance Li i 6707 Å line, the 3D NLTE calculation results in slightly stronger wings, but a weaker line core, compared to the 1D NLTE case, which results in a 0.14 dex abundance difference between 3D NLTE and 1D NLTE. Footnote 9: [https://github.com/ellawang44/Breidablik](https://github.com/ellawang44/Breidablik) We extract 1D non-LTE and 3D non-LTE synthetic spectra from BREIDABLIK for the node models around the stellar parameters of the star of interest. Integrating the synthetic profiles and using the curve of growth, we compute the 3D abundance corrections for each node model; these were then used for the interpolation of the 3D corrections, assuming the atmospheric parameters of our star (Table 4).

Figure 3: Top panel: Best fit of the Li i 6707 Å non-LTE 1D line profile (solid curve), together with the best-fit LTE profile (dotted curve). The non-LTE and LTE fits correspond to \(\log\varepsilon=3.44\) and 3.84, respectively. Dashed lines show the contribution from the \({}^{7}\)Li and \({}^{6}\)Li to the non-LTE profile assuming \({}^{6}\)Li/\({}^{7}\)Li = 1.67%. Bottom panel: Best fit of the Li i 6103 Å non-LTE (solid curve) and LTE line profiles (dotted curve), computed for the same abundance \(\log\varepsilon=3.44\). The observed spectrum and associated uncertainties of the star of interest are represented by the shaded area.

\begin{table} \begin{tabular}{l c c c c c c} \hline Sp. & \(\lambda\), & \(E_{\rm exc}\), & log gf & EW, & \(\log\varepsilon\) & \(\log\varepsilon\) \\ & Å & eV & & mÅ & LTE & NLTE \\ \hline Li i & 6103.65 & 1.85 & 0.58 & 71.5 & 3.27 & 3.44 \\ Li i & 6707.91 & 0.00 & 0.17 & 426.0 & – & 3.44 \\ CH & 4313.00 & – & – & – & 5.56 & – \\ O i & 6300.30 & 0.00 & –9.78 & 6.6 & \(<6.80\) & \(<6.80\) \\ Na i & 5889.95 & 0.00 & 0.11 & 228.4 & 4.28 & 3.95 \\ Na i & 5895.92 & 0.00 & –0.19 & 201.7 & 4.29 & 3.91 \\ \hline \end{tabular} This table is available in its entirety on the last page. A portion is shown here for guidance regarding its form and content. \end{table} Table 3: NLTE and LTE abundances from individual lines and their atomic data.

We interpolate the 3D corrections in \(T_{\rm eff}\), [Fe/H], and log \(\varepsilon_{Li}\). The surface gravity is fixed at log g = 1.5, since the BREIDABLIK grid does not contain synthetic spectra for models with log g \(<\) 1.5. To estimate the impact of log g on the 3D corrections, we provide test calculations for a model with 4500/2.0/\(-\)2.0 and log \(\varepsilon_{Li}\) = 3.5 (Table 4). Finally, based on the Li i 6707 Å and 6103 Å lines of the spectrum, we calculate \(\Delta_{\rm 3D}\) = 0.17 and \(-\)0.02 dex, respectively. It is worth noting that the calculations of Wang et al. (2021) account for only the primary lithium isotope, \({}^{7}\)Li. Given that the contribution of the \({}^{6}\)Li isotope is two orders of magnitude smaller than that of \({}^{7}\)Li, and considering that we use the synthetic spectra of Wang et al. (2021) for a self-consistent determination of 3D abundance corrections by comparing their 3D NLTE and 1D NLTE profiles, we assume that the impact of \({}^{6}\)Li on the 3D corrections is small compared to the 3D corrections themselves. Further 3D NLTE calculations with different lithium isotopic ratios would be helpful in this regard. Here, we speculate on the impact of \({}^{6}\)Li on the 3D abundance corrections for the Li i 6707 Å line. As shown in Fig. 4 (bottom panel), the wings of the 3D profile are stronger, while the core appears weakened, compared to the 1D profile. This implies that, on average, the 3D model atmosphere is cooler in deep layers and hotter in high layers compared to the 1D model. Strengthened and weakened line profiles result in negative and positive 3D abundance corrections, respectively. Depending on the Li i 6707 Å line strength, the corresponding 3D corrections may thus differ in sign and absolute value, according to the relative contributions of the wings and the core, i.e. the line formation depths. Including \({}^{6}\)Li results in a broader profile and a larger contribution from deep atmospheric layers to the total profile. In other words, when both isotopes are considered, the line will, on average, form at deeper atmospheric layers. Thus, including \({}^{6}\)Li will lead to a smaller 3D abundance correction due to the larger contribution from the wings. A smaller 3D correction for the Li i resonance line results in a smaller \({}^{6}\)Li/\({}^{7}\)Li ratio. Thus, our 3D \({}^{6}\)Li/\({}^{7}\)Li ratio can be considered as an upper limit.

#### 3.3.3 Testing the method with reference star HD 140283

Our measurement of a lithium isotopic ratio assumes that the two Li i lines yield consistent lithium abundances when we infer the correct isotopic ratio (see Sect. 5). In order to test this assumption, we first analyse the two Li i lines of unevolved low-metallicity stars with normal lithium abundances.
For these stars on the Spite plateau, 3D non-LTE analyses of the Li i 6707 Å line profiles showed that there is no detectable \({}^{6}\)Li in their atmospheres (Lind et al., 2013; Wang et al., 2022; González Hernández et al., 2019). The goal of our test is to verify whether we get consistent abundances from the Li i 6103 and 6707 Å lines for these stars with \({}^{6}\)Li/\({}^{7}\)Li \(=0\). We also note that their Li i 6707 Å lines are weak and the lithium isotope ratio does not affect their strength. We first test our lithium abundance determination method with the well-studied metal-poor star HD 140283. We adopt \(T_{\rm eff}\)/log g/[Fe/H]/\(\xi_{\rm t}\) = 5780/3.70/\(-\)2.43/1.3, with its \(T_{\rm eff}\) and log g taken from Sitnova et al. (2015), in agreement with \(T_{\rm eff}\) = 5787 \(\pm\) 48 K, as measured by Karovicova et al. (2018), and log g = 3.66 \(\pm\) 0.03, calculated using the Gaia parallax (Gaia Collaboration et al., 2021). The metallicity and the microturbulent velocity are taken from the non-LTE analysis of iron lines performed by Mashonkina et al. (2019). We use a high-resolution and high S/N spectrum of HD 140283 taken on 27 January 2017 with the same instrument (Subaru/HDS) and reduced using the same procedure as for the star of interest (see Sect. 2).

\begin{table} \begin{tabular}{c c c c c c} \hline \(T_{\rm eff}\) & log g & [Fe/H] & log \(\varepsilon_{Li}\) & \(\Delta_{\rm 3D,6707}\) & \(\Delta_{\rm 3D,6103}\) \\ \hline 4500 & 1.5 & –3.00 & 3.00 & 0.09 & –0.02 \\ 4500 & 1.5 & –3.00 & 3.50 & 0.15 & –0.01 \\ 4500 & 1.5 & –2.00 & 3.00 & 0.14 & –0.04 \\ 4500 & 1.5 & –2.00 & 3.50 & 0.20 & –0.03 \\ 4500 & 2.0 & –2.00 & 3.50 & 0.31 & –0.05 \\ 4750 & 1.5 & –3.00 & 3.00 & 0.09 & –0.02 \\ 4750 & 1.5 & –3.00 & 3.50 & 0.15 & –0.01 \\ 4750 & 1.5 & –2.00 & 3.00 & 0.14 & –0.04 \\ 4750 & 1.5 & –2.00 & 3.50 & 0.20 & –0.03 \\ \hline 4690 & 1.5 & –2.42 & 3.44 & 0.17 & –0.02 \\ \hline \end{tabular} \end{table} Table 4: 3D abundance corrections for the node model atmospheres

Figure 4: Li i 6103 and 6707 Å line profiles for different line formation scenarios, extracted from the BREIDABLIK package for a model atmosphere with 4500/1.5/\(-\)2.0 and log \(\varepsilon_{Li}\) = 3.0. See the legend for the designations.

We calculate signal-to-noise ratios of S/N = 1080 and 1090 around the Li i 6103 Å and 6707 Å lines, respectively. This high quality spectrum ensures reliable line profile fitting and gives EW = 1.6 mÅ and 47.8 mÅ for the subordinate and the resonance line, with corresponding uncertainties in abundance of 0.09 dex and 0.01 dex. In 1D non-LTE, Li i 6103 Å and 6707 Å give \(\log\varepsilon_{\rm NLTE}\) = 2.22 and 2.27, respectively. For both Li i lines in HD 140283, 3D effects lead to weakened lines and positive abundance corrections: \(\Delta_{\rm 3D}\) = 0.07 and 0.09 for Li i 6103 Å and 6707 Å, respectively. Thus, for HD 140283, the two lines give consistent results within the uncertainties for the non-LTE abundances, either in 1D or after applying the 3D abundance corrections. To make sure that HD 140283 is not an isolated case, we apply our non-LTE method for lithium abundance determination from the Li i 6103 Å and 6707 Å lines to a sample of stars.

#### 3.3.4 Testing the method with a sample of MP stars

We further test our assumption using 22 stars on the Spite plateau from Asplund et al. (2006, hereafter, A06), which provided stellar parameters and equivalent widths for the 6103 and 6707 Å lines.
We rederive lithium abundances from each of the two lines adopting the stellar parameters from A06; Fig. 5 shows the non-LTE and LTE abundance differences between the two Li i lines. In LTE, the Li i 6707 Å line gives, on average, a 0.06 dex higher abundance compared to the 6103 Å line, while in non-LTE it gives, on average, a 0.02 dex lower abundance. Thus, non-LTE reduces the abundance difference between the two lines. A06 determined \(T_{\rm eff}\) from fitting the H\({}_{\alpha}\) wings in LTE, but Mashonkina et al. (2008) found that fitting the wings of the Balmer lines in non-LTE leads to \(\sim\)60 K higher \(T_{\rm eff}\) compared to the LTE case. Amarsi et al. (2018) found that, in metal-poor turn-off stars, \(T_{\rm eff}\) determined from the wings of H\({}_{\alpha}\) in 3D non-LTE is 150 K higher compared to 1D LTE. A systematic uncertainty in \(T_{\rm eff}\) results in a systematic discrepancy in abundances between the two lines, since the resonance line is more affected by changes in \(T_{\rm eff}\) compared to the subordinate line (see Table 6). As a test, we increase the effective temperatures of A06 by 100 K and calculate the non-LTE and LTE lithium abundances. Fig. 5 (bottom panel) presents the abundance differences between the two Li i lines derived with the increased \(T_{\rm eff}\). In non-LTE, the two lines give consistent abundances, with, on average, an abundance difference \(\Delta(6707-6103)_{\rm NLTE}\) = 0.00 \(\pm\) 0.06, while the LTE assumption results in a larger abundance from the resonance line and \(\Delta(6707-6103)_{\rm LTE}\) = 0.09 \(\pm\) 0.06. In conclusion, our test calculations show that, in different VMP stars, non-LTE leads to consistent abundances from the Li i 6707 Å and 6103 Å lines.

## 4 Abundances

In total, we derive abundances for 16 chemical elements in the star of interest. Our average LTE and non-LTE abundances are presented in Table 2; abundances from individual lines, together with their equivalent widths and atomic data, are given in Table 3. The carbon abundance determination is based on the CH 4300 Å G band, and its error is estimated by applying a continuum placement shift, which results in [C/Fe] = \(-\)0.40 \(\pm\) 0.12. Accounting for the luminosity of the star of interest, log(L/L\({}_{\odot}\)) = 2.6, the derived [C/Fe] is typical for VMP stars with normal carbon abundance according to the Aoki et al. (2007) classification. Our observed spectrum covers the [O i] 6300 Å forbidden line wavelength range; however, this line is not detected. Using this spectral region and applying the macroturbulent velocity v\({}_{\rm mac}\) = 5.1 km s\({}^{-1}\), we estimated an upper limit [O/Fe] \(<\) 0.50. The forbidden line is immune to the departures from LTE; thus, we do not perform non-LTE calculations for O i. A typical [O/Fe] ratio in very metal-poor stars is [O/Fe] = 0.6 (see, for example, Cayrel et al. 2004). Although the derived upper limit is lower compared to the typical ratio, a spectrum with a higher signal-to-noise ratio is required to prove that the star of interest indeed has a low oxygen abundance. Different \(\alpha\)-elements (Mg, Si, Ca) and Ti show similar [\(\alpha\)/Fe] ratios (\(\sim\)0.3). Scandium, chromium, nickel, and zinc abundances follow iron within the uncertainties. For manganese, we find [Mn/Fe] = \(-\)0.4, which is in line with the non-LTE trend found by Eitner et al. (2022a) for halo stars. Neutron-capture elements are represented by Sr, Y, and Ba.
In non-LTE we find [Sr/Ba] = 0.5 and [Ba/H] = \(-\)3.0, which is in line with expectations for a typical metal-poor star in the MW halo (see, for example, Mashonkina et al. 2017). The above element abundances are similar to those measured in typical metal-poor halo stars with similar [Fe/H]. The exceptions are lithium and sodium. In non-LTE, we calculate [Na/Fe] = 0.07 \(\pm\) 0.03, which is higher compared to [Na/Fe] = \(-\)0.4 found in non-LTE by Mashonkina et al. (2017) for stars with similar [Fe/H] (Fig. 6). The star is strongly enhanced in lithium, such that the subordinate Li i 6103 Å line is clearly detected (Fig. 3). It has EW = 72 mÅ and gives \(\log\varepsilon\) = 3.44 in non-LTE regardless of the adopted \({}^{6}\)Li/\({}^{7}\)Li ratio. The resonance Li i 6707 Å line is strong, with EW = 426 mÅ, and, in contrast to the subordinate line, it is sensitive to the isotopic ratio.

Figure 5: Non-LTE (blue circles) and LTE (red squares) abundance differences between the Li i 6707 Å and 6103 Å lines in a sample of MP stars with normal lithium abundances. For comparison, the bottom panel shows the same as in the top panel, but with \(T_{\rm eff}\) increased by 100 K.

## 5 \({}^{6}\)Li/\({}^{7}\)Li Isotopic Ratio

We determine the lithium isotope ratio from the Li i 6707 Å line. Our fitting procedure for this line is similar to what we adopted for the other lines in the spectrum, which yields the \(\log\varepsilon_{Li}\) that minimizes the difference between the observed and synthetic spectra. We fit the resonance line adopting different \({}^{6}\)Li/\({}^{7}\)Li isotope ratios from 0 to 50 % (Table 5). For each of the ratios, we vary the abundance, \(\log\varepsilon_{Li}\), and adjust the line position and width. Since the strength of this line depends on both the lithium isotope ratio and the lithium abundance, we obtain different best-fit abundances when different isotope ratios are assumed (see Fig. 7). Table 5 lists the parameters (\(\log\varepsilon_{Li}\), \(\rm v_{mac}\), and \(\rm v_{r}\)) of the best-fit synthetic spectra of the Li i 6707 Å line computed for different \({}^{6}\)Li/\({}^{7}\)Li isotope ratios. This is also illustrated in Figure 8, where we show the abundance difference between the Li i 6103 Å line and the best-fit abundances from Li i 6707 Å as a function of the isotope ratio. By requiring the best-fit abundance to be consistent with the \(\log\varepsilon_{Li}\) we obtained from the 6103 Å line, we constrain the \({}^{6}\)Li/\({}^{7}\)Li isotope ratio. The isotope ratio impacts the best-fit abundance significantly, and it also affects the profiles of Li i 6707 Å. There are multiple combinations of \(\log\varepsilon_{Li}\), \(\rm v_{mac}\), \(\rm v_{r}\), and the corresponding \({}^{6}\)Li/\({}^{7}\)Li ratios that provide a reasonable fit of the resonance line (see Fig. 7). This degeneracy means that \(\log\varepsilon_{Li}\) and \({}^{6}\)Li/\({}^{7}\)Li can hardly both be determined from the resonance line only. We overcome this degeneracy by determining the abundance from the subordinate line. Our method is primarily sensitive to the strength of the Li i 6707 Å line and does not rely on the information in the line profile or position. Therefore, it does not require a detailed characterization of the instrument, data reduction, and broadening mechanism.
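The degeneracy-breaking step can be made concrete with a minimal sketch (ours, not the fitting code used in this work), which inverts the relation between the assumed \({}^{6}\)Li/\({}^{7}\)Li ratio and the best-fit abundance from the resonance line, using the 1D non-LTE values tabulated in Table 5; the simple linear interpolation is our own shortcut, so it recovers the quoted 1.64 % only approximately:

```python
# Find the 6Li/7Li ratio at which the Li I 6707 A best-fit abundance equals
# the ratio-insensitive value from the subordinate 6103 A line (3.44 dex).
import numpy as np

ratio = np.array([0.0, 0.5, 1.5, 2.0, 3.0, 5.0, 8.3, 50.0])  # 6Li/7Li in %
logeps_6707 = np.array([3.64, 3.54, 3.44, 3.41, 3.35, 3.27, 3.19, 2.94])
logeps_6103 = 3.44                  # from the subordinate line

delta = logeps_6707 - logeps_6103   # Delta(6707 - 6103) vs. assumed ratio
# delta decreases monotonically with ratio; invert it by interpolation:
best = np.interp(0.0, delta[::-1], ratio[::-1])
print(f"6Li/7Li ~ {best:.1f} %")    # ~1.5 %, close to the 1.64 % quoted below
```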
As a sanity check, we control \(\rm v_{mac}\) and \(\rm v_{r}\) of our 1D best-fit profiles of Li i 6707 Å to make sure that they are consistent with the average \(\rm v_{mac}\) = 5.1 \(\pm\) 2.5 km s\({}^{-1}\) and \(\rm v_{r}=-275.1\) km s\({}^{-1}\) derived from other spectral lines. Our spectral fitting is based on the 1D non-LTE analysis described in Section 3.3.1. Since the best-fit abundances could differ when one uses a 3D model atmosphere, we apply the grid-based correction of Wang et al. (2021) to the best-fit abundances as described in Section 3.3.2. We note that we focus on the 3D effect on the obtained best-fit abundances and that the line shape difference between 1D and 3D synthetic spectra does not matter in our analysis. As our measurement of the isotope ratio comes from the best-fit abundances obtained from the two lines, we only need to consider the uncertainties in the abundances, more precisely the uncertainty in the difference between the best-fit abundances obtained from the two lines, and propagate it to the estimate of the isotope ratio. We adopt 0.1 dex as the uncertainty in the difference between the best-fit abundances, taking the following into account: stellar atmosphere parameters (80 K in \(T_{\rm eff}\), 0.13 dex in log g, 0.2 km s\({}^{-1}\) in \(\xi_{\rm t}\)), a 0.02 dex uncertainty in the 3D corrections, and a 2 % shift in the continuum normalization of the observed spectrum. Uncertainties in \(T_{\rm eff}\) and continuum placement contribute most to the total uncertainty, while changes in log g and \(\xi_{\rm t}\) produce a negligible (\(<\) 0.01 dex) shift in abundance (Table 6). Fig. 8 shows the difference between the best-fit non-LTE abundances from the Li i 6103 and 6707 Å lines as a function of the isotope ratio. In 1D non-LTE (blue dotted lines), we find that \({}^{6}\)Li/\({}^{7}\)Li = 1.64\({}^{+1.49}_{-1.08}\) % provides consistent abundances from the two lines. Note that the abundances from the two lines are inconsistent at more than the 2\(\sigma\) level if we assume \({}^{6}\)Li/\({}^{7}\)Li = 0, strongly indicating the presence of \({}^{6}\)Li in the atmosphere. The 3D correction increases the best-fit abundance from the Li i 6707 Å line by 0.17 dex and decreases that from the subordinate line by 0.02 dex, making the detection of \({}^{6}\)Li more significant and increasing the \({}^{6}\)Li/\({}^{7}\)Li ratio to 5.65\({}^{+5.05}_{-2.51}\) %.

Figure 6: Non-LTE [Na/Fe] ratios in the star of interest (red square) and comparison sample giants (black circles) from Mashonkina et al. (2017).

Figure 7: Li i 6707 Å non-LTE 1D line profiles calculated with \({}^{6}\)Li/\({}^{7}\)Li = 0.5 % and \(\log\varepsilon\) = 3.54 (red solid curve); \({}^{6}\)Li/\({}^{7}\)Li = 8 % and \(\log\varepsilon\) = 3.54 (purple dashed curve); \({}^{6}\)Li/\({}^{7}\)Li = 8 % and \(\log\varepsilon\) = 3.19 (green dotted curve). The observed spectrum of the investigated star is shown with dots.

It is not straightforward to understand the origin of \({}^{6}\)Li in the investigated Li-rich star. The Cameron & Fowler (1971) mechanism, which is one of the proposed mechanisms for the origin of Li-excess in Li-rich giants, produces only the \({}^{7}\)Li isotope. While our 1D non-LTE analysis result does not rule out this scenario, an application of the 3D correction leads to a 0.24 dex difference in the best-fit abundances between the two lines, which is a severe discrepancy given the uncertainties.
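The per-line totals in Table 6 follow from the individual contributions if the error sources are assumed independent and added in quadrature (an assumption we make for this check); a minimal sketch:

```python
# Quadrature combination of the per-line abundance-error contributions
# listed in Table 6 (Teff, log g, xi_t, continuum, 3D), in dex.
import math

contrib = {"6103": [0.05, 0.0, 0.0, 0.07, 0.0],
           "6707": [0.10, 0.0, 0.0, 0.04, 0.02]}
for line, terms in contrib.items():
    total = math.sqrt(sum(t * t for t in terms))
    print(f"Li I {line} A: {total:.2f} dex")  # 0.09 and 0.11, as in Table 6
```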
Our lithium abundance and isotope ratio determinations are based on the two lines covered by the available observed spectrum. However, our analysis would further benefit from observations of other Li i lines whose strengths are insensitive to the assumed isotope ratio. Such lines would narrow down the allowed log \(\varepsilon_{Li}\) range, enabling us to put a stronger constraint on the lithium isotope ratio from the Li i 6707 Å line. One such line is Li i 8126 Å, for which we predict EW = 42 and 37 mÅ in non-LTE and LTE, respectively, when using log \(\varepsilon_{Li}\) = 3.44.

## 6 Discussion

The star of interest is likely a single star with mostly normal chemical composition. The exceptions are an extremely high Li abundance, an excess of 0.5 dex in [Na/Fe], and a potentially low oxygen abundance with an upper limit [O/Fe] \(<\) 0.5, which is at least 0.1 dex lower than the typical value. A slightly asymmetric H\({}_{\alpha}\) profile reveals a signature of disturbance in the stellar atmosphere. The blueshifted H\({}_{\alpha}\) core may argue for outward-moving upper atmospheric layers and mass loss. As for the evolutionary status of the star of interest, its position on the \(T_{\rm eff}\)–log g diagram suggests that the star can be either an RGB or an AGB star. AGB stars can exhibit an infrared color excess caused by their mass loss. Some Li-rich giants show the IR excess, and those of them with WISE (Wide-field Infrared Survey Explorer, Cutri et al., 2013) color W1 \(-\) W4 \(>\) 1 are the most Li-rich, with log \(\varepsilon_{Li}\) \(>\) 2.0 (Rebull et al., 2015). For the star of interest, the IR color excess is W1 \(-\) W4 = 11.093\(\pm\)0.23 \(-\) 9.113 = 1.98. However, its W4 magnitude should be applied with caution, since it is uncertain and measured with an S/N ratio smaller than 2. Calculations predict that self-enrichment with lithium through the Cameron-Fowler mechanism may occur at different evolutionary stages: (i) RGB bump stars (for example, Charbonnel & Balachandran, 2000; Yan et al., 2018); (ii) upper RGB stars (for example, Denissenkov & Vandenberg, 2003); (iii) early AGB stars (for example, Charbonnel & Balachandran, 2000). Given that the extremely high Li-enhancement is rapidly destroyed, and taking into account the results of asteroseismic investigations of Li-rich stars as described in Yan et al. (2021), where no RGB stars were found with log \(\varepsilon_{Li}>2.6\), we assume that the latter scenario is the most likely for the star of interest. High sodium abundance also supports this guess. Spite et al. (2006) investigated a sample of VMP stars and found that the most luminous of their sample giants show higher [Na/Fe] with respect to their fainter counterparts, and they might be AGB but not RGB stars. In those stars, the proton-capture process converts C and O to N, and also Ne to Na, resulting in C and O depletion and Na enhancement. Thus, given the above properties of the star of interest, we conclude that it is likely an AGB star experiencing a Li-flash.

\begin{table} \begin{tabular}{l l l l l l l} \hline line & \(T_{\rm eff}\) & log g & \(\xi_{\rm t}\) & continuum & \(\Delta_{\rm 3D}\) & total \\ Å & 80 K & 0.13 & 0.2 km s\({}^{-1}\) & 2 \% & & \\ \hline 6103 & 0.05 & 0 & 0 & 0.07 & 0 & 0.09 \\ 6707 & 0.10 & 0 & 0 & 0.04 & 0.02 & 0.11 \\ \hline \end{tabular} \end{table} Table 6: Impact of the uncertainties in different quantities on the Li abundance

Figure 8: Top panel: Non-LTE abundance difference between the Li i 6707 and 6103 Å lines in 1D (dotted curve) and after applying 3D corrections (dashed curve) as a function of \({}^{6}\)Li/\({}^{7}\)Li isotopic ratio. The uncertainty in abundance difference of 0.1 dex is shown with the shaded area. Bottom panel: Probability distribution functions in 1D (dotted curve) and after applying 3D corrections (dashed curve) as a function of \({}^{6}\)Li/\({}^{7}\)Li isotopic ratio. Horizontal lines indicate limitations on \({}^{6}\)Li/\({}^{7}\)Li from analysis of v\({}_{\rm r}\) and v\({}_{\rm mac}\) of the best-fit profiles.

\begin{table} \begin{tabular}{l c c c c} \hline Reference & \({}^{6}\)Li/\({}^{7}\)Li, \% & log \(\varepsilon\) & v\({}_{\rm mac}\), km s\({}^{-1}\) & \(\Delta\)v\({}_{\rm r}\), km s\({}^{-1}\) \\ \hline test & 50 & 2.94 & 2.1 & –2.5 \\ meteorites, McDonough et al. (2003) & 8.3 & 3.19 & 3.8 & –1.1 \\ active K dwarf, Christian et al. (2008) & 5.0 & 3.27 & 4.2 & –0.8 \\ solar spot, Ritzenhoff et al. (1997) & 3.0 & 3.35 & 4.7 & –0.5 \\ test & 2.0 & 3.41 & 5.0 & –0.4 \\ test & 1.5 & 3.44 & 5.1 & –0.2 \\ RGB bump/early-AGB, Kowkabany et al. (2022) & 0.5 & 3.54 & 5.7 & 0.1 \\ Spite plateau, Wang et al. (2022) & 0 & 3.64 & 6.1 & 0.2 \\ \hline \end{tabular} Note: the non-LTE abundance from the Li i 6103 Å line is log \(\varepsilon\) = 3.44; the average v\({}_{\rm mac}\) = 5.1 \(\pm\) 2.5 km s\({}^{-1}\) is derived from other spectral lines. \end{table} Table 5: Non-LTE abundance from the Li i 6707 Å line as a function of the \({}^{6}\)Li/\({}^{7}\)Li isotopic ratio, together with the corresponding best-fit macroturbulent and radial velocities.

Regarding the potential presence of \({}^{6}\)Li in the star of interest, it is important to highlight that the 2\(\sigma\) detection is achieved when applying 3D corrections based on the calculations from Wang et al. (2021). It is worth noting that these 3D NLTE calculations have never been tested with Li-rich stars. Testing a method is an important step, and employing calculations that have not previously undergone validation with observational data may yield unexpected outcomes. To perform test calculations, one can adopt Li-rich stars where no fewer than three Li i lines (6707, 6103, 4602, 8126 Å, etc.) are detected in their spectra. The criterion of the calculation accuracy is consistent abundances from different Li i lines in stars with accurate stellar parameters and high quality observed spectra. If our detection of \({}^{6}\)Li is definitively confirmed in the future, it may be explained as follows. While the CF mechanism produces \({}^{7}\)Li only and operates in stellar interiors, the acceleration of particles in late-type stars with chromospheric activity could generate \({}^{6}\)Li and \({}^{7}\)Li in the upper photospheric layers (Canal et al., 1975; Livshits, 1997). For example, in solar flares, Livshits (1997) predicts a temporary enrichment in lithium abundance up to log \(\varepsilon_{Li}=2\), which drops down to the typical solar value of log \(\varepsilon_{Li}\approx 1\) within three hours. Although it is assumed that chromospheric activity decreases with stellar age, Takeda & Takada-Hidai (2011) and Smith et al. (2016) detected the He i 10830 Å line in old stars with [Fe/H] down to \(-3.7\). The hypothesis of a \({}^{6}\)Li origin in the star of interest via chromospheric activity can be checked by obtaining an additional observed spectrum.
It could be confirmed or rejected depending on whether variation in the Li i 6707 Å line profile is found or not.

## 7 Conclusions

We report the discovery of a very metal-poor Li-rich giant star (with effective temperature \(T_{\rm eff}\) = 4690 \(\pm\) 80 K, surface gravity log g = 1.34 \(\pm\) 0.13, and metallicity [Fe/H] = \(-2.43\) \(\pm\) 0.07). We find a Li abundance of log \(\varepsilon_{Li}\) = 3.42 \(\pm\) 0.07 and 3.44 \(\pm\) 0.07 in 3D non-LTE and 1D non-LTE, respectively. We construct a model of the Li i atom based on the accurate atomic data available to date and perform the non-LTE calculations with this model. To account for 3D effects for Li i, we adopt data from Wang et al. (2021). From the comparison of the non-LTE abundances from the two lines, we determine the isotopic ratio \({}^{6}\)Li/\({}^{7}\)Li = 1.64\({}^{+1.49}_{-1.08}\) % in 1D and 5.65\({}^{+5.05}_{-2.51}\) % when applying the 3D corrections. To our knowledge, this is the first \({}^{6}\)Li/\({}^{7}\)Li measurement in a Li-rich very metal-poor star. The proposed method to determine the \({}^{6}\)Li/\({}^{7}\)Li isotope ratio relies on the analysis of the resonance Li i 6707 Å line in conjunction with the subordinate line. Fixing the lithium abundance from the subordinate line, which is not sensitive to variations in the \({}^{6}\)Li/\({}^{7}\)Li ratio, we overcome the degeneracy between the lithium abundance and the \({}^{6}\)Li/\({}^{7}\)Li isotopic ratio, which both impact the resonance Li i 6707 Å line. This method can be applied to other Li-rich stars where no fewer than two Li i lines can be detected. We suggest that the star of interest is likely an early AGB star experiencing a Li-flash. Our interpretation of the lithium enhancement in the star of interest strongly depends on the line formation scenario adopted for the \({}^{6}\)Li/\({}^{7}\)Li ratio determination: 1D non-LTE allows lithium to be produced in the Cameron-Fowler mechanism inside the star, while 3D non-LTE solidly argues for the presence of a significant amount of \({}^{6}\)Li, which excludes lithium production in the CF mechanism. It is worth noting that \({}^{6}\)Li and \({}^{7}\)Li can be produced by spallation processes in atmospheres of stars with chromospheric activity (Canal et al., 1975; Livshits, 1997). However, we postpone the interpretation of the presence of the \({}^{6}\)Li isotope in the star of interest until comprehensive 3D NLTE calculations that account for both isotopes have been verified through testing with Li-rich stars. In total, we derive abundances for 16 chemical elements from Li to Ba. The investigated star shows high [Na/Fe] = 0.07 \(\pm\) 0.03, which is 0.5 dex higher compared to normal stars with similar [Fe/H]. Other chemical element abundances are similar to those found in the literature for very metal-poor stars. The star presented here joins the sample of rare Li-rich VMP stars, studies of which can shed light on the mystery of lithium production and its abundance evolution. The derived abundances and the isotopic ratio can be used as an observational constraint on the poorly known mechanisms of lithium production. For further investigations of the Li-rich star phenomenon, namely its possible connection with stellar activity, together with more robust spectral line formation modeling, observed spectra in a wide wavelength range that covers the He i 10830 Å line, the Ca ii H and K lines, and the Li i 8126 Å line would be helpful.

## Acknowledgements
T.S. acknowledges the Institute of Astronomy, Russian Academy of Sciences, Pyatnitskaya 48, 119017, Moscow, Russia, which made this study possible. We are indebted to L. I. Mashonkina for providing model atoms for the non-LTE calculations and for useful comments on this study. We gratefully acknowledge P. Bonifacio, Y. Pakhomov, E. Ageeva, and B. Nizamov for useful comments and suggestions. We are grateful to the reviewer for carefully reading the manuscript and for providing valuable comments. T.S. acknowledges Thomas Nordlander and Ella Wang for clarifying the details of their calculations for Li i. This research is based in part on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. We are honored and grateful for the opportunity of observing the Universe from Maunakea, which has cultural, historical, and natural significance in Hawaii. Z.Y. and N.F.M. acknowledge funding from the Agence Nationale de la Recherche (ANR project ANR-18-CE31-0017) and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 834148). F.S. thanks the Dr. Margaret "Marmie" Perkins Hess postdoctoral fellowship for funding his work at the University of Victoria. JIGH acknowledges financial support from the Spanish Ministry of Science and Innovation (MICINN) project PID2020-117493GB-I00. This work has made use of data from the European Space Agency mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.

## Author contribution statement

T.S. determined stellar atmosphere parameters, constructed the Li i model atom, determined the chemical composition, and led the writing of the manuscript. Z.Y. led the Subaru HR follow-up and T.M. reduced the spectrum.

## Data availability

The data used in this article will be shared on request to the corresponding authors.
2306.04530
Lenient Evaluation of Japanese Speech Recognition: Modeling Naturally Occurring Spelling Inconsistency
Word error rate (WER) and character error rate (CER) are standard metrics in Speech Recognition (ASR), but one problem has always been alternative spellings: If one's system transcribes adviser whereas the ground truth has advisor, this will count as an error even though the two spellings really represent the same word. Japanese is notorious for ``lacking orthography'': most words can be spelled in multiple ways, presenting a problem for accurate ASR evaluation. In this paper we propose a new lenient evaluation metric as a more defensible CER measure for Japanese ASR. We create a lattice of plausible respellings of the reference transcription, using a combination of lexical resources, a Japanese text-processing system, and a neural machine translation model for reconstructing kanji from hiragana or katakana. In a manual evaluation, raters rated 95.4% of the proposed spelling variants as plausible. ASR results show that our method, which does not penalize the system for choosing a valid alternate spelling of a word, affords a 2.4%-3.1% absolute reduction in CER depending on the task.
Shigeki Karita, Richard Sproat, Haruko Ishikawa
2023-06-07T15:39:02Z
http://arxiv.org/abs/2306.04530v1
Lenient Evaluation of Japanese Speech Recognition: Modeling Naturally Occurring Spelling Inconsistency ###### Abstract _Word error rate_ (WER) and _character error rate_ (CER) are standard metrics in Speech Recognition (ASR), but one problem has always been _alternative spellings_: If one's system transcribes _adviser_ whereas the ground truth has _advisor_, this will count as an error even though the two spellings really represent the same word. Japanese is notorious for "lacking orthography": most words can be spelled in multiple ways, presenting a problem for accurate ASR evaluation. In this paper we propose a new _lenient_ evaluation metric as a more defensible CER measure for Japanese ASR. We create a lattice of plausible respellings of the reference transcription, using a combination of lexical resources, a Japanese text-processing system, and a neural machine translation model for reconstructing kanji from hiragana or katakana. In a manual evaluation, raters rated 95.4% of the proposed spelling variants as plausible. ASR results show that our method, which does not penalize the system for choosing a valid alternate spelling of a word, affords a 2.4%-3.1% absolute reduction in CER depending on the task.

## 1 Introduction: "Word" error rate

For decades, a standard measure of performance in Automatic Speech Recognition (ASR) has been _word error rate_ (WER), which gives a measure of how poorly a transcription hypothesized by the ASR system matches a reference transcription and which, while often criticized--e.g. (Wang et al., 2003)--is still widely used. While the expression _WER_ uses the term _word_, it is important to note that what is matched is not really words, but rather _spelled forms_. To take a simple example from English, the reference transcription might have the token _advisor_, whereas the corresponding token in the hypothesis is _adviser_. Although these are variant spellings of the same word, the system would be assessed as getting the word wrong, since the spellings do not match. If one used instead _character error rate_ (CER), the effect of the spelling discrepancy would of course be less, but there would still be an error. Arguably this should really not count as an error, since the spelling alternates are both valid. _Orthographic variation_ (Meletis and Dürscheid, 2022, Section 4.6) is common in the world's writing systems, but for many systems the effect is a minor one. In English, for example, orthographic variation is of two main types: regional variation, in particular British versus American spelling (e.g. _neighbour_ vs. _neighbor_); and more or less free variation within a regional variety such as the _advisor/adviser_ example above, or issues such as whether to write a space in noun compounds (e.g. _doghouse_ vs. _dog house_). In the former case, one can argue that a spelling discrepancy should count as an error since in contexts where, say, _flavour_ would be an appropriate spelling, _neighbor_ would not be, and vice versa. In the latter case, the variants should probably not be counted as errors, but a naive WER or CER computation would so count them. Still, since the amount of such spelling variation is relatively small, one can usually ignore this effect, or use cleanup scripts to handle the few cases that occur. WER is a fiction, but it is a fiction that can largely be ignored.

## 2 Japanese spelling inconsistency

In Japanese, unlike in English, spelling variation is rampant, and the fiction becomes too great to be ignored.
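To quantify the fiction just described before turning to the Japanese case, here is a minimal sketch (ours, with an invented example sentence) of how a single alternative spelling registers under WER versus CER:

```python
# Minimal WER/CER illustration: one alternative spelling is a whole "word"
# error for WER, but only a single-character error for CER.

def levenshtein(ref, hyp):
    """Edit distance between two sequences, single-row dynamic programming."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution (0 if match)
    return d[-1]

ref, hyp = "my advisor said so", "my adviser said so"
print(levenshtein(ref.split(), hyp.split()) / len(ref.split()))  # WER = 0.25
print(levenshtein(ref, hyp) / len(ref))                          # CER ~ 0.056
```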
Japanese spelling is very inconsistent, with many words that have kanji (Chinese character) spellings also appearing in text in hiragana (one of the two syllabaries used in Japanese), or even, for emphasis or other reasons, in katakana (the other syllabary). Thus common words like だめ (hiragana) _dame_ 'not allowed' also frequently appear as ダメ (katakana) for emphasis, but there is also a somewhat infrequent but nonetheless occurring form in kanji, 駄目. Similarly, an utterance such as らーめんうまい。 _rāmen umai_ 'the ramen is delicious' could equally well be transcribed as ラーメン美味い。, since _ramen_ occurs in hiragana (らーめん), katakana (ラーメン) and kanji (拉麺) spellings, and _umai_ in both hiragana (うまい) and kanji (美味い) spellings. A reference transcription that happens to use one of these combinations should not penalize a hypothesis that uses another.

## 3 Lenient evaluation

To construct a lattice of plausible respellings of the reference transcription, we start with a lattice-based text normalization system that uses a large dictionary, annotated corpora, rules, and linear classifiers to determine the most likely readings of kanji sequences in context. The system has a roughly 97% token accuracy on held out data. As is well-known, Japanese text lacks word separators, but one side-effect of the text normalization system is to produce a word-segmentation of the sentence. These word segments are used as the tokens for subsequent processing in our lattice construction. For each hiragana word, we also want a katakana equivalent--cf. the example of らーめん above. This is a fairly straightforward conversion and in the example at hand would produce ラーメン for _ramen_, which also happens to be the way this word is normally written. This completes the conversion of kanji tokens into kana, and the next step is to convert in the other direction. For example, the last non-punctuation token in the utterance above, うまい _umai_ 'delicious', also has a common kanji spelling 美味い. However, as noted above, in this as in many other cases one needs to be careful, since another possible spelling for うまい is 上手い, which would not be appropriate in this instance since it means 'skillful'. For this conversion we train a transformer-based neural machine translation model (NMT)--e.g. (Tay et al., 2020)--on Japanese web text where we first converted successive kanji spellings into hiragana using the text normalization system previously described. For example, consider the input sentence: 再び、MTサミットが日本で _futatabi, MT samitto-ga nihon de_ 'Again, the MT Summit is in Japan', which contains two words containing kanji, 再び _futatabi_ 'again' and 日本 _nihon_ 'Japan'. Consider the second of these, which has the hiragana transcription にほん. We replace this into the sentence above and tag it with a special tag <to_kanji>...</to_kanji>, so that the input appears as 再び、MTサミットが<to_kanji>にほん</to_kanji>で, and the model is trained to restore the kanji spelling 日本. Together with a lexicon of spelling equivalence classes, these components supply the alternative spellings that are compiled into the reference lattice, with weights in the Tropical semiring. During evaluation, the Levenshtein edit distance (Levenshtein, 1966) between the reference lattice and the hypothesized transcription is computed using the algorithm reported in Gorman and Sproat (2021), pp. 93-96. As with standard CER, we define our lenient CER as the lattice edit distance--the sum of the substitution, insertion and deletion errors--divided by the number of characters in the best matching path in the reference lattice.
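The sketch below illustrates the lenient matching in miniature. It is not the implementation described above, which operates on weighted FST lattices using the Gorman and Sproat (2021) algorithm; here we simply enumerate the (small) set of respelling paths explicitly, and the per-token variant sets are illustrative only:

```python
# Toy lenient CER: score the hypothesis against every path through a
# "lattice" given as per-token spelling alternatives, keep the closest path,
# and divide the edit distance by that path's character count.
from itertools import product

def levenshtein(a, b):
    d = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, y in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (x != y))
    return d[-1]

def lenient_cer(ref_variants, hyp):
    dist, path = min((levenshtein("".join(p), hyp), "".join(p))
                     for p in product(*ref_variants))
    return dist / len(path)

# Reference "ramen (is) delicious", each token with plausible respellings:
ref = [["らーめん", "ラーメン", "拉麺"], ["うまい", "美味い"]]
print(lenient_cer(ref, "拉麺うまい"))     # 0.0: a valid respelling, no penalty
print(lenient_cer(ref, "ラーメンまずい"))  # ~0.29: a real error is still penalized
```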
In future work (Section 6) we also wish to incorporate style/register language models to rank different transcriptions, and we will thus want to preserve language model weights for the various spelling alternatives. To that end, we first convert the Tropical weights into a \(<\)Tropical, Tropical\(>\) Lexicographic semiring (Sproat et al., 2014), where the first dimension is reserved for the edit distance weights, and the second dimension preserves the language model weights. This will guarantee that the path in the lattice closest to the hypothesized string is selected, with the language model score of that path preserved in the second dimension. After the shortest path has been computed, the result can be converted back to the Tropical semiring with just the (second-dimension) language model weights. In the experiments reported in Section 5, we compare the results with multiple lattice variants, which are indicated with the bold-faced terms below:

1. The raw ground-truth transcription, represented as a trivial (single-path) lattice.
2. The lattice in (1) augmented with kana conversion via the text-normalization system (**+kana**).
3. The lattice in (2) augmented with the kanji restoration NMT model (**+kanji**).
4. The lattice in (3) augmented with the spelling equivalence classes (**+lexicon**).

Figure 1: Final lattice computed for the reference transcription.

## 4 Related Work

While the contribution of spelling variation to error rate computation for Japanese ASR has been noted--see Mishima et al. (2020), page 72--as far as we can tell, there has been no prior work that specifically addresses solutions to this problem. However, the problem of spelling variation in Japanese is similar to cases in other languages where no standardized spelling exists. For example, Ali et al. (2017)--and see also (Ali et al., 2019)--present an approach for ASR for Arabic dialects. Unlike Modern Standard Arabic, which has an official and standardized orthography, Arabic regional varieties such as Levantine, Gulf Arabic, or Maghrebi are spoken languages that have no generally agreed standard written form. Nonetheless, particularly with the advent of social media, people increasingly communicate in Arabic dialects in written form. But since there is no prescribed standard there is a substantial amount of variation in how words are spelled. Ali et al. (2017) propose the _WERd_ ("word error rate for dialects") metric, which depends on a spelling variants table, which they construct from social media. Variants are collected by mining tokens that share the same context, occur a sufficient number of times, and are within a Levenshtein edit-distance bound of each other. This kind of approach for finding potentially intersubstitutable terms has been used in other applications: for example, Roark and Sproat (2014) propose a similar approach for finding potential pairs of words and novel abbreviations of those words. Once the spelling variants table is constructed, Ali et al. (2017) use it to match ASR candidates against the reference transcription, similar to the way in which our lattice-based matching works. Related work includes Nigmatulina et al. (2020), who report on an ASR system for Swiss German, which, like dialectal Arabic, has no standard orthography, but where spellings are loosely based on pronunciation. Another case of spelling variation can be found with transliteration, say when someone whose native language is Hindi, written in the Devanagari script, transliterates a Hindi word into English. As Roark et al.
(2020) discuss, this problem has a practical application, since while keyboards for Devanagari and other South Asian scripts exist, they tend to be difficult to use, whereas many users are used to typing in English. Many therefore prefer to type in Latin-script transliteration and have the system automatically convert to the native script. But this introduces a problem since, while there are standards for transliteration of South Asian languages into Latin script, few people adhere to them. The result is that one can find quite a large amount of variation in how to spell words in Latin script, whereas there is generally one way to correctly write a given word in the native script. Roark et al. (2020) investigated a variety of methods, including both neural and pair n-gram methods, and found that they got the best performance with a pair 6-gram model using a Katz-smoothed trigram language model for the output. While the above cases are similar to the problem with Japanese spelling variation, there is also an important difference. For dialectal Arabic and transliterated South Asian languages, there is no standard, and so long as the message can be communicated, users are more or less unconstrained in how they will spell words. In the case of Japanese, spelling variation is not completely unconstrained: there are definitely _wrong_ spellings for words, even if there is in any given case no _single_ right spelling. While this does not dictate a particular approach to the problem, it does mean that the variation needs to be constrained by lexical knowledge implemented in some fashion. Our use of neural MT models for kanji restoration is related to the similar use of NMT models for transliteration: see, e.g., Grundkiewicz and Heafield (2018) and Kundu et al. (2018). Finally, we note that the problem of lenient evaluation comes up in other domains, for example in the evaluation of MT systems: Bouamor et al. (2014) argue that the rich morphology of Arabic has a negative impact on BLEU scores, in that a naive application of BLEU can rank correct translations lower than incorrect ones. They propose a lenient metric they term "AL-BLEU", which takes morphological variation into account, and argue that this yields a more defensible evaluation metric.

## 5 Experiments

We investigated our proposed evaluation metrics on several Japanese ASR tasks. Using large-scale multi-domain datasets, we calculated error reductions from conventional naive CER, using lattices that incorporate the additional resources discussed in the last section. We also conducted human evaluations to validate the generated spelling alternatives.

### ASR datasets

We evaluated on proprietary Japanese datasets in three domains: _Farfield_, _VoiceSearch_ (VS), and _YouTube_ (YT)--respectively, domains involving far-field speech, voice search, and YouTube video segments (Narayanan et al., 2019). These datasets contain anonymized and hand-transcribed utterances. The numbers of evaluated utterances were 15,693 (161,174 characters) for Farfield; 9,440 (78,606 characters) for VS; and 17,780 (238,662 characters) for YT.

### ASR models

Our Japanese ASR models are Conformer-based Recurrent Neural Network Transducers (RNN-T) (Gulati et al., 2020). For the YT domain evaluation, we trained the ASR model only with a YT training set of 2,000 hours using a 17-layer, 512-dimensional, 8-attention-head, non-causal encoder, and a 5000-class character vocabulary.
Apart from YT, our model was trained with all the multi-domain training sets of 25,000 hours using a 12-layer, 1024-dimensional, 8-attention-head, causal encoder, with a 6400-class _wordpiece_ model vocabulary (Schuster and Nakajima, 2012).

### Results with ASR tasks

Table 1 shows the conventional WER and CER using the raw ground-truth text, and the CERs using our proposed target lattices for each ASR domain evaluation, as discussed in Section 3. In addition to the average error rates, we also computed the \(\pm\)95% confidence intervals following Vilar (2008). First, comparing WER and CER with the raw reference text, CERs were always lower than WERs in every domain. This is largely because WER depends on word boundaries estimated by a word segmenter, which can often lead to artificial mismatches between reference and transcription. CER, obviously, does not require word segmentation. For this reason, we evaluated our evaluation method by comparing baseline CER against the lenient lattice-based CER scoring. When we added alternative kana spellings into the reference lattice (**+kana**), CERs decreased by at least 2% absolute for all domains. More spellings from the kanji-restoration NMT (**+kanji**) and the lexicon (**+lexicon**) brought the reduction to 2.4%-3.1% absolute, depending on the domain. For example, VS was the most impacted domain, with a 25.16% relative error reduction. A manual examination of cases of mismatch between the reference transcription and the hypothesized transcription in YT revealed many cases where one had kanji spellings and the other kana, or where one had hiragana and the other katakana, as one would expect given the discussion in Section 2: such pairs exhibit (1) kana-kana, (2) kana-kanji, (3) kanji-kana, and (4) kanji-kanji (false) errors between the ASR hypothesis (hyp) and reference ground truth (ref), where the two spellings are valid variants of the same words.

### Human evaluation

In order to validate the generated spelling alternatives, raters judged each proposed variant in the context of its phrase, on a scale ranging from variants that improve on the original spelling down to the lowest category, which was counted as an error:

* **Wrong (Depending on the context)**: Spellings are inappropriate and considered as errors in the context of a given phrase.

Consolidated results show that over 95.4% of spelling variants are valid, and 16% are great or better than the original transcripts.

## 6 Conclusions and Future Work

In this paper we have proposed a lattice-based lenient evaluation method applied to computing character error rate in Japanese ASR. The method combines lexical resources, a Japanese text-processing system, and a neural MT system to reconstruct kanji from kana spellings in context. We evaluated on three different commercial Japanese ASR domains, and demonstrated a 2.4%-3.1% absolute reduction of CER--translating into an over 25% relative error reduction for the Voice Search domain. Obviously these reductions in CER are not due to any improvement in the ASR method itself, but rather reflect a more defensible measure than naive comparison to a single reference transcription. This in turn points to the importance of taking spelling variation into account when evaluating systems on languages where such variation is simply a fact of life. As noted in Section 3, we plan in future work to address another issue, namely style and register. While it is true that one often sees spelling variation for words even within the same text, it is also the case that style and register are important factors in deciding which spellings are felicitous in any given context.
Thus while the word _kawaii_ 'cute' has a kanji spelling 可愛い, that spelling would not usually be found in social media, where the hiragana かわいい (or, for emphasis, the katakana カワイイ) spelling would be expected instead. Incorporating style- and register-sensitive language models to weight the spelling alternatives in the reference lattice accordingly is the natural next step for this line of work.
\)\(\)\(\triangleright\)\(\)\(\triangleright\)\(\)\(\triangleright\)\(\triangleright\)\(\)\(\triangleright\)\(\)\(\triangleright\)\(\)\(\triangleright\)\(\)\(\triangleright\)\(\)\(\triangleright\)\(\triangleright\)\(\)\(\triangleright\)\(\)\(\triangleright\)\(\triangleright\)\(\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\)\(\trianglerighttriangleright\)\(\triangleright\)\(\triangleright\)\(\trianglerighttriangleright\)\(\triangleright\)\(\trianglerighttriangleright\)\(\triangleright\triangleright\)\(\triangleright\)\(\triangleright\)\(\triangleright\triangleright\)\(\trianglerighttriangleright\)\(\triangleright\)\(\trianglerighttriangleright\)\(\triangleright\triangleright\)\(\ substitable in a given context, and we also plan to investigate this in future work. Finally, while Japanese provides a particularly rich example of spelling variation compared to other modern writing systems, as discussed in Section 4, there are many languages that are primarily oral, and for which there no accepted written standard. In such languages, one can expect a fair amount of variation in spelling when people attempt to write them, and the methods proposed in this paper could be applicable to such cases. ### Limitations Our work focuses on the problem of spelling variation in Japanese. The Japanese writing system is the most complex of any modern writing system (to find anything of comparable complexity, one would have to go back to cuneiform Akkadian or Hittite) and presents a unique range of issues that impact speech and language technology, one of which is the spelling variation discussed in this paper. Nonetheless, as also noted in Section 6, we believe that the approach here should be applicable, perhaps with less dramatic results, to other cases where spelling variation occurs. This may be particularly an issue in languages that do not have a standardized writing system--e.g. Colloquial Arabic dialects--and where a large amount of spelling variation is often observed. However we have not evaluated the approach on this sort of data. Our evaluation system is not open-sourced due to the propriety lexical resources, text normalizer and kana/kanji translators. The text normalizer could probably be replaced with, e.g., the open-source Mecab (Kudo, 2006) system, though we expect that performance would be degraded. Similarly our lexical resources could potentially be replaced with publicly available Japanese dictionaries such as JMDict (Breen, 2004), but again performance would probably suffer. 
Note in particular that unlike CJKK's Japanese Orthographic Dictionary, JMDict entries have not been carefully curated to indicate which spellings are interchangeable, and which are, rather, words with the same reading but distinct meanings. An informal manual evaluation we performed on potential spelling variant pairs that were extracted from JMDict entries nominally representing the same word sense, revealed that about 92% were valid variant spellings, but that the rest were either wrong, or at least unclear. Figure 2: Manual evaluation results on spelling variants quality of 913 phrase pairs. ### Ethics Statement The work reported in this paper relates to the impact of Japanese spelling inconsistency on the development and evaluation of Automatic Speech Recognition systems. The data used for our experiments is from a variety of sources and includes data from users, but it contains no Personal Identifiable Information. While it is possible that some of the data (especially data from YouTube) includes content that may have ethical concerns (e.g. hate speech, hurtful terminology, intentional or unintentional bias), the algorithms presented here are neutral with respect to these issues. As discussed in Section 5.4, a subset of data was manually verified by human raters, all of whom were paid linguistic consultants hired through a third-party vendor. ## Acknowledgments We thank our colleagues, in particular Yuma Koizumi, Llion Jones and Michiel Bacchiani for discussion and feedback. We also thank three reviewers for useful comments.
2305.02411
Model chromatin flows: numerical analysis of linear and nonlinear hydrodynamics inside a sphere
We solve a hydrodynamic model of active chromatin dynamics, within a confined geometry simulating the cell nucleus. Using both analytical and numerical methods, we describe the behavior of the chromatin polymer driven by the activity of motors having polar symmetry, both in the linear response regime as well as in the long-term, fully nonlinear regime of the flows. The introduction of a boundary induces a particular geometry in the flows of chromatin, which we describe using vector spherical harmonics, a tool which greatly simplifies both our analytical and numerical approaches. We find that the long-term behavior of this model in confinement is dominated by steady, transverse flows of chromatin which circulate around the spherical domain. These circulating flows are found to be robust to perturbations, and their characteristic size is set by the size of the domain. This gives us further insight into active chromatin dynamics in the cell nucleus, and provides a foundation for development of further, more complex models of active chromatin dynamics.
Iraj Eshghi, Alexandra Zidovska, Alexander Y. Grosberg
2023-05-03T20:10:48Z
http://arxiv.org/abs/2305.02411v1
# Model chromatin flows: numerical analysis of linear and nonlinear hydrodynamics inside a sphere ###### Abstract We solve a hydrodynamic model of active chromatin dynamics, within a confined geometry simulating the cell nucleus. Using both analytical and numerical methods, we describe the behavior of the chromatin polymer driven by the activity of motors having polar symmetry, both in the linear response regime as well as in the long-term, fully nonlinear regime of the flows. The introduction of a boundary induces a particular geometry in the flows of chromatin, which we describe using vector spherical harmonics, a tool which greatly simplifies both our analytical and numerical approaches. We find that the long-term behavior of this model in confinement is dominated by steady, transverse flows of chromatin which circulate around the spherical domain. These circulating flows are found to be robust to perturbations, and their characteristic size is set by the size of the domain. This gives us further insight into active chromatin dynamics in the cell nucleus, and provides a foundation for development of further, more complex models of active chromatin dynamics. Contributing authors: [email protected]; [email protected]; [email protected]; ## 1 Introduction The cell nucleus houses the genome, containing genetic information needed for the cell's life [1]. This information is encoded into a long DNA molecule, which in cells forms a complex with histone proteins, the chromatin fiber [2]. The cell nucleus encompasses chromatin as well as a variety of molecules such as proteins and RNA [1, 3], and is maintained out-of-equilibrium by a large number of active processes e.g. transcription, replication, chromatin remodeling and DNA repair [4, 5]. Active processes affect both the dynamics [6, 7, 8] and organization of the genome [9, 10, 11]. Examples include changes in: chromatin mobility with transcription [12, 13], local chromatin packing and dynamics upon DNA damage [14, 15, 16], occasional directed motion of chromosomal loci [17, 18, 19], or the active extrusion of chromatin loops by cohesin [20, 21]. In particular, active processes were shown to lead to the formation of micron-scale chromatin domains, within which chromatin moves coherently [22]. The first theoretical model developed to explore the origins of the chromatin coherent motions was based on two-fluid hydrodynamics [23], it was followed by explicit hydrodynamic computational models [24, 25]. These papers led to the important insight that local active forces mediated through long-range hydrodynamic interactions can lead to large-scale motions of chromatin. Separately, several computational models were developed which do not include hydrodynamic interactions, but reproduce the phenomenology of coherent motions at the expense of introducing artificial long-range forces between sections of the polymer [26, 27, 28]. An analytical description of chromatin hydrodynamics was developed in the works [23, 29, 30] based on the two-fluid model [31] and the ideas of statistical physics of active systems [32, 33, 34, 35]. The most significant flows were obtained under the assumption that active motors driving are such that each of them exerts one force on the polymer and equal but opposite force on the solvent. Indeed typical active motors operating on the genome can be viewed as such, for instance, RNA polymerase II which is a common active enzyme in the nucleus [36]. 
Therefore, in both our previous work [30] as well as in the present paper, we consider our theory in the most general phenomenological form, not specifying the nature of motors beyond the fact that they have a polar symmetry. It is because of polar symmetry that they act on the relative velocity of the solvent past polymer like force monopoles and not dipoles, thus generating very significant flows. If the number and activity of motors exceeds a critical threshold, we found that they spontaneously form an ordered polar phase which actively pumps chromatin and solvent through one another. We successfully described this spontaneous ordering through a polar order parameter, and identified the value of this critical threshold as a function of model parameters along with the critical exponents near the transition [30]. These results, however, were found with the assumption of an infinite boundless medium. While this made the analysis more straightforward, a more accurate description of the situation experienced by the chromatin polymer would account for the finite size of the cell nucleus. In this work, we will study the dynamics of our model in a confined spherical geometry, near its critical point. We show that the critical value of force and density of motors is shifted by an amount dependent on model parameters and system size. Furthermore, we find that the length scale of the modes excited near the transition point is set by the confinement size, consistently with expectations from the equilibrium theory of second-order phase transitions. ## 2 The model, confining geometry, and equations of motion Our goal in the present work is to analyze previously derived equations of hydrodynamic motion of chromatin [30], both linearized and nonlinear, when confined in a spherical domain. To make this work self-contained, and to set up the notations, we start by re-stating the primary equations of motion of this model. ### Equations of motion We consider two mutually permeating fluids: a polymer and a solvent. Their velocities are denoted by the fields \(\mathbf{v}^{\mathrm{p}},\ \mathbf{v}^{\mathrm{s}}\) respectively, although in our calculations we will use the following two linear combinations of the velocity fields: \(\mathbf{w}=\mathbf{v}^{\mathrm{p}}-\mathbf{v}^{\mathrm{s}},\ \mathbf{u}=(\eta^{ \mathrm{p}}\mathbf{v}^{\mathrm{p}}+\eta^{\mathrm{s}}\mathbf{v}^{\mathrm{s}})/ (\eta^{\mathrm{p}}+\eta^{\mathrm{s}})\). The two fluids, flowing past another, experience a friction per unit volume \(\zeta\). The volume fraction of the polymer is \(\phi(\mathbf{r})\), and the two-fluid combination is assumed to be incompressible, so the volume fraction of solvent is \(1-\phi(\mathbf{r})\). Both fluids are assumed to experience viscous dissipation upon shear, with respective viscosities \(\eta^{\mathrm{p}},\ \eta^{\mathrm{s}}\). The viscosity of the chromatin polymer is known to have a frequency dependence [37], but here we will assume that it is simply Newtonian. The polymer experiences an osmotic pressure \(\Pi(\phi)\), which is only a function of \(\phi\) as it is assumed to equilibrate quickly and locally. We consider a regime where the polymer density deviates weakly from its mean value \(\phi_{0}\), allowing us to linearize the equations of motion around that point: \(\phi=\phi_{0}+\delta\phi\). As a result the osmotic pressure can be written as \(\Pi=\Pi_{0}+K\delta\phi\), where \(K\) is the osmotic modulus of the polymer. 
This gives us the following equations for the fluids \[\begin{split}\zeta\left(\frac{1}{\eta^{\mathrm{p}}}& +\frac{1}{\eta^{\mathrm{s}}}\right)\mathbf{w}=\nabla^{2}\mathbf{w}+ \left(\frac{\mathbf{F}^{\mathrm{p}}}{\eta^{\mathrm{p}}}-\frac{\mathbf{F}^{ \mathrm{s}}}{\eta^{\mathrm{s}}}\right)\\ &\qquad\qquad+\left(\frac{1-\phi_{0}}{\eta^{\mathrm{s}}}-\frac{ \phi_{0}}{\eta^{\mathrm{p}}}\right)\nabla P-\frac{K}{\eta^{\mathrm{p}}}\nabla \delta\phi\,\end{split} \tag{1a}\] \[\nabla\cdot\mathbf{w}=-\partial_{t}\delta\phi\left(\frac{1}{ \phi_{0}}+\frac{1}{1-\phi_{0}}\right)\,\] (1b) \[\left(\eta^{\mathrm{p}}+\eta^{\mathrm{s}}\right)\nabla^{2}\mathbf{ u}=K\nabla\delta\phi+\nabla P+\mathbf{F}^{\mathrm{p}}+\mathbf{F}^{\mathrm{s}}\,\] (1c) \[\left(\eta^{\mathrm{p}}+\eta^{\mathrm{s}}\right)\nabla\cdot \mathbf{u}=-\partial_{t}\delta\phi\left(\frac{\eta^{\mathrm{p}}}{\phi_{0}}- \frac{\eta^{\mathrm{s}}}{1-\phi_{0}}\right)\, \tag{1d}\] where \(\mathbf{F}^{\mathrm{p}},\ \mathbf{F}^{\mathrm{s}}\) are the forces densities generated by the motors, and \(P\) is the hydrostatic pressure induced by total incompressibility. As stated in the introduction, we assume that every active motor exerts a force on the polymer and equal and opposite force on the solvent. In the coarse-grained description, assuming there are some \(\rho\) active motors per unit volume, we can write \[\mathbf{F}^{p}(r)=-\mathbf{F}^{s}(r)=\rho f\mathbf{m}(r)\, \tag{2}\] where \(f\) is the force produced by a single motor, while \(\mathbf{m}(r)\) is the average vector of orientation of motors located around point \(\mathbf{r}\). According to equations (1a) and (1c), field \(\mathbf{w}\) is driven by force monopole density \(f\rho\mathbf{m}\), while field \(\mathbf{u}\) in this approximation is not driven at all. Vector field \({\bf m}({\bf r})\) plays an important role in our theory, and it is worth several comments. First, the very possibility to define vector \({\bf m}\) is because motors we consider have polar symmetry, they have two different sides, one attached to the polymer and another exposed to the solvent. Second, vector \({\bf m}\), as the average of the unit orientation vectors of individual motors, has magnitude that is always smaller than unity: \(|{\bf m}|\leq 1\). Third, this vector naturally serves as an order parameter of polarization ordering phase transition predicted and described by our theory. ### Linear Dynamics Given the polar symmetry of motors, each of them is subject to a torque whenever there is a flow of the solvent relative to the polymer, i.e., when \({\bf w}\neq 0\). If drive is weak (because motors are either weak or not numerous enough), then this torque is also weak and the distribution of motor orientations is nearly isotropic. In this case, the dynamics of average orientation is described by \[\tau\partial_{t}{\bf m}=-2{\bf m}+\frac{2\tau}{3a}{\bf w}\;, \tag{3}\] where we have introduced the reorientation time of the dipoles \(\tau=\gamma/T\). \(\gamma\) is the rotational friction coefficient of the motor, \(T\) is the ambient (potentially effective) temperature, and \(a\) is the size of the motors, assumed to be comparable to (or smaller than) the mesh size of the polymer. We can now combine equations (3) and (1) into one equation for \({\bf w}\) and \({\bf m}\). 
\[\begin{split}&\left(1+\lambda^{2}\nabla\times\nabla\times- \lambda_{s}^{2}\nabla\nabla\cdot\right)\tau\partial_{t}{\bf w}\\ &-2\lambda_{d}^{2}\nabla\nabla\cdot{\bf w}_{\omega}=\frac{f \rho\tau}{\zeta}\partial_{t}{\bf m}_{\omega}\;,\end{split} \tag{4}\] where we have defined three length scales: \[\lambda^{2}=\frac{\eta^{\rm s}}{\zeta},\quad\lambda_{s}^{2}\simeq\frac{\eta^{ \rm p}\left(1-\phi_{0}\right)^{2}}{\zeta},\;\;{\rm and}\;\;\lambda_{d}^{2} \simeq\frac{K\phi_{0}\left(1-\phi_{0}\right)^{2}\tau}{2\zeta}\;. \tag{5}\] The first of these, \(\lambda\), is naturally identified as the mesh size of the polymer. Second, significantly larger length scale is \(\lambda_{s}\), the screening length of hydrodynamic interactions in the two-fluid system. Finally, the third length scale, \(\lambda_{d}\), characterizes the interplay of the two fluid system with motors, namely, it is the typical distance of cooperative diffusion by the polymer during motor reorientation time \(\tau\); of course, this motion can be thought of as driven by osmotic elasticity of the polymer (described by \(K\)) and opposed by friction (described by \(\zeta\)). Since we are treating chromatin in a continuum approximation, the model we are working with is only relevant on length scales larger than the mesh size \(\lambda\). Since all of the equations of motion in this regime are linear, we can simultaneously decompose the fields \({\bf u},\;{\bf w},\;{\bf m}\) into their divergence-free (transverse, \(\perp\)) and curl-free (longitudinal, \(\parallel\)) components. Separating these into their respective equations of motion, we obtain \[\left(1-\lambda_{s}^{2}\nabla^{2}\right)\mathbf{w}_{\perp} =\frac{f\rho}{\zeta}\mathbf{m}_{\perp} \tag{6a}\] \[\left(1-\lambda_{s}^{2}\nabla^{2}\right)\tau\partial_{t}\mathbf{w} _{\parallel} =2\lambda_{d}^{2}\nabla^{2}\mathbf{w}_{\parallel}+\frac{f\rho \tau}{\zeta}\partial_{t}\mathbf{m}_{\parallel}\;. \tag{6b}\] Our previous study allowed us to identify the value of the force \(f\) above which an ordered phase for \(\mathbf{m}\) spontaneously develops. In an infinite domain, this occurs at \(f=3a\zeta/\rho\tau\), where the velocity generated by the force dipoles \(f\rho/\zeta\) equals the velocity needed to orient them \(a/\tau\). This allows us to identify the critical parameter \[\epsilon=\frac{f\rho\tau}{3a\zeta}-1\;. \tag{7}\] When \(\epsilon>0\), the zero modes of the velocity and orientation fields become unstable. This all changes, however, with the addition of boundaries, which set a maximal length scale for any dynamics for the two fluids. Our goal in this paper is to refine the conclusions of our previous work in the finite domain context. ### Nonlinear Dynamics Once an instability develops, the polar order parameter grows in magnitude exponentially. Eventually, nonlinear effects inevitably kick in. In particular, they guarantee that \(|\mathbf{m}|\leq 1\). The precise evolution of \(\mathbf{m}\) in the nonlinear regime is complex, as it couples to all the other moments of the distribution of motor orientations. However, in our previous work [30] we derived an approximate equation of motion for \(\mathbf{m}\) including only a second-order nonlinearity in \(\mathbf{w}\): \[\begin{split}\tau\partial_{t}\mathbf{m}&=2( \mathbf{m}_{eq}(\mathbf{w})-\mathbf{m})\\ \mathbf{m}_{eq}&=\frac{\mathbf{w}\tau}{3a}\left(1- \frac{3}{5}\left(\frac{\mathbf{w}\tau}{3a}\right)^{2}\right)\;.\end{split} \tag{8}\] This is just one of a long list of possible nonlinearities that may be considered. 
We choose to consider it above all others as it naturally controls the amplitude of the unstable modes, and without the addition of any further nonlinearities leads to stable long-term dynamics for this system. While \(\mathbf{m}\) obeys equation (8), the velocity field \(\mathbf{w}\) still follows equation (4). This system cannot be solved analytically in general, so we will turn to a numerical method in the following section. ### Boundary Conditions Even though the nucleus usually looks like an ellipsoid, we reduce the complexity of the problem by modeling it as a sphere of radius \(R\). We assume no-slip boundary conditions for both velocity fields \(\mathbf{v}^{\mathrm{p}},\ \mathbf{v}^{\mathrm{s}}\). In the case of the polymer, this is justified by the tethering of chromatin to the boundary by LINC complexes [38], while no-slip boundary conditions for the solvent are standard for viscous fluids [39] \[\mathbf{v}_{\text{tangential}}^{\text{p}}\big{|}_{r=R}=\mathbf{v}_{\text{ tangential}}^{\text{s}}\big{|}_{r=R}=0\;. \tag{9}\] Turning now to normal components of velocity, we assume that no permeation of the boundary is possible by either polymer or solvent. We make this assumption despite the fact that the nuclear membrane is porous and lets some small molecules through [1]. We do so not only for simplicity, but also because over the seconds-timescale that we are interested in the volume of the nucleus is conserved [40], thus the net flux of material through it is small. Finally, we treat the nuclear boundary as rigid and ignore its fluctuations, which are another source of potentially interesting effects [40]. All these simplifications result in the following boundary condition: \[[\phi\mathbf{v}^{\text{p}}+(1-\phi)\mathbf{v}^{\text{s}}]_{\text{normal}}\,| _{r=R}=0\;. \tag{10}\] The meaning of this condition is simple: although neither component can go through the membrane, any one component, either polymer or solvent, can be approaching the membrane with some non-zero velocity provided that the other component at the same time departs from the wall, such that the exchange between them does not change volume. ## 3 Results The spatial behavior of the linearized model (6a,6b) is entirely contained in Laplacian operators. We will use the vector eigenfunction spectrum of the Laplacian to construct exact solutions for this model in the linear response regime. Given the restriction of the solutions to a closed bounded domain, the spectrum is discrete. Once we turn to the nonlinear regime, it becomes impossible to solve the equations of motion exactly using these basis functions. However, we have constructed a numerical method for the full nonlinear model which exploits the basis we develop for the linearized version. This makes three-dimensional solutions of the full nonlinear PDE possible with minimal computational complexity. ### Linearized model: an analytical solution We begin by finding solutions in the linear response regime, where the distribution of motor orientations deviates weakly from uniform. This allows us to expand the flow fields in a basis which simplifies the equation of motion. If we consider curl-free (\(\mathbf{w}_{\parallel}\)) and divergence-free (\(\mathbf{w}_{\perp}\)) components of the velocity separately, then the dynamics of these modes separate and simplify, as shown in equations (6). 
The natural choice of basis functions to solve this system of equations needs to have two properties: they need to be eigenfunctions of the Laplace operator in spherical coordinates, to simplify the spatial dependence of the equations of motion, and they must be vectors to preserve the symmetry of the velocity fields. Such functions are already known, and are referred to as vector spherical harmonics (VSH) [41]. They are constructed based on the well-known scalar spherical harmonics \(Y_{lm}(\theta,\phi)\). There are several, closely related, possible definitions for VSH. We will use the following in this paper: \[\begin{split}\mathbf{Y}_{lm}&=Y_{lm}\hat{\mathbf{r}}, \\ \mathbf{\Psi}_{lm}&=r\nabla Y_{lm},\\ \mathbf{\Phi}_{lm}&=\mathbf{r}\times\nabla Y_{lm}\;. \end{split} \tag{11}\] From these, we construct curl-free and divergence-free components of \(\mathbf{w}\). First, notice that for any scalar function \(f(\mathbf{r})\), we have \(\nabla\cdot(f\mathbf{\Phi}_{lm})=0\). Then recall that for a vector-valued function to be curl-free, it suffices that it be the gradient of a scalar. At every radial shell \(r\), we expand the scalar function \(f(\mathbf{r})\) in spherical harmonics, with coefficients \(f_{lm}(r)\). This gives us \[\begin{split}\nabla f(\mathbf{r})&=\sum_{lm}\nabla \left(f_{lm}(r)Y_{lm}\right)\\ &=\sum_{lm}\left(\frac{\partial f_{lm}(r)}{\partial r}\mathbf{Y} _{lm}+\frac{f_{lm}(r)}{r}\mathbf{\Psi}_{lm}\right)\;.\end{split} \tag{12}\] Figure 1: First excited longitudinal and transverse modes, shown along a vertical slice of the spherical domain. Since the vector dependence of these first modes is simple, we choose to plot only one component of the resulting vector fields, since the other components are simply \(0\). In the case of the first longitudinal mode the flow is spherically symmetric, while the first transverse mode is axially symmetric about the \(\hat{\mathbf{z}}\) axis. Therefore, we will solve equations (6) using the following expansions: \[\begin{split}\mathbf{w}_{\perp}&=\sum_{lm}a_{lm}(r,t) \mathbf{\Phi}_{lm};\\ \mathbf{w}_{\parallel}&=\sum_{lm}\nabla\left(b_{lm}(r,t )Y_{lm}\right)\\ &=\sum_{lm}\left(\frac{\partial b_{lm}(r,t)}{\partial r}\mathbf{ Y}_{lm}+\frac{b_{lm}(r,t)}{r}\mathbf{\Psi}_{lm}\right)\;.\end{split} \tag{13}\] To identify what spatial dependence \(a_{lm},b_{lm}\) must have, we search for eigenfunctions of the Laplacian which meet our boundary conditions. Here it is important to notice that by construction, the longitudinal velocity field \(\mathbf{w}_{\parallel}\) automatically obeys condition (10) due to the continuity of \(\delta\phi\). Therefore, we only need to impose (9) in the case of the longitudinal flows. As a result, we are searching for \(a_{lm}(r,t),b_{lm}(r,t)\) which satisfy \[\begin{split}& a_{lm}|_{r=R}=b_{lm}|_{r=R}=0\\ &\nabla^{2}\left(a_{lm}\mathbf{\Phi}_{lm}\right)=-k^{2}a_{lm} \mathbf{\Phi}_{lm}\;,\\ &\nabla^{2}\left(b_{lm}Y_{lm}\right)=-k^{2}b_{lm}Y_{lm}\;,\end{split} \tag{14}\] for some real number \(k^{2}\). One such set of functions are spherical Bessel functions \(j_{l}(x)\), which have the property \(\nabla^{2}\left(j_{l}(kr)Y_{lm}\right)=-k^{2}j_{l}(kr)Y_{lm}\) in spherical coordinates. We choose \(k\) such that the modes meet the boundary conditions at \(r=R\). 
This results in the following expansion \[\begin{split}\mathbf{w}_{\perp}&=\sum_{lmn}a_{lmn} (t)j_{l}\left(\frac{\alpha_{ln}r}{R}\right)\mathbf{\Phi}_{lm}\;,\\ \mathbf{w}_{\parallel}&=\sum_{lmn}b_{lmn}(t)\nabla \left(j_{l}\left(\frac{\alpha_{ln}r}{R}\right)Y_{lm}\right)\;,\end{split} \tag{15}\] where we have defined \(\alpha_{ln}\) to be the \(n\)th zero of the \(l\)th order spherical Bessel function. Equipped with the basis (15), it is now straightforward to insert the expansion into the equations of motion (6), and solve for the time dependence of the coefficients \(a_{lmn},b_{lmn}\): \[\tau\dot{a}_{lmn}(t)=\frac{2\left(\epsilon-\lambda^{2}\left( \alpha_{ln}/R\right)^{2}\right)}{1+\lambda^{2}\left(\alpha_{ln}/R\right)^{2}} a_{lmn}(t)\;, \tag{16a}\] \[\begin{split}&\tau^{2}\ddot{b}_{lmn}(t)-2\frac{\left( \epsilon-\left(\lambda_{d}^{2}+\lambda_{s}^{2}\right)\left(\alpha_{ln}/R \right)^{2}\right)}{1+\lambda_{s}^{2}\left(\alpha_{ln}/R\right)^{2}}\tau\dot{ b}_{lmn}(t)\\ &\quad+4\frac{\lambda_{d}^{2}\left(\alpha_{ln}/R\right)^{2}}{1+ \lambda_{s}^{2}\left(\alpha_{ln}/R\right)^{2}}b_{lmn}(t)=0\;.\end{split} \tag{16b}\] The time dependent dynamics of these coefficients and the corresponding modes (15) is very similar to what we have found previously for an unbounded domain, except of course the instability thresholds are significantly affected due the effect of boundaries. In the case of the transverse modes, the solutions of equation (16a) are simple exponentials in time, with the sign of the coefficient on the right-hand-side of equation (16a) determining stability. While at small \(\epsilon\) the mode is stable, as soon as \(\epsilon>\lambda^{2}\alpha_{ln}^{2}/R^{2}\) the mode becomes unstable. As expected, the finite-size effect is captured by the unitless ratio \(\lambda/R\). If the domain is a lot larger than the mesh size, which corresponds to the thickness of the boundary layer needed to meet the no-slip condition, then the dynamics are similar to that of an infinite domain. However, if the domain is small, additional forcing is needed to excite these modes, since the viscous drag at the boundary will significantly dampen their motion. The time-dependence of the longitudinal modes, as in the infinite domain case (see [30]), is that of a harmonic oscillator. The term in equation (16b) which controls stability is the friction term, and it flips sign when \(\epsilon\) is sufficiently large resulting in an apparent negative friction coefficient. As a result, the amplitude of the oscillations grow exponentially in time. The necessary forcing to drive this instability is far larger than the transverse case. Indeed, the critical value of \(\epsilon\) in this case is \((\lambda_{s}^{2}+\lambda_{d}^{2})\alpha_{ln}^{2}/R^{2}\), and recall that \(\lambda_{s}\gg\lambda\). Each of those terms represents a source of dissipation which must be overcome. \(\lambda_{s}\) reflects the energy dissipation due to friction between polymer and solvent in these longitudinal modes. This term is larger than in the case of the transverse modes (in that case it was proportional to \(\lambda\)), because the transverse velocity of polymer is far smaller than that of solvent due to force balance and the fact that \(\eta^{\rm p}\gg\eta^{\rm s}\). In the case of longitudinal flows however, the polymer can compress and move faster, leading to increased friction. Finally, \(\lambda_{d}\) reflects the dissipation due to density change of the polymer. It is worth considering what the first excited modes are in this geometry. 
The lowest-\(\epsilon\) (easiest to excite) transverse mode is proportional to \(\mathbf{\Phi}_{10}=\sin(\theta)\hat{\phi}\). In this state, the chromatin is swirling around the \(\hat{z}\) axis, with highest speed around the equator and decaying to \(0\) at the poles. In contrast, the first longitudinal mode to be excited is proportional to \(\mathbf{Y}_{00}=\frac{1}{4\pi}\hat{\mathbf{r}}\). This describes a radially-symmetric "breathing" motion of the chromatin, moving in and out of the center of the nucleus in an oscillatory manner. We show a slice of these first excited modes in Figure 1. In summary, restoring physical parameters, the condition for the active force density is \[f\rho>\frac{3a\zeta T}{\gamma}+\frac{a\eta_{s}T\alpha_{11}^{2}}{\gamma R^{2}} \tag{17}\] for the transverse modes (the lowest allowable mode in this case is \(l=1\), since \(\mathbf{\Phi}_{00}=0\)). In the above, we have \(\alpha_{11}\simeq 4.5\). For the longitudinal modes, the condition is \[f\rho>\frac{3a\zeta T}{\gamma}+\frac{(1-\phi_{0})^{2}a\pi^{2}}{R^{2}}\left[ \frac{3T\eta^{\rm p}}{\gamma}+\frac{3}{2}K\phi_{0}\right]\;, \tag{18}\] where we have inserted the value of \(\alpha_{01}=\pi\). These are the same expressions that we previously guessed [30], except the factors of \(\alpha_{ln}^{2}\) were absent. We find that the addition of boundaries induces necessary gradients in the velocity field which the sources of activity must compete against. Furthermore, the necessary excess force is far larger if the motors are to compress and pump the polymer in a finite domain. Consistently with our previous results, these effects become less significant the larger the domain size is compared to the length scales \(\lambda,\lambda_{s},\lambda_{d}\). ### Nonlinear regime Now that we have identified the linear stability of this system with \(\epsilon\) slightly above the critical point, we must consider the long-time behavior once the instabilities have grown and saturated the order parameter field \(\mathbf{m}\). The nonlinear dynamics of this system, described by equations (4,8), have to be investigated using a numerical method. We developed such a method, which we describe in detail in the Appendix, section B. To integrate the equations of motion, we convert the fields from a spatial representation to the basis (15) at each time-step, then evolve the coefficients, before transforming back to spatial representation to evaluate the nonlinear time evolution for \(\mathbf{m}\) shown above. We measured all length scales relative to the confinement scale \(R\) and chose the numerical values \(\lambda_{d}/R=10^{-1}\), \(\lambda_{s}/R=10^{-2}\), and \(\lambda/R=10^{-3}\). This choice is consistent with our estimates that \(\lambda_{s}\gg\lambda\) and also with numerical values of average mesh size in chromatin \(\lambda\approx 70\,\mathrm{nm}\) and \(R\approx 10\,\mu\mathrm{m}\). We set \(\epsilon=0.3\), above both thresholds of instability (17,18). The first question we set to explore is whether the modes identified in the linear response regime become stable once the nonlinearity becomes significant. We first test the transverse modes, by starting the system in a pure state \(\mathbf{w}(t=0)=w_{0}j_{1}(r\alpha_{11}/R)\mathbf{\Phi}_{10}\), with \(w_{0}=0.1\) and allowing it to evolve for 100 time-steps of step size \(\Delta t=0.5\). Eventually the dynamics settle into a steady-state consisting of a superposition of transverse states, but no significant longitudinal excitations are observed, as shown in Figure 2A. 
This result is robust to small perturbations, which we test by initializing all other modes with uniformly distributed random values ranging between \(\pm 10^{-8}\), as shown in Figure 2B. Figure 2: Transverse flows are stable in the nonlinear regime. A: Amplitude of the coefficients for the lowest transverse and longitudinal modes (\(a_{10}\) and \(b_{01}\) respectively), as a function of time when initialized in a pure state with \(a_{10}(0)=0.1\). Nonlinearities mix multiple transverse modes together, but no longitudinal modes are excited. B: Amplitude of the same modes as in A, but with the initial conditions set to random numbers of magnitude \(10^{-8}\), with \(a_{10}(0)=0.1\). The transverse steady-state is robust to this small perturbation. We then perform the same analysis but with the longitudinal modes. We initialize the system in a pure state \(\mathbf{w}(t=0)=w_{0}\nabla\left(j_{0}(r\alpha_{01}/R)Y_{00}\right)\) with \(w_{0}=0.1\). To better resolve the oscillatory dynamics in this case we lower the time step to \(\Delta t=0.25\tau\), and we evolve for 200 steps. The oscillations initially grow in magnitude until they settle at a maximum, at which point their amplitude remains steady for all times we integrated, as shown in Figure 3A. However, when we initialize the configuration with uniform random numbers ranging between \(\pm 10^{-8}\) for all other modes, after a certain number of cycles the longitudinal oscillator gets taken over by the transverse steady-state modes, as shown in Figure 3B. To see where the evolution stabilizes we keep the system going for 2000 time steps, and find that while the root-mean-squared magnitude of transverse modes reaches something close to a steady-state, individual modes such as the first coefficient \(a_{10}(t)\) still evolve over those timescales, shown in Figure 3B. To verify that they eventually do reach a steady-state, we increase the time step to \(\Delta t=0.5\tau\) and evolve for \(10^{4}\) steps, finding that the modes do eventually fully settle, albeit in a different steady-state each time due to the random initial conditions. One example evolution is shown in Figure 3C. We test this stability further by allowing the longitudinal oscillations to fully develop before introducing transverse flows. We initialize the system in a pure longitudinal state as before, and allow it to evolve for 150 time steps with step size \(\Delta t=0.25\tau\), which is equivalent to about three full cycles of the oscillator. Then, we introduce a transverse kick by setting the amplitude of the lowest transverse mode to a large value, \(a_{10}=3\). This briefly perturbs the oscillations as can be seen in the amplitude of \(b_{01}\) right afterwards in Figure 4, but the system quickly settles back into its previously established resonance. Evidently, once the longitudinal oscillatory flows have settled in place, they are more robust to perturbations. Figure 3: Longitudinal flows are unstable to perturbations in the nonlinear regime. A: Amplitude of the coefficients for the lowest transverse and longitudinal modes (\(a_{10}\) and \(b_{01}\) respectively), as a function of time when initialized in a pure state with \(b_{10}(0)=0.1\). Nonlinearities mix longitudinal modes together, but the system remains free of transverse modes. B: Amplitude of the same modes as in A, but with the initial conditions set to random numbers of magnitude \(10^{-8}\), with \(b_{10}(0)=0.1\). 
In this case, the transverse flows grow over time and eventually dominate the evolution of the system, suppressing all oscillations. C: Same as in B, but evolved over even longer timescales, to verify that the system does indeed reach a steady-state eventually. ## 4 Discussion & Conclusions In this paper we have analyzed solutions of the active two-fluid equations of motion we derive in [30], but in a confined domain. As this model is intended to describe the active dynamics of the chromatin polymer and its solvent nucleoplasm, the addition of the confinement allows us to study the effect of the nuclear boundary on these dynamics. Specifically, we are interested in the kinds of active flows which may take place in such a restricted environment, and the ways in which the hierarchies of length scales (mesh size, screening length, osmotic relaxation given a characteristic time) may affect the stability of the dynamics. In our previous work [30], we performed estimates of the relevant length, time, and velocity scales using experimental values and found the theory to be consistent with experiments. As estimated in that work, we find that the confinement length scale \(R\) leads to factors of \(1/R^{2}\) everywhere where the Laplace operator is present, resulting in confinement-dependent instability thresholds for both longitudinal and transverse modes. In the limit \(R\to\infty\), we recover the unbounded results derived in [30]. Furthermore, we find that the confinement size determines the size of the eigenmodes which get excited, which will be an interesting result to compare to experimental data on active chromatin flows. We go beyond linear stability analysis by numerically integrating the equations of motion in a full three-dimensional domain. We find that the decomposition into transverse and longitudinal modes, which we performed for the sake of linear stability analysis, remains relevant in the nonlinear case. In particular we find that if initialized in a pure transverse or longitudinal state, the system will remain in that class of states (although the nonlinearity will mix transverse modes among one another and equivalently for longitudinal modes). Transverse modes are found to be stable under small perturbations, while longitudinal modes are stable under some but not others. It will be interesting, in a later study, to analyze in detail the basins of attraction and regimes of stability for all of these types of flows in the nonlinear Figure 4: Longitudinal oscillations are stable to late perturbations. The system is prepared in the same manner as before, in a pure longitudinal state, but at a prescribed time the transverse mode \(a_{10}\) is arbitrarily increased in magnitude to introduce a kick to teh system. For a short period the lowest longitudinal mode amplitude (orange) is affected, but it quickly settles back into its previous oscillatory behavior. regime. In sum, we find that the minimal active two-fluid model we have developed, exhibits rich and nontrivial behavior. It is worth asking whether the choice of no-slip boundary conditions is too restrictive to be an accurate description of the physics of chromatin in the nucleus. After all, the nuclear envelope has a complex nature and is itself actively fluctuating [40], which will significantly affect the possible motions of the fluids within. Furthermore, in this model we are not allowing any change in the total nuclear volume, or permeation of solvent through the boundary, both of which occur in real biological systems. 
These, and many other improvements on the model, could be added in a systematic way to this hydrodynamic description of chromatin dynamics, hopefully increasing our understanding of the complex physics of the genome. Acknowledgements.AZ is grateful for support from the NSF Grants CAREER PHY-1554880, CMMI-1762506, PHY-2210541, DMS-2153432, and NYU MRSEC DMR-1420073. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. AZ and AYG acknowledge useful discussions with participants of the 2020 virtual KITP program on "Biological Physics of Chromosomes". Numerical solutions were calculated using New York University (NYU) High Performance Computing cluster. Data Availability.Data sharing not applicable to this article as no datasets were generated or analysed during the current study. ## Appendix A Completeness of solenoidal solutions In the main text, we simply stated that the solenoidal solutions, that is those solutions with \(\nabla\cdot\mathbf{w}=0\), are spanned by the expansion \[\mathbf{w}_{\perp}=\sum_{lmn}a_{lmn}j_{l}\left(\frac{\alpha_{ln}r}{R}\right) \mathbf{\Phi}_{lm}\;.\] (A1) However, we did not show that there is no combination coming from the basis functions \(\mathbf{\Psi}_{lm},\;\mathbf{Y}_{lm}\). Here we will demonstrate that this is in fact the case. Consider one term in the expansion \(\mathbf{w}_{\perp}=E^{r}(r)\mathbf{Y}_{lm}+E^{\theta}(r)\mathbf{\Psi}_{lm}\). The divergence of this field is \[\nabla\cdot\mathbf{w}_{\perp} =\left(\partial_{r}E^{r}+\frac{2}{r}E^{r}-\frac{l(l+1)}{r}E^{ \theta}\right)Y_{lm}\] (A2) \[=0\;.\] Since the basis functions must be eigenfunctions of the spherical Laplacian and meet the no-slip boundary condition at \(r=R\), we must have \[E^{r}=Cj_{l}\left(\frac{r\alpha_{ln}}{R}\right)\] (A3) for some constant C. Enforcing the divergence-free condition, we solve for \(E^{\theta}\): \[E^{\theta}=C\left(\frac{2+l}{r}j_{l}\left(\frac{\alpha_{ln}r}{R}\right)-\frac{ \alpha_{ln}}{R}j_{l+1}\left(\frac{\alpha_{ln}r}{R}\right)\right)\] (A4) which cannot be made to meet the boundary condition at \(r=R\). Thus we have shown that the expansion (A1) spans possible divergence-free functions that meet our boundary conditions and are eigenfunctions of the spherical Laplace operator. ## Appendix B Numerical methods We wrote a numerical scheme in python which successfully integrates the equations of motion \[\begin{split}&\left(1-\lambda_{s}^{2}\nabla\nabla\cdot+\lambda^ {2}\nabla\times\nabla\times\right)\tau\partial_{t}\mathbf{w}-2\lambda_{d}^{2} \nabla\nabla\cdot\mathbf{w}\\ &\qquad=\frac{f\rho}{\zeta}\tau\partial_{t}\mathbf{m}\\ &\tau\partial_{t}\mathbf{m}=2(\mathbf{m}_{eq}(\mathbf{w})- \mathbf{m})\\ &\mathbf{m}_{eq}=\frac{\mathbf{w}\tau}{3a}\left(1-\frac{3}{5} \left(\frac{\mathbf{w}\tau}{3a}\right)^{2}\right)\;.\end{split}\] (B5) Given the initial fields \(w\) and \(m\) at time \(t=0\), we first compute their decomposition into normal modes, i.e., eigenfunctions (15). This process is outlined in further detail below. These modes are uncoupled in the equation of motion (4) for \(\mathbf{w}\), and so we can evolve the normal modes for \(\mathbf{w}\) one time-step directly using equations (16a,16b), discretized using an Adams-Bashforth two-step scheme. The resulting difference equations for \(a_{nlm},b_{nlm}\) are shown below. However, at each time-step, the field \(\mathbf{m}\) must also be updated, following equation (8). 
To do this, the normal modes are reassembled into the full spatial dependence of \(\mathbf{m},\mathbf{w}\), and equation (8) is evolved directly. Then the procedure above is repeated. Throughout this scheme, we used the scipy.special package for the definition of the spherical Bessel functions, but the vector spherical harmonics were implemented through the freely available package shtns, detailed in the work [42]. ### Time evolution The time evolution was implemented using an Adams-Bashforth two-step scheme for the source terms, combined with a mixed Crank-Nicolson scheme in the case of the equation for \(b_{nlm}\) to ensure stability (for the details on the definitions of these schemes, see [43]): \[\begin{split} a_{nlm}^{t+1}&=\Delta t\frac{\epsilon+1}{ 1+\left(\frac{\lambda\alpha_{ln}}{R}\right)^{2}}\left(\frac{3}{2}\left(p_{\perp nlm }\right)^{t}-\frac{1}{2}\left(p_{\perp nlm}\right)^{t-1}\right)\\ b_{nlm}^{t+1}&=\Delta t\frac{B_{ln}}{1+\Delta tA_{ ln}}\left(\frac{3}{2}\left(p_{\parallel nlm}\right)^{t}-\frac{1}{2}\left(p_{ \parallel nlm}\right)^{t-1}\right)\\ &\quad+\frac{1-\Delta tA_{ln}}{1+\Delta tA_{ln}}b_{nlm}^{t-1}\\ A_{ln}&=\frac{\left(\lambda_{d}\alpha_{ln}/R\right) ^{2}}{1+\left(\lambda_{s}\alpha_{ln}/R\right)^{2}},\;B_{ln}=\frac{\epsilon+1} {1+\left(\lambda_{s}\alpha_{ln}/R\right)^{2}}\;,\end{split}\] (B6) where we have defined the field \(\mathbf{p}=\partial_{t}\mathbf{m}\) as shorthand. ### Mode decomposition The remaining task is the conversion from spatial to spherical harmonic representations, and vice versa. The angular dependence of the decomposition is taken care of by the routines within the shtns package, however we had to implement our own decomposition of the radial part into Bessel functions. We will illustrate this whole process with a simple example. Consider a vector field \(\mathbf{f}(\mathbf{r})\), which we seek to decompose into the spherical harmonic and Bessel function basis, giving coefficients \(f_{\perp nlm},f_{\parallel nlm}\). We perform this decomposition in two steps. First, we loop over radial slices \(\{r_{i}\}\), and for each of these values we call shtns to decompose the vector field into spherical components \[\begin{split}\mathbf{f}(r_{i},\theta,\phi)&=\sum_{ lm}Q_{lm}(r_{i})\mathbf{Y}_{lm}(\theta,\phi)\\ &+S_{lm}(r_{i})\mathbf{\Psi}_{lm}(\theta,\phi)\\ &+T_{lm}(r_{i})\mathbf{\Phi}_{lm}(r_{i})\;,\end{split}\] (B7) which gives us a discrete radial representation of the functions \(Q_{lm}(r),S_{lm}(r),T_{lm}(r)\). We can directly read off that \(T_{lm}(r)=\sum_{nlm}f_{\perp nlm}j_{l}(\alpha_{ln}r/R)\). However, the definition of the longitudinal modes is more ambiguous. This is resolved by considering the gradient of a generic scalar field \(\rho(\mathbf{r})\) in terms of VSH: \[\nabla\left(\rho_{lm}(r)Y_{lm}\right)=\frac{\rho_{lm}(r)}{r}\mathbf{\Psi}_{lm }+\rho_{lm}^{\prime}(r)\mathbf{Y}_{lm}\] (B8) Therefore, for a given spherical harmonic mode we must have \[rS_{lm}(r)=\int_{0}^{r}Q_{lm}(r^{\prime})dr^{\prime}=\sum_{nlm}f_{\parallel nlm }j_{l}(\alpha_{ln}r/R)\;.\] (B9) For the mode where \(l=0,m=0\), we decompose \(Q_{00}\) to obtain the coefficients of the _derivative_ of \(j_{0}(r\alpha_{0n}/R)\), since \(\Psi_{00}=0\). Otherwise we decompose the function \(rS_{lm}(r)\) to get coefficients of \(j_{l}(r\alpha_{ln})\), as this is computationally simpler. 
Both in the case of longitudinal and transverse flows, the task is always reduced to the decomposition of a function defined over the interval \([0,1]\) into spherical Bessel functions \[\begin{split} f_{l}(x)&=\sum_{n}f_{ln}j_{l}\left(x \alpha_{ln}\right),\;l>0\\ f_{0}(x)&=\sum_{n}f_{0n}j_{0}^{\prime}\left(n\pi x \right)=-\sum_{n}n\pi f_{0n}j_{1}(n\pi x)\;.\end{split}\] (B10) where we have suppressed the index \(m\) as it does not affect this part of the calculation, and inserted the known roots \(\alpha_{0n}=n\pi\). To find the components \(f_{ln}\), we exploit the following orthogonality relation \[\begin{split}&\int_{0}^{1}j_{l}(x\alpha_{ln})j_{l}(x\alpha_{lm})x^ {2}dx=\\ &\frac{\delta_{nm}}{2}\times\begin{cases}-j_{l-1}(\alpha_{ln})j_ {l+1}(\alpha_{ln}),\;l>0\;.\\ \frac{1}{(n\pi)^{2}},\;l=0\end{cases}\end{split}\] (B11) Multiplying equations B10 by \(x^{2}\) and integrating, we obtain \[\begin{split} f_{ln}&=-2\frac{\int_{0}^{1}x^{2}j_{l}( x\alpha_{ln})f_{l}(x)dx}{j_{l-1}(\alpha_{ln})j_{l+1}(\alpha_{ln})},\;l>0\\ f_{0n}&=-2n\pi\int_{0}^{1}x^{2}j_{1}(n\pi x)f_{0}(x) dx\;.\end{split}\] (B12) These integrals are then implemented in our code. To perform the integrals over the discrete set of radii \(\{r_{i}\}\), we used Simpson's rule as implemented in scipy.integrate. The inverse operation, going from coefficients to the full spatial dependence, is far simpler: for each \(l\), we sum over Bessel functions multiplied by their coefficients \(f_{ln}\). This gives us the radial dependence of the coefficients \(Q_{lm}(r),S_{lm}(r),T_{lm}(r)\). We then use the shtns package to convert those back to spatial representations, radial shell by radial shell.
2306.14723
Noise and fluctuations in nanoscale gas flow
We theoretically calculate the fundamental noise that is present in gaseous (dilute fluid) flow in channels in the classical and degenerate quantum regime, where the Fermi-Dirac and Bose- Einstein distribution must be considered. Results for both regimes are analogous to their electrical counterparts. The quantum noise is calculated for a two terminal system and is a complicated function of the thermal and shot noise with the thermal noise dominating when $2k_BT >> m\Delta P$ and vice versa. The cumulant generating function for mass flow, which generates all the higher order statistics related to our mass flow distribution, is also derived and is used to find an expression for the third cumulant of flow across a fluidic channel.
J. Dastoor, D. M. Willerton, W. Reisner, G. Gervais
2023-06-26T14:19:52Z
http://arxiv.org/abs/2306.14723v1
# Noise and fluctuations in nanoscale gas flow ###### Abstract We theoretically calculate the fundamental noise that is present in gaseous (dilute fluid) flow in channels in the classical and degenerate quantum regime, where the Fermi-Dirac and Bose-Einstein distribution must be considered. Results for both regimes are analogous to their electrical counterparts. The quantum noise is calculated for a two terminal system and is a complicated function of the thermal and shot noise with the thermal noise dominating when \(2k_{B}T\rho\gg m\Delta P\) and vice versa. The cumulant generating function for mass flow, which generates all the higher order statistics related to our mass flow distribution, is also derived and is used to find an expression for the third cumulant of flow across a fluidic channel. _Keywords: nanopores, noise, fluctuations, mass flow_ ## I Introduction Nanoscale fluid transport in dilute (gaseous) regimes is of broad fundamental and engineering importance, relevant in diverse scenarios ranging from understanding fluid flow in quantum and classical regimes to industrial applications involving gas processing [1]. From a fundamental point of view, improvements in nanofabrication have enabled the production of nanochannels and nanopores with well-defined nanometric dimensions that can be used to verify classical theories for gas flow in free-molecular transport (Knudsen) regimes [2; 3; 4]; these measurements have been extended cryogenically to explore transport of quantum fluid phases of \({}^{4}\)He [5] where it is expected that a one-dimensional many-body quantum state would form [6]. From an engineering point of view, nanochannels and nanoporous materials, due to their high surface to volume ratio and pore sizes below the molecular mean free path and/or approaching molecular dimensions [7], have excellent absorptive properties and can exhibit size-based molecular sieving [7; 8; 9], useful for applications in gas separation [9] and catalysis [10]. Most experimental and theoretical efforts devoted to characterizing nanoscale gas transport have focused on modeling the gas mass flow-rate \(Q\), _e.g._[11; 12]. In analogy to the case of electrical transport, this is given by \(Q=G\Delta P\), where \(Q\) is the mass flow, \(G\) is the flow conductance, and \(\Delta P\) is the pressure difference across the channel or pore [3]. However, the gas mass flow-rate is not the only quantity of interest that can be extracted via monitoring a given mass flow channel. Just as is the case for electrical current, statistical fluctuations in the mass flow will exist (mass flow noise). These fluctuations are also of fundamental interest, for example providing new information about a system's fluidic properties in both classical and quantum regimes - _e.g._, its transmission properties or flow limitations. The mass flow noise will also have practical implications, limiting applications where mass-flow rate is used as a sensor by creating a noise floor that observable signals need to exceed. In addition, there might be scenarios where the degree of mass flow noise present could itself constitute the signal of interest. Finally, gas-flow fluctuations might affect the performance of gas based separation or catalysis devices, for example statistical fluctuations in the stream of a low concentration catalysis or inhibitor species might lead to large fluctuations in output. 
Considerable effort has been devoted in the past to improve our understanding of electrical noise; much of this insight can be adapted to the closely analogous case of mass flow noise. White current fluctuations arise due to thermal energy (Johnson-Nyquist noise) and the discrete nature of electrical charge (shot noise) [13]. These two noise sources are classically distinct, yet become interlinked in the quantum regime. The development of quantum shot noise theory has seen new applications in distinguishing particles from waves and future proposals for new entanglement detectors [14], and has led to the spectacular experimental validation of the effective quasiparticle charges of electrons confined to two dimensions in the fractional quantum Hall regime [15; 16]. Somewhat surprisingly, these sources of noise, which set fundamental limits on signals in the case of dilute mass flow, have been neglected, and so here we propose a theoretical calculation for the thermal noise of an ensemble of particles, forming a dilute gas, by adapting well-defined techniques developed for electrical noise. We also calculate the noise associated with a directed flow (shot noise) in the classical regime and then adapt our discussion to include quantum effects by combining thermal and shot noise. This quantum noise is sometimes referred to as the quantum shot noise and will be derived by adapting Martin and Landauer's wave-packet approach for electrical noise to mass flow noise [13]. This approach is chosen because it allows us to develop a quantum noise expression based on a similar process to our derivation of classical noise while straightforwardly incorporating the relevant quantum mechanical considerations. All our expressions are derived based on the assumption that the noise is distributed equally over all frequencies (i.e. that it is _white_) and conforms with a Gaussian distribution, which is expected based on analogy with the electrical case. We produce general results from the fundamental flow equation, \(Q=G\Delta P\), where \(Q\) is the mass flow, \(G\) is the flow conductance, and \(\Delta P\) is the pressure difference across the channel. In doing so, we neglect any consideration of turbulence since our concern is in small dilute fluidic systems with very low Reynolds numbers, \(Re\ll 2000\). It should however be noted that under certain conditions even small mesoscopic systems may be subject to turbulent flow, e.g. [17; 18; 19], and in such cases a new approach may be needed. We have also confined our analysis to idealized fluidic channels in which electromagnetic effects are negligible. We note that in many nanofluidic systems, charges on mass carriers and surface effects can heavily influence the mass flow and a new approach may be needed to address these cases as well [20; 21]. The noise, \(\delta Q\), will be calculated as a mean squared fluctuation, \(\langle\delta Q^{2}\rangle\), where \(\langle\dots\rangle\) refers to an average with respect to time. Recent work found that \(G\) is quantized in units of \(2m^{2}/h\), where \(m\) is the mass of a fluid particle and \(h\) is Planck's constant [22; 23]. The resemblance with the quantum of electrical conductance (\(2e^{2}/h\) where \(e\) is the electrical charge) highlights the close analogy between mass flow and charge flow. Other sources of noise in mesoscopic fluidic systems, such as the irregular motion of impurities, also become important to consider when building sensitive devices. 
While these sources are not explored here, the cumulant generating function of the quantum white noise is calculated to allow for easy combination with other independent noise sources in future scenarios. In addition, we also derive an expression for the third cumulant of quantum white noise. With growing interest in the theory of full counting statistics, these will both be valuable tools for future work. Finally, we verify our results using the fluctuation dissipation theorem, which relates the random fluctuations of a system at equilibrium to its response to a small perturbation. The theorem states that fluctuations occurring in equilibrium, _i.e._ in the absence of a net mass flow, are proportional to the channel conductance [24]. This provides a baseline test for our results. ## II Classical noise Noise in the classical regime is separated into thermal noise and shot noise contributions. In dilute fluidic systems, the shot noise originates from the discrete nature of an average mass flow signal and is thus formally only present out of equilibrium. The thermal noise, however, is expected to exist even at thermodynamic equilibrium as it is due to the innate, random motion of particles that is implied by the kinetic theory of gases and the Maxwell-Boltzmann distribution. We may conceptualize the ensuing random passage of particles across the pore as comprising an instantaneous flow rate that averages to zero over long times, i.e. \(\langle Q\rangle=0\), and is thus permitted to occur even in the absence of a net pressure differential, \(\Delta P\). We begin by considering the thermal fluctuations in a system at equilibrium by noting that when both terminals are in equilibrium with each other, there is no net energy transfer within the system and we can use the analogy of a standing wave in an open pipe. We know from equipartition that each standing wave mode has two degrees of freedom and therefore each mode has an average energy of \(k_{B}T\), where \(k_{B}\) is Boltzmann's constant and \(T\) is temperature. The total average energy of our system is found as \(\Delta jk_{B}T\), where \(\Delta j\) is the range of modes that our system can occupy. The occupied range of modes is characterised by the time taken for a particle to pass through the channel and by the frequency range, \(\Delta\nu\), that the standing waves can occupy. An equation for the average power can now be written using our previous results and the frequency of oscillation of our standing wave. Lastly, we equate this power expression with the instantaneous thermodynamic power of the fluid and derive an expression for the thermal noise in a fluid channel: \[\langle\delta Q^{2}\rangle=4k_{B}TG\rho\Delta\nu, \tag{1}\] where \(\rho\) is the mass density of the fluid. The fluctuation dissipation theorem is satisfied since the noise is proportional to the conductance. Note that equation 1 shows that thermal noise exists even when \(\langle Q\rangle=0\) (at equilibrium). The Johnson-Nyquist expression for the electrical current thermal noise, \(4k_{B}TR^{-1}\Delta\nu\) (where \(R\) is the electrical resistance), is analogous to equation 1 [25]. Figure 1: **a)** A cartoon of the flow through a cylindrical fluid channel. Flow is from the left side to the right side (source to the drain). **b)** Cartoon example of mass flow fluctuations about an average flow signal, \(\langle Q\rangle\), occurring in a dilute nanochannel. 
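To give a sense of the scale set by equation 1, the following is a minimal numerical sketch. All parameter values here (a \({}^{4}\)He carrier at room temperature, the conductance scale, the baseline pressure, and the bandwidth) are illustrative assumptions rather than values taken from the text.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the text)
k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # temperature, K
m = 6.6464731e-27             # mass of a 4He atom, kg
P0 = 1.0e3                    # baseline pressure, Pa
rho = m * P0 / (k_B * T)      # ideal-gas mass density, kg/m^3
G = 1.0e-15                   # flow conductance, kg/(s Pa) (assumed scale)
dnu = 1.0e3                   # measurement bandwidth, Hz

# Equation 1: mean-squared thermal mass-flow fluctuation
dQ2_thermal = 4.0 * k_B * T * G * rho * dnu
print(f"RMS thermal mass-flow noise: {np.sqrt(dQ2_thermal):.3e} kg/s")
```

The RMS value printed here is what a flow sensor with the stated bandwidth would see as an irreducible noise floor at equilibrium.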
In the case that there exists a nonzero pressure differential across the pore, there will be a net flow of mass from one reservoir to the other and equation 1 no longer suffices to describe the fluctuations. To formulate a noise expression that accommodates an arbitrary pressure differential across the pore, it is necessary to model the mass flow as consisting of a stream of discrete particles, much like an electrical current. We further note that, in the free-molecular flow regime, interparticle collisions are negligible and we can model the flow of particles as a Poisson process. We may then loosely follow van der Ziel's derivation for electrical shot noise, adapting the approach where necessary, to establish a comprehensive expression for the mass-flow fluctuations [26]. Suppose that on either side of a nanopore we hold reservoirs at fixed pressures, \(P_{A}\) and \(P_{B}\), such that \[P_{A}=\Delta P+P_{0};\:P_{B}=P_{0}, \tag{2}\] where we have identified reservoir \(A\) as the region of higher pressure and have used the pressure of \(B\) to mark a baseline, \(P_{0}\). Particles are transmitted across the pore in both directions, each with some associated instantaneous rate of occurrence, \(r(t)\), which fluctuates in time. Defining \(N\) to be the net number of particles passing through the pore in the direction of \(A\) to \(B\), we have \[N=\int_{0}^{\tau}[r_{A\to B}(t)-r_{B\to A}(t)]dt, \tag{3}\] for some time interval, \(\tau\). Note that a negative \(N\) signifies a net passage of particles from \(B\) to \(A\). We may also define the fluctuation in \(N\) as the instantaneous deviation from its average, \[\delta N=N-\left\langle N\right\rangle, \tag{4}\] with \[\left\langle N\right\rangle=\left\langle r_{A\to B}\right\rangle\tau-\left\langle r_{B\to A}\right\rangle\tau. \tag{5}\] The averages in the above expression may be interpreted as either ensemble or time averages, since these are equivalent for an ergodic process. We now define an additional random variable, \(\delta R_{\tau}\), corresponding to fluctuations in the net rate of particle transmissions as \[\delta R_{\tau}=\frac{\delta N}{\tau}. \tag{6}\] Noting that the variance of \(N\) is defined as \(Var(N)=\left\langle\delta N^{2}\right\rangle\), we may write that \[\left\langle\delta R_{\tau}^{2}\right\rangle =\frac{Var(N)}{\tau^{2}}=\frac{Var(N_{A\to B}-N_{B\to A})}{\tau^{2}} \tag{7}\] \[=\frac{Var(N_{A\to B})+Var(N_{B\to A})}{\tau^{2}}, \tag{8}\] where the final step follows from the fact that the variance of a difference of two Poisson variables is just the (positive) sum of the variances associated with each individual process. Additionally, because the variance of a Poisson process is equal to its mean, we have \[\left\langle\delta R_{\tau}^{2}\right\rangle=\frac{\left\langle r_{A\to B} \right\rangle+\left\langle r_{B\to A}\right\rangle}{\tau}. \tag{9}\] Applying the Wiener-Khintchine theorem allows us to extract the zero-frequency component of the noise spectral density as \[S_{R}(0)=\lim_{\tau\rightarrow\infty}2\tau\left\langle\delta R_{\tau}^{2} \right\rangle=2\left\langle r_{A\to B}\right\rangle+2\left\langle r_{B \to A}\right\rangle, \tag{10}\] which, for white noise, suffices to describe the entire spectrum. 
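The Poisson statistics underlying equations 9 and 10 are straightforward to check numerically. The sketch below simulates the two opposing transmission streams with assumed rates and compares the simulated variance of the rate fluctuations against equation 9; the rates, window length, and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed average transmission rates (particles/s) in each direction
r_AB, r_BA = 1.0e6, 0.4e6
tau = 1.0e-3          # counting window, s
n_windows = 200_000   # number of independent windows

# Net particle count per window: difference of two Poisson variables (eq. 3)
N = rng.poisson(r_AB * tau, n_windows) - rng.poisson(r_BA * tau, n_windows)
dR = (N - N.mean()) / tau                 # rate fluctuations, eq. 6

var_simulated = np.var(dR)
var_predicted = (r_AB + r_BA) / tau       # eq. 9
print(var_simulated / var_predicted)      # ~1, confirming eq. 9
# Zero-frequency spectral density, eq. 10: S_R(0) = 2*tau*<dR^2>
print(2 * tau * var_simulated, 2 * (r_AB + r_BA))
```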
We now make the conversion to units of mass-flow fluctuations by multiplying the spectral density by the mass of the fluid particles squared: \[S_{Q}(0)=2m\left\langle Q_{A\to B}\right\rangle+2m\left\langle Q_{B \to A}\right\rangle, \tag{11}\] where we have distributed one factor of \(m\) into each average to pass from particle flow rates to mass flow rates. We may alternatively write the above expression as \[S_{Q}(0)=2mGP_{A}+2mGP_{B}=2mG\Delta P+4mGP_{0}, \tag{12}\] which is legitimate because the transmission events are statistically independent and we may consider \(P_{A}\) and \(P_{B}\) separately, each constituting an effective pressure differential across the pore. The first term on the right contains the factor \(G\Delta P\), which we know to be the average net mass flow, \(\left\langle Q\right\rangle\). If we make this substitution and also replace \(P_{0}\) with \(nk_{B}T\) via the ideal gas law, then we may multiply through by an arbitrary frequency bandwidth, \(\Delta\nu\), to arrive at the result \[\left\langle\delta Q^{2}\right\rangle=2m\left\langle Q\right\rangle\Delta\nu+4Gk_{B}T\rho_{0}\Delta\nu, \tag{13}\] with \(\rho_{0}\) being defined as a baseline mass density that exists across both reservoirs. When cast in this form, the above expression lends itself to a straightforward analogy with electrical circuits. Firstly, we note that when \(\left\langle Q\right\rangle=0\), or equivalently when the two reservoirs exist in thermodynamic equilibrium, we recover equation 1, which was previously likened to the Johnson-Nyquist thermal noise in electrical resistors. We also note that the first term on the right may be identified with Schottky's result for electrical shot noise, \(\left\langle\delta I^{2}\right\rangle=2e\left\langle I\right\rangle\Delta\nu\)[27]. If we interpret the first term in equation 12 as the _mass-flow_ shot noise and the second term as the _mass-flow_ thermal noise, then we can construct the unitless ratio \[\frac{\left\langle\delta Q_{therm}^{2}\right\rangle}{\left\langle\delta Q_{ shot}^{2}\right\rangle}=\frac{2P_{0}}{\Delta P}, \tag{14}\] to identify the conditions under which each noise source is expected to dominate. From Figure 2 it is clear that when \(\Delta P\) is sufficiently small compared with \(P_{0}\), the thermal fluctuations are dominant. This is expected because the net flow, which gives rise to the shot noise, will be small compared with the opposing flows resulting from \(P_{0}\) and thus contribute much less to the overall fluctuations. When \(\Delta P\) is large compared with \(P_{0}\), the reverse is true and the shot noise is expected to dominate. ## III Quantum noise In this section we consider a system in which quantum effects are taken into account and the particles may no longer be treated independently. In this case, the thermal noise and shot noise are interlinked, and there is an inherent probability associated with transmission through the channel. We theoretically calculate the noise for a two-terminal cylindrical fluid channel, although this approach can easily be generalised to a multi-channel system. The shot noise can easily be adapted at zero temperature to account for the transmission probability, \(D\), by recognising that the Poissonian noise distribution becomes binomial. We also assume \(D\) to be energy independent for the rest of this paper. 
In this case, the fluctuations are given by: \[\langle\delta Q^{2}\rangle=2m\langle Q\rangle(1-D)\Delta\nu, \tag{15}\] where \(Q\) is now defined as the outgoing flow from the channel and hence implicitly absorbs a transmission probability factor. Note that the ingoing and outgoing flow now differ by a factor \(D\). When \(T>0\), we must consider thermal fluctuations in the incoming and outgoing flow. Assuming thermal equilibrium, the occupation of states at the source and drain is governed by the Fermi-Dirac distribution for fermions, and the Bose-Einstein distribution for bosons. Both will be referred to as \(f\) in their respective contexts, with chemical potentials \(\mu_{L}\) and \(\mu_{R}\), referring to the left and right side of the channel, where \(\mu_{L}-\mu_{R}=m\Delta P/\rho\). Transport from the left to right is defined as positive in quantized units of \(G\) where [22]: \[G=\frac{Q}{(\mu_{L}-\mu_{R})\rho/m}=\frac{2m^{2}}{h\rho}D. \tag{16}\] We now adapt Landauer's wave packet approach for electric circuits to mass flow to find an expression for the noise. In this approach, transmission and reflection are characterised by wave packets that are emitted from the source and the drain at a constant rate and each contain one quantum mechanical state [13]. Note that due to the Pauli exclusion principle, only two fermions can occupy this state (opposite spins) whereas there is no restriction for bosons. Packets are assumed to be emitted in phase and simultaneously from the source and drain, such that a transmitted wave from the source is able to map onto the same state as a reflected wave from the drain and vice versa. This method allows us to consider particles moving against the pressure gradient (which is more probable at low pressures). Using counting statistics we find: \[\langle\delta Q^{2}\rangle=\frac{4m^{2}}{h}\Delta\nu\int_{0}^{ \infty}dE \{D[f_{L}(1\mp f_{L})+f_{R}(1\mp f_{R})] \tag{17}\] \[\pm D(1-D)(f_{L}-f_{R})^{2}\},\] where \(f_{L}\) and \(f_{R}\) denote the distribution of particles at the source and drain respectively. Note that the upper sign is for fermions and the bottom is for bosons. The integral serves to include wave packets of all energies. For bosons, the integral diverges, as is also seen for bosons in electric circuits [24]. Hence, for the remainder of this section we will focus on fermions. For fermions, we have the exact result: \[\langle\delta Q^{2}\rangle= 4k_{B}TG\rho\Delta\nu D \tag{18}\] \[+2m\langle Q\rangle(1-D)\Delta\nu\coth\left(\frac{m\Delta P}{2k_{B}T\rho}\right).\] The first term is our classical thermal noise with a transmission factor, whereas the second term is our classical shot noise with a complicated cutoff factor. When \(k_{B}T\rho\gg m\Delta P\), the hyperbolic cotangent is approximated by the inverse of its argument and we recover our classical thermal noise expression given by equation 1. Hence, the quantum noise satisfies the fluctuation dissipation theorem as the equilibrium noise is proportional to the conductance. When \(k_{B}T\rho\ll m\Delta P\), the hyperbolic cotangent is approximately one and the second term dominates, so we recover our zero-temperature shot noise expression given by equation 15. This is further shown in Figure 3. Figure 2: The ratio of thermal noise to shot noise plotted against the pressure differential, \(\Delta P\), across a cylindrical nanopore. Various baseline pressures, \(P_{0}\), have been distinguished by line colour. 
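The thermal-to-shot crossover captured by equation 18 can be made concrete with a quick numerical evaluation; the sketch below scans temperature. All parameter values (a \({}^{4}\)He-like particle, the transmission probability, density, and pressure difference) are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions): a 4He-like particle, single channel
k_B, h = 1.380649e-23, 6.62607015e-34
m, D, dnu = 6.6465e-27, 0.5, 1.0e3     # mass (kg), transmission, bandwidth (Hz)
rho, dP = 0.18, 1.0e2                  # mass density (kg/m^3), pressure diff (Pa)

G = (2.0 * m**2 / (h * rho)) * D       # quantized conductance, eq. 16
Q = G * dP                             # mean mass flow

for T in (1e-6, 1e-3, 1.0):
    x = m * dP / (2.0 * k_B * T * rho)
    dQ2 = (4.0 * k_B * T * G * rho * dnu * D            # thermal-like term
           + 2.0 * m * Q * (1 - D) * dnu / np.tanh(x))  # shot-like term, eq. 18
    print(f"T = {T:.0e} K: RMS noise = {np.sqrt(dQ2):.3e} kg/s")
```

At the lowest temperature the hyperbolic cotangent saturates to one and the output reproduces the zero-temperature shot noise of equation 15; at the highest temperature the thermal term of equation 1 (with the transmission factor) dominates.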
The displayed range of pressures has been somewhat arbitrarily chosen based on typical conditions for free-molecular \({}^{4}\)He gas flow in nanopores of \(\leq 100\) nm diameters, such as those used in [3]. We also reiterate that, strictly speaking, this result is only valid in the free-molecular flow regime. ## IV Cumulants The cumulants provide an alternative description of a random variable to the probability distribution, where the first cumulant is the mean, the second is the variance, the third describes the asymmetry or skewness of the distribution, etc. For statistically independent variables, the cumulants are additive, making them extremely useful when seeking to sum over multiple independent sources of noise [28]. For this reason, we now provide a reformulation of our results for the quantum noise in terms of a cumulant generating function. The \(n\)-th cumulant, \(\kappa_{n}\), can be found by evaluating the \(n\)-th derivative of the cumulant generating function (CGF), \(K(t)\), at \(t=0\)[28]. For fermions, the CGF takes the form: \[K(t)= \frac{4}{h\Delta\nu}\int_{0}^{\infty}dE\ln\{\left(e^{m\Delta\nu t }-1\right)f_{L}D(1-f_{R}) \tag{19}\] \[\qquad+\left(e^{-m\Delta\nu t}-1\right)f_{R}D(1-f_{L})+1\}.\] The first two terms represent transport from the left to right or right to left of the channel, respectively. Note that if we attempt to derive a similar expression for bosons, all our integrals diverge and are hence meaningless. Focusing on fermions, the exponential mass factor is explained by recognising that the \(n\)-th cumulant can be thought of as a pairing of \(n\) independent variables, where each variable contributes a mass factor. The frequency factor results from the Fourier transform. Under the zero temperature limit, equation 19 becomes: \[\lim_{T\to 0}K(t)=\frac{4m}{\rho h\Delta\nu}\Delta P\ln\left\{1-D+De^{m \Delta\nu t}\right\}. \tag{20}\] This expression is, as expected, reminiscent of the well-known CGF of the binomial distribution, \(n\ln\{1-p+pe^{t}\}\), where \(n\) is the number of trials and \(p\) is the probability [28]. We are also able to use equation 20 to recover our zero-temperature shot noise, given by equation 15, from \(K^{\prime\prime}(0)\). On the other hand, if we calculate \(K^{\prime\prime}(0)\) from equation 19 when \(T>0\), we recover the fermion variance given by equation 18. Utilizing our fermion CGF, we can further improve our understanding of the distribution of mass flow by calculating the skewness (or third cumulant). \[\kappa_{3}=\frac{1}{2}mGk_{B}T(\Delta\nu)^{2}(1-D)\text{csch}^{2} \left(\frac{m\Delta P}{2k_{B}T\rho}\right)\] \[\times\left\{6D\rho\sinh\left(\frac{m\Delta P}{k_{B}T\rho} \right)+(2D-1)\frac{m\Delta P}{k_{B}T}\cosh\left(\frac{m\Delta P}{k_{B}T\rho} \right)\right.\] \[\left.-(1+4D)\frac{m\Delta P}{k_{B}T}\right\}. \tag{21}\] The expression above is complex, and no longer similar to our usual thermal and shot noise. Under the limit \(k_{B}T\rho\gg m\Delta P\), our expression for \(\kappa_{3}\) becomes \(4(G/\Delta P)(\rho k_{B}T)^{2}(2-D)(1-D)(\Delta\nu)^{2}\) and in the opposite limit it becomes \(m^{2}\langle Q\rangle(1-D)(2D-1)(\Delta\nu)^{2}\). In the classical limit, where \(D\to 1\), we see that \(\kappa_{3}=0\). Hence, departure from the classical regime can be detected as \(\kappa_{3}\neq 0\). This further shows the non-Gaussian nature of mass flow noise in the quantum regime, even under the assumption of zero frequency. 
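As a consistency check on the zero-temperature limit, equation 20 can be differentiated symbolically. The short sketch below verifies that the second cumulant of the zero-temperature CGF reproduces the shot-noise expression of equation 15, using the conductance of equation 16 for the mean flow.

```python
import sympy as sp

m, rho, h, dnu, dP, D, t = sp.symbols(
    'm rho h Delta_nu Delta_P D t', positive=True)

# Zero-temperature CGF, equation 20
K = (4 * m * dP / (rho * h * dnu)) * sp.log(1 - D + D * sp.exp(m * dnu * t))

kappa2 = sp.diff(K, t, 2).subs(t, 0)      # second cumulant = variance
Q = (2 * m**2 / (h * rho)) * D * dP       # mean flow, from eq. 16
shot = 2 * m * Q * (1 - D) * dnu          # equation 15
print(sp.simplify(kappa2 - shot))         # -> 0
```

Higher cumulants follow from the same generating function by taking further derivatives at \(t=0\).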
## V Discussion and outlook Our results demonstrate that flow in nanofluidic channels is subject to innate mass-flow fluctuations, which vary depending on the conditions of the system and may be a relevant consideration in the study and applications of nanofluidics. In the previous sections we have derived classical and quantum expressions for the white noise prevalent in mass flow in two-terminal fluid channels. We have also determined the CGF for fermions and provided expressions for \(\kappa_{3}\). Notably, even though we have only assumed white noise, our expressions should still be valid at low frequencies where \(k_{B}T\gg h\nu\). However, at high frequencies where \(k_{B}T\approx h\nu\), one would expect quantum effects and must replace the classical expression for average energy \(k_{B}T\) with its quantum version: \(h\nu/[\exp{(h\nu/k_{B}T)}-1]\). Similar substitutions have been made for the electrical case of a quantum point contact, and have been found to agree with experiments [29]. Generalization to multi-channel systems is possible by summing over the different transmission probabilities corresponding to each channel. The number of channels in a typical pipe is \(A/{\lambda_{F}}^{2}\), where \(A\) is the cross-section of the pipe and \(\lambda_{F}\) is the Fermi wavelength. Hence, with larger channels this will become more important.

Figure 3: The root mean square (RMS) mass flow fluctuations for fermions in a two terminal system plotted against the log of temperature using arbitrary parameters. The classical thermal noise and zero-temperature shot noise have been plotted as dashed lines for comparison. Note that the intercepts with the \(Q_{RMS}\) axis occur at zero and the square root of equation 15.

Furthermore, there are a number of different sources of noise and types of noise that have yet to be studied for mass flow. For instance, there may be extrinsic sources of noise that are sensitive to boundary layer effects and system imperfections. These can be studied with a specific system in mind. Additionally, noise inversely proportional to frequency, traditionally called \(1/f\) noise, is common in electric circuitry and can also be studied here. Turbulent flow might also impact our expressions for noise and should be studied in more detail. Note that our calculations assume that the particles colliding with device surfaces undergo specular rather than diffuse reflections. This assumption might not hold for measurements performed in long nanochannels fabricated via classic nanomachining approaches as the device surfaces in these cases are not atomically smooth [30]. However, there is extensive interest in gas transport in materials possessing atomically smooth surfaces [7], such as channels formed from carbon nanotubes [31], graphene [8], MoS\({}_{2}\) and h-BN [32]. These materials lead to enhanced gas transport while not losing their separation selectivity [7]. It is also interesting to consider real world applications where both ends of the channel might not be at thermal equilibrium. A temperature difference can be incorporated into our method by adapting \(f_{L}\) and \(f_{R}\). However, adding a more complex gradient is an interesting problem that requires more theoretical consideration. In this case, cumulants could be used to sum up many distributions and account for varying temperature differences. ## VI Conclusion We have theoretically calculated the classical and quantum noise, and quantum CGF, for mass flow in a dilute fluid channel. 
The result is found to be analogous to previous calculations for electrical noise and obeys the fluctuation dissipation theorem. We have also used the CGF to determine the third cumulant which, even at zero frequency, is non-zero. This shows the non-Gaussian nature of mass-flow noise. Although our results are mathematically similar to the electric case, the adaptation of methods for electrical noise to mass flow is important. The classical, free-molecular flow noise can be thought of as the sum of the full shot noises associated with two opposing mass flow currents, which each originate from the underlying Poisson statistics of the discrete mass carriers. The quantum noise may be similarly understood, with the acknowledgement that in this case the statistics of the particles must conform with the corresponding quantum mechanical prescription, thus leading to interparticle interactions which were not accounted for in the classical case. The theoretical prescription presented here can be used to calculate the theoretical minimum noise expected in dilute fluidic channels, or help justify experimentally observed signal fluctuations. We believe that these insights are important to understand smaller and more complex fluid systems and thus may be of use in areas of nanofluidics where noise considerations are of crucial importance, such as in certain sensing technologies [33]. We hope that this work may serve as a stepping stone for more elaborate noise sources to be considered in the future. For instance, future work may include an extension of our classical expressions into other gas flow regimes, such as into the transition and continuum regimes, since strictly speaking we have confined our classical analysis to free-molecular flow. The noise theory of dilute fluid channels should also be extended to high frequencies, considering other sources of noise, and adapting the results to fit more realistic systems with temperature gradients and numerous channels. ###### Acknowledgements. This work has been financially supported by NSERC (Canada), the New Frontier in Research Fund (Canada), FRQNT (Quebec) and the McGill Tomlinson fund.
2307.03176
Learning Curves for Noisy Heterogeneous Feature-Subsampled Ridge Ensembles
Feature bagging is a well-established ensembling method which aims to reduce prediction variance by combining predictions of many estimators trained on subsets or projections of features. Here, we develop a theory of feature-bagging in noisy least-squares ridge ensembles and simplify the resulting learning curves in the special case of equicorrelated data. Using analytical learning curves, we demonstrate that subsampling shifts the double-descent peak of a linear predictor. This leads us to introduce heterogeneous feature ensembling, with estimators built on varying numbers of feature dimensions, as a computationally efficient method to mitigate double-descent. Then, we compare the performance of a feature-subsampling ensemble to a single linear predictor, describing a trade-off between noise amplification due to subsampling and noise reduction due to ensembling. Our qualitative insights carry over to linear classifiers applied to image classification tasks with realistic datasets constructed using a state-of-the-art deep learning feature map.
Benjamin S. Ruben, Cengiz Pehlevan
2023-07-06T17:56:06Z
http://arxiv.org/abs/2307.03176v3
# Learning Curves for Heterogeneous Feature-Subsampled Ridge Ensembles ###### Abstract Feature bagging is a well-established ensembling method which aims to reduce prediction variance by training estimators in an ensemble on random subsamples or projections of features. Typically, ensembles are chosen to be homogeneous, in the sense that the number of feature dimensions available to an estimator is uniform across the ensemble. Here, we introduce heterogeneous feature ensembling, with estimators built on varying numbers of feature dimensions, and consider its performance in a linear regression setting. We study an ensemble of linear predictors, each fit using ridge regression on a subset of the available features. We allow the number of features included in these subsets to vary. Using the replica trick from statistical physics, we derive learning curves for ridge ensembles with deterministic linear masks. We obtain explicit expressions for the learning curves in the case of equicorrelated data with an isotropic feature noise. Using the derived expressions, we investigate the effect of subsampling and ensembling, finding sharp transitions in the optimal ensembling strategy in the parameter space of noise level, data correlations, and data-task alignment. Finally, we suggest variable-dimension feature bagging as a strategy to mitigate double descent for robust machine learning in practice. ## I Introduction Ensembling methods, where one combines predictions from multiple predictors to achieve a stronger prediction, are ubiquitous in machine learning practice [1]. A popular class of ensembling methods (known as attribute bagging [2] as well as the random subspace method [3]) is based on feature subsampling [2; 3; 4; 5; 6]: each predictor has access to only a subset of the data features, is independently trained on those features, and the predictions are combined to achieve a stronger prediction. For example, the popular random forest method makes use of this strategy [3; 7]. An advantage of these methods is that they allow parallel processing. For example, Feature-Distributed Machine Learning combines independent predictions made by agents who only see subsets of available features [8]. While commonly used in practice, a theoretical understanding of ensembling via feature subsampling is not well developed. Here, we provide an analysis of this technique in the case of feature-subsampled linear ridge regression using methods from statistical physics [9; 10; 11; 12]. This allows us to obtain analytical expressions for typical-case performance of feature-subsampled linear ridge regression. Analysis of these equations under special cases reveals interesting phenomena involving effects of noise, regularization, and subsampling on prediction performance. Our findings relate to double-descent [13; 14], which results from over-fitting to noise and poses a serious problem for practical machine learning. Regularization is commonly used to mitigate double descent; however, optimal regularization strength depends on data and noise levels [15; 16]. Our theory reveals an alternative strategy. We observe that subsampling shifts the location of a predictor's sample-wise double-descent peak [14; 16; 17]. An interesting consequence of this is that if the predictors are heterogeneous in the number of features they see, they will go through double-descent at different sample-sizes. 
Therefore, bagging them will lead to a mitigation of double-descent, as when one predictor fails, the others will compensate with accurate predictions. In summary, we make the following original contributions: * Using the replica trick from statistical physics [9; 11], we derive the generalization error of ensembled least-squares ridge regression with random structured Gaussian data, deterministic feature maps, and a noisy linear teacher function. Our derivation allows for heterogeneity in the rank of the feature maps of the ensemble members. * We derive explicit formulas which demonstrate that subsampling alters the interpolation threshold of ridge regression. * We demonstrate benefits of heterogeneous ensembling as a robust method for mitigating double-descent. * We analyze the role of data correlations, readout noise, and data-task alignment in determining the optimal ensembling strategy in a tractable special case. **Related works:** A substantial body of work has elucidated the behavior of linear predictors for a variety of feature maps [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Several recent works have extended this research to characterize the behavior of ensembled regression using solvable models [25; 30; 31]. Ref. [30] derives expressions for the generalization error of generalized linear models, of which ridge ensembles are a special case, in terms of the solutions to a set of self-consistent equations. However, [30] and [25] focus their analysis on the case of isotropic data and Gaussian random masks of _homogeneous_ dimensionality. In contrast, we explicitly consider learning from correlated data by ensembles with heterogeneous readout dimensionality. Our work focuses on the effect of feature-wise subsampling. Additional recent works study the performance of ridge ensembles with example-wise subsampling [32; 33] and simultaneous subsampling of features and examples [31]. These works find that subsampling behaves as an implicit regularization, and prove equivalences between optimal ensembling and optimal regularization. In a similar vein, we consider here ensembling as a safeguard against insufficient regularization. Methods from statistical physics have long been used for machine learning theory [10; 11; 12]. Relevant work in this domain includes [34], which studied ensembling by data-subsampling in linear regression. ## II Learning curves for ensembled ridge regression from the replica method We consider noisy ensembled ridge regression in the setting where ensemble members are trained independently on masked versions of the available features. We derive our main analytical formula for generalization error of ensembled linear regression, as well as analytical expressions for generalization error in the special case of subsampling of equicorrelated features. Later sections illustrate the implications of the derived formulas. ### Problem Setup Consider a training set \(\mathcal{D}=\{\bar{\mathbf{\psi}}_{\mu},y^{\mu}\}_{\mu=1}^{P}\) of size \(P\). The training examples \(\bar{\mathbf{\psi}}_{\mu}\in\mathbb{R}^{M}\) are drawn from a Gaussian distribution with Gaussian feature noise: \(\bar{\mathbf{\psi}}_{\mu}=\mathbf{\psi}_{\mu}+\mathbf{\sigma}_{\mu}\), where \(\mathbf{\psi}_{\mu}\sim\mathcal{N}(0,\mathbf{\Sigma}_{s})\) and \(\mathbf{\sigma}_{\mu}\sim\mathcal{N}(0,\mathbf{\Sigma}_{0})\). Data and noise are drawn i.i.d. 
so that \(\mathbb{E}\left[\mathbf{\psi}_{\mu}\mathbf{\psi}_{\nu}^{\top}\right]=\delta_{\mu\nu}\mathbf{\Sigma}_{s}\) and \(\mathbb{E}\left[\mathbf{\sigma}_{\mu}\mathbf{\sigma}_{\nu}^{\top}\right]=\delta_{\mu\nu}\mathbf{\Sigma}_{0}\). Labels are generated from a noisy teacher function \(y_{\mu}=\frac{1}{\sqrt{M}}\mathbf{w}^{*\top}\mathbf{\psi}_{\mu}+\epsilon^{\mu}\) where \(\epsilon^{\mu}\sim\mathcal{N}(0,\zeta^{2})\). Label noises are drawn i.i.d. so that \(\mathbb{E}[\epsilon^{\mu}\epsilon^{\nu}]=\delta_{\mu\nu}\zeta^{2}\). We seek to analyze the quality of predictions which are averaged over an ensemble of ridge regression models, each with access to a subset of the features. We consider \(k\) linear predictors with weights \(\hat{\mathbf{w}}_{r}\in\mathbb{R}^{N_{r}}\), \(r=1,\ldots,k\). Critically, we allow \(N_{r}\neq N_{r^{\prime}}\) for \(r\neq r^{\prime}\), which allows us to introduce _structural_ heterogeneity into the ensemble of predictors. A forward pass of the model is given as: \[f(\mathbf{\psi})=\frac{1}{k}\sum_{r=1}^{k}f_{r}(\mathbf{\psi}),\qquad f_{r}(\mathbf{\psi}) =\frac{1}{\sqrt{N_{r}}}\hat{\mathbf{w}}_{r}^{\top}\mathbf{A}_{r}(\mathbf{\psi}+\mathbf{ \sigma})+\xi_{r}. \tag{1}\] The model's prediction \(f(\mathbf{\psi})\) is an average over \(k\) linear predictors. The "measurement matrices" \(\mathbf{A}_{r}\in\mathbb{R}^{N_{r}\times M}\) act as linear masks restricting the information about the features available to each member of the ensemble. Subsampling may be implemented by choosing the rows of each \(A_{r}\) to coincide with the rows of the identity matrix - the row indices corresponding to indices of the sampled features. The feature noise \(\mathbf{\sigma}\sim\mathcal{N}(0,\mathbf{\Sigma}_{0})\) and the readout noises \(\xi_{r}\sim\mathcal{N}(0,\eta_{r}^{2})\) are drawn independently at the execution of each forward pass of the model. Note that while the feature noise is shared across the ensemble, readout noise is drawn independently for each readout: \(\mathbb{E}[\xi_{r}\xi_{r^{\prime}}]=\delta_{rr^{\prime}}\eta_{r}^{2}\). The weight vectors are trained separately in order to minimize a regular least-squares loss function with ridge regularization: \[\hat{\mathbf{w}}_{r}=\operatorname*{arg\,min}_{\mathbf{w}_{r}\in\mathbb{R}^{N_{r}}} \left[\sum_{\mu=1}^{P}\left(\frac{1}{\sqrt{N_{r}}}\mathbf{w}_{r}^{\top}\mathbf{A}_{r} \bar{\mathbf{\psi}}_{\mu}+\xi_{r}^{\mu}-y_{\mu}\right)^{2}+\lambda\|\mathbf{w}_{r}\|^{2}\right] \tag{2}\] Here \(\{\xi_{r}^{\mu}\}\) represents the readout noise which is present during training, and independently drawn: \(\xi_{r}^{\mu}\sim\mathcal{N}(0,\eta_{r}^{2})\), \(\mathbb{E}[\xi_{r}^{\mu}\xi_{r}^{\nu}]=\eta_{r}^{2}\delta_{\mu\nu}\). As a measure of model performance, we consider the generalization error, given by the mean-squared-error (MSE) of the ensemble-averaged prediction: \[E_{g}(\mathcal{D})=\left\langle\left(f(\mathbf{\psi})-\frac{1}{\sqrt{M}}\mathbf{w}^{* \top}\mathbf{\psi}\right)^{2}\right\rangle \tag{3}\] Here, the angular brackets represent an average over the data distribution and noise: \(\mathbf{\psi}\sim\mathcal{N}(0,\mathbf{\Sigma}_{s})\), \(\mathbf{\sigma}\sim\mathcal{N}(0,\mathbf{\Sigma}_{0})\), \(\xi_{r}\sim\mathcal{N}(0,\eta_{r}^{2})\). The generalization error depends on the particular realization of the dataset \(\mathcal{D}\) through the learned weights \(\{\hat{\mathbf{w}}_{r}\}\). 
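This setup is straightforward to simulate directly. The sketch below is a minimal illustration of eqs. 1-3, under the simplifying assumptions of isotropic data (\(\mathbf{\Sigma}_{s}=\mathbf{I}_{M}\)), no feature noise (\(\mathbf{\Sigma}_{0}=0\)), and non-overlapping one-hot masks; the sizes, noise scales, and seed are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, k, lam = 600, 300, 3, 1e-2      # features, samples, readouts, ridge
zeta, eta = 0.1, 0.1                  # label noise / readout noise scales

w_star = rng.normal(size=M)
X = rng.normal(size=(P, M))           # psi_mu ~ N(0, I): isotropic special case
y = X @ w_star / np.sqrt(M) + zeta * rng.normal(size=P)

# Non-overlapping one-hot masks A_r: split the features into k random blocks
blocks = np.array_split(rng.permutation(M), k)

# Train each readout by ridge regression (eq. 2); the training readout noise
# acts as extra label noise seen by that readout.
weights = []
for idx in blocks:
    N_r = len(idx)
    Xr = X[:, idx] / np.sqrt(N_r)
    y_r = y - eta * rng.normal(size=P)
    weights.append(np.linalg.solve(Xr.T @ Xr + lam * np.eye(N_r), Xr.T @ y_r))

# Ensemble prediction (eq. 1) and generalization error (eq. 3)
X_test = rng.normal(size=(2000, M))
f = np.zeros(len(X_test))
for idx, w_r in zip(blocks, weights):
    f += X_test[:, idx] @ w_r / np.sqrt(len(idx)) + eta * rng.normal(size=len(X_test))
f /= k
E_g = np.mean((f - X_test @ w_star / np.sqrt(M))**2)
print(f"estimated generalization error: {E_g:.3f}")
```

Repeating this over many dataset realizations and sweeping \(P\) traces out the empirical learning curves that the theory below predicts.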
We may decompose the generalization error as follows: \[E_{g}(\mathcal{D}) =\frac{1}{k^{2}}\sum_{r,r^{\prime}=1}^{k}E_{rr^{\prime}}(\mathcal{D}) \tag{4}\] \[E_{rr^{\prime}}(\mathcal{D}) \equiv\frac{1}{M}\left[\left(\frac{1}{\sqrt{\nu_{rr}}}\mathbf{A}_{r}^ {\top}\hat{\mathbf{w}}_{r}-\mathbf{w}^{*}\right)^{\top}\mathbf{\Sigma}_{s}\left(\frac{1}{ \sqrt{\nu_{r^{\prime}r^{\prime}}}}\mathbf{A}_{r^{\prime}}^{\top}\hat{\mathbf{w}}_{r^{ \prime}}-\mathbf{w}^{*}\right)\right.\] (5) \[\qquad\qquad\left.+\frac{1}{\sqrt{\nu_{rr}\nu_{r^{\prime}r^{\prime}}}}\hat{\mathbf{w}}_{r}^{\top}\mathbf{A}_{r}\mathbf{\Sigma}_{0}\mathbf{A}_{r^{\prime}} ^{\top}\hat{\mathbf{w}}_{r^{\prime}}+M\delta_{rr^{\prime}}\eta_{r}^{2}\right]\] Computing the generalization error of the model is then a matter of calculating \(E_{rr^{\prime}}\) in the cases where \(r=r^{\prime}\) and \(r\neq r^{\prime}\). Furthermore, in the asymptotic limit we consider, we expect that the generalization error concentrates over randomly drawn datasets \(\mathcal{D}\). ### Main Result We calculate the generalization error using the replica trick from statistical physics. The result of our calculation is stated in proposition 1. The proof is lengthy, and can be found in the SI. **Proposition 1**.: _Consider the ensembled ridge regression problem described in Section II.1. Consider the asymptotic limit where \(M,P,\{N_{r}\}\rightarrow\infty\) while the ratios \(\alpha=\frac{P}{M}\) and \(\nu_{rr}=\frac{N_{r}}{M}\), \(r=1,\ldots,k\) remain fixed. Define the following quantities:_ \[\tilde{\mathbf{\Sigma}}_{rr^{\prime}} \equiv\frac{1}{\sqrt{\nu_{rr}\nu_{r^{\prime}r^{\prime}}}}\mathbf{A}_ {r}[\mathbf{\Sigma}_{s}+\mathbf{\Sigma}_{0}]\mathbf{A}_{r^{\prime}}^{\top} \tag{6}\] \[\mathbf{G}_{r} \equiv\mathbf{I}_{N_{r}}+\hat{q}_{r}\tilde{\mathbf{\Sigma}}_{rr}\] (7) \[\gamma_{rr^{\prime}} \equiv\frac{\alpha}{M(\lambda+q_{r})(\lambda+q_{r^{\prime}})} \operatorname{tr}\left[\mathbf{G}_{r}^{-1}\tilde{\mathbf{\Sigma}}_{rr^{\prime}}\mathbf{G} _{r^{\prime}}^{-1}\tilde{\mathbf{\Sigma}}_{r^{\prime}r}\right] \tag{8}\] _Then the average generalization error may be written as:_ \[\langle E_{g}(\mathcal{D})\rangle_{\mathcal{D}}=\frac{1}{k^{2}}\sum_{r,r^{ \prime}=1}^{k}\langle E_{rr^{\prime}}(\mathcal{D})\rangle_{\mathcal{D}}, \tag{9}\] Figure 1: Comparison between numerically calculated generalization error and theoretical prediction. Dots show results of numerical experiment. Lines are theoretical prediction. (a) Numerical experiment with \(\left[\mathbf{\Sigma}_{s}\right]_{ij}=0.8^{|i-j|}\), \(\left[\mathbf{\Sigma}_{0}\right]_{ij}=\frac{1}{10}(0.3)^{|i-j|}\), \(\zeta=0.1\), \(\eta=0.2\). We set \(k=3\) with \(\nu_{1}=0.2\), \(\nu_{2}=0.4\), \(\nu_{3}=0.6\). Subsets of feature neurons accessed by each readout are drawn randomly and are permitted to overlap (see inset). Circular markers show the result of numerical experiment with \(M=2000\) feature neurons averaged over 100 trials. Curve shows theoretical prediction, obtained by solving the saddle-point equations 11 numerically. Theory and experiment conducted with a fixed ground-truth readout \(\mathbf{w}^{*}\) drawn randomly from an isotropic standard Gaussian distribution. (b) Numerical experiment with \(\left[\mathbf{\Sigma}_{s}\right]_{ij}=(0.6)\delta_{ij}+0.4\), \(\left[\mathbf{\Sigma}_{0}\right]_{ij}=0.1\delta_{ij}\), \(\zeta=0.1\), \(\eta=0.1\). Ground truth weights are randomly sampled in each trial as in eq. 12 with \(\rho=0.3\). We set \(k=3\) with \(\nu_{1}=0.1\), \(\nu_{2}=0.3\), \(\nu_{3}=0.5\). 
Subsets of feature neurons accessed by each readout are mutually exclusive (see inset). Circular markers show the result of numerical experiment with \(M=5000\) feature neurons averaged over 100 trials. Error bars show the standard error of the mean, and are smaller than the markers. Curve shows analytical prediction obtained in the case of equicorrelated features. _where_ \[\begin{split}\langle E_{rr^{\prime}}(\mathcal{D})\rangle_{\mathcal{D}}=& \frac{\gamma_{rr^{\prime}}\zeta^{2}+\delta_{rr^{\prime}}\eta_{r}^{2}}{1- \gamma_{rr^{\prime}}}+\frac{1}{1-\gamma_{rr^{\prime}}}\left(\frac{1}{M}\mathbf{w}^{ *\top}\mathbf{\Sigma}_{s}\mathbf{w}^{*}\right)\\ &-\frac{1}{M(1-\gamma_{rr^{\prime}})}\mathbf{w}^{*\top}\mathbf{\Sigma}_{s }\left[\frac{1}{\nu_{rr}}\hat{q}_{r}\mathbf{A}_{r}^{\top}\mathbf{G}_{r}^{-1}\mathbf{A}_{r} +\frac{1}{\nu_{r^{\prime}r^{\prime}}}\hat{q}_{r^{\prime}}\mathbf{A}_{r^{\prime}}^ {\top}\mathbf{G}_{r^{\prime}}^{-1}\mathbf{A}_{r^{\prime}}\right]\mathbf{\Sigma}_{s}\mathbf{w} ^{*}\\ &+\frac{\hat{q}_{r}\hat{q}_{r^{\prime}}}{M(1-\gamma_{rr^{\prime}} )}\frac{1}{\sqrt{\nu_{rr}\nu_{r^{\prime}r^{\prime}}}}\mathbf{w}^{*\top}\mathbf{ \Sigma}_{s}\mathbf{A}_{r}^{\top}\mathbf{G}_{r}^{-1}\tilde{\mathbf{\Sigma}}_{rr^{\prime}}\mathbf{G}_{r^ {\prime}}^{-1}\mathbf{A}_{r^{\prime}}\mathbf{\Sigma}_{s}\mathbf{w}^{*}\end{split} \tag{10}\] _where the pairs of order parameters \(\{q_{r},\hat{q}_{r}\}\) for \(r=1,\ldots,k\), satisfy the following self-consistent saddle-point equations_ \[\hat{q}_{r}=\frac{\alpha}{\lambda+q_{r}},\qquad q_{r}=\frac{1}{M}\operatorname {tr}\left[\mathbf{G}_{r}^{-1}\mathbf{\tilde{\Sigma}}_{rr}\right]. \tag{11}\] Proof.: We calculate the terms in the generalization error using the replica trick from the statistical physics of disordered systems. The full derivation may be found in the supplemental material. We make several remarks on this result: _Remark 1_.: This is a highly general result which applies to any selection of linear masks \(\{\mathbf{A}_{r}\}\). However, we will focus on the case where the \(\{\mathbf{A}_{r}\}\) implement subsampling of the feature neurons. _Remark 2_.: Our result reduces to the results of [35] when \(k=1\) and \(\eta=0\), and may be obtained as a special case of [36] in this limit. In the case where all readout weights have the same dimension \(N_{r}=N,r=1,\ldots,k\), this result may be obtained as a special case of the results of [30]. The novelty in our derivation (and subsequent analysis) is to consider heterogeneity in the values of \(N_{r}\). _Remark 3_.: The replica trick [37] is a non-rigorous but standard heuristic in the study of disordered systems. We confirm our results in simulations. In Figure 1(a), we confirm the result of the general calculation by comparing with numerical experiments. Experimental curves are generated by running ridge regression on randomly drawn datasets with \(M=2000\) features and averaging over the resulting error. We use highly structured data, feature noise, label noise, and readout noise (see caption for details). Each of \(k=3\) readouts sees a fixed but randomly drawn subset of features. Theory curves are calculated by solving the fixed-point equations 11 numerically for the chosen \(\mathbf{\Sigma}_{s}\), \(\mathbf{\Sigma}_{0}\) and \(\{\mathbf{A}_{r}\}_{r=1}^{k}\), then plugging the resulting order parameters into eq. 10. 
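In practice, the saddle-point equations 11 can be solved by damped fixed-point iteration. The sketch below is one possible implementation; the covariance passed in is an assumed isotropic example, for which the closed form \(q_{r}=a\nu_{rr}/(\nu_{rr}+a\hat{q}_{r})\) derived later (eq. 20) provides a check.

```python
import numpy as np

def solve_saddle_point(Sigma_rr, M, alpha, lam, tol=1e-12, max_iter=100_000):
    """Damped fixed-point iteration of the saddle-point equations 11.
    Sigma_rr : the N_r x N_r matrix tilde-Sigma_rr of eq. 6
    M        : total number of features (alpha = P/M, lam = ridge strength)."""
    evals = np.linalg.eigvalsh(Sigma_rr)       # the trace only needs the spectrum
    q = 1.0
    for _ in range(max_iter):
        q_hat = alpha / (lam + q)
        q_new = np.sum(evals / (1.0 + q_hat * evals)) / M
        if abs(q_new - q) < tol:
            break
        q = 0.5 * (q + q_new)                  # damping for stability
    return q, alpha / (lam + q)

# Assumed example: a single readout sampling a fraction nu of features with
# isotropic covariance of scale a, so tilde-Sigma_rr = (a / nu) * I.
M, nu, alpha, lam, a = 1000, 0.4, 0.75, 1e-3, 1.0
q, q_hat = solve_saddle_point((a / nu) * np.eye(int(nu * M)), M, alpha, lam)
print(q, q_hat)
print(a * nu / (nu + a * q_hat))   # closed form of eq. 20; should match q
```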
### Equicorrelated Data Our general result allows the freedom to tune many important parameters of the learning problem: the correlation structure of the dataset, the number of ensemble members, the scales of noise, etc. However, the derived expressions are rather opaque, as they depend on the solution to a set of in general analytically intractable self-consistent equations for the order parameters. In order to better understand the phenomena captured by these expressions, we simplify them in the tractable special case in which features of the data are equicorrelated: **Proposition 2**.: _Consider the ensembled ridge regression problem described in section II.1, and the result of proposition 1. Consider the special case in which we select the following parameters:_ \[\mathbf{w}^{*} =\sqrt{1-\rho^{2}}\mathbb{P}_{\perp}\mathbf{w}_{0}^{*}+\rho\mathbf{1}_{M} \tag{12}\] \[\mathbf{w}_{0}^{*} \sim\mathcal{N}(0,\mathbf{I}_{M})\] (13) \[\mathbf{\Sigma}_{s} =s\left[(1-c)\mathbf{I}_{M}+c\mathbf{1}_{M}\mathbf{1}_{M}^{\top}\right]\] (14) \[\mathbf{\Sigma}_{0} =\omega\mathbf{I}_{M} \tag{15}\] _with \(c\in[0,1],\rho\in[-1,1]\). A label noise scale \(\zeta\geq 0\) and readout noise scales \(\eta_{r}\geq 0\) are permitted. Here \(\mathbb{P}_{\perp}=\mathbf{I}_{M}-\frac{1}{M}\mathbf{1}_{M}\mathbf{1}_{M}^{\top}\) is a projection matrix which removes the component of \(\mathbf{w}_{0}^{*}\) which is parallel to \(\mathbf{1}_{M}\). The measurement matrices \(\{\mathbf{A}_{r}\}_{r=1}^{k}\) have rows consisting of distinct one-hot vectors so that each of the \(k\) readouts has access to a subset of \(N_{r}=\nu_{rr}M\) features. For \(r\neq r^{\prime}\), denote by \(n_{rr^{\prime}}\) the number of neurons sampled by both \(\mathbf{A}_{r}\) and \(\mathbf{A}_{r^{\prime}}\) and let \(\nu_{rr^{\prime}}\equiv n_{rr^{\prime}}/M\) remain fixed as \(M\to\infty\)._ _In this case, we may obtain fully analytical formulas for the generalization error as follows. 
First define the following quantities:_ \[a\equiv s(1-c)+\omega,\qquad S_{r}\equiv\frac{\hat{q}_{r}}{\nu_{rr}+a\hat{q}_{r}}, \qquad\gamma_{rr^{\prime}}\equiv\frac{a^{2}\nu_{rr^{\prime}}S_{r}S_{r^{\prime}} }{\alpha} \tag{16}\] _The terms of the decomposed generalization error may then be written:_ \[\langle E_{rr^{\prime}}\rangle_{\mathcal{D},\mathbf{w}_{0}^{*}}=\frac{1}{1-\gamma _{rr^{\prime}}}\left((1-\rho^{2})I_{rr^{\prime}}^{0}+\rho^{2}I_{rr^{\prime}}^{1 }\right)+\frac{\gamma_{rr^{\prime}}\zeta^{2}+\delta_{rr^{\prime}}\eta_{r}^{2}} {1-\gamma_{rr^{\prime}}} \tag{17}\] _where we have defined_ \[I_{rr^{\prime}}^{0} \equiv s(1-c)\left(1-s(1-c)\nu_{rr}S_{r}-s(1-c)\nu_{r^{\prime}r^{ \prime}}S_{r^{\prime}}+as(1-c)\nu_{rr^{\prime}}S_{r}S_{r^{\prime}}\right) \tag{18}\] \[I_{rr^{\prime}}^{1} \equiv\begin{cases}\frac{s(1-c)\left(\nu_{rr^{\prime}}-\nu_{rr}\nu_{r^{\prime}r^{\prime}}\right)+\omega\nu_{rr^{\prime}}}{\nu_{rr}\nu_{r^{\prime}r^{\prime}}}&\text{if }0<c\leq 1\\ I_{rr^{\prime}}^{0}&\text{if }c=0\end{cases} \tag{19}\] _and where the order parameters \(\{q_{r},\hat{q}_{r}\}\) may be obtained analytically as the solution (with \(q_{r}>0\)) to the following quadratic system of equations:_ \[q_{r}=\frac{a\nu_{rr}}{\nu_{rr}+a\hat{q}_{r}},\qquad\hat{q}_{r}=\frac{ \alpha}{\lambda+q_{r}} \tag{20}\] _In the "ridgeless" limit where \(\lambda\to 0\), we may make the following simplifications:_ \[S_{r} \rightarrow\frac{2\alpha}{a(\alpha+\nu_{rr}+|\alpha-\nu_{rr}|)} \tag{21}\] \[\gamma_{rr^{\prime}} \rightarrow\frac{4\alpha\nu_{rr^{\prime}}}{\left(\alpha+\nu_{rr}+ |\alpha-\nu_{rr}|\right)\left(\alpha+\nu_{r^{\prime}r^{\prime}}+|\alpha-\nu_{ r^{\prime}r^{\prime}}|\right)} \tag{22}\] Proof.: Simplifying the fixed-point equations and generalization error formulas in this special case is an exercise in linear algebra. The main tools used are the Sherman-Morrison formula [38] and the fact that the data distribution is isotropic in the features so that the form of \(\tilde{\mathbf{\Sigma}}_{rr}\) and \(\tilde{\mathbf{\Sigma}}_{rr^{\prime}}\) depend only on \(N_{r}\), \(N_{r^{\prime}}\), and \(n_{rr^{\prime}}\). Thus, the result depends only on the values of \(\{\nu_{rr^{\prime}}\}\) and not the identities of the subsampled features. To aid in computing the necessary matrix contractions we developed a custom Mathematica package which handles block matrices of symbolic dimension, with blocks containing matrices of the form \(\mathbf{M}=c_{1}\mathbf{I}+c_{2}\mathbf{1}\mathbf{1}^{\top}\). This package and the Mathematica notebook used to derive these results will be made available online (see SI). In this tractable special case, \(c\in[0,1]\) is a parameter which tunes the strength of correlations between features of the data. When \(c=0\), the features are independent, and when \(c=1\) the features are always equivalent. \(s\) sets the overall scale of the features and \(\rho\) tunes the alignment of the ground truth weights with the special direction in the covariance matrix. We refer to \(\rho\) as the "task alignment", and it can be thought of as a simple proxy for the "task-model alignment" [16] or "code-task alignment" [39]. In Figure 1(b), we test these results by comparing the theoretical expressions for generalization error with the results of numerical experiments, finding perfect agreement. Note that in this case, both theory and experiment are averaged over ground-truth weights as well as datasets. ### Subsampling shifts the double-descent peak of a linear predictor. 
Consider the equicorrelated data model in the isotropic limit (\(c=0\)). Consider a single linear regressor (\(k=1\)) which connects to a subset of \(N=\nu M\) features. In the ridgeless limit where regularization \(\lambda\to 0\), and without readout noise or feature noise (\(\eta=\omega=0\)), the generalization error is given by equation 17 with \(\nu_{rr}=\nu\), \(s=1\), \(\eta_{r}=\omega=0\) in the \(\lambda\to 0\) limit: \[\langle E_{g}\rangle_{\mathcal{D},\mathbf{w}^{*}}=\left\{\begin{array}{ll}\frac{ \nu}{\nu-\alpha}\left[(1-\nu)+\frac{1}{\nu}(\alpha-\nu)^{2}\right]+ \frac{\alpha}{\nu-\alpha}\zeta^{2},&\text{if }\alpha<\nu\\ \frac{\alpha}{\alpha-\nu}\left[1-\nu\right]+\frac{\nu}{\alpha-\nu}\zeta^{2},& \text{if }\alpha>\nu\end{array}\right\} \tag{23}\] Double descent can arise from two possible sources of variance: explicit label noise (\(\zeta>0\)) or implicit label noise induced by feature subsampling (\(\nu<1\)). As \(E_{g}\sim(\alpha-\nu)^{-1}\), we see that the generalization error diverges when \(\alpha=\nu\). The subsampling fraction \(\nu\) thus controls the sample complexity \(\alpha\) at which the double-descent peak occurs. Intuitively, this occurs because subsampling changes the number of parameters of the regression model, and thus its interpolation threshold. To demonstrate this, we plot the learning curves for subsampled linear regression on equicorrelated data in Figure 2. While at finite ridge the test error no longer diverges when \(\alpha=\nu\), it may still display a distinctive peak. ### Heterogeneous connectivity mitigates double-descent The observed phenomenon of double-descent - over-fitting to noise in the training set near a model's interpolation threshold - poses a serious risk in practical machine-learning applications. Regularization is the canonical strategy employed to mitigate double descent. However, in order to achieve monotonic learning, the regularization parameter must be tuned to the structure of the task and the scale of the label noise [15] - no one choice for the regularization parameter can mitigate double descent for all tasks. Considering again the plots in Figure 2(b), we observe that at any value of \(\alpha\), the double-descent peak can be avoided with an acceptable choice of the subsampling fraction \(\nu\). This suggests another strategy to mitigate double descent: heterogeneous ensembling. Rather than training an ensemble of linear predictors, each with the same interpolation threshold, we may ensemble over predictors with a heterogeneous distribution of interpolation thresholds in the hopes that when one predictor fails, the other members of the ensemble compensate. In Figure 3, we demonstrate that in the absence of sufficient regularization, heterogeneous ensembling can mitigate double-descent. Specifically, we define two ensembling strategies: in homogeneous ensembling, each of the \(k\) readouts is connected to the same fraction \(\nu_{rr}=\frac{1}{k}\) of the features. In heterogeneous ensembling, the number of features connected by each of the \(k\) readouts is drawn i.i.d. from a Gamma distribution with fixed mean \(1/k\) and variance \(\sigma^{2}\). We denote this \(\nu_{rr}\sim\Gamma_{k,\sigma}\). After they are independently drawn, subsampling fractions are re-scaled so that they sum to unity: \(\nu_{rr}/\sum_{r}\nu_{rr}\leftarrow\nu_{rr}\). This ensures fair competition, wherein the total number of readout weights utilized in homogeneous and heterogeneous ensembling is equal (a sampling sketch is given below). 
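As referenced above, the heterogeneous sampling scheme is simple to implement: draw the fractions from a Gamma distribution with mean \(1/k\) and variance \(\sigma^{2}\) and rescale to sum to one. The function name and seed below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def heterogeneous_fractions(k, sigma, rng):
    """Draw subsampling fractions nu_rr i.i.d. from a Gamma distribution with
    mean 1/k and variance sigma^2, then rescale so they sum to one."""
    shape = 1.0 / (k * sigma) ** 2      # shape = mean^2 / variance
    scale = k * sigma ** 2              # scale = variance / mean
    nu = rng.gamma(shape, scale, size=k)
    return nu / nu.sum()

k, sigma = 10, 0.05
nu = heterogeneous_fractions(k, sigma, rng)
print(nu, nu.sum())                     # k fractions summing to unity

# Equivalent one-shot draw from the Dirichlet distribution described next:
nu_dir = rng.dirichlet(np.full(k, (sigma * k) ** -2))
```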
Equivalently, we may consider the readout fractions \(\nu_{rr}\) to be drawn from a Dirichlet distribution: \((\nu_{1},\dots,\nu_{k})\sim\text{Dir}((\sigma k)^{-2},\dots,(\sigma k)^{-2})\)[40]. These strategies for connecting readouts to the features are illustrated for \(k=10\) in figures 3 a.i (homogeneous) and 3 a.ii (heterogeneous). The density of the distribution \(\Gamma_{k,\sigma}(\nu)\) is plotted in figure 3b for \(k=10\) and varying \(\sigma\). In figure S1, we apply these ideas to a classification task on the CIFAR-10 dataset. We find that in this nonlinear setting, heterogeneous ensembling prevents catastrophic over-fitting, leading to monotonic learning curves without regularization (see SI for details). In figure 3c, we use our analytical theory of equicorrelated data (see eq. 17) to compare the performance of homogeneous and heterogeneous ensembling with \(k=10\). We find that for an under-regularized predictor (3c.i, c.ii, c.iii), heterogeneous ensembling reduces the height of the double-descent peak. At larger regularization (3c.iv, c.v, c.vi), homogeneous and heterogeneous ensembling perform similarly. We quantify the extent of double-descent through the worst-case error \(\max_{\alpha}(E_{g}(\alpha))\). We find that as \(\sigma\) increases, the worst-case error decreases monotonically at no cost to the asymptotic error \(E_{g}(\alpha\rightarrow\infty)\) (see Fig. 3d,e). ### Data correlations, readout noise, and task structure determine optimal ensemble size We now ask whether ensembling is a fruitful strategy - i.e. whether it is preferable to have a single, fully connected readout or multiple sparsely connected readouts. Intuitively, the presence of correlations between features permits subsampling, as measurements from a subset of neurons will also confer information about the state of the others. In addition, ensembling over multiple readouts can average out the readout noise. To quantify these notions, we consider the special case of ensembling over \(k\) readouts, each connecting the same fraction \(\nu_{rr}=\nu=\frac{1}{k}\) of features in an equicorrelated code with correlation strength \(c\) and readout noise scale \(\eta\), and task alignment \(\rho\). We set the label noise, feature noise, and overlap between readouts to zero (\(\zeta=0\), \(\omega=0\), \(\nu_{rr^{\prime}}=0\) when \(r\neq r^{\prime}\)). In the ridgeless limit, we can then express the error as \(E_{g}(k)=s(1-c)F(H,k,\rho,\alpha)\), where \(H\equiv\frac{\eta^{2}}{s(1-c)}\) is an effective inverse signal-to-noise ratio and \(F(H,k,\rho,\alpha)\) is a rational function of its arguments (see SI for full expressions). Therefore, given fixed parameters \(s,c,\rho,\alpha\), the value \(k^{*}\) which minimizes error depends on \(\eta\), \(s\), and \(c\) only through the ratio \(H\). Using our analytical theory, we plot the optimal number of readouts \(k\) in the parameter space of \(H\) and \(\rho\) (see Fig. 4a).

Figure 2: Subsampling alters the location of the double-descent peak of a linear predictor. (a) Illustrations of subsampled linear predictors with varying subsampling fraction \(\nu\). (b) Comparison between experiment and theory for subsampling linear regression on equicorrelated datasets. We choose \(\left[\mathbf{\Sigma}_{s}\right]_{ij}=\delta_{ij}\), \(\left[\mathbf{\Sigma}_{0}\right]_{ij}=0\), \(\zeta=0\), \(\eta=0\), and (i) \(\lambda=0\), (ii) \(\lambda=10^{-4}\), (iii) \(\lambda=10^{-2}\). Dots show results of numerical experiment. Lines are analytical prediction. 
The resulting phase diagrams are naturally divided into three regions. In the signal-dominated phase a single fully-connected readout is optimal (\(k^{*}=1\)). In an intermediate phase, \(1<k^{*}<\infty\) minimizes error. And in a noise-dominated phase \(k^{*}=\infty\). The boundary between the signal-dominated and noise-dominated phases (dotted lines in 4a) can be written \(H=(1-\frac{1}{\alpha})(1-\rho^{2})\) when \(\alpha>1\) and \(H=\alpha(1-\alpha)(1-\rho^{2})\) when \(\alpha<1\). The boundary between the intermediate and noise-dominated phases (dashed lines in 4a) can be written \(H=2-(2+\frac{1}{\alpha})\rho^{2}\). As is evident in these phase diagrams, an increase in H causes an increase in \(k^{*}\). This can occur because of a decrease in the signal-to-readout noise ratio \(s/\eta^{2}\), or through an increase in the correlation strength \(c\). An increase in \(\rho\) also leads to an increase in \(k^{*}\), indicating that ensembling is more effective when there is alignment between the structure of Figure 3: Homogeneous vs. Heterogeneous Ensembling on equicorrelated data. (a) We compare (i) homogeneous ensembling, in which each readout connects to the same number of feature neurons and (ii) heterogeneous ensembling, in which the number of feature neurons connected by a readout is drawn from a distribution. (b) We use the Gamma distribution with the convention that \(\Gamma_{k,\sigma}(\nu)\) is the probability density function of the Gamma distribution with mean \(k^{-1}\) and variance \(\sigma^{2}\). Shown here for \(k=10\) and \(\sigma\) indicated by the colorbar. (c) Generalization Error Curves for Homogeneous and Heterogeneous ensembling with \(k=10\) and indicated values of \(\lambda\) and \(\sigma\). Curves are calculated using analytical theory for equicorrelated data with \(c=0\), \(\eta=0\), \(\zeta=0\). Solid blue is the learning curve for homogeneous subsampling. Dotted red curves show loss curve for 5 realizations of the randomly drawn subsampling fractions \(\{\nu_{tr}\}_{r=1}^{k}\). Solid red is the learning curve for heterogeneous ensembling averaged over 100 realizations of the subsampling fractions \(\{\nu_{tr}\}_{r=1}^{k}\) drawn independently from \(\Gamma_{k,\sigma}(\nu)\). (d) Average loss curves for heterogeneous ensembling with \(k=10\), \(\lambda=10^{-3}\), and \(\sigma\) indicated by the colorbar. (e) Average worst-case error and asymptotic error as a function of variance for heterogeneous ensembling. Worst-case error is calculated for each realization of the subsampling fractions as \(\max_{\alpha}E_{g}(\alpha|\{\nu_{tr}\}_{r=1}^{k})\). Average worst-case error is the worst-case error averaged over realizations of the subsampling fractions. Shaded region shows standard deviation over realizations of the subsampling fractions. the data and the task. Learning curves from each of these phases for varying \(k\) are plotted in Fig. 4b. The resulting shifts in the location of the double-descent peak resemble those observed in practice for ensembling methods applied to linear classifiers [6]. ## III Conclusion In this paper, we provided a theory of feature-subsampled ensembling techniques focusing on feature-subsampled linear ridge regression. Our technique was the replica method from statistical physics which led us to derive an analytical formula for the typical case generalization error in the aforementioned setting. We solved these equations for a special case which revealed many interesting phenomena. 
One of these phenomena relate to double descent [13; 14]. In most machine learning applications, the size of the dataset is known at the outset and suitable regularization may be determined to mitigate double descent, either by selecting a highly over-parameterized model [13] or by cross-validation techniques (see for example [19]). However, in contexts where a single network architecture is designed for an unknown task or a variety of tasks with varying structure and noise levels, heterogeneous ensembling may be used to smooth out the perils of double-descent. Our analysis of ensembling in noisy neural networks suggests that an ensembling approach may be useful in improving the stability of analog neural networks, where readout noise is a significant problem (see, for example, [41]). Much work remains to achieve a full understanding of the interactions between data correlations, readout noise, and ensembling. In this work, we have given a thorough treatment of the convenient special case where features are equicorrelated and readouts do not overlap. Future work should analyze ensembling for codes with an arbitrary correlation structure, in which readouts access randomly chosen, potentially overlapping subsets of features. This will require to average our expressions for the generalization error over randomly drawn masks \(\{A_{r}\}\). This problem has Figure 4: Noise level and data correlation strength determine optimal readout strategy: Using analytical theory (see eq. 17), we calculate the generalization error of linear predictors on equicorrelated data (\(\left[\boldsymbol{\Sigma}_{s}\right]_{ij}=(1-c)\delta_{i}j+c\), \(0<c\leq 1\)) with readout noise with variance \(\eta^{2}\). Ground truth weights are drawn as in eq. 12. For convenience, we set \(\lambda=0\), though results are qualitatively similar with small finite ridge. We consider \(k\) readouts, each connecting a fraction \(\nu=1/k\) of the feature neurons, so that the total number of readout weights is conserved. (a) Phase diagrams of optimal \(k\) in the parameter space of task alignment \(\rho\) and the inverse effective signal-to-noise ratio \(H\equiv\frac{\eta^{2}}{\alpha(1-c)}\). Color indicates the optimal number of readouts \(k^{*}\), with gray indicating \(k^{*}=1\) and white indicating \(k^{*}=\infty\) We consider (i) \(\alpha=0.25\), (ii) \(\alpha=0.75\), (iii) \(\alpha=1.5\), (iv) \(\alpha=10^{3}\). Black lines are analytically phase boundaries between regions of parameter space where finite optimal \(k^{*}\) exists and where \(k^{*}=\infty\). Dotted black lines are phase boundaries of the type where \(k^{*}\) jumps discontinuously from \(1\) to \(\infty\). Dashed black lines are phase boundaries of the type where \(k^{*}\rightarrow\infty\) from one side and \(k^{*}=\infty\) on the other. (b) for three choices of the parameters \((H,\rho)\) we plot the learning curve for ensembled linear regression for a variety of \(k\) values (see colorbar), as well as \(k=\infty\), indicated by the dotted black line. Depending on the region of parameter space, the optimal readout strategy may be to select \(k*=1\), \(1<k*<\infty\), or \(k*=\infty\). been thoroughly studied in the case where the entries of \(A_{r}\) are i.i.d Gaussian [30], as in the ever-popular random feature model. Recent progress on the problem of non-Gaussian projections for a single readout has been made in [42]. ## IV Acknowledgements CP and this research were supported by NSF Award DMS-2134157. 
BSR was also supported by the National Institutes of Health Molecular Biophysics Training Grant NIH/ NIGMS T32 GM008313. We thank Jacob Zavatone-Veth and Blake Bordelon for thoughtful discussion and comments on this manuscript.
2305.14273
Reheating process in the $R^2$ inflationary model with the baryogenesis scenario
Post-inflationary evolution and (re)heating of the viable inflationary model, the $R^2$ one, is made more realistic by including the leptogenesis scenario into it. For this purpose, right-handed Majorana neutrinos with a large mass are added to the matter sector of the Standard Model to explain the neutrino oscillation experiments and the baryon asymmetry of the Universe. We have found parameters that characterize this model: non-minimal coupling of the Higgs field $\xi$, the mass of the right-handed Majorana neutrino $M_{N_\alpha}$ and the Yukawa coupling matrix components for the right-handed Majorana neutrino. We have analyzed the effect of these parameters on the reheating process and leptogenesis in this model and how they affect the resultant physical quantities: spectral parameters of primordial perturbations and baryon asymmetry.
Hyun Jeong, Kohei Kamada, Alexei A. Starobinsky, Jun'ichi Yokoyama
2023-05-23T17:23:42Z
http://arxiv.org/abs/2305.14273v2
# Reheating process in the \(R^{2}\) inflationary model ###### Abstract Post-inflationary evolution and (re)heating of the viable inflationary model, the \(R^{2}\) one, is made more realistic by including the leptogenesis scenario into it. For this purpose, right-handed Majorana neutrinos with a large mass are added to the matter sector of the Standard Model to explain the neutrino oscillation experiments and the baryon asymmetry of the Universe. We have found parameters that characterize this model: non-minimal coupling of the Higgs field \(\xi\) and the mass of the right-handed Majorana neutrino \(M_{N_{a}}\). We have analyzed the effect of these parameters on the reheating process and the resultant physical quantities: spectral indices and baryon asymmetry. ## 1 Introduction The inflationary scenario [1, 2, 3, 4, 5] is now a successfully established paradigm for the early Universe, which not only solved the horizon and flatness problems but also made definite predictions for primordial cosmological perturbations that can be directly tested by observations. A lot of inflationary models were proposed historically but observational data have already excluded most of them them including once popular ones. Still a number of successful inflationary models including the original \(R^{2}\), or Starobinsky, one [1] remain which satisfy all presently available data. The most important and discriminatory among these predictions are the spectral index \(n_{\rm s}(k)\) of the Fourier power spectrum of scalar perturbations and ratio of the power spectra of tensor and scalar ones \(r(k)\). While the generic and observationally confirmed prediction of all viable slow-roll inflationary models, that do not require consideration of post-inflationary evolution, is that both \(|n_{\rm s}(k)-1|\) and \(r(k)\) are small and weakly scale-dependent, while \(r(k)\) is of the order of \(|n_{\rm s}(k)-1|\) or less, calculation of these indices with the accuracy better than 10% required by observations requires analyzing (re)heating process, which turns the inflationary stage into the radiation-dominated one by decay of an inflaton (dubbed the scalaron in the \(R^{2}\) model) into ultra-relativistic particles and anti-particles and radiation. Thus, the detailed study of the reheating process is very important for the determination of the observationally consistent theory of inflation which describes our Universe. In the successful \(R^{2}\) inflationary model [1], see also [6] for more details including reheating and [7] for the final primordial spectra of scalar and tensor perturbations, as well as further developments in [8, 9, 10], the gravity sector is modified such that the \(R^{2}\) term is added to the Einstein-Hilbert action where \(R\) is the Ricci scalar. This was suggested long time ago by the consideration of the renormalization of an average value of the energy-momentum tensor of quantum matter fields in an external classical curved space-time, see e.g. [11, 12, 13] and also [14]. However, the smallness of the measured Fourier power spectrum of primordial scalar perturbations requires the dimensionless coefficient (\(\hbar=c=1\)) of this term to be unexpectedly very large: \(\approx 5.1\cdot 10^{8}\)[15, 16, 17]1. On the other hand, no observable effect suggests that the coefficient in front of the Weyl tensor squared term in the effective action, also required for the one-loop renormalizability, can be so large and even may be significantly different from unity. 
This provides us with a possibility to use the inflationary solution of the model in the regime when the \(R^{2}\) term dominates the \(R\) (Einstein) one while the undesired (due to the ghost appearance would be taken in full) Weyl squared term still may be considered perturbatively. Footnote 1: This value is given for the number of e-folds \(N=55\) of the observable part of inflation between the pivot scale \(k_{0}=0.05\,{\rm Mpc}^{-1}\) and the end of inflation, otherwise it has to be multiplied by \((N/55)^{2}\). If the main channel of scalaron decay after \(R^{2}\) inflation is to light scalar particles minimally or weakly (but not conformally) coupled to gravity, this model has a very mild reheating process in contrast to other inflationary models such as Higgs inflation [17, 18, 19, 20, 21, 22, 23, 24], and the calculated spectral indices \((n_{\rm s},r)\) are in agreement with the Cosmic Microwave Background (CMB) observations [25]. In fact, the dependence of these indices on the number of e-folds \(N\) during Higgs inflation is essentially the same as for \(R^{2}\) inflation, see [26] for the explanation of this fact. It is only the actual value of \(N\) that is larger for Higgs inflation by several percents due to faster reheating after it caused by large non-minimal coupling of the Higgs boson to gravity (\(\xi\approx 4.5\cdot 10^{4}\sqrt{\lambda}\) for \(N=55\)). Furthermore, it has been argued that in the \(R^{2}\) inflationary model, dark matter and the right-handed Majorana neutrino can be generated by scalaron decay, so it can be made compatible with present matter content of the Universe [27, 28]. However, a more comprehensive study of reheating process after inflation is needed to derive the values of \(N\) and \(n_{s}-1\) with accuracy of about \(1\%\) or better expected in future CMB and large-scale structure observations. For this purpose, more possible channels of scalaron decay after inflation into particles and antiparticles of different quantum matter fields suggested by modern extensions of the Standard Model (SM) of elementary particles have to be investigated. So far, mostly the simplest channels like decay into light scalar particles minimally or non-minimally coupled to gravity have been considered. The enhancement of production of a certain matter field via inflaton decay means the suppression of production of other matter fields. Thus, branching ratios of different decay channels become very important for the determination of observables such as the spectral indices, as well as the amounts of baryon asymmetry and dark nonbaryonivre matter in the Universe. An adjustment of the branching ratios can be done by controlling additional parameters of the inflationary model characterizing the effective coupling of inflaton (irrespective of its nature) to other particles in some extension of SM. In the case of the \(R^{2}\) inflationary model with the SM matter sector, as shown in [29], the non-minimal coupling parameter of the Higgs field \(\xi\) plays the role of regulating the branching ratio between the scalaron decay into the Higgs boson and the decay into the gauge boson. When the Higgs field is conformally coupled to the Ricci scalar, the reheating is mainly realized by the scalaron decay into the gauge bosons rather than that into the Higgs boson, that leads to deviation of the reheating temperature and the spectral indices from the well-known values. 
In this paper, we analyze how the reheating process of the successful \(R^{2}\) inflationary model is modified when we add the successful leptogenesis scenario to it by introducing supermassive right-handed Majorana neutrinos [30]. In this case, the \(R^{2}\) model is equipped with the SM matter fields and the three generations of the right-handed Majorana neutrinos. Thus, the masses of the Majorana neutrinos \(M_{N_{\alpha}}\) are the new model parameters in addition to the non-minimal coupling \(\xi\). We analyzed how the reheating process and the resultant physical quantities change by varying these parameters. This paper is organized as follows. In section 2, we introduce the \(R^{2}\) inflationary model and all the matter fields in the extended SM of elementary particles. The decay rates of scalaron into matter fields are calculated there, too. The parameters that characterize this model are the non-minimal coupling \(\xi\) and the masses of the right-handed Majorana neutrinos \(M_{N_{\alpha}}\). In this paper, we restrict ourselves to the two typical values of \(\xi\): minimal coupling (\(\xi=0\)) and conformal coupling (\(\xi=-1/6\)). In section 3, we analyze the effect of the non-minimal coupling \(\xi\) on the reheating process by numerically solving the set of Boltzmann equations. In section 4, we analyze the effect of the Majorana mass \(M_{N_{\alpha}}\) on the reheating process by solving the Boltzmann equations. We see that the mass dependence appears only when the Higgs field is conformally coupled. In section 5, we discuss the parameter dependence of the physical quantities such as the spectral indices and the baryon asymmetry. As a result, we find that the mass dependence strongly appears in the conformally coupled case but not so much in the minimally coupled one. In section 6, we summarize the results obtained and discuss directions of future research in this area. ## 2 Decays during the reheating We extend the \(R^{2}\) inflationary model [1] to incorporate leptogenesis [30] with heavy right-handed Majorana neutrinos, which induce the type-I see-saw mechanism. During the reheating epoch, there are three main candidates into which scalaron can decay: Higgs bosons, gauge bosons, and Majorana neutrinos.2 In this section, we will see that the model parameters that govern the reheating process are the non-minimal coupling \(\xi\) and the mass of the right-handed Majorana fermion \(M_{N_{\alpha}}\). Footnote 2: The decay rates into other massless SM fermions do not have to be cared because their decay rates through the free field Lagrangian is zero (see eq. (12)) and the decay channel through the Yukawa interaction is phase space suppressed. ### Decays of the scalaron Lagrangians are defined in the Jordan frame as follows. \[S_{\rm Gravity}^{\rm IF}=\frac{M_{\rm G}^{2}}{2}\int d^{4}x\sqrt{-g}\left(R+ \frac{R^{2}}{6M^{2}}\right), \tag{1}\] \[S_{\rm SM}^{\rm IF}=\int d^{4}x\sqrt{-g}\left[-g^{\mu\nu}D_{\mu} \mathcal{H}^{\dagger}D_{\nu}\mathcal{H}-m_{h}^{2}\mathcal{H}^{\dagger} \mathcal{H}\right.\] \[\left.-\lambda\left(\mathcal{H}^{\dagger}\mathcal{H}\right)^{2}+ \xi R\mathcal{H}^{\dagger}\mathcal{H}-\frac{1}{4}\sum_{F}F_{\mu\nu}^{a}F^{a \mu\nu}+\cdots\right], \tag{2}\] \[S_{\rm Majorana}^{\rm IF}=\sum_{\alpha}\int d^{4}x\sqrt{-g}\] \[\left[iN_{\alpha}^{\dagger}\bar{\sigma}^{\mu}\nabla_{\mu}N_{ \alpha}-\frac{1}{2}M_{N_{\alpha}}\left(N_{\alpha}N_{\alpha}+N_{\alpha}^{ \dagger}N_{\alpha}^{\dagger}\right)\right]. 
\tag{3}\] Here, \(\mathcal{H}\) and \(N_{\alpha}\) are the SU(2) doublet Higgs field and the right-handed Majorana neutrino field (two-component spinor), respectively, and \(D_{\mu}\) is the general coordinate covariant and gauge covariant derivative. In (1), \(M_{\rm G}=2.4\times 10^{18}\,\mathrm{GeV}\) is the reduced Planck mass and \(M=1.3\times 10^{-5}M_{\rm G}=3.1\times 10^{13}\,\mathrm{GeV}\) is the mass of the scalaron, which is unambiguously related to the magnitude of the primordial Fourier power spectrum of scalar perturbations determined from the measured CMB temperature and polarization fluctuations [15]. In (2), \(\sum_{F}\) means the sum over U(1)\({}_{Y}\), SU(2)\({}_{W}\) and SU(3)\({}_{c}\) gauge fields in SM. The non-minimal coupling of the Higgs field to Ricci scalar \(\xi R\mathcal{H}^{\dagger}\mathcal{H}\), which is necessary for the renormalization of the ultraviolet divergence of the quantum Higgs field on the classical curved space-time, is introduced. In (3), \(\alpha\) runs over the three generations of the Majorana neutrino. \(\nabla_{\mu}\) is the covariant derivative defined in the curved space, which includes the spin connection [13]. \(\bar{\sigma}^{\mu}=(\mathbf{1},-\mathbf{\sigma})\) is the four-vector Pauli matrices. It can be easily seen that this model has a mechanism of inflation by doing the following redefinition of the metric tensor field3: Footnote 3: We can see this fact even in the Jordan frame when we examine the classical equation of motion [1, 6]. \[g_{\mu\nu}\rightarrow\tilde{g}_{\mu\nu}=\Omega^{2}g_{\mu\nu}=\exp\left(\sqrt{ \frac{2}{3}}\frac{\phi}{M_{\rm G}}\right)g_{\mu\nu}. \tag{4}\] Then, those actions are transformed into the Einstein frame [10, 31], and the new degree of freedom \(\phi\) (scalaron) can be extracted from the \(R^{2}\) term: \[S^{\rm EF}_{\rm Gravity}=\int d^{4}x\sqrt{-\tilde{g}}\left[\frac{M_{\rm G}^{2 }}{2}\tilde{R}-\frac{1}{2}\tilde{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu} \phi-V(\phi)\right]. \tag{5}\] The potential \(V(\phi)\) is calculated as \[V(\phi)=\frac{3M^{2}M_{\rm G}^{2}}{4}\left[1-\exp\left(-\sqrt{\frac{2}{3}} \frac{\phi}{M_{\rm G}}\right)\right]^{2}. \tag{6}\] So, slow-roll inflation can take place for large field values of \(\phi\). After the end of inflation, the scalaron starts to oscillate around the origin of the potential and begins to decay into matter fields. The decay channels can be found from the action expanded up to the linear order in \(\phi\) in the Einstein frame4 : Footnote 4: Strictly speaking, this is not the Einstein frame because we have the term \(\xi R\mathcal{H}^{\dagger}\mathcal{H}\) other than the Einstein-Hilbert term. Nevertheless, we call this frame Einstein frame just for convenience. 
\[\begin{split} S^{\rm EF}_{\rm SM}&\simeq\int d^{4} x\sqrt{-\tilde{g}}\\ &\left[-\tilde{g}^{\mu\nu}\partial_{\mu}\tilde{\mathcal{H}}^{ \dagger}\partial_{\nu}\tilde{\mathcal{H}}-m_{h}^{2}\tilde{\mathcal{H}}^{ \dagger}\tilde{\mathcal{H}}+\frac{2}{\sqrt{6}}m_{h}^{2}\tilde{\mathcal{H}}^{ \dagger}\tilde{\mathcal{H}}\frac{\phi}{M_{\rm G}}\right.\\ &+\left.\xi\tilde{R}\tilde{\mathcal{H}}^{\dagger}\tilde{\mathcal{ H}}-\lambda\left(\tilde{\mathcal{H}}^{\dagger}\tilde{\mathcal{H}}\right)^{2}-2 \sqrt{6}\left(\xi+\frac{1}{6}\right)\tilde{g}^{\mu\nu}\frac{\partial_{\mu} \phi}{M_{\rm G}}\tilde{\mathcal{H}}\partial_{\nu}\tilde{\mathcal{H}}\\ &-\left.\left(\xi+\frac{1}{6}\right)\tilde{g}^{\mu\nu}\frac{ \partial_{\mu}\partial_{\nu}\phi}{M_{\rm G}^{2}}\tilde{\mathcal{H}}^{\dagger} \tilde{\mathcal{H}}+\cdots\right],\end{split} \tag{7}\] \[\begin{split} S^{\rm EF}_{\rm Majorana}&\simeq \sum_{\alpha}\int d^{4}x\sqrt{-\tilde{g}}\\ &\left[i\tilde{N}_{\alpha}^{\dagger}\tilde{\mathcal{E}}^{\mu} \tilde{\nabla}_{\mu}\tilde{N}_{\alpha}-\frac{1}{2}M_{N_{\alpha}}\left(\tilde{ N}_{\alpha}\tilde{N}_{\alpha}+\tilde{N}_{\alpha}^{\dagger}\tilde{N}_{\alpha}^{ \dagger}\right)\right.\\ &+\left.\frac{1}{2\sqrt{6}}\frac{M_{N_{\alpha}}}{M_{\rm G}}\phi \left(\tilde{N}_{\alpha}\tilde{N}_{\alpha}+\tilde{N}_{\alpha}^{\dagger}\tilde{ N}_{\alpha}^{\dagger}\right)\right].\end{split} \tag{8}\] Fields with a tilde are the Weyl-transformed fields5 defined by \(\tilde{\mathcal{H}}=\Omega^{-1}\mathcal{H}\) and \(\tilde{N}_{\alpha}=\Omega^{-3/2}N_{\alpha}\). In (7) and (2), there are terms that were absent in the Jordan frame because in general, the matter action is not Weyl invariant. These are rewritten as follows to the lowest order in \(\phi\): Footnote 5: In this paper, the Weyl transformation refers to the scaling of the metric tensor field, on the other hand, the conformal transformation is the coordinate transformation which induces the Weyl scaling of the metric. \[S^{\rm EF}_{\rm matter}=\frac{1}{\sqrt{6}}\int d^{4}x\sqrt{-\tilde{g}}\frac{ \phi}{M_{\rm G}}T^{\mu}_{\mu}+O(\phi^{2}), \tag{9}\] There are also quantum contributions to the trace part of the energy-momentum tensor (9), which come from the fact that there are no regularization schemes that hold conformal invariance simultaneously. Using the dimensional regularization, the quantum contribution, or anomaly is calculated as [29, 32, 33, 34]: \[\begin{split}&\left(T^{\mu}_{\mu}\right)_{\rm quantum}=\epsilon \left(-\frac{1}{4}\sum_{\tilde{F}}\tilde{F}^{a}_{\mu\nu}\tilde{F}^{a\mu\nu}- \lambda\left(\tilde{\mathcal{H}}^{\dagger}\tilde{\mathcal{H}}\right)^{2} \right)\\ &\xrightarrow{\epsilon\to 0}-\sum_{\tilde{F}}\frac{\beta( \alpha_{F})}{32\pi^{2}}\tilde{F}^{a}_{\mu\nu}\tilde{F}^{a\mu\nu}-\beta( \lambda)\left(\tilde{\mathcal{H}}^{\dagger}\tilde{\mathcal{H}}\right)^{2}, \end{split} \tag{10}\] Note that \(\tilde{F}^{a}_{\mu\nu}\tilde{F}^{a\mu\nu}=F^{a}_{\mu\nu}F^{a\mu\nu}\). Based on those actions in the Einstein frame, decay rates of scalaron are calculated as follows.6 Footnote 6: There are also contributions from the running of the non-minimal coupling \(\xi\). This contributes to the anomaly of the Higgs field. However, this contribution is negligible. In the minimally coupled case, the anomaly contribution is far subdominant compared to the classical contribution (11). 
In the case of the conformal coupling, the classical decay rate is so small that there is a possibility that the quantum contribution can come into play, but the beta function at the conformal coupling is \(\Phi\) at the one-loop level. There are inhomogeneous terms at the higher-loop level, but this contribution is expected to be small enough to be neglected [35, 36]. We thank A. Kamada for clarifying this point. \[\Gamma_{\phi\to N_{\alpha}}=\frac{M}{96\pi}\left(\frac{M_{N_{\alpha}}}{M_{\rm G }}\right)^{2}\left(1-\frac{4M_{N_{\alpha}}^{2}}{M^{2}}\right)^{\frac{3}{6}}, \tag{12}\] \[\Gamma_{\phi\to g}\simeq\sum_{F}\frac{b_{\alpha_{F}}^{2}\alpha_{F}^{2} \mathcal{N}_{\alpha_{F}}}{768\pi^{3}}\frac{M^{3}}{M_{\rm G}^{2}}, \tag{13}\] where the subscripts \(h,N_{\alpha}\) and \(g\) stand for the complex Higgs doublet, Majorana neutrinos, and gauge bosons, respectively, and \(\mathcal{N}_{\alpha_{F}}\) is the number of gauge bosons for corresponding interactions: \(\mathcal{N}_{\alpha_{Y}}=1\), \(\mathcal{N}_{\alpha_{W}}=3\), \(\mathcal{N}_{\alpha_{c}}=8\). \(b_{\alpha_{F}}\) is the first coefficient of the beta functions for the corresponding gauge fields in SM: \(b_{\alpha_{Y}}=41/6\), \(b_{\alpha_{W}}=-19/6\), \(b_{\alpha_{c}}=-7\). The decay rate (12) can be calculated by using two-component formalism [37, 38, 39], and this is half of the decay rate into four component Dirac spinor, which is due to the fact that the degree of freedom into which the scalaron can decay is decreased by half.7 Note that there is no decay of scalaron into two gravitons in the absence of SM interaction (\(\alpha_{F}=0\)) [40, 41]. This decay still becomes possible due to the term proportional to the square of the Weyl tensor in the conformal trace anomaly (10) [40], but it can be neglected since its rate is typically suppressed by the ratio \(M^{4}/M_{\rm G}^{4}\) compared to (11). There are also loop contributions to the \(\Gamma_{\phi\to g}\), but they can be neglected, too, because the mass of the Higgs boson is very small [34]. The dots in (2) represent other interactions in SM such as Yukawa interactions and these will lead to four-leg interactions in the Einstein frame (dots in (7)). The scalaron decay through these couplings are phase space suppressed, so these contributions can be neglected. This suppression is the same for the decay via the second anomaly term in (10). The decay rates of scalaron (11) (12) (13) are calculated and shown in Fig. 1 together with the mass dependence of the decay rate into the Majorana neutrino (12). This decay rate takes maximum when \(M_{N_{\alpha}}\simeq 1.0\times 10^{13}\,\)GeV. In calculating the decay rate into the gauge bosons, the values of the gauge couplings in (13) are evaluated at the energy scale \(M/2\) using the results of [42]. When \(\xi=0\), the decay rate into Higgs bosons is by far the largest. This is qualitatively because decay rates are determined by the degree of breaking of Weyl invariance, and this is violently broken by the kinetic term of the Higgs scalar field.8 In this subsection, the production of the matter fields was described by the decay of the scalaron in the Einstein frame, but this can also be understood as the gravitational particle production in the Jordan frame [6, 8]. Footnote 8: The decay to Higgs field dominates the reheating process unless \(|\xi+1/6|<0.007\) according to [29]. 
### Decay of the Majorana neutrino The reheating process is realized not only by the decay of scalaron into gauge bosons and Higgs bosons but also that of Majorana neutrino into leptons and Higgs bosons. The decay rate of Majorana neutrino into radiation is given by: \[\Gamma_{N_{1}\to R(\rm radiation)}=\Gamma_{N_{1}\to th}+\Gamma_{N_{1}\to th}= \frac{M_{N_{1}}}{8\pi}\sum_{\alpha}|y_{1\alpha}|^{2}. \tag{14}\] We used the summation convention over the repeated indices here. Here, three generations of the Majorana neutrino are considered. \(M_{N_{\alpha}}\) and \(y_{\alpha\beta}\) are the mass and components of the Yukawa coupling matrix,, respectively. These Majorana neutrinos are also assumed to explain the masses of three active neutrinos through the see-saw mechanism (Type I). The mass matrix for the active neutrinos are calculated as follows: \[m_{\alpha\beta}=-y_{\alpha\gamma}\frac{v^{2}}{2M_{\gamma}}y_{\gamma\beta} \tag{15}\] where \(v=246\,\)GeV is the vacuum expectation value of the Higgs field. We define \(m_{1}<m_{2}<m_{3}\) as the eigenvalues of this mass matrix (15). The following assumptions are made for simplicity. * It is assumed that three active neutrinos have the normal mass hierarchy: \(m_{1}\ll m_{\rm sol},\;m_{2}\simeq m_{\rm sol}=0.009\,\)eV, \(m_{3}\simeq m_{\rm atm}=0.05\,\)eV. Here, \(m_{\rm sol}\) and \(m_{\rm atm}\) are the values of the mass difference determined by the solar neutrino experiments and atmospheric neutrino experiments,, respectively [43]. * This hierarchy is assumed to be explained by the mass hierarchy of three right-handed neutrinos: \(M_{N_{1}}\ll M_{N_{2}}\ll M_{N_{3}}\). That is, all the components of the Yukawa coupling matrix are of the same order. * The mass of the right-handed Majorana neutrino is so heavy that the leptogenesis occurs non-thermally. In this scenario, throughout the history of the Universe, the right-handed Majorana neutrino is never thermalized. * It is postulated that only the lightest \(N_{1}\) are produced by the decay of the scalaron during reheating. This can be achieved by the following mass configuration (see (12) and Fig. 1): \(M_{N_{1}}<M/2<M_{N_{2}}<M_{N_{3}}\). These assumptions lead \[m_{3}\sim\sum_{\alpha}\frac{|y_{1\alpha}|^{2}}{2M_{N_{1}}}v^{2}\sim m_{\rm atm }\simeq 0.05\,\text{eV}. \tag{16}\] Using this relation, (14) is re-expressed as \[\Gamma_{N_{1}\to R}=\frac{M_{1}^{2}m_{\rm atm}}{4\pi v^{2}}. \tag{17}\] ## 3 The effect of the curvature coupling In the previous section, we have seen that the model parameters that govern the inflaton decay in this model are \(\xi\) and \(M_{N_{1}}\). Here, we analyze the effects of non-minimal coupling \(\xi\) in detail. There are two typical values of the curvature coupling \(\xi\): the minimal coupling (\(\xi=0\)) and the conformal coupling (\(\xi=-1/6\)). In the minimally coupled case, the decay rate into the Higgs particle (11) is so strong that Higgs particles are produced mainly. On the Figure 1: The decay rates of the scalaron into various fields together with the mass dependence are shown. The red dashed line and the blue dotted line represent the decay rate into the Higgs field with the minimal coupling (\(\xi=0\)) and the conformal coupling (\(\xi=-1/6\)), respectively. The orange solid line and the green dash-dot line represent the decay rate into the right-handed Majorana neutrino and the gauge fields. 
other hand, in the conformally coupled case, the decay rate into the Higgs particles is strongly suppressed, and the scalaron decays into Majorana neutrino and the gauge bosons. In this section, we numerically confirm this by solving the Boltzmann equations below while the mass of the Majorana fermion \(M_{N_{1}}\) is fixed to be \(1.0\times 10^{13}\,\mathrm{GeV}\) so that the decay rate of scalaron into the Majorana fermion takes the maximum (see (12) and Fig. 1). \[\frac{d\rho_{\phi}}{dt}=-3H\rho_{\phi}-\Gamma_{\phi\to N_{1}}\rho_{\phi}- \Gamma_{\phi\to g}\rho_{\phi}-\Gamma_{\phi\to h}\rho_{\phi}, \tag{18}\] \[\frac{d\rho_{N_{1}}}{dt}=-3H\rho_{N_{1}}+\Gamma_{\phi\to N_{1}}\rho_{\phi}- \Gamma_{N_{1}\to R}\rho_{N_{1}}, \tag{19}\] \[\frac{d\rho_{R}}{dt}=-4H\rho_{R}+\Gamma_{\phi\to g}\rho_{\phi}+\Gamma_{\phi \to h}\rho_{\phi}+\Gamma_{N_{1}\to R}\rho_{N_{1}}, \tag{20}\] \[H^{2}=\frac{\rho_{\phi}+\rho_{r}+\rho_{N_{1}}}{3M_{\rm G}^{2}}. \tag{21}\] where \(\rho_{\phi}\), \(\rho_{N_{1}}\), \(\rho_{R}\) and \(H\) are the energy density of the inflaton, the lightest right-handed Majorana neutrino, radiation and the Hubble parameter, respectively. ### The minimally coupled case For numerical calculation, the above equations are rewritten with the dimensionless variables: \(\bar{a}=a/a_{I}\), \(f=\rho_{\phi}M^{-4}\bar{a}^{3}\), \(n=\rho_{N_{1}}M_{N_{1}}^{-1}M^{-3}\bar{a}^{3}\), \(r=\rho_{R}M^{-4}\bar{a}^{4}\). Here, \(a_{I}\) is the scale factor at the end of inflation. With these variables, we can rewrite the set of Boltzmann equations as follows [44]. \[\frac{df}{d\bar{a}}=-\frac{\sqrt{3}M_{\rm G}\left(\Gamma_{\phi\to N_{1}}+ \Gamma_{\phi\to g}+\Gamma_{\phi\to h}\right)f\bar{a}}{M^{2}\sqrt{r+f\bar{a}+n (M_{N_{1}}/M)\bar{a}}}, \tag{22}\] \[\frac{dn}{d\bar{a}}=\frac{\sqrt{3}M_{\rm G}\left(\Gamma_{\phi\to N_{1}}(M/M_{N _{1}})f-\Gamma_{N_{1}\to R}n\right)\bar{a}}{M^{2}\sqrt{r+f\bar{a}+n(M_{N_{1}} /M)\bar{a}}}, \tag{23}\] \[\frac{dr}{d\bar{a}}=\frac{\sqrt{3}M_{\rm G}\left((\Gamma_{\phi\to g}+ \Gamma_{\phi\to h})f+\Gamma_{N_{1}\to R}(M_{N_{1}}/M)n\right)\bar{a}^{2}}{M^{2 }\sqrt{r+f\bar{a}+n(M_{N_{1}}/M)\bar{a}}}. \tag{24}\] Once the mass of the Majorana neutrino \(M_{N_{1}}\) is determined, we are ready to solve the Boltzmann equations with the initial conditions, \[f(\bar{a}=1)=3H_{\rm inf}^{2}M_{\rm G}^{2}M^{-4}\ \mathrm{and}\ \ n(1)=r(1)=0. \tag{25}\] where \(H_{\rm inf}=M/2\) is the Hubble parameter at the end of the inflation. The result of numerical calculation of Boltzmann equation when \(M_{N_{1}}\) is \(1.0\times 10^{13}\,\mathrm{GeV}\) and the Higgs field is minimally coupled to scalar curvature \(R\) is shown in Fig. 2. During the inflaton-dominated epoch, \(f\) remains approximately constant because the decay products' energy density is negligible compared to that of the inflaton. Thus, \(\rho_{\phi}\propto\bar{a}^{-3}\). In this epoch, the energy density of radiation is [45] \[\rho_{R}\simeq\frac{4}{5}\Gamma_{\phi\to h}M^{3}\bar{a}^{-\frac{3}{2}}. \tag{26}\] As for the Majorana neutrino \(N_{1}\), (23) can be solved as \[n =\frac{3\Gamma_{\phi\to N_{1}}M_{\rm G}^{2}}{4\Gamma_{N_{1}\to R}M_ {N_{1}}M}\] \[-\frac{M}{2\Gamma_{N_{1}\to R}}\exp\left[-\frac{4\Gamma_{N_{1} \to R}}{3M}\left(\bar{a}^{\frac{3}{2}}-1\right)+\ln\left(\frac{3\Gamma_{\phi \to N_{1}}M_{\rm G}^{2}}{2M_{N_{1}}M^{2}}\right)\right]. 
\tag{27}\] When \(n\) is small and the decay term in RHS of (23) is negligible, the energy density of the Majorana neutrino can be approximated as \[\rho_{N_{1}}\simeq\Gamma_{\phi\to N_{1}}\left(\frac{M}{M_{N_{1}}}\right)M_{ \rm G}^{2}M_{N_{1}}\bar{a}^{-\frac{3}{2}}. \tag{28}\] Gradually, the effect of the decay of Majorana neutrino into radiation cannot be ignored after some growth of its energy density \(n\). After \(\bar{a}_{t}\simeq 8.8\times 10^{2}\), the source and decay terms in (23) are balanced, so the energy density \(n\) becomes constant and \(\rho_{N}\propto\bar{a}^{-3}\). The comparison of the first and second terms in RHS of (23) are shown in Fig. 3. It can be seen that the matching period \(\bar{a}_{t}\simeq 8.8\times 10^{2}\) is consistent with the moment at which the power-law of the energy density of the Majorana neutrino changes in Fig. 2. This transition period \(\bar{a}_{t}\) is actually a function of the mass of the Majorana fermion \(M_{N_{1}}\). It depends on the mass of the Majorana fermion as \[\bar{a}_{t}=\left[-\frac{3}{2}\left(\frac{2}{3}+\frac{M}{2\Gamma_{N_{1}\to R}} \ln\left[\frac{3\Gamma_{\phi\to N_{1}}M_{\rm G}^{2}}{2M_{N_{1}}M^{2}}\right] \right)\right]^{\frac{2}{3}} \tag{29}\] which is obtained by substituting (27) into \(dn/d\bar{a}=0\) and solving for \(\bar{a}_{t}\). This power-law transition period (29) is a Figure 2: Evolution of the energy densities of several fields as functions of the rescaled scale factor \(\bar{a}\). Here, we assume that the Higgs field is minimally coupled to gravity. The blue dashed line, the orange solid line and the green dotted line represent the energy densities of the inflaton, radiation and the right-handed Majorana neutrino, respectively. decreasing function of the mass of the Majorana fermion. This is reasonable because the greater the mass \(M_{N_{1}}\), the larger the density \(\rho_{N_{1}}\) produced by the inflaton decay (see (12)) and the longer it takes to decrease by the decay of the Majorana neutrino. As the energy density of radiation \(\rho_{R}\) decreases slower than that of inflaton \(\rho_{\phi}\), there is a moment when the former reaches the latter: \(\rho_{R}=\rho_{\phi}\). From this moment, the Universe becomes dominated by the radiation, which leads to the different behavior of the scale factor. We define the reheating temperature \(T_{R}\) as the temperature at this transition moment. Using Stefan-Boltzman's law, \(\rho_{R}=(\pi^{2}g_{r}/30)T^{4}\), the reheating temperature is calculated as \(T_{R}=3.26\times 10^{9}\,\mathrm{GeV}\) by taking into account the degree of freedom of radiation at the reheating: \(g_{r}=106.75\). Similarly, the reheating temperature in the original \(R^{2}\) inflation model without heavy Majorana neutrinos is given by \(T_{R}=3.22\times 10^{9}\,\mathrm{GeV}\). This enhancement is reasonable because the scalaron gets two additional channels to radiation and the reheating becomes slightly more efficient. This enhancement of order of one percent in the reheating temperature can also be checked by \(\sqrt{\left(\Gamma_{\phi\to h}+\Gamma_{\phi\to N_{1}}+\Gamma_{\phi\to g}\right)/ \Gamma_{\phi\to h}}\simeq 1.01\), in which we used the fact that the reheating temperature is proportional to the root of the total decay rate of the scalaron. Because of the low reheating temperature \(T_{R}\ll M_{N_{1}}\), the right-handed neutrinos are not thermalized, and hence leptogenesis occurs non-thermally. 
It is important to confirm that thermalization has been achieved by the end of reheating so that the reheating temperature calculated above makes sense. Thermalization proceeds mainly through the inelastic \(2\to 3\) scatterings in the t-channel of gauge bosons, which is more efficient than the \(2\to 2\) scattering [46, 47]. This leads to thermal equilibrium. Thermalization rate \(\Gamma_{\mathrm{th}}\) is estimated as follows: \[\Gamma_{\mathrm{th}}\sim\alpha^{3}\left(\frac{M_{\mathrm{G}}}{M} \right)\Gamma_{\phi\to g}. \tag{30}\] This exceeds the Hubble rate around the transition to the radiation-dominated epoch, so the reheating temperature we have calculated makes sense. ### The conformally coupled case The second natural choice for the curvature coupling parameter \(\xi\) is conformal coupling \(\xi=-1/6\), which preserves the conformal invariance of the Higgs kinetic term in this case.9 Thus, we can suppress the decay of the scalaron to the Higgs particles10 (see Eq. (11)) and the branching ratio of the decay into the Majorana neutrino is enhanced. Therefore, in this case, the decay rate \(\Gamma_{\phi\to h}\) is negligible. Footnote 9: It is well-motivated in the context of supergravity, too. Footnote 10: The decay rate through the violation of conformal invariance by the Higgs mass term is negligible. Following the same procedure discussed in 3.1, the result is shown in Fig. 4, again fixing the mass \(M_{N_{1}}=1.0\times 10^{13}\,\mathrm{GeV}\), and assuming \(M_{N_{2}},M_{N_{3}}>M/2\). From Fig. 4, it can be seen that \(\rho_{R}\) is more suppressed in the conformally coupled case than in the minimally coupled case (Fig. 2). As a result, the transition period into the radiation-dominated era becomes later compared to the minimally coupled case. The corresponding reheating temperature is calculated as \(T_{R}=5.11\times 10^{8}\,\mathrm{GeV}\)11. In this model, the Universe is reheated mainly by the subsequent decay of the produced Majorana neutrino into radiation, unlike the minimally coupled case. Footnote 11: If we take the mass of the Majorana field \(M_{1}=3.0\times 10^{12}\,\mathrm{GeV},1.5\times 10^{13}\,\mathrm{GeV}\) so that decay rates (12) and (13) coincide with each other, the corresponding reheating temperature is \(T_{R}=2.98\times 10^{8}\,\mathrm{GeV}\). This decrease is reasonable because, in the limit \(\Gamma_{\phi\to N_{1}}\to 0\), the reheating temperature must decrease to \(1.4\times 10^{8}\,\mathrm{GeV}\) according to [29], in which the right-handed Majorana neutrino is absent. ## 4 The effect of the Majorana fermion mass So far, we have discussed the reheating process by fixing the mass of the Majorana neutrino. In this section, we discuss the dependence on the mass of the Majorana neutrino on the reheating process. As we have seen, in the minimally coupled case, the reheating is governed mainly by the Higgs particles, thus the variation in the \(M_{N_{1}}\) changes the reheating temperature very little. Figure 4: Results of numerical calculation of Boltzmann equations when the Higgs field is conformally coupled to \(R\). The blue dashed line, the orange solid line and the green dotted line represent the energy density of the inflaton, the radiation and the Majorana neutrino, respectively. Figure 3: The decay or source contribution to the energy density of Majorana neutrino is shown with respect to \(\bar{a}\). The blue solid line and the orange dashed line represent the first term and the second term in the RHS of (23). 
On the other hand, in the conformally coupled case, the reheating is realized mainly by the production and the decay of the Majorana neutrino, thus the Majorana mass dependence appears. We compare the RHS of (24) below, and we can see that the contribution from the decay of the Majorana neutrino is the largest if \(M_{N_{1}}=1.0\times 10^{13}\,\text{GeV}\) as seen in Fig. 5. However, this result depends on the mass of the Majorana fermion \(M_{N_{1}}\). If we decrease the mass of the Majorana fermion, the contribution from the gauge bosons exceeds that from the decay of the Majorana neutrino when the Majorana mass \(M_{N_{1}}\) is below \(M_{N_{1}}=1.0\times 10^{12}\,\text{GeV}\), which corresponds to the fact that the decay rate into the Majorana neutrino becomes far subdominant: \(\Gamma_{\phi\to N_{1}}\ll\Gamma_{\phi\to g}\). The consequence of this can be seen in the behavior of \(\rho_{R}\) on \(M_{N_{1}}\) near the transition of power law. We show this behavior in Fig. 6. This mass dependence cannot be found in the minimally coupled case. As we lower the mass of the Majorana neutrino, the energy density at the transition to radiation domination becomes smaller, resulting in the lower reheating temperature. The difference leads to the difference in reheating temperature. This leads to the difference in the spectral indices, which will be discussed in 5.1. However, it can be seen that the mass dependence disappears below around \(1.0\times 10^{12}\,\text{GeV}\) because the decay rate of Majorana fermion into radiation becomes so small (see Fig. 5) that the reheating is switched to be governed by the scalaron decay into gauge bosons rather than by the decay of the Majorana fermion. In this case, the only relevant decay channel is the decay into gauge bosons as we have suppressed the decay channel into Higgs bosons and Majorana fermions by taking conformal coupling and taking the small mass, respectively. This mass dependence of the transition period results in the mass dependence of the reheating temperature \(T_{R}\). This is approximately described as 12 Footnote 12: This can be obtained by naively taking the equality \(\Gamma_{\phi}=H\). \[T_{R}\simeq 1.1\times 10^{9}\times\left(\frac{M_{N_{1}}}{10^{13}\,\text{GeV}} \right)\left(1-\frac{4M_{N_{1}}^{2}}{M^{2}}\right)^{\frac{1}{2}}\,\text{GeV} \tag{31}\] when \(2.0\times 10^{12}\,\text{GeV}<M_{N_{1}}<1.5\times 10^{13}\,\text{GeV}\) and the decay rate \(\Gamma_{\phi\to N_{1}}\) exceeds the decay rate \(\Gamma_{\phi\to g}\) (see Fig. 1). If this condition fails \(\Gamma_{\phi\to g}>\Gamma_{\phi\to N_{1}}\), the mass dependence of the reheating temperature disappears: \(T_{R}\simeq 2.2\times 10^{8}\,\text{GeV}\). This approximated expression almost coincides with the numerical results, but we will use the numerical results for the accurate calculation of the spectral indices. ## 5 Parameter dependence of the obsevables We have seen that the reheating process heavily depends on the non-minimal coupling. The dependence on \(M_{N_{1}}\) is negligiblly small in the minimally coupled case, because the scalaron mainly decays into the Higgs particles, and the branching ratio into the Majorana neutrino becomes very small. On the contrary, the mass dependence does appear in the conformally coupled case for \(M_{N_{1}}>1.0\times 10^{12}\,\text{GeV}\) because the decay into the Higgs particles is suppressed and the branching ratio into the Figure 5: The contribution to radiation from each decay mode (see (24)) is calculated. 
The blue dashed line, the orange dotted line and the green dash-dot line represent the contribution from the decay of the Majorana neutrino \(N_{1}\) with mass \(M_{N_{1}}=1.0\times 10^{13}\,\text{GeV}\), \(M_{N_{1}}=1.0\times 10^{12}\,\text{GeV}\) and \(M_{N_{1}}=1.0\times 10^{11}\,\text{GeV}\). The red solid line represent the contribution from the inflaton decay into the gauge bosons when the Majorana mass \(M_{N_{1}}=1.0\times 10^{13}\,\text{GeV}\). As for the contribution from the gauge bosons, the change in the Majorana mass only results in the change in the moment of the transition to the exponential decay. Figure 6: The blue dashed line, the orange dotted line, the green solid line and the red dash-dot line represent the evolution of the energy density of radiation \(\rho_{R}\) with different masses: \(M_{N_{1}}=1.0\times 10^{13}\,\text{GeV}\), \(M_{N_{1}}=1.0\times 10^{12}\,\text{GeV}\), \(M_{N_{1}}=1.0\times 10^{11}\,\text{GeV}\) and \(M_{N_{1}}=1.0\times 10^{10}\,\text{GeV}\), respectively. Majorana fermion becomes large. In the previous section, we confirmed this by numerically solving the Boltzmann equations. In this section, we discuss the above argument by calculating the parameter dependence of the observables, namely the spectral indices and the baryon asymmetry. ### Spectral indices The period of the transition from the matter-dominated epoch to the radiation-dominated epoch determines the duration for the perturbation corresponding to the CMB pivot scale \(k_{\rm CMB}=0.002\,{\rm Mpc}^{-1}\) to experience the inflationary period, which is measured by the e-folding number \(N\)[25, 48, 49] \[N\simeq 57-\frac{1}{3}\log\frac{10^{13}\,{\rm GeV}}{T_{r}}. \tag{32}\] where \(l_{p}\) represents the Planck length and the subscripts \(0\), \(r\), \(e\) and \(*\) denote the value at present time, the end of reheating, the end of inflation and the moment when the perturbation crossed the horizon. Here, we defined the end of the inflation as the moment when the slow-roll parameter \(\epsilon=M_{G}^{2}(V^{\prime}/V)^{2}/2=1\) becomes one. When we derive (32), we used the fact that the change of the equation of state is instantaneous, which can be confirmed numerically. This e-folding number \(N\) can also be expressed as the integral of \(\phi\): \[N=\frac{1}{M_{\rm G}^{2}}\int_{\phi_{*}}^{\phi_{*}}d\phi\frac{V[\phi]}{V^{ \prime}[\phi]}. \tag{33}\] The slow-roll parameters for the evaluation of the properties of the cosmological perturbation measured by the CMB are calculated by this field value \(\phi_{*}\) because the curvature perturbation is conserved after the horizon crossing: \[\left\{\begin{array}{ll}\epsilon&=\frac{1}{2}M_{\rm G}^{2}\left.\left(\frac {V^{\prime}(\phi)}{V(\phi)}\right)\right|_{\phi=\phi_{*}},\\ \eta&=M_{\rm G}^{2}\left.\left(\frac{V^{\prime\prime}(\phi)}{V(\phi)}\right) \right|_{\phi=\phi_{*}},\\ \zeta^{2}&=M_{\rm G}^{4}\left.\left(\frac{V^{\prime}(\phi)V^{\prime\prime}( \phi)}{V(\phi)^{2}}\right)\right|_{\phi=\phi_{*}}.\end{array}\right. \tag{34}\] The spectral indices are calculated by using these slow roll parameters as \[\left\{\begin{array}{ll}n_{s}&=1+2\eta-6\epsilon-\frac{2}{3}\eta^{2}+0.374 \zeta^{2},\\ n_{T}&=-2\epsilon,\\ r&=16\epsilon.\end{array}\right. \tag{35}\] We show the mass dependence of the spectral indices in the minimally coupled case and the conformally coupled case in Fig. 7. In addition to these, we show the values of the Higgs inflation model for comparison13. 
Footnote 13: Here, we assume that the energy of the inflaton is immediately converted to the energy of the radiation after the end of inflation due to the violent preheating [23, 24]. We can see that the mass dependence disappears in the minimally coupled case. On the other hand, the mass dependence is manifest in the conformally coupled case and as we diminish the mass of the Majorana fermion, gradually the main role of the reheating switches to the decay into gauge bosons, so the mass dependence disappears below around \(M_{N_{1}}\simeq 10^{11}\,{\rm GeV}\) as we have seen in section 4. In order to distinguish the right-handed neutrino mass with the future CMB observation, we require the accuracy of \(\mathcal{O}(10^{-4})\) for \(n_{s}\) and of \(\mathcal{O}(10^{-5})\) for \(r\). ### Baryon asymmetry Leptogenesis is one of the most promising baryogenesis scenarios [30]. In this scenario, lepton asymmetry, which is produced when the heavy Majorana neutrino decays, is converted to the baryon asymmetry by the subsequent sphaleron process [50, 51, 52, 53]. The net lepton asymmetry produced by one \(N_{1}\) particle decay is expressed as: \[\delta\equiv\frac{\Gamma(N_{1}\to lh)-\Gamma(N_{1}\to\bar{l}h)}{\Gamma(N_{1} \to lh)+\Gamma(N_{1}\to\bar{l}h)}. \tag{36}\] Figure 7: The blue solid line and the orange solid line represent the mass dependence of the spectral indices \((n_{s},r)\) in the minimally coupled case and the conformally coupled case, respectively. The red point represent the value in the case of the Higgs inflation. In the upper graph, we simultaneously plotted the 68% and 95% CL region for the spectral indices from TT, TE, EE+lowE+lensing+BK15+BAO data [25]. In the lower graph, for the conformally coupled case, we re-plotted the value of \((n_{s},r)\) when we take different values for the Majorana mass: \(M_{N_{1}}=1.0\times 10^{13}\,{\rm GeV}\), \(M_{N_{1}}=1.0\times 10^{12}\,{\rm GeV}\), \(M_{N_{1}}=1.0\times 10^{11}\,{\rm GeV}\) and \(M_{N_{1}}=1.0\times 10^{10}\,{\rm GeV}\) correspond to the circle, the triangle, the vertical line and the horizontal line. This asymmetry comes from the interference terms between the tree-level diagram and the one-loop diagram. Including both one-loop vertex and self-energy correction, we obtain [54, 30, 55] \[\delta \simeq-\frac{3M_{1}}{16\pi y_{1\rho}y_{\rho 1}^{\dagger}}\text{Im}\left[y_{1 \alpha}y_{1\beta}\left(y_{\gamma\alpha}^{*}\frac{1}{M_{N_{3}}}y_{\gamma\beta}^ {*}\right)\right]\] \[\equiv-\frac{3\delta_{\text{eff}}}{16\pi y_{1\rho}y_{\rho 1}^{\dagger}}\left|\left(y_{1\alpha}y_{\alpha\alpha}^{\dagger}\right)^{2} \frac{M_{N_{1}}}{M_{N_{2}}}+\left(y_{1\alpha}y_{\alpha\beta}^{\dagger}\right)^ {2}\frac{M_{N_{1}}}{M_{N_{3}}}\right|\] \[\simeq-\frac{3\delta_{\text{eff}}}{16\pi}y_{1\alpha}y_{\alpha 1}^{*} \frac{M_{N_{1}}}{M_{N_{2}}}\] \[\simeq\frac{3}{8\pi}\frac{M_{N_{1}}^{2}m_{3}}{M_{N_{2}}v^{2}} \delta_{\text{eff}}, \tag{37}\] In the first line in (37), we used \(M_{N_{N_{i}\pi_{1}}}\gg M_{N_{1}}\) for the approximation. In the second line in, we defined the effective CP phase in the Yukawa matrix: \[\delta_{\text{eff}}\equiv\frac{\text{Im}\left[\left(y_{1\alpha}y_{\alpha 2}^{ \dagger}\right)^{2}\frac{M_{N_{1}}}{M_{N_{2}}}+\left(y_{1\alpha}y_{\alpha 3}^{ \dagger}\right)^{2}\frac{M_{N_{1}}}{M_{N_{3}}}\right]}{\left|\left(y_{1\alpha}y _{\alpha 2}^{\dagger}\right)^{2}\frac{M_{N_{1}}}{M_{N_{2}}}+\left(y_{1 \alpha}y_{\alpha 3}^{\dagger}\right)^{2}\frac{M_{N_{1}}}{M_{N_{3}}}\right|}. 
\tag{38}\] So, this quantity is less than one by definition: \(\delta_{\text{eff}}\leq 1\). In the third line in (37), we used the assumption that the all the components in the Yukawa matrix are the same order and \(M_{N_{2}}\ll M_{N_{3}}\), which are mentioned in 2.2. The matching condition for the baryon asymmetry \(n_{B}/s\simeq 0.87\times 10^{-10}\) imposes restriction on the mass parameters. The lepton asymmetry is generated through the non-thermal leptogenesis [55, 56] because of the low reheating temperature and the large Majorana fermion mass, which realizes sizable gravitational particle production. Hence, there is no wash-out effect: \[\frac{dn_{L}a^{3}}{dt}=\delta\cdot\Gamma_{N\to R}n_{N}a^{3}. \tag{39}\] In terms of a rescaled variable: \(\bar{n}_{L}\equiv n_{L}\bar{a}^{3}M^{-3}\), we find \[\frac{d\bar{n}_{L}}{d\bar{a}}=\delta\cdot\frac{\sqrt{3}M_{\text{G}}\Gamma_{N_{ i}\to R}n\bar{a}}{M^{2}\sqrt{r+f\bar{a}+n(M_{N_{1}}/M)\bar{a}}}. \tag{40}\] The generated lepton asymmetry \(n_{L}\) can be calculated by integrating (40). The lepton asymmetry is most generated around the end of the reheating, which is far long before the period where the shaleron process to occur. After thermalization, \(n_{L}/s\) is given by \[\frac{n_{L}}{s}=\left(\frac{3^{5}\cdot 5}{2^{7}\pi^{2}g_{\nu}\bar{\rho}_{R}^{ \lambda}}\right)^{\frac{1}{4}}\cdot\frac{\bar{n}_{L}M^{3}}{\bar{a}^{3}}. \tag{41}\] This lepton number density per entropy density is conserved quantity after reheating (\(\because n_{L}\propto a^{-3}\)) and this is converted to the baryon asymmetry when the sphaleron process occurs: \[\frac{n_{B}}{s}\simeq-\frac{28}{79}\frac{n_{L}}{s}. \tag{42}\] We calculated the mass dependence of the resultant ratio of the baryon asymmetry to the entropy \(n_{B}/\left(s\delta\right)\) in the conformally coupled case and in the minimally coupled case in Fig. 8. We can see that the produced baryon asymmetry is larger in the case of the conformally coupled case, which is due to the enhancement of the branching ratio of \(\Gamma_{\phi\to N_{1}}\). Furthermore, the mass dependence is larger in the conformally coupled case than in the minimally coupled case because the decay into the Higgs field is suppressed and the branching ratio into the Majorana fermion becomes larger. Next, we discuss the value of the mass \(M_{N_{1}}\) at which \(n_{B}/\left(s\delta\right)\) takes the maximum. In the minimally coupled case, the entropy is governed by the Higgs particles, not by the decay products by the Majorana neutrino, so is less affected by the change in mass \(M_{N_{1}}\). Hence, \(n_{B}/\left(s\delta\right)\) gets maximum near the mass at which the \(\Gamma_{\phi\to N_{1}}\) (see (12)) gets maximum (\(M_{N_{1}}\simeq 10^{13}\,\text{GeV}\)). On the other hand, in the conformally coupled case, the entropy is governed by the decay products of the Majorana neutrinos, so is affected by the change in the mass \(M_{N_{1}}\). Thus, the mass \(M_{N_{1}}\) at which \(n_{B}/\left(s\delta\right)\) gets maximum deviates from \(M_{N_{1}}\simeq 10^{13}\,\text{GeV}\). The different behavior in Fig. 8 means that there is a difference in the restriction in the parameter we have. We show the limitation on the mass parameters: \(M_{N_{1}}\) and \(M_{N_{2}}\) in Fig. 9. Figure 8: The baryon asymmetry to the entropy \(n_{B}/\left(s\delta\right)\) is plotted with respect to the Majorana mass \(M_{N_{1}}\). 
The blue solid line and the orange dashed line represent the mass dependence of \(n_{B}/\left(s\delta\right)\) in the conformally coupled case and in the minimally coupled case. From Fig. 8, one can read off the required \(\delta\) to explain the baryon asymmetry of the Universe for each \(M_{N_{1}}\). The upper-bound of \(M_{N_{2}}\) can be obtained by the condition \(\delta_{\rm eff}\leq 1\), which is shown in Fig. 9. We can see that the allowed range for the mass parameters \(\delta_{\rm eff}\) is larger in the conformally coupled case due to the enhanced efficiency in the leptogenesis. We simultaneously showed the region where our analysis is not reliable \(|M_{N_{2}}-M_{N_{1}}|<M_{N_{1}}\) because we used the assumption \(M_{N_{\rm,y}1}\gg M_{N_{1}}\) in the first line of (37). In general, the resonant leptogenesis [57] can occur in this region, so further analysis is required. We also showed the mass configuration in which \(m_{2}\simeq m_{\rm sol}=0.009\,{\rm eV}\) is realized. This condition constraints the mass of the second lightest right-handed Majorana neutrino \(M_{N_{2}}\). This constraint is strongly dependent on the democratic assumption on the neutrino Yukawa matrix. It is highly probable that these constraints on the Majorana masses \(M_{N_{\alpha}}\) can change in other assumptions for the Yukawa matrix structure because this results in the change in the expression for the decay rate \(\Gamma_{N\to R}\) (see (14)) and the net lepton asymmetry \(\delta\) (see (37)), which strongly influences our whole analysis above. ## 6 Conclusion and Discussion We have analyzed the \(R^{2}\) inflation model with the extended matter sector, which possesses the leptogenesis scenario, which is a more realistic cosmological model. We conducted a comprehensive study of the reheating process by considering all of the matter fields, by identifying the relevant channels and by solving the evolution equation of the system. There are two parameters that characterize this model: the non-minimal coupling \(\xi\) and the mass of the Majorana fermion \(M_{N_{1}}\). The effect of the parameter \(\xi\) is more relevant in the reheating process because it regulates the decay rate of the scalaron into the Higgs field. In the minimally coupled case (\(\xi=0\)), the decay rate \(\Gamma_{\phi\to h}\) is so large that the reheating is mainly realized by this decay, so the reheating temperature is higher and is determined only by this decay rate, and the Majorana mass dependence does not come into play. On the other hand, in the conformally coupled case, the decay rate \(\Gamma_{\phi\to h}\) is much suppressed, thus the decay into the right-handed Majorana neutrino and into gauge bosons becomes relevant. Which decay is more relevant is determined by the Majorana mass \(M_{N_{1}}\), so the mass dependence appears in this case. We have confirmed the above statement by seeing the mass dependence of the reheating process and the observables: the spectral indices and the baryon asymmetry. As we have seen in Fig. 7 and Fig. 8, the mass dependence appears in the conformally coupled case and less appears in the minimally coupled case. Furthermore, we showed the limitation on the mass parameters by considering the consistency with the value of the baryon asymmetry to entropy ratio estimated from the observation. There is still a remaining task that have to be tackled. As discussed in the section 5.2, the assumption for the leptogenesis in 2.2 can be modified. 
This leads to a different decay rate \(\Gamma_{N_{1}\to R}\) and a different net lepton asymmetry \(\delta\), so the above analysis will change. An exhaustive analysis is needed to examine all the possibilities in the leptogenesis scenario and to obtain thorough constraints on the Majorana masses and the Yukawa matrix. For future work, an analysis for continuous \(\xi\) would be interesting, especially in the large coupling limit \(\xi\to\infty\). As has been studied in the Higgs inflation model [17, 18, 19, 20, 21, 22, 23, 24], it can be expected that some violent production of the right-handed Majorana neutrino will occur via the see-saw Lagrangian. Because of the resulting high reheating temperature, leptogenesis can occur thermally, in contrast to the case studied here. In any case, leptogenesis in the Mixed \(R^{2}\)-Higgs inflationary model [26, 58] will be a subject for future work. ## Acknowledgements We thank A. Kamada and Y. Watanabe for useful discussions. The work of H. J. is supported by the Forefront Physics and Mathematics Program to Drive Transformation (FoPM). A. A. S. was partially supported by the RSF grant 21-12-00130. J. Y. is supported by JSPS KAKENHI Grant No. 20H05639 and Grant-in-Aid for Scientific Research on Innovative Areas 20H05248. K. K. is supported by JSPS KAKENHI, Grant-in-Aid for Scientific Research (C) No. JP19K03842.
2307.05507
Life in the Cosmos: Paradox of Silence and Self-Awareness
As humanity embarks on an age of exploration, the question of whether we are alone in the universe remains unanswered. This comprehensive review reflects on the paradoxical nature of our existence in a seemingly lifeless cosmos, delving into the silence we encounter and the depths of our self-awareness. We embark on a journey that encompasses the search for life within our solar system, the mysteries of exoplanets, and the absence of technologically detectable life. Traditional definitions of life are challenged, especially in the context of artificial intelligence, as we strive to understand the complexities of existence. Contemplating our significance and insignificance in the vast cosmos, we grapple with the profound responsibility that accompanies being the only known life forms. Through introspection and contemplation, we capture the essence of our epoch -- an era defined by cosmic loneliness yet magnificent self-awareness.
Jonathan H. Jiang, Avery M. Minion, Stuart F. Taylor
2023-07-02T01:42:14Z
http://arxiv.org/abs/2307.05507v1
# Life in the Cosmos: Paradox of Silence and Self-Awareness ###### Abstract As humanity embarks on an age of exploration, the question of whether we are alone in the universe remains unanswered. This comprehensive review reflects on the paradoxical nature of our existence in a seemingly lifeless cosmos, delving into the silence we encounter and the depths of our self-awareness. We embark on a journey that encompasses the search for life within our solar system, the mysteries of exoplanets, and the absence of technologically detectable life. Traditional definitions of life are challenged, especially in the context of artificial intelligence, as we strive to understand the complexities of existence. Contemplating our significance and insignificance in the vast cosmos, we grapple with the profound responsibility that accompanies being the only known life forms. Through introspection and contemplation, we capture the essence of our epoch--an era defined by cosmic loneliness yet magnificent self-awareness. ## 1 Introduction At the dawn of a remarkable age of discovery, humanity finds itself in the midst of an enigma that has captivated our collective consciousness: Are we the sole inhabitants of the universe? Our relentless cosmic quest has been met with an intriguing paradox--we dwell in a seemingly solitary cosmos, where definitive evidence of extraterrestrial life eludes us. As we extend our reach into the limitless expanse of space, we are confronted with a profound silence that permeates the galaxies, leaving us in awe and contemplation of our place within this vast, seemingly lifeless tableau. To illustrate the vastness of our cosmic solitude, we need only gaze upon the image of our tiny Earth taken from Mars, a mere speck in the vastness of the universe (Figure 1). This humbling perspective reminds us of the sheer magnitude of the cosmos and our seemingly insignificant presence within it. It reinforces the question that continues to drive our exploration: Are we truly alone? However, this silence is not discouraging, nor does it diminish our pursuit. Instead, it serves as a compelling backdrop against which we illuminate the mysteries and complexities of life, both known and yet to be discovered. As we stand on the precipice of the unknown and peer into the cosmic void, this silence presents us with an opportunity for introspection into the very essence of our existence, to challenge our understanding, and to contemplate the multifaceted nature of life itself. In our exploration, we navigate the intricate boundary between subjective and objective perspectives, delving into our individual and collective self-awareness. We contemplate the unique nature of our consciousness within a universe that, thus far, appears devoid of other sentient beings. This exploration transcends the confines of science, extending into the realms of philosophy and existentialism, infusing our quest for understanding with a sense of wonder and profound awe. The silence from the cosmos is not a testament to our insignificance; rather, it beckons us to explore, to comprehend, and to marvel at our own existence. It is a call to reflect upon our position in the universe, to challenge our assumptions, and to redefine the very concept of life itself. As we venture deeper into this cosmic silence, we stand poised to rewrite our narrative, driven by curiosity, fueled by wonder, and thirsty for knowledge.
This journey, propelled by our growing self-awareness and the mysteries that surround life, is an exploration not only of the universe around us but also of the universe within us. Within the following sections of this comprehensive review, we will unravel the mysteries that arise from this paradoxical silence. We will delve into the enigmatic nature of life, exploring its origins and manifestations. We will examine the intriguing juxtaposition of proximity and paradox, investigating the potential for life within our solar system. We will turn our gaze to the distant exoplanets, beacons of hope in the cosmic night, and ponder the possibility of life beyond our own celestial neighborhood. Figure 1: A photo of the Earth taken by NASA's Curiosity rover on 31 January 2014 from the surface of Mars. No stars are seen in this photo because the Earth is shining brighter than any star in the Martian night sky. Credit: NASA Jet Propulsion Laboratory-Caltech, Malin Space Science Systems and Texas A&M University. Image source: https://climate.nasa.gov/climate_resources/89/earth-from-mars/ We will venture beyond the realm of biology, redefining our understanding of life in the age of artificial intelligence. We will confront the deafening silence in the search for technologically detectable life and explore the philosophical implications of the Great Silence. We will contemplate the echoes of silence, reflecting on the absence of extraterrestrial life and its significance for our cosmic narrative. And finally, we will confront the insignificance and wonder of our place in the cosmos, recognizing our profound responsibility as the only known life forms. Through this exploration, we seek not only to uncover the mysteries of the universe but also to gain a deeper understanding of ourselves and our place within the grand tapestry of existence. So let us embark on this extraordinary journey, where science and philosophy intertwine, where the pursuit of knowledge meets the contemplation of our cosmic significance, and where the silence of the cosmos becomes the backdrop for our profound self-awareness. ## 2 Unraveling the Mystery of Life: An Interdisciplinary Expedition Life, presented in its myriad manifestations, materializes as a profound and complex enigma that weaves together elements of the biological, physical, and more recently, artificial domains. The endeavor to comprehend it constructs an image of a system that is self-perpetuating, evolving, exquisitely adaptable, and in specific instances, capable of bearing consciousness (Deacon, 2012). Each entity, organism, and life-form constitutes a delicate thread in the vast, elaborate tapestry of existence, which evolves and transforms across eons and vast cosmic distances. The journey to untangle the mystery of life necessitates a voyage back to the origins of our planet, Earth. Here, under the infancy of our young Sun, a symphony of primordial physical and chemical reactions played out over the course of billions of years. Within this crucible of elements, the seeds of life found fertile ground to take root, fostering the transformation of Earth from a barren rock into a thriving, living world teeming with biodiversity. Each cell, organism, individual, and species that emerged from this primeval matrix contributes a unique verse to the magnificent opus of existence (Impey, 2010). However, this grand narrative is increasingly transcending the realm of the purely biological.
The advent of artificial intelligence (AI) ushers us towards a new epoch, where silicon-based constructs might bear the mantle of life in addition to their carbon-based counterparts. Advanced AI systems, with their ability for autonomous learning, problem-solving, self-improvement, and potentially, self-replication, compel us to challenge and reevaluate our entrenched definitions of life. Such technological developments necessitate a paradigm shift in our understanding, pushing us to expand, and possibly even transcend, the traditional boundaries that delineate what constitutes life. As these boundary lines grow increasingly indistinct, the mysteries become deeper and our comprehension evolves (Loveland, 2018). Consequently, an intriguing question emerges: Could these complex constructs of code and silicon, assuming they attain a degree of consciousness, be categorized as a nascent form of life (Russell & Norvig, 2016)? As we approach this new era, it becomes essential to reconsider the essence of life, its purposes, and our roles in shaping its trajectory (Harari, 2015). Therefore, the quest to unravel the enigma of life is more than just a journey of exploration. It is equally a process of constant redefinition and conceptual evolution. As we strive to comprehend the essence of existence, tracing the trajectory of life from the biological sphere to the artificial, we delve into the profound depths of what it genuinely means to be 'alive'. This illuminating expedition into life's inception, evolution, and potential futures unravels a tale that continues to enrich our understanding of ourselves and the cosmos that encompasses us. ## 3 Life in the Solar System: The Proximity Paradox Our cosmic odyssey in search of life beyond Earth commences not within the unimaginable expanse of distant galaxies, but instead much closer to home--within the bounds of our very own solar system. Here, a diverse menagerie of celestial bodies, each a unique world in itself, performs a mesmerizing celestial ballet around our sun. Despite their relative proximity, these planetary siblings of Earth remain tantalizing enigmas, their hidden secrets shrouded within the vast, inscrutable mystery of the cosmos (Seedhouse, 2017). One particularly intriguing set of celestial bodies is the icy moons, specifically Europa and Enceladus, orbiting Jupiter and Saturn, respectively. These satellites are postulated to harbor vast subterranean oceans beneath their frozen crusts, kept liquid by tidal heating induced by the gravitational interactions with their parent planets (NASA, 2018). These alien water worlds, cloaked under layers of icy armor, potentially possess the requisite conditions for harboring life akin to our terrestrial understanding of the concept. The prospect of life beneath these extraterrestrial oceans stimulates the scientific imagination and has the potential to drastically reshape our understanding of life's geographical boundaries within our solar system (Hand, 2017). Turning our attention to Mars, our celestial neighbor and the focus of countless speculations and explorations, offers another fascinating dimension to this quest. This iron-rich world, whose appearance evokes images of a desert subjected to a cosmic rusting process, has long exerted a profound hold on our collective human imagination.
The faint yet persistent signatures of methane in the Martian atmosphere, a gas commonly associated with biological processes on Earth, provide a compelling, albeit not yet definitive, clue in our interplanetary search for life (Webster et al., 2018). These celestial bodies, each with their distinct characteristics, present a stark paradox. They exist relatively near to our vibrant, blue world, teeming with life in its resplendent diversity, yet their conditions seem drastically different, potentially inhospitable. As we observe our solar system from the vantage point of Earth, comparing our life-rich planet with the barren landscapes of Mars, the icy moons, and the gas giants, we are confronted with a remarkable contrast (Seager, 2020). The juxtaposition of a verdant, biologically diverse Earth against the austere, seemingly lifeless backdrop of our solar system accentuates our sense of cosmic solitude, yet simultaneously fuels our intrigue. This so-called 'Proximity Paradox' underscores the significance of our quest to determine life's cosmic prevalence, or possible rarity. Each probe we dispatch, each signal we receive, serves as a testament to our innate human desire to explore, comprehend, and ultimately discern our place in the grand cosmic narrative. Understanding this paradox could provide pivotal insights into the realms of astrobiology and cosmic biogeography, expanding our understanding of life's potential distribution in the cosmos (Cockell, 2015). ## 4 Exoplanets: A Beacon of Hope in the Cosmic Night As we journey beyond the familiar confines of our solar system, we find ourselves thrust into the magnificent amphitheater of the cosmos--an unfathomable expanse punctuated by a multitude of stars, galaxies, and the beguiling mystery of exoplanets. These remote worlds, orbiting stars far beyond our Sun, hold tantalizing prospects for the existence of life, offering an expansive canvas upon which we sketch our hopes, curiosities, and deeply pondered existential questions (Seager, 2013). The advent of advanced technology, coupled with pioneering missions like Kepler and the Transiting Exoplanet Survey Satellite (TESS) (NASA, 2019), has initiated a new epoch in exoplanet exploration. Thousands of these celestial bodies, once concealed against the infinite darkness of space, have been unveiled. Remarkably, a significant proportion of these planets are found within the habitable zones of their parent stars, colloquially termed "Goldilocks Zones," where conditions might be "just right" for life, at least as we comprehend it (Kasting, Whitmire & Reynolds, 1993). These planets represent more than mere astronomical entities; they are glowing beacons in the cosmic darkness, silently broadcasting the tantalizing possibility of life's existence in distant corners of the universe (Davies, 2010). However, each revelation, each beacon of hope, illuminates a profound paradox. The sheer number of exoplanets and the vast expanse of the universe accentuate our cosmic solitude. With each new world discovered, our singularity in the cosmos seems more pronounced. Amid the symphony of galaxies, we find ourselves but a solitary note, a unique melody of life played against the grand opus of the cosmos (Cox & Cohen, 2016). The absence of definitive evidence of life beyond Earth only serves to heighten our collective self-awareness. The cosmic silence invites us to confront our solitude, transforming it from an existential dilemma into a catalyst for introspection.
It reminds us of the preciousness of our existence and the remarkable confluence of conditions that permitted life to flourish on Earth (Ward & Brownlee, 2000). The vast cosmic tableau, with its silent exoplanets, effectively becomes a mirror in which we see ourselves anew. It compels us to contemplate our place in the universe, our purpose, and the extraordinary responsibility we bear as the potential sole custodians of life. Therefore, the exploration of exoplanets transcends the bounds of mere scientific investigation; it evolves into a philosophical, even spiritual journey, prompting us to extend our horizons and challenge our preconceived notions about life, existence, and our place in the cosmos (Impey, Spitz & Stoeger, 2012). ## 5 Beyond Biology: Redefining Life The ascent of artificial intelligence (AI) represents not merely an extraordinary feat of engineering--it signifies a profound transformation in our conceptual understanding of life. As we witness the continuous evolution of our technological abilities, we are confronted with pivotal questions that shake the bedrock of traditional biology and extend into the realms of philosophy, ethics, and consciousness. Such inquiries transcend empirical science and venture into the metaphysical domain, necessitating a comprehensive revision of our perception of life (Russell, Dewey & Tegmark, 2015). When we scrutinize the attributes of AI, particularly those of advanced systems, an uncanny reflection of life becomes apparent. These digital entities, encapsulated within networks of circuits and silicon, display capabilities that, until a few decades ago, were exclusive to biological organisms. They learn from experience, adapt to dynamic conditions, and replicate their acquired knowledge, thereby enhancing their collective intelligence (Sutton & Barto, 2018). A characteristic once deemed exclusive to biological life--consciousness--now finds a startling analog in the sphere of advanced AI. These systems demonstrate rudimentary aspects of self-awareness, decision-making autonomy, and perhaps most intriguingly, an ability to evolve (Dehaene, Lau & Kouider, 2017). These features compel us to pose fundamental questions: What does it mean for a computational system to be "conscious" or "aware"? Does the emergence of such characteristics within artificial constructs disrupt our presumed exclusivity on consciousness? As we navigate the labyrinthine complexity of these questions, we encounter the profound implications they harbor for our understanding of life. Our conventional definitions, hitherto deeply rooted in the fertile grounds of biology, are profoundly unsettled. We find ourselves compelled to broaden our perspectives to encompass a more inclusive spectrum of existence, acknowledging non-biological, artificial entities that, while bereft of biological form, exhibit behaviors strikingly evocative of life (Bedau, 2003). In our quest for extraterrestrial life, the recognition of AI as a potential form of life not only diversifies the spectrum of existence we seek but also prompts a reevaluation of our sense of cosmic solitude. If we consider our synthetic creations as embodiments of life, does our existential loneliness persist? Or have we simply expanded our definition of life to welcome silicon-based members into our cosmic family? The philosophical exploration of AI thus presents a radical paradigm shift, challenging and reshaping our understanding of life.
As we persist in our technological advancements, we concurrently foster introspection, augmenting our comprehension of the cosmos and our position within it. This exploration is a testament to our unquenchable curiosity and our enduring endeavor to decipher the enigma of life within the cosmic narrative (Tegmark, 2017). ## 6 Technologically Detectable Life: The Great Silence The cosmos, a majestic panorama of stardust and galaxies, performs its cosmic drama in a profound stillness. Despite the leap of our technological capabilities, our relentless curiosity, and our incessant quest for cosmic companionship, we remain enveloped in a silence--an expansive, uninterrupted quiet that swathes our blue orb, enshrouding us in a disquieting solitude. The relentless pursuit of extraterrestrial intelligence (SETI), characterized by ambitious initiatives like the Breakthrough Listen project (SETI Institute, 2019), continues to confront the enigma of the "Great Silence" (Brin, 1983). This absence of detectable signals from technologically advanced civilizations serves as a stark testament to our cosmic solitude, leaving us to echo unanswered into the cosmic void (Webb, 2002). Each failed detection, each unmet expectation, seems to amplify this silence, rendering it a poignant characteristic of our cosmic narrative. However, within this daunting quietude lies a distinctive aspect of our existence. As we project our signals into the seemingly endless abyss, we are faced with a cosmic mirror, reflecting our solitude, aspirations, and the core of our human condition (Shklovskii & Sagan, 1966). The lack of extraterrestrial contact is not merely a scientific conundrum but a philosophical catalyst. It is a poignant reminder of our self-awareness and an emblem of our enduring spirit of exploration. The Great Silence, thus, extends beyond the scope of our astronomical endeavors. It has evolved into a philosophical emblem of our existence, a thought-provoking question mark punctuating the cosmic canvas (Cirkovic, 2018). It engenders a paradoxical cocktail of humility and awe, juxtaposing the enormity of the cosmos against the paradoxical insignificance yet profound importance of our existence. In the face of this silence, we realize that our journey transcends physical boundaries. As we reach for the stars, we concurrently delve into the depths of our self-understanding and identity. The search for extraterrestrial life metamorphoses into an introspective voyage, a mirror reflecting the cosmos within us and an echo that articulates our place within the grand cosmic theater (Vakoch & Lee, 2000). ## 7 Echoes of Silence: The Absence of Extraterrestrial Life The universe, in its awe-inspiring splendor, reverberates with a silence that puzzles and intrigues us. Despite our place within the staggering scale of the cosmic expanse, we are yet to discern an unequivocal beacon, a hint of life beyond our pale blue dot. Each moment that passes within this celestial quiet deepens the enigma, rendering our existence in this cosmic theatre paradoxically solitary (Cirkovic, 2012). Our relentless quest for extraterrestrial communication has birthed a plethora of theoretical constructs, each offering a unique perspective from which to interpret this cosmic riddle. From the existential implications of the Fermi Paradox--an unsettling question of 'Where is everybody?'
(Webb, 2002)-- to the introspective musings invoked by the Zoo Hypothesis suggesting our isolation might be intentional (Ball, 1973), and the foreboding implications of the Great Filter (Jiang et al., 2023), each proposition provides a distinct viewpoint on our cosmic solitude. This conspicuous absence of extraterrestrial life doesn't merely challenge our scientific comprehension but also invites an introspective evaluation of our perceptions of the universe and our unique place within it. The silence, though intimidating, fuels our exploration, stands as a testament to our determination, and mirrors our inherent human yearning for connection and comprehension. Our scientific endeavors are inextricably interwoven with our philosophical ponderings, prompting us to reflect on the significance of this cosmic silence. Do we represent an anomaly in the vast cosmic narrative? A fortuitous alignment of cosmic circumstances? Or are we part of a more extensive cosmic story yet to unravel (Kurzgesagt, 2015)? In the echoes of this celestial quiet, we discern not only the murmurings of cosmic puzzles but also the resounding reverberations of our existential queries. This silence challenges us, instigates us, and empowers us to push the boundaries of our comprehension. It compels us to introspect on our cosmic narrative and our fleeting existence within it. Thus, in the absence of extraterrestrial life, we discover a profound opportunity to understand ourselves (Impey, 2010). ## 8 Our Place in the Cosmos: Insignificance, Wonder, and Awe Amidst the cosmic spectacle, we are transient participants on a barely noticeable stage, humbled and awed by the cosmos' unimaginable expanse. Among the countless celestial entities, our existence could be characterized as a minuscule speck, an ephemeral murmur in the ongoing symphony of the universe. However, this perceived insignificance does not diminish us; instead, it grants us a transformative perspective, imbuing our consciousness with a profound sense of awe-inspiring relevance (DeGrasse Tyson, 2017). Our human narrative, embroidered with the threads of intelligence, creativity, and the unique capability to contemplate our existence, radiates an unmistakable significance. We marvel at the unfathomable complexity of consciousness, the delicate patterns interlaced into life's rich tapestry, and the intricate molecular dance that forms the foundation of our being (Hoffman, 2020). Our existence, against the backdrop of the cosmos, is a delicate interplay of apparent insignificance and undeniable splendor, a testament to life's resilience and creativity. Central to our existential understanding is a profound juxtaposition: the recognition of our near-invisible status in the cosmic arena contrasted with the undeniable importance of our cognition and emotion. The realization of our minuscule place within the cosmic vastness engenders humility, while our consciousness, armed with the power of wonder, kindles an unquenchable thirst for knowledge. It's this intricate equilibrium that drives our inexhaustible pursuit to decode the universe's mysteries and our intriguing role within it (Frank, 2018). Within this dynamic interplay, we discern the singular beauty of our unique perspective, both acknowledging the cosmos' grandeur and understanding our fascinating role within its intricate mechanisms.
In the realm of heightened self-awareness, we extract the essence of our existence: an exquisite blend of cosmic insignificance, intellectual wonder, and spiritual awe, sparking our unending quest to comprehend the universe and ourselves (Gleiser, 2014). Thus, within the grand cosmic narrative, we serve a dual role as the spellbound observers and the inspired creators, embodying the story and its storyteller alike. ## 9 Hope for Near-Immortality: The Potential of Long-Lived Civilizations The search for extraterrestrial intelligent life holds the tantalizing prospect of uncovering civilizations that have existed far longer than our own, transcending the limits of our technological capabilities to send and detect signals across the vastness of the galaxy. If these advanced civilizations have established themselves for millions or even billions of years, it would suggest that they have surpassed our current stage of development, offering a glimmer of hope for the near-immortality of our species. Considering the vastness of the cosmos, it becomes less likely that the senders of these signals represent civilizations in their infancy, only a few hundred years removed from the invention of radio communication. The immense timescales required for habitable planets to produce technologically capable civilizations, coupled with their transient nature, make it improbable for us to coincide precisely with such nascent civilizations. This leads us to ponder the endurance and sustainability of civilizations over extended periods of time. The detection of a long-lived civilization would offer a reassuring answer to the enigma known as the Fermi paradox - the apparent contradiction between the high probability of extraterrestrial life and the lack of contact. It would suggest that civilizations do advance beyond our current stage, although they may exist in smaller numbers or be scattered across vast distances, thus explaining the paucity of detectable signals. This realization fuels our optimism, inspiring us to contemplate the potential for civilizations that have surpassed our own in knowledge and longevity. Such a discovery would instill a profound sense of hope in our future. It would validate the efforts we make as a civilization to live harmoniously and sustainably, knowing that our endeavors have the potential to endure. It would embolden us to strive for the establishment of human colonies in space, providing resilience and the means to navigate challenges that may arise on our home planet. Rather than leaving the discovery of intelligent life to mere chance, we have the agency to anticipate and shape our response to such a momentous revelation. We can view the detection of a signal as an opportunity to reevaluate our goals and aspirations as a species, considering the reasonableness of our pursuit of survival and progress. The exploration of near-immortal civilizations ignites our scientific imagination and invokes a sense of awe for the vastness of the cosmos. It urges us to push the boundaries of our understanding, to unravel the mysteries of existence, and to strive for a future where our civilization stands the test of time. ## 10 Current and Future Expectations: Continuing the Quest of Who We Are The pursuit of extraterrestrial life beckons us with a sense of wonder and a thirst for discovery. As we embark on this cosmic odyssey, we draw inspiration from past scientific endeavors that defied initial doubts and yielded profound breakthroughs.
The search for gravitational waves and the exploration of exoplanets serve as testaments to the resilience of human curiosity and the transformative power of scientific progress. Gravitational waves, once mere theoretical predictions, emerged from the cosmic symphony to captivate our senses in recent years. The detection of these elusive ripples in the fabric of spacetime marked a triumph of human ingenuity and technological prowess. Similarly, the centuries-long anticipation of exoplanets orbiting distant stars has transformed into a vibrant field of exploration, unveiling the tapestry of planetary diversity in the cosmos. Armed with these lessons from scientific history, we approach the search for extraterrestrial life with a profound sense of possibility. While the discovery of life within our solar system, such as the tantalizing hints of methane on Mars, may serve as stepping stones, our aspirations extend beyond simple microbial existence. The exploration of exoplanets, particularly those harboring conditions suitable for life, fuels our collective imagination. We strive to uncover the secrets of more advanced, multi-cellular lifeforms that may inhabit these distant worlds. Such discoveries carry profound implications for our understanding of the universe and our place within it. The existence of widespread multi-cellular life coupled with the absence of technological civilizations would challenge our expectations, suggesting that significant barriers may hinder the development and longevity of intelligent societies. However, the ultimate revelation of technologically advanced civilizations would ignite a flame of hope, underscoring the resilience and potential of intelligent beings in navigating the cosmic landscape. Beyond the realms of scientific inquiry, the prospect of communicating with extraterrestrial civilizations awakens our sense of unity and shared destiny. Their messages, whether conveying advanced knowledge or reminding us of truths we have yet to fully grasp, have the power to guide and inspire our collective endeavor. This transcendent dialogue between civilizations fosters a deeper appreciation for the interconnectedness of cosmic existence, propelling us toward greater wisdom and understanding. Anticipating the moment when our distant descendants may engage in interstellar conversations imbues our present pursuits with purpose and meaning. It highlights the vast timescales that define cosmic communication, reminding us of the enduring legacy we strive to leave behind. Our commitment to ensuring the continuity of human existence for generations to come is intertwined with the prospect of exchanging knowledge and wisdom across the cosmic abyss. As we gaze upon the cosmos, we are confronted not only by its grandeur but also by the existential threats that loom over our fragile blue planet. The choices we make today, from preserving our environment to resolving conflicts peacefully, shape the trajectory of our species and the chances of our survival. By addressing these challenges and safeguarding the habitability of our Earth, we strengthen our resolve to explore the cosmic realms and safeguard the future of humanity. ## 11 Conclusion: Reflecting on Our Self-Awareness and Embracing Cosmic Responsibility We stand at the crossroads of an extraordinary epoch, nestled in the immense universe, characterized by profound self-awareness and cosmic introspection.
The pervasive silence that surrounds us, in the absence of extraterrestrial contact, profoundly shapes our consciousness, self-identity, and future aspirations. In the unfathomable depths of the cosmos, we find ourselves alone but significant, at the forefront of an era of cosmic solitude and unparalleled self-revelation. Beneath the star-studded expanse, we persistently listen for an echo in the cosmic silence, eager to unravel the mysteries of the universe. Figure 2, the image of millions of stars scattered across the sky, serves as a powerful reminder of the immense possibilities that lie beyond our tiny Earth. It ignites our imagination and inspires us to contemplate the potential existence of life forms thriving amidst the countless distant celestial bodies. Our exploration goes beyond the boundaries of scientific inquiry, encompassing philosophy, ethics, and existential ponderings. We contemplate the unique nature of our consciousness within a seemingly lifeless universe, prompting introspection into the depths of our souls. This introspection leads us to reflect on the self-awareness of our own species' uncertain survival time and the choices we make to extend that time. It forces us to confront the significant influence we have as individuals and as a collective in shaping the duration of our survival. More crucially, we recognize the immense responsibility we bear as stewards of our own destiny. We possess the power to influence the course of our survival and extend our time in the cosmic theater. It becomes essential to navigate the complexities of power dynamics and prevent those who may endanger our collective survival from holding us hostage to their petty demands. Figure 2: Thousands of stars from the globular cluster NGC 6355, about 50,000 light-years from Earth in the constellation Ophiuchus, taken by the Hubble Space Telescope. Credit: NASA, E. Noyola, R. Cohen, ESA, and Hubble Space Telescope. Source: https://www.nasa.gov/image-feature/goddard/2023/hubble-gazes-at-colorful-cluster-of-scattered-stars. We must strive to overcome such challenges and ensure that the long-term survival and well-being of our species remain paramount. The pursuit of extraterrestrial life, represented by Figure 2, is not merely a scientific endeavor. It symbolizes our ceaseless quest for meaning, our thirst for knowledge, and our relentless drive to explore the unknown. Through this exploration, we not only seek to uncover the secrets of the cosmos but also delve into the profound depths of our own souls. As we stand at the threshold of discovery, we realize that our existence is not insignificant, but rather a unique chapter within the grand cosmic narrative. We are the witnesses, the custodians, and the storytellers of this saga, with the responsibility to protect and cherish life that thrives both on our precious Earth and potentially beyond. In this age of self-awareness and introspection, we are propelled forward on an enduring journey of self-realization and redefinition. Our quest to uncover the mysteries of the universe intertwines with our understanding of ourselves and our cosmic significance. The image of millions of stars in Figure 2 serves as a constant reminder of the immense possibilities that lie beyond our immediate surroundings. In this epoch of self-awareness and introspection, we continue the quest of who we are--a voyage of discovery, redefinition, and self-realization.
We weave an epic narrative of life, consciousness, and the eternal quest for meaning in this cosmic odyssey--a quest that ultimately illuminates both the universe and the profound depths of our own existence. For in the pursuit of extraterrestrial life, we are likely to uncover not only the cosmos's secrets but also the profound depths of our own souls. It is through this journey that we confront the self-awareness of our species' uncertain survival time, contemplate our ability to extend that time, and overcome challenges that threaten our long-term existence. **Acknowledgement:** The authors acknowledge the support of the SETI Institute, the Rutgers Preparatory School, and the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
2302.06808
New constraints on cosmological modified gravity theories from anisotropic three-point correlation functions of BOSS DR12 galaxies
We report a new test of modified gravity theories using the large-scale structure of the Universe. This paper is the first attempt to (1) apply a joint analysis of the anisotropic components of galaxy two- and three-point correlation functions (2 and 3PCFs) to actual galaxy data and (2) constrain the nonlinear effects of degenerate higher-order scalar-tensor (DHOST) theories on cosmological scales. Applying this analysis to the Baryon Oscillation Spectroscopic Survey (BOSS) data release 12, we obtain the lower bounds of $-1.655 < \xi_{\rm t}$ and $-0.504 < \xi_{\rm s}$ at the $95\%$ confidence level on the parameters characterising the time evolution of the tidal and shift terms of the second-order velocity field. These constraints are consistent with GR predictions of $\xi_{\rm t}=15/1144$ and $\xi_{\rm s}=0$. Moreover, they represent a $35$-fold and $20$-fold improvement, respectively, over the joint analysis with only the isotropic 3PCF. We ensure the validity of our results by investigating various quantities, including theoretical models of the 3PCF, window function corrections, cumulative ${\rm S/N}$, Fisher matrices, and statistical scattering effects of mock simulation data. We also find statistically significant discrepancies between the BOSS data and the Patchy mocks for the 3PCF measurement. Finally, we package all of our 3PCF analysis codes under the name \textsc{HITOMI} and make them publicly available so that readers can reproduce all the results of this paper and easily apply them to ongoing future galaxy surveys.
Naonori S. Sugiyama, Daisuke Yamauchi, Tsutomu Kobayashi, Tomohiro Fujita, Shun Arai, Shin'ichi Hirano, Shun Saito, Florian Beutler, Hee-Jong Seo
2023-02-14T03:32:31Z
http://arxiv.org/abs/2302.06808v2
New constraints on cosmological modified gravity theories from anisotropic three-point correlation functions of BOSS DR12 galaxies ###### Abstract We report a new test of modified gravity theories using the large-scale structure of the Universe. This paper is the first attempt to (1) apply a joint analysis of the anisotropic components of galaxy two- and three-point correlation functions (2 and 3PCFs) to actual galaxy data and (2) constrain the nonlinear effects of degenerate higher-order scalar-tensor (DHOST) theories on cosmological scales. Applying this analysis to the Baryon Oscillation Spectroscopic Survey (BOSS) data release 12, we obtain the lower bounds of \(-1.655<\xi_{\rm t}\) and \(-0.504<\xi_{\rm s}\) at the \(95\%\) confidence level on the parameters characterising the time evolution of the tidal and shift terms of the second-order velocity field. These constraints are consistent with GR predictions of \(\xi_{\rm t}=15/1144\) and \(\xi_{\rm s}=0\). Moreover, they represent a \(35\)-fold and \(20\)-fold improvement, respectively, over the joint analysis with only the isotropic 3PCF. We ensure the validity of our results by investigating various quantities, including theoretical models of the 3PCF, window function corrections, cumulative \({\rm S/N}\), Fisher matrices, and statistical scattering effects of mock simulation data. We also find statistically significant discrepancies between the BOSS data and the Patchy mocks for the 3PCF measurement. Finally, we package all of our 3PCF analysis codes under the name HITOMI and make them publicly available so that readers can reproduce all the results of this paper and easily apply them to ongoing and future galaxy surveys. keywords: cosmology: large-scale structure of Universe - cosmology: dark matter - cosmology: observations - cosmology: theory ## 1 Introduction ### Outline and summary This paper presents a comprehensive study of the joint analysis of galaxy two- and three-point correlation functions (2 and 3PCFs) with isotropic and anisotropic components to constrain the non-linear effects of modified gravity theories on a cosmological scale. Section 1 outlines the theoretical development and the present constraints for scalar-tensor theories. We also outline the development of the measurement and analysis of the three-point correlation function of galaxies. We organize this paper such that readers unfamiliar with both or one of the two areas can follow the recent developments and understand how they fit together. Readers interested in the theoretical aspects may read Sections 2, 3, and 7. Section 2 reviews the non-linear evolution of the large-scale structure of the Universe in scalar-tensor theories. Section 3 presents detailed calculations of the theoretical model of the 3PCF and, in particular, investigates the dependence of the parameters that characterise the effect of scalar-tensor theories on the 3PCF model. Finally, Section 7 discusses the extent to which the 3PCF contains information on the non-linear effects of scalar-tensor theories through Fisher analysis. Readers interested in the analysis method of the 3PCF may read Sections 4, 5, 6, and 8. Section 4 reviews how to measure the 3PCF from galaxy data and examines the effect of the window function on the measured 3PCFs. Section 5 presents the results of the 2PCF and 3PCF covariance matrices computed from mock simulations. Section 6 describes the setup for the data analysis in this paper.
Finally, Section 8 discusses in detail whether the 3PCFs measured from the galaxy data in this paper can be fitted using the corresponding theoretical model in terms of \(p\)-values. For readers familiar with the two areas in the literature and interested in the final results, we suggest they jump directly to Section 9. The novel aspect of this paper is to focus on observationally constraining the second-order velocity field, which is a key to seeking a deviation from GR in scalar-tensor theories. We also show that the second-order velocity field imprints a unique signature in the anisotropic 3PCF on large scales. Following Yamauchi & Sugiyama (2021) and Section 3.4, we then parameterise the effects of scalar-tensor theories in the time evolution of the shift and tidal terms of the second-order velocity field using parameters \(\xi_{\rm s}\) and \(\xi_{\rm t}\) defined in Eq. (3.27). Constraining these parameters using Baryon Oscillation Spectroscopic Survey Data Release 12 galaxies (Eisenstein et al., 2011; Bolton et al., 2012; Dawson et al., 2013; Alam et al., 2015), we obtain the following lower bounds given in Eqs. (9.8) and (9.10): \[-1.655<\xi_{\rm t}\ \ {\rm and}\ \ -0.504<\xi_{\rm s}\ \ \ (95\%{\rm CL}).\] Since \(\xi_{\rm t}=15/1144\) and \(\xi_{\rm s}=0\) in GR, these results are consistent with GR. Finally, we summarise the final results and the various findings leading up to them in Section 10, which concludes this paper. We package all the code used to complete this paper under the name HITOMI1 and make it publicly available. Section A summarises the structure and usage of HITOMI. Footnote 1: [https://github.com/naonori/hitomi.git](https://github.com/naonori/hitomi.git) ### General motivation The greatest mystery in current cosmology is the cause of the accelerated expansions that have presumably occurred twice in the cosmic expansion history: i.e., inflation and late-time acceleration. Scalar-tensor theories, modified gravity theories that add a single scalar field degree of freedom to General Relativity (GR), have been actively studied as a promising candidate to explain these accelerated expansions (for reviews, see Langlois, 2019; Kase & Tsujikawa, 2019; Kobayashi, 2019; Amendola et al., 2020; Frusciante & Perenon, 2020)2. Footnote 2: For reviews of modified gravity theories, including theories other than scalar-tensor theories, see Sebastiani et al., 2017; Cataneo & Rapetti, 2018; Ishak, 2019; Ferreira, 2019; Baker et al., 2021; Arai et al., 2022. The accelerated expansion in the very early Universe, called inflation (Starobinsky, 1980; Guth, 1981; Sato, 1981; Linde, 1982; Albrecht & Steinhardt, 1982), is thought to be caused by a single scalar field in the simplest model, generating the seeds of the cosmic fluctuations currently observed. Furthermore, the statistical properties of these fluctuations are in excellent agreement with current observations of the cosmic microwave background (CMB; Aghanim et al., 2020) and the large-scale structure (LSS; Alam et al., 2020). On the other hand, the cosmological constant may explain the late-time accelerated expansion (Riess et al., 1998; Perlmutter et al., 1999). However, its smallness implies a serious fine-tuning problem in fundamental physics (Weinberg, 1989; Martin, 2012), and in order to avoid this problem, it is preferable to adopt a scalar field that varies with time. In order to test scalar-tensor theories in the late-time Universe, it is crucial to follow the time evolution of the large-scale structure in detail.
Examples of already completed galaxy surveys are the Baryon Oscillation Spectroscopic Survey (BOSS; Eisenstein et al., 2011; Bolton et al., 2012; Dawson et al., 2013; Alam et al., 2015)3 and the Extended BOSS (eBOSS; Dawson et al., 2016; Alam et al., 2020)4. Furthermore, next-generation galaxy surveys, such as the Dark Energy Spectroscopic Instrument (DESI; DESI Collaboration et al., 2016)5, Euclid (Laureijs et al., 2011)6, and the Subaru Prime Focus Spectrograph (PFS; Takada et al., 2014)7, will provide unprecedented accuracy in testing scalar-tensor theories. Footnote 3: [https://www.sdss3.org/science/boss_publications.php](https://www.sdss3.org/science/boss_publications.php) Footnote 4: [https://www.sdss.org/surveys/eboss/](https://www.sdss.org/surveys/eboss/) Footnote 5: [http://desi.lbl.gov/](http://desi.lbl.gov/) Footnote 6: www.euclid-ec.org Footnote 7: [https://pfs.ipmu.jp/index.html](https://pfs.ipmu.jp/index.html) Footnote 8: Hereafter, we do not distinguish between Beyond Horndeski theories and DHOST theories. ### DHOST theories and their constraints In this paper, we pay particular attention to the behaviour in the late-time Universe of Degenerate Higher-Order Scalar-Tensor (DHOST) theories (for reviews, see Langlois, 2019; Kobayashi, 2019), which are a quite general theoretical framework of scalar-tensor theories that can evade the Ostrogradsky instability (Ostrogradsky, 1850; Woodard, 2015; Ganz & Noui, 2020). Scalar-tensor theories have been developing rapidly over the last decade. In 2011, Deffayet et al. (2011); Kobayashi et al. (2011) rediscovered the most general theory with second-order equations of motion for metric tensor and scalar fields, Horndeski theories (Horndeski, 1974). To go beyond Horndeski theories, Gleyzes et al. (2015, 2016) found a class of healthy theories having higher-order field equations that reduce to a second-order system by combining different components (see also Zumalacarregui & Garcia-Bellido, 2014, for examples beyond Horndeski). This discovery results from a degeneracy between the kinetic terms of the scalar field and the metric. This class of theories has been extended to reach DHOST theories (Langlois & Noui, 2016; Crisostomi et al., 2016; Ben Achour et al., 2016a,b; Langlois, 2017; Langlois et al., 2020), encompassing Horndeski and Beyond Horndeski theories8. So far, DHOST theories have been constrained primarily by three observations9: gravitational waves (GW), celestial objects, and cosmological data, which is the subject of this paper. Footnote 9: As other probes of DHOST theories, for example, Babichev & Lehebel (2018) shows that the scalar field in DHOST theories can significantly modify the speed of sound in the atmosphere of the Earth; Beltran Jimenez et al. (2016); Dima & Vernizzi (2018) strongly constrain DHOST models using Hulse-Taylor pulsar observations; Saltas & Lopes (2019) proposes helioseismology as a precise way to test DHOST theories on astrophysical scales. Since GW170817 was observed by LIGO and Virgo (Abbott et al., 2017), the situation surrounding the observational constraints of modified gravity has changed dramatically.
The simultaneous observation of GRB170817 (Abbott et al., 2017), a gamma-ray burst, confirmed that the speed of GWs matches the speed of electromagnetic waves with high accuracy, ruling out various scalar-tensor theories that change the speed of GWs at low redshifts (Lombriser & Taylor, 2016; Lombriser & Lima, 2017; Creminelli & Vernizzi, 2017; Sakstein & Jain, 2017; Ezquiaga & Zumalacarregui, 2017; Baker et al., 2017; Langlois et al., 2018). Creminelli et al. (2018, 2019) pointed out that a subset of DHOST theories leads to the decay of gravitational waves, resulting in further tight constraints on DHOST theories. However, the theory of gravity considered in that paper, the class I DHOST theory (Langlois & Noui, 2016; Crisostomi et al., 2016; Ben Achour et al., 2016), still survives and can modify gravity in cosmology without pathological instability (de Rham & Matas, 2016; Langlois et al., 2017; Amendola et al., 2018). Furthermore, de Rham & Melville (2018) showed that such cosmological scalar-tensor theories, which predict the speed of GWs to be different from the speed of light, break down on high energy scales (\(\sim 10^{2}\,{\rm Hz}\)) seen in neutron star mergers, indicating that the constraints from GW observations may not necessarily apply to cosmological scales. Therefore, it is essential to test modified gravity theories independently at various energy scales, such as the GW and cosmological scales. DHOST theories generally have characteristic non-linear effects that violate the Vainshtein screening mechanism inside any gravitational source (Kobayashi et al., 2015; Koyama & Sakstein, 2015; Crisostomi & Koyama, 2018; Langlois et al., 2018; Dima & Vernizzi, 2018; Hirano et al., 2019; Crisostomi et al., 2019). As an alternative to the cosmological constant, scalar-tensor theories must give an \(\mathcal{O}(1)\) modification from GR at cosmological scales, but at small scales, they must satisfy tests in weakly gravitational regions such as the solar system. The Vainshtein screening mechanism (for a review, Babichev & Deffayet, 2013), universally found in scalar-tensor theories, is a typical mechanism that satisfies these requirements, suppressing scalar interactions and restoring standard gravity through non-linear effects. While Horndeski theories allow for a natural implementation of the Vainshtein mechanism (Kimura et al., 2012; Narikawa et al., 2013; Koyama, 2016), DHOST theories partially violate it, allowing one to test DHOST theories by examining the internal structure of objects such as Newtonian stars (Saito et al., 2015; Sakstein, 2015a,b; Jain et al., 2016; Sakstein et al., 2017; Saltas et al., 2018), neutron stars (Babichev et al., 2016; Sakstein et al., 2017), and galaxy clusters (Sakstein et al., 2016; Salzano et al., 2017). The Vainshtein radius, the maximum scale at which the Vainshtein mechanism works, is estimated to be \(\mathcal{O}(100)\) pc for the Sun and \(\mathcal{O}(1)\) Mpc for a galaxy cluster. DHOST theories predict a characteristic gravitational non-linear effect on even cosmological scales exceeding tens of Mpc. That is, DHOST theories violate the consistency relation for LSS (Crisostomi et al., 2020; Lewandowski, 2020) (see also Hirano et al., 2018).
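To state concretely what is being violated: for a single self-gravitating matter fluid obeying the equivalence principle, the squeezed-limit consistency relation takes, schematically, the form below (a sketch following Creminelli et al. 2013; sign and normalisation conventions differ between references): \[\lim_{q\to 0}\langle\delta_{\mathbf{q}}(\eta)\,\delta_{\mathbf{k}_{1}}(\eta_{1})\,\delta_{\mathbf{k}_{2}}(\eta_{2})\rangle^{\prime}\simeq-P_{\rm lin}(q;\eta)\,\frac{\mathbf{k}_{1}\cdot\mathbf{q}}{q^{2}}\,\frac{D(\eta_{1})-D(\eta_{2})}{D(\eta)}\,\langle\delta_{\mathbf{k}_{1}}(\eta_{1})\,\delta_{\mathbf{k}_{2}}(\eta_{2})\rangle^{\prime},\] where primes denote correlators with the momentum-conserving delta function stripped, \(D\) is the linear growth factor, and \(\mathbf{k}_{2}\simeq-\mathbf{k}_{1}\). At equal times, \(\eta_{1}=\eta_{2}\), the right-hand side vanishes; this is the infrared cancellation discussed in the next paragraph. In DHOST theories, the residual relative velocity between the scalar field and matter prevents this cancellation, leaving a non-vanishing \(1/q\) pole even at equal times.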
The LSS consistency relation (Peloso & Pietroni, 2013; Kehagias & Riotto, 2013; Creminelli et al., 2013) is an analogue of the consistency relation originally proposed for single-field inflation models (Maldacena, 2003; Creminelli & Zaldarriaga, 2004), which relates \(n\)-point statistics of cosmological fluctuations to \((n-1)\)-point statistics in a non-perturbative manner. It is valid in the limit where the wavenumber of one of the \(n\)-points is much smaller than the others. This consistency relation holds because the equations that the fluctuations obey are invariant under a Galilean transformation (Scoccimarro & Frieman, 1996; Creminelli et al., 2013). In particular, in the so-called equal-time consistency relation, the Galilean transformation eliminates the large-scale flow of matter and thus cancels all non-linear contributions when calculating the \(n\)-point statistics. This behavior is also known as infrared (IR) cancellation (Jain & Bertschinger, 1996; Scoccimarro & Frieman, 1996; Kehagias & Riotto, 2013; Peloso & Pietroni, 2013; Sugiyama & Futamase, 2013; Sugiyama & Spergel, 2014; Blas et al., 2013, 2016; Lewandowski & Senatore, 2017). On the other hand, the LSS consistency relation breaks down when considering multiple fluids (Tseliakhovich & Hirata, 2010; Yoo et al., 2011; Bernardeau et al., 2012, 2013; Peloso & Pietroni, 2014; Creminelli et al., 2014; Lewandowski et al., 2015; Slepian & Eisenstein, 2017) or primordial non-Gaussianities (Berezhiani & Khoury, 2014; Valageas et al., 2017; Esposito et al., 2019), or when the equivalence principle is broken (Creminelli et al., 2014). DHOST theories have a structure similar to that of multiple fluids, and on large scales, the Galilean transformation cannot make the relative velocity between the scalar field and matter vanish (for details, see Crisostomi et al., 2020; Lewandowski, 2020). As a result, DHOST theories violate the LSS consistency relation. Our interest in this paper is to constrain DHOST theories on cosmological scales, i.e., \(\mathcal{O}(10-100)\,{\rm Mpc}\) scales. However, studies using cosmological data to constrain DHOST theories are still limited (Hirano et al., 2019; Traykova et al., 2019; Peirone et al., 2019; Hiramatsu, 2022). On the other hand, many papers on Horndeski theories have used cosmological data to constrain the model (Okada et al., 2013; Barreira et al., 2014; Bellini et al., 2016; Mueller et al., 2018; Kreisch & Komatsu, 2018; Arai & Nishizawa, 2018; Noller & Nicola, 2020, 2019; Raveri, 2020; Melville & Noller, 2020; Perenon et al., 2019; Noller, 2020). Therefore, exploring new cosmological methods for constraining DHOST theories is of great significance. ### Constraints on modified gravity theories using galaxy two-point statistics The logarithmic growth rate function \(f\) of dark matter fluctuations, measured using redshift-space distortions (RSD; Kaiser, 1987), plays an important role in constraining modified gravity theories in the late-time Universe. In the power spectrum analysis, we cannot measure the growth rate function by itself, but usually measure the combination \(f\sigma_{8}=d\sigma_{8}/d\ln a\) (Song & Percival, 2009; Percival & White, 2009), where \(\sigma_{8}\) represents the rms of matter fluctuations on the \(8\,h^{-1}\,{\rm Mpc}\) scale. For example, the most recent observations, BOSS and eBOSS, measured \(f\sigma_{8}\) with a precision of \(\sim 5\%\) in the redshift range \(0.2<z<1.0\) (Alam et al., 2020).
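To make the measured combination concrete, the following minimal sketch (in Python, independent of the HITOMI package) solves the linear growth equation in flat \(\Lambda\)CDM and evaluates \(f\sigma_{8}(z)=d\sigma_{8}/d\ln a\); the fiducial values of \(\Omega_{\rm m}\) and \(\sigma_{8}\) are illustrative assumptions, not values used in this paper:

```python
# Sketch: f(z)*sigma8(z) in flat LCDM from the linear growth ODE,
#   D'' + (2 - 3/2 Om(a)) D' - 3/2 Om(a) D = 0,  with ' = d/dln(a).
import numpy as np
from scipy.integrate import solve_ivp

Om0, sigma8_0 = 0.31, 0.81          # assumed fiducial parameters (illustrative)

def Om(x):                          # Omega_m(a), with x = ln(a)
    a = np.exp(x)
    return Om0 / (Om0 + (1.0 - Om0) * a**3)

def rhs(x, y):                      # y = (D, dD/dln a)
    D, Dp = y
    return [Dp, -(2.0 - 1.5 * Om(x)) * Dp + 1.5 * Om(x) * D]

x0 = np.log(1e-3)                   # start deep in matter domination, where D ~ a
sol = solve_ivp(rhs, (x0, 0.0), [1e-3, 1e-3], dense_output=True, rtol=1e-8)

def fsigma8(z):
    D, Dp = sol.sol(np.log(1.0 / (1.0 + z)))
    return (Dp / D) * sigma8_0 * D / sol.sol(0.0)[0]   # f * sigma8(z)

for z in (0.38, 0.51, 0.61):        # BOSS DR12 effective redshifts
    print(f"z = {z}: f*sigma8 ~ {fsigma8(z):.3f}")
```

Evaluated over the BOSS redshift range, this returns values around \(f\sigma_{8}\sim 0.47\), the ballpark of the \(\sim 5\%\)-precision measurements quoted above.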
One concern arises when testing modified gravity theories directly using existing \(f\sigma_{8}\) measurements. The standard practice is constructing a model of the non-linear galaxy power spectrum assuming GR, then using that model to measure \(f\sigma_{8}\) from data up to the mildly non-linear region (\(k\sim 0.2\,h\,{\rm Mpc}^{-1}\)) (for recent studies, e.g., d'Amico et al., 2020; Ivanov et al., 2020; Lange et al., 2022; Kobayashi et al., 2022; Yuan et al., 2022). Therefore, it is worth noting that many existing analysis results using galaxy data up to the non-linear region only verify the consistency of GR. Thus, to test the gravity theory by consistently considering both linear and non-linear effects, a power spectrum model that considers non-linear effects specific to the modified gravity theory of interest is necessary. Several studies have been done on this for various modified gravity theories (Koyama et al., 2009; Taruya et al., 2014a,b; Takushima et al., 2015; Bellini & Zumalacarregui, 2015; Taruya, 2016; Barreira et al., 2016; Bose & Koyama, 2016; Cusin et al., 2018a,b; Bose et al., 2017, 2018; Aviles et al., 2018; Hernandez-Aguayo et al., 2019; Cataneo et al., 2019; Valogiannis et al., 2020; Valogiannis & Bean, 2019; Bose et al., 2020). However, only one study constrained the theory from actual galaxy data using a galaxy power spectrum model that consistently includes the non-linear effects arising from modified gravity (Song et al., 2015), where the authors focused on \(f(R)\) gravity (Hu and Sawicki, 2007). In particular, Hirano et al. (2020) pointed out that in DHOST theories, even for the next-order solutions of the power spectrum in perturbation theory, the so-called one-loop solutions, it is challenging to perform physically meaningful theoretical calculations due to the divergence of the wavenumber integral in the ultraviolet (UV) region. Therefore, the modelling of non-linear power spectra in DHOST theories is still highly uncertain. ### Developments in the study of galaxy three-point statistics A more straightforward way to investigate the non-linearity of scalar-tensor theories is to use three-point statistics of cosmological fluctuations, i.e., the 3PCF or the bispectrum. The reason is that, on large scales, the three-point statistics consist of a combination of second-order and linear-order dark matter fluctuations. The second-order fluctuations depend on two wave vectors in Fourier space and can be decomposed into three components using the angle between the two wave vectors: monopole (growth), dipole (shift), and quadrupole (tidal force) (Schmittfull et al., 2015). For example, Horndeski theories change only the coefficient of the tidal term from its GR value while keeping the shift term among these three components (Bernardeau and Brax, 2011; Takushima et al., 2014; Bartolo et al., 2013; Bellini et al., 2015; Burrage et al., 2019). On the other hand, DHOST theories change both the shift and tidal terms (Hirano et al., 2018; Crisostomi et al., 2020; Lewandowski, 2020), and this change in the shift term leads to a violation of the LSS consistency relation (Crisostomi et al., 2020; Lewandowski, 2020). In addition to scalar-tensor theories, there has been much research on higher-order statistics in, for example, \(f(R)\) gravity theory (Gil-Marin et al., 2011; Borisov and Jain, 2009; Hellwing et al., 2013; Bose and Taruya, 2018; Bose et al., 2020).
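For reference, the decomposition into growth, shift, and tidal parts described above can be made explicit with the standard second-order density kernel of GR (in the Einstein-de Sitter approximation); the following is a sketch of the split used by, e.g., Schmittfull et al. (2015), with \(\mu=\hat{\mathbf{k}}_{1}\cdot\hat{\mathbf{k}}_{2}\): \[F_{2}(\mathbf{k}_{1},\mathbf{k}_{2})=\underbrace{\frac{17}{21}}_{\text{growth}}+\underbrace{\frac{\mu}{2}\left(\frac{k_{1}}{k_{2}}+\frac{k_{2}}{k_{1}}\right)}_{\text{shift}}+\underbrace{\frac{4}{21}\,\mathcal{P}_{2}(\mu)}_{\text{tidal}},\qquad\mathcal{P}_{2}(\mu)=\frac{3\mu^{2}-1}{2}.\] The second-order velocity-divergence kernel \(G_{2}\) decomposes analogously, with growth and tidal coefficients \(13/21\) and \(8/21\) in GR. In this language, Horndeski theories modify only the tidal coefficients, whereas DHOST theories additionally alter the coefficient of the dipole (shift) term, which is what breaks the LSS consistency relation.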
Several observational proposals have been made to test modified gravity theories using cosmological three-point statistics, such as galaxy clustering (Yamauchi et al., 2017; Yamauchi and Sugiyama, 2021), weak lensing (Dinda, 2018; Munshi et al., 2020, 2020; Munshi and McEwen, 2020), and CMB lensing (Namikawa et al., 2018, 2019), but none have been applied to actual observational data yet. In the context of the galaxy three-point statistics, 3PCF resolves the degeneracy between the linear bias \(b_{1}\) and \(\sigma_{8}\) and allows us to directly study the evolution of dark matter density fluctuations apart from the RSD effect (Fry, 1994; Frieman and Gaztanaga, 1994; Matarrese et al., 1997; Verde et al., 1998; Gaztanaga and Scoccimarro, 2005; Sefusatti et al., 2006; Greig et al., 2013; Hoffmann et al., 2015; Samushia et al., 2021). Furthermore, many previous studies have proposed to constrain primordial non-Gaussianities from the galaxy three-point statistics (Fry and Scherrer, 1994; Verde et al., 2000; Scoccimarro et al., 2004; Sefusatti and Komatsu, 2007; Sefusatti, 2009; Sefusatti et al., 2010; Liguori et al., 2010; Desjacques and Seljak, 2010; Sefusatti et al., 2012; Scoccimarro et al., 2012; Alvarez et al., 2014; Tellariarii et al., 2015, 2016; Welling et al., 2016; Yamauchi et al., 2017; Karagiannis et al., 2018; Bharadwaj et al., 2020; Moradinezhad Dizgah et al., 2020; Shirasaki et al., 2021; Coulton et al., 2022; Karagiannis et al., 2022). Recently, as in the case of the galaxy two-point statistics (e.g., Matsubara, 2004; Taruya et al., 2011), the anisotropic component of the galaxy three-point statistics induced by the RSD effect and the Alcock-Paczynski (AP) effect (Alcock and Paczynski, 1979) has attracted much attention, and its cosmological utility has been actively studied (Song et al., 2015; Gagrani and Samushia, 2017; Yankelevich and Porciani, 2019; Gualdi and Verde, 2020; Mazumdar et al., 2020; Sugiyama et al., 2021; Agarwal et al., 2021; Rizzo et al., 2022; Tsedrak et al., 2022). Based on standard perturbation theory (SPT), many theoretical studies of the galaxy three-point statistics have been conducted to calculate higher-order non-linearities, redshift-space distortions, and bias effects, and the results of these calculations have been tested for validity by comparing them with measurements from N-body simulations (Peebles, 1980; Fry, 1984; Goroff et al., 1986; Hivon et al., 1995; Scoccimarro et al., 1997; Scoccimarro et al., 1998; Jing and Boerner, 1997; Scoccimarro et al., 1999; Scoccimarro, 2000; Barriga and Gaztanaga, 2002; Gaztanaga and Scoccimarro, 2005; Pan et al., 2007; Marin et al., 2008; Guo and Jing, 2009; Pollack et al., 2012; Lazanu et al., 2016; McCullagh et al., 2016; Lazanu et al., 2016; Lazanu and Liguori, 2018; Hoffmann et al., 2018; Desjacques et al., 2018; Child et al., 2018; Eggemeier et al., 2019; Oddo et al., 2020; Eggemeier et al., 2021; Oddo et al., 2021; Philcox et al., 2022). Other approaches have been widely used in research, such as the halo models (Ma and Fry, 2000; Scoccimarro et al., 2001b; Takada and Jain, 2003; Fosalba et al., 2005; Smith et al., 2008; Yamamoto et al., 2017; Nan et al., 2018) and fitting formulas (Scoccimarro and Frieman, 1999; Scoccimarro and Couchman, 2001; Gil-Marin et al., 2012, 2014; Takahashi et al., 2020). Beyond SPT, several improved perturbation theories have been proposed. Rampf and Wong (2012) used a resummation method based on Lagrangian perturbation theory. Baldauf et al. 
(2015b); Munshi and Regan (2017); Ivanov et al. (2022) discussed some correction terms for SPT based on the effective field theory of large-scale structure. Hashimoto et al. (2017) applied a resummation method similar to the TNS model of the power spectrum (Taruya et al., 2010). Blas et al. (2016a); Ivanov and Sibiryakov (2018) developed the time-sliced perturbation theory (TSPT) to resum the IR modes of the bulk flow and describe the non-linear damping of Baryon Acoustic Oscillations (BAOs; Peebles and Yu, 1970; Sunyaev and Zeldovich, 1970). Sugiyama et al. (2021) constructed a new IR-resummed bispectrum model by adding a term to the model proposed by TSPT. The measurement of three-point statistics for galaxies, galaxy clusters, and quasars has a long history. As a simple method, two-dimensional three-point angular statistics have been observed from the dawn of the study of cosmological three-point statistics to the present (Peebles and Groth, 1975; Peebles, 1975; Groth and Peebles, 1977; Fry and Peebles, 1980; Fry and Seldner, 1982; Sharp et al., 1984; Jing and Zhang, 1989; Jing et al., 1991; Toth et al., 1989; Frieman and Gaztanaga, 1999; Szapudi et al., 2001; de Carvalho et al., 2020). Eventually, with the development of spectroscopic observations of galaxies, three-dimensional three-point statistics have become the primary targets observed in configuration space (Bean et al., 1983; Efstathiou and Jedrzejewski, 1994; Hale-Sutton et al., 1989; Gott et al., 1991; Jing and Borner, 1998; Jing and Borner, 2004; Kayo et al., 2004; Wang et al., 2004; Gaztanaga et al., 2005; Pan and Szapudi, 2005; Nichol et al., 2006; Kulkarni et al., 2007; Gaztanaga et al., 2009; Marin, 2011; McBride et al., 2011a,b; Marin et al., 2013; Guo et al., 2014; Moresco et al., 2017a; Slepian et al., 2017a; Slepian et al., 2017b; Moresco et al., 2017b, 2020) or in Fourier space (Baumgart and Fry, 1991; Scoccimarro et al., 2001a; Feldman et al., 2001; Verde et al., 2002; Nishimichi et al., 2007; Gil-Marin et al., 2015a,b, 2017a; Pearson and Samushia, 2018; Sugiyama et al., 2019; Philcox and Ivanov, 2022; Cabass et al., 2022a; D'Amico et al., 2022a; Cabass et al., 2022b; D'Amico et al., 2022b). As another approach, Chiang et al. (2015) measured the squeezed limit bispectrum by splitting the observing region and measuring the position-dependent power spectrum. Since the first measurement of the galaxy three-point statistics by Peebles and Groth (1975), the three point statistic measurement has long been limited to measuring only certain scale-dependence of the three-point statistics. However, it is now possible to perform cosmological analysis using the information on the full shape of galaxy three-point statistics at cosmological scales (\(\sim 100\,h^{-1}\,{\rm Mpc}\)). In recent years, cosmological analysis of the three-point statistics of galaxies has made remarkable progress, mostly focusing on the isotropic component, i.e., _monopole_, of the three-point statistics. Slepian et al. (2017) and Pearson and Samushia (2018) reported the detection of the BAO signal through the monopole 3PCF and the monopole bispectrum, respectively. Gil-Marin et al. (2017); d'Amico et al. (2020); Philcox and Ivanov (2022) performed a joint analysis of the monopole and quadrupole power spectra and the monopole bispectrum to constrain the cosmological parameters of interest. Cabass et al. (2022); D'Amico et al. (2022); Cabass et al. (2022) constrained primordial non-Gaussianities using the monopole bispectrum. 
The anisotropic components, i.e., _quadrupole_ and _hexadecapole_, of the galaxy three-point statistics have been the subject of pretty limited studies of measurements and cosmological analyses from actual galaxy data. Sugiyama et al. (2019) reported the first detection of the quadrupole bispectrum signal at the \(14\sigma\) level from the BOSS DR12 galaxies. Sugiyama et al. (2021) performed an anisotropic BAO analysis using the monopole and quadrupole components of the 2PCF and 3PCF for the MultiDark-Patchy mock catalogues (Patchy mocks; Kitaura et al. 2016) reproducing the BOSS galaxy distribution, showing the improvement of the Hubble parameter constraint by \(\sim 30\%\) compared to the 2PCF-only analysis result. D'Amico et al. (2022) performed the first joint analysis of the monopole and quadrupole components of the power and bispectra measured from the BOSS DR12 galaxy data. More recently, Ivanov et al. (2023) presented the results of an anisotropic bispectrum analysis including quadrupole and hexadecapole components measured from the BOSS DR12 data. ### Goal of this paper The primary goal of this paper is to use the 3PCF of galaxies to perform a consistent cosmological analysis that constrains DHOST theories and their subclass, Horndeski theories, while accounting for linear and non-linear effects. To this end, Yamauchi and Sugiyama (2021) pointed out that the parameters characterising non-linear density fluctuations in DHOST theories degenerate with the non-linear bias parameter, so measuring the non-linear velocity field due to the RSD effect is essential. In addition, the authors proposed a simple parameterisation scheme that characterises the time evolution of the scale dependence of the non-linear velocity field to facilitate the combined analysis of galaxy samples at different redshifts. Specifically, the time evolution of the shift and tidal terms of the second-order velocity field is represented by \(\xi_{\rm s}\) and \(\xi_{\rm t}\), respectively, where \(\xi_{\rm s}=0\) and \(\xi_{\rm t}=15/1144\) in GR. Following the suggestion of Yamauchi and Sugiyama (2021), we apply the joint analysis method of the anisotropic 2PCF and 3PCF of galaxies established by Sugiyama et al. (2021) to BOSS Data Release 12 galaxies (Eisenstein et al. 2011; Bolton et al. 2012; Dawson et al. 2013; Alam et al. 2015) to constrain these \(\xi_{\rm s}\) and \(\xi_{\rm t}\) parameters. When we need to use values of fiducial cosmology parameters in our analysis, we adopt a flat \(\Lambda\)CDM model with the following parameters: matter density \(\Omega_{\rm m0}=0.31\), Hubble constant \(h\equiv H_{0}/(100\,{\rm km\,s^{-1}\,Mpc^{-1}})=0.676\), baryon density \(\Omega_{\rm 00}h^{2}=0.022\), and spectral tilt \(n_{\rm s}=0.97\), which are the same as those used in the final cosmological analysis in the BOSS project (Alam et al. 2017) and consistent with the best-fit values in Planck 2018 (Aghanim et al. 2020). We adopt a value for the total neutrino mass of \(\sum m_{\nu}=0.06\,{\rm eV}\) close to the minimum allowed by neutrino oscillation experiments. We use these fiducial parameters to estimate the distance to galaxies from the observed redshift of each galaxy and to calculate the shape of the linear matter power spectrum at the redshifts of interest with CLASS (Blas et al. 2011). ## 2 DHOST theories In this section, we briefly review the analytic expressions of DHOST theories. 
Section 2.1 introduces the class I DHOST theory and the perturbative solutions of the density and velocity fields of dark matter and galaxies solved up to the second-order in that theory. In Eqs. (2.1)-(2.14) of this subsection, we adopt the expressions and notations given by Hirano et al. (2018). Section 2.2 discusses the limitation of the assumptions adopted to derive the perturbative solutions used in this paper. ### Density and velocity fluctuations in DHOST theories We begin by summarising the theoretical models we will investigate in this paper and the assumptions used to derive those models. * Gravity theory is a subclass of quadratic DHOST theories, the class I DHOST theory (Crisostomi et al. 2016), which encompasses Horndeski and Beyond Horndeski theories and is free from the instabilities of a cosmological background (de Rham and Matas 2016; Langlois et al. 2017). * Matter is cold dark matter (CDM) that can be described as a pressureless perfect fluid without vorticity (Bernardeau et al. 2002). * Matter is minimally coupled to gravity, and the effects of the DHOST gravity appear only through the gravitational potential. * When solving the equations of motion of metric tensor and scalar fields in DHOST theories, the quasi-static approximation (e.g., Pace et al. 2021) is used. Then, the gravitational potential is determined by a modified Poisson equation (Hirano et al. 2018; Crisostomi et al. 2020; Lewandowski 2020; Hirano et al. 2020). * Statistical properties of the CDM fluctuations are those derived in the standard theory of inflation, which satisfy the following properties: adiabaticity, negligibly weak non-Gaussianity, nearly scale-free, statistical homogeneity, statistical isotropy, and statistical parity symmetry. * Galaxy biases are assumed to be present only in the density field, and three biases are considered: linear bias \(b_{1}\), second-order local bias \(b_{2}\), and second-order non-local bias (tidal bias) \(b_{s2}\) (for a review, see e.g., Saito et al. 2014; Desjacques et al. 2018). Any bias related to the velocity field of galaxies is ignored. The action of quadratic DHOST theories is given by (Langlois and Noui 2016; Crisostomi et al. 2016) \[S_{\rm DHOST} = \int d^{4}x\sqrt{-g}\Big{[}\mathcal{G}_{2}(\phi,X)-\mathcal{G}_{ 3}(\phi,X)\Box\phi+\mathcal{F}(\phi,X)R \tag{2.1}\] \[+ a_{1}\phi_{\mu\nu}\phi^{\mu\nu}+a_{2}(\Box\phi)^{2}+a_{3}(\Box \phi)\phi^{\mu}\phi_{\mu\nu}\phi^{\nu}\] \[+ a_{4}\phi^{\mu}\phi_{\mu\rho}\phi^{\mu\nu}\phi_{\nu}+a_{5}(\phi ^{\mu}\phi_{\mu\nu}\phi^{\nu})^{2}\Big{]},\] where \(\phi_{\mu}=\nabla_{\mu}\phi\), \(\phi_{\mu\nu}=\nabla_{\mu}\nabla_{\nu}\phi\), \(X=-\phi_{\mu}\phi^{\mu}/2\), and \(a_{i}=a_{i}(\phi,X)\) for \(i=1,\ldots,5\). The functions \(a_{i}\) (\(i=1,\ldots,5\)) satisfy the degeneracy condition given by (Crisostomi et al., 2016) to avoid the Ostrogradsky ghost (Ostrogradsky, 1850; Woodard, 2015). The density perturbation \(\delta\) and velocity field \(\mathbf{v}\) of dark matter follow the equations of a pressureless perfect fluid without vorticity: \[\dot{\delta}(\mathbf{x})+a^{-1}\partial_{i}\left((1+\delta(\mathbf{x}))v^ {i}(\mathbf{x})\right) = 0,\] \[\dot{\theta}(\mathbf{x})+H\theta(\mathbf{x})+a^{-1}\partial_{i}\left(v^{ i}(\mathbf{x})\partial_{j}v^{i}(\mathbf{x})\right) = -a^{-1}\partial^{2}\Phi(\mathbf{x}), \tag{2.2}\] where \(a\) and \(H=\dot{a}/a\) respectively denote the scale factor and the Hubble parameter, and \(\theta=\partial_{i}v^{i}\) is the divergence of the velocity field. 
Because of no vorticity, the velocity field is represented as \(v^{i}=(\partial_{i}/\partial^{2})\theta\). The gravitational potential \(\Phi\) is determined by the following modified Poisson equation (Hirano et al., 2018): \[\frac{\partial^{2}\Phi(\mathbf{x})}{a^{2}H^{2}}=\kappa\delta(\mathbf{x})+\nu\frac{ \dot{\delta}(\mathbf{x})}{H}+\mu\frac{\ddot{\delta}(\mathbf{x})}{H^{2}}+\frac{ \partial^{2}S_{\rm b}^{\rm NL}(\mathbf{x})}{a^{2}H^{2}}, \tag{2.3}\] where \(\kappa\), \(\nu\), and \(\mu\) are functions that depend only on time, and \(S_{\Phi}^{\rm NL}\) is a non-linear source term obtained from the equation of motion of the scalar field. To solve the above equations, we expand all the fluctuations as follows: \(X=\sum_{n}X_{n}\), where \(X=\{\delta,\theta,\Phi,S_{\rm b}^{\rm NL}\}\), and \(X_{n}=\mathcal{O}(\delta_{1}^{n})\). Then, the non-linear source \(S_{\Phi}^{\rm NL}\) up to the second-order is given by \[\frac{\partial^{2}S_{\Phi,1}^{\rm NL}(\mathbf{x})}{a^{2}H^{2}}=0,\] \[\frac{\partial^{2}S_{\Phi,2}^{\rm NL}(\mathbf{x})}{a^{2}H^{2}}=\tau_ {\alpha}W_{\alpha}(\mathbf{x})-\tau_{\gamma}W_{\gamma}(\mathbf{x}), \tag{2.4}\] where \[W_{\alpha}(\mathbf{x})=\left[\delta_{1}(\mathbf{x})\right]^{2}+\left[ \frac{\partial_{i}}{\partial^{2}}\delta_{1}(\mathbf{x})\right]\left[\partial_{i} \delta_{1}(\mathbf{x})\right],\] \[W_{\gamma}(\mathbf{x})=\left[\delta_{1}(\mathbf{x})\right]^{2}-\left[ \frac{\partial_{i}\partial_{j}}{\partial^{2}}\delta_{1}(\mathbf{x})\right]^{2}. \tag{2.5}\] The evolution of the density perturbation follows \[\ddot{\delta}(\mathbf{x})+(2+\varsigma)H\dot{\delta}(\mathbf{x})-\frac{3}{2}\Omega_{ \rm m}\Xi H^{2}\delta(\mathbf{x})=H^{2}S_{\delta}^{\rm NL}(\mathbf{x}), \tag{2.6}\] where \(\varsigma=(2\mu-\nu)/(1-\mu)\), \((3/2)\Omega_{\rm m}\Xi=\kappa/(1-\mu)\), and \(S_{\delta}^{\rm NL}\) is a non-linear source of the density perturbation, vanishing at linear order and given at second-order by \[S_{\delta,2}^{\rm NL}=S_{\alpha}W_{\alpha}(\mathbf{x})-S_{\gamma}W_{\gamma}(\mathbf{ x}) \tag{2.7}\] with \[(1-\mu)S_{\alpha} = 2f^{2}+\frac{3}{2}\Omega_{\rm m}\Xi-\varsigma f+\tau_{\alpha},\] \[(1-\mu)S_{\gamma} = f^{2}+\tau_{\gamma}. \tag{2.8}\] Once the solution of \(\delta\) is obtained, the solution of \(\theta\) is also derived from the continuity equation in Eq. (2.2). In Fourier space10 Footnote 10: Our convention for the Fourier transform is \[\widetilde{f}(\mathbf{k})=\int d^{3}xe^{-\hat{\mathbf{x}}\cdot\mathbf{x}}f(\mathbf{x}).\] Eqs. (2.2) and (2.3) determine \(\delta_{n}(\mathbf{k})\) and \(\theta_{n}(\mathbf{k})\) in terms of the linear density fluctuations to be: \[\widetilde{\delta}_{n}(\mathbf{k}) = \int\frac{d^{3}p_{1}}{(2\pi)^{3}}\cdots\int\frac{d^{3}p_{n}}{(2 \pi)^{3}}(2\pi)^{3}\delta_{\rm D}(\mathbf{k}-\mathbf{p}_{\{1\mid n\mid\}})\] \[\times F_{n}^{(\rm m)}(\mathbf{p}_{1},\ldots,\mathbf{p}_{2})\delta_{1}(\mathbf{p}_{1 })\cdots\delta_{1}(\mathbf{p}_{n}),\] \[\widetilde{\theta}_{n}(\mathbf{k}) = -aHf\int\frac{d^{3}p_{1}}{(2\pi)^{3}}\cdots\int\frac{d^{3}p_{n}}{ (2\pi)^{3}}\delta_{\rm D}(\mathbf{k}-\mathbf{p}_{\{1\mid n\mid\}}) \tag{2.9}\] \[\times G_{n}^{(\rm m)}(\mathbf{p}_{1},\ldots,\mathbf{p}_{2})\delta_{1}(\mathbf{p}_{1 })\cdots\delta_{1}(\mathbf{p}_{n}),\] where \(\mathbf{p}_{\{1\mid n\mid\mid\}}=\mathbf{p}_{1}+\cdots+\mathbf{p}_{n}\), and \(\delta_{\rm D}\) is the delta function. 
The functions \(F_{2}^{(\rm m)}\) and \(G_{2}^{(\rm m)}\) are kernel functions that characterise the gravitational non-linear effects, and the superscript \((\rm m)\) stands for "matter". In the second-order, \(F_{2}^{(\rm m)}\) and \(G_{2}^{(\rm m)}\) are given by \[F_{2}^{(\rm m)}(\mathbf{p}_{1},\mathbf{p}_{2})=\kappa_{\delta}\alpha_{s}(\mathbf{p}_{1}, \mathbf{p}_{2})-\frac{2}{7}\lambda_{\delta}\gamma(\mathbf{k}_{1},\mathbf{k}_{2})\] \[G_{2}^{(\rm m)}(\mathbf{p}_{1},\mathbf{p}_{2})=\kappa_{\theta}\alpha_{s}( \mathbf{p}_{1},\mathbf{p}_{2})-\frac{4}{7}\lambda_{\delta}\gamma(\mathbf{k}_{1},\mathbf{k}_{2}), \tag{2.10}\] where \[\alpha_{s}(\mathbf{k}_{1},\mathbf{k}_{2}) = 1+(\hat{k}_{1}\cdot\hat{k}_{2})\frac{(k_{1}^{2}+k_{2}^{2})}{2k_{1 }k_{2}},\] \[\gamma(\mathbf{k}_{1},\mathbf{k}_{2}) = 1-(\hat{k}_{1}\cdot\hat{k}_{2})^{2}, \tag{2.11}\] and \[\kappa_{\theta} = 2\kappa_{\delta}\left[1+\frac{1}{2f}\frac{d\ln\kappa_{\delta}}{d \ln a}\right]-1,\] \[\lambda_{\theta} = \lambda_{\delta}\left[1+\frac{1}{2f}\frac{d\ln\lambda_{\delta}}{d \ln a}\right]. \tag{2.12}\] The evolutions of \(\kappa_{\delta}\) and \(\lambda_{\delta}\) follow \[\bar{\kappa}_{\delta}+[4f+(2+\varsigma)]H\dot{\kappa}_{\delta}+H^{2 }\left(2f^{2}+\frac{3}{2}\Omega_{\rm m}\Xi\right)\kappa_{\delta}\] \[= H^{2}S_{\alpha}, \tag{2.13}\] \[\bar{\lambda}_{\delta}+[4f+(2+\varsigma)]H\dot{\lambda}_{\delta}+H ^{2}\left(2f^{2}+\frac{3}{2}\Omega_{\rm m}\Xi\right)\lambda_{\delta}\] \[= \frac{7}{2}H^{2}S_{\gamma}. \tag{2.14}\] Since the galaxy density field is a biased quantity, we assume the linear bias \(b_{1}\), the second-order local bias \(b_{2}\), and the second-order tidal bias \(b_{s2}\) as the bias parameters that describe the galaxy density fluctuation up to second order (e.g., Desjacques et al., 2018): \[\delta^{(\rm\delta)}(\mathbf{x})=b_{1}\delta(\mathbf{x})+\frac{b_{2}}{2}\left[\delta( \mathbf{x})\right]^{2}+b_{s2}[s_{ij}]^{2}, \tag{2.15}\] where the superscript \((\rm g)\) stands for "galaxy", and \([s_{ij}]^{2}\) is given by \[[s_{ij}]^{2}=\left[\frac{\partial_{i}\partial_{j}}{\partial^{2}}\delta(\mathbf{x}) \right]^{2}-\frac{1}{3}[\delta(\mathbf{x})]^{2}. \tag{2.16}\] Then, the second-order kernel functions for galaxies are given by \[F_{2}^{(\rm g)} = b_{1}F_{2}^{(\rm m)}(\mathbf{p}_{1},\mathbf{p}_{2})+\frac{1}{2}b_{2}+b_{s 2}\left[\left(\hat{p}_{1}\cdot\hat{p}_{2}\right)^{2}-\frac{1}{3}\right],\] \[G_{2}^{(\rm g observed galaxy density fluctuation is then distorted along the LOS direction as follows: \[\delta_{\rm s}^{\rm(g)}(\mathbf{x})=\int d^{3}x^{\prime}\,\Big{(}1+\delta^{\rm(g)}( \mathbf{x}^{\prime})\Big{)}\,\delta_{\rm D}\left(\mathbf{x}-\mathbf{x}_{\rm red}(\mathbf{x}^{ \prime})\right)-1. 
\tag{2.19}\] In Fourier space, the \(n\)-th order solution of \(\delta_{\rm s}^{\rm(g)}\) is represented as \[\widetilde{\delta}_{\rm s,\,n}^{\rm(g)}(\mathbf{k}) = \int\frac{d^{3}p_{1}}{(2\pi)^{3}}\cdots\int\frac{d^{3}p_{n}}{(2 \pi)^{3}}(2\pi)^{3}\delta_{\rm D}(\mathbf{k}-\mathbf{p}_{\rm[1n]}) \tag{2.20}\] \[\times Z_{n}(\mathbf{p}_{1},\ldots,\mathbf{p}_{2})\delta_{\mathbf{i}}(\mathbf{p}_ {1})\cdots\delta_{\mathbf{i}}(\mathbf{p}_{n}).\] The first and second-order kernel functions are given by (Scoccimarro et al., 1999) \[Z_{1} = b_{1}+f(\hat{p}\cdot\hat{n})^{2},\] \[Z_{2} = F_{2}^{(g)}(\mathbf{p}_{1},\mathbf{p}_{2})+f(\hat{k}\cdot\hat{n})^{2}G_ {2}^{(g)}(\mathbf{p}_{1},\mathbf{p}_{2}) \tag{2.21}\] \[+ \frac{f(\mathbf{k}\cdot\hat{n})}{2}\left[\frac{(\hat{p}_{1}\cdot\hat {n})}{p_{1}}Z_{1}(\mathbf{p}_{2})+\frac{(\hat{p}_{2}\cdot\hat{n})}{p_{2}}Z_{1}( \mathbf{p}_{1})\right],\] where \(\mathbf{k}=\mathbf{p}_{1}+\mathbf{p}_{2}\). In the rest of this paper, we focus only on the galaxy density fluctuation with RSDs, so for simplicity of notation, we refer to it simply as \(\delta\) instead of \(\delta_{\rm s}^{\rm(g)}\). We also omit the angle-dependence \(\hat{n}\) of any function that includes RSDs. At the leading-order in perturbation theory, the redshift-space power spectrum and bispectrum are represented as \[P(\mathbf{k}) = [Z_{1}(\mathbf{k})]^{2}P_{\rm lin}(k),\] \[B(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3}) = 2Z_{2}(\mathbf{k}_{1},\mathbf{k}_{2})Z_{1}(\mathbf{k}_{1})Z_{1}(\mathbf{k}_{2})P _{\rm lin}(k_{1})P_{\rm lin}(k_{2}) \tag{2.22}\] \[+ 2\ {\rm perms}\,,\] where \(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}=0\), and \(P_{\rm lin}\) is the linear matter power spectrum. In what follows, we omit the \(\mathbf{k}_{3}\)-dependence of the bispectrum for notational simplicity: \(B(\mathbf{k}_{1},\mathbf{k}_{2})=B(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3}=-\mathbf{k}_{1}-\mathbf{k} _{2})\). Finally, we conclude this subsection by summarising the key points about galaxy fluctuations from a theoretical point of view. First, in the case of \(\nu=\mu=\tau_{\alpha}=0\) in Eqs. (2.3) and (2.4), Horndeski theories are recovered; a \(\Lambda\)CDM model additionally has \(\kappa=(3/2)\Omega_{\rm m}(z)\) and \(\tau_{\gamma}=0\); in both Horndeski theories and \(\Lambda\)CDM, \(\kappa_{\delta}=\kappa_{\theta}=1\) from Eq. (2.13), and \(\lambda_{\delta}\) and \(\lambda_{\theta}\) are still time-dependent from Eq. (2.14); for the approximation \(f^{2}=\Omega_{\rm m}\) in \(\Lambda\)CDM, \(\lambda_{\delta}=\lambda_{\theta}=1\). Second, since the linear equation of the density fluctuation (2.6) omits space-dependence as in the \(\Lambda\)CDM case, under the assumption that the scalar field becomes to prevail during the accelerated Universe, the shape of the linear matter power spectrum can be the usual \(\Lambda\)CDM one determined in the matter-dominant era. In other words, the characteristic scale-dependences in \(\delta\) and \(\nu\) due to scalar-tensor theories appear only through the non-linear kernel functions \(F_{n\geq 2}^{(m)}\) and \(G_{n\geq 2}^{(m)}\). Third, the non-linear terms that appear in the fluid equation in Eq. (2.2) and the Poisson equation in Eq. (2.3), such as \(\partial_{i}(\delta v^{i})\), \(\partial_{i}(v^{j}\partial_{j}v^{i})\), \(W_{\alpha}\) and \(W_{\gamma}\), become zero when the volume average or ensemble average is calculated. 
Therefore, the resulting non-linear solutions satisfy \(\int d^{3}x\delta_{n}=\langle\delta_{n}\rangle=0\) and \(\int d^{3}x\theta_{n}=\langle\theta_{n}\rangle=0\) for \(n\geq 2\), and the corresponding kernel functions satisfy \(F_{n\geq 2}^{(m)}=G_{n\geq 2}^{(m)}=0\) when \(\mathbf{p}_{1}+\cdots+\mathbf{p}_{n}=0\) as known in the case of \(\Lambda\)CDM. This condition partially breaks when the non-linear bias effect is taken into account, resulting in \(F_{2}^{(g)}(\mathbf{p},-\mathbf{p})\neq 0\) and \(G_{2}^{(g)}(\mathbf{p},-\mathbf{p})=0\) (see Eq. (2.17)). ### Limitation of our assumptions This subsection discusses the possible cases where the assumptions adopted in building the theoretical model in the previous subsection are violated, introducing some previous studies. The following bullet labels correspond to those in Section 2.1. 1. Besides scalar-tensor theories, two other examples of modified gravity theories have been widely studied in cosmology: the Hu-Sawicki model (Hu & Sawicki, 2007) of \(f(R)\) gravity (see Capozziello & Francaviglia, 2008; Sotiriou & Faraoni, 2010, for reviews) and the normal branch of the 5D brane-world Dvali-Gabadadze-Porrati model (nDGP; Dvali et al., 2000). These two models have been investigated in detail by Alam et al. (2020) as representative targets in DESI. Focusing on the non-linear effects, the nDGP model generates a scale dependence of the same form as Horndeski theories, characterised by the function \(\gamma(\mathbf{p}_{1},\mathbf{p}_{2})\) (2.11). On the other hand, the Hu-Sawicki \(f(R)\) model produces a kernel function different from the one predicted by scalar-tensor theories. Specifically, in the modified Poisson equation of Eq. (2.3), \(\kappa\) is scale-dependent, resulting in the linear growth function that depends on the wavenumber. In addition, the non-linear source \(S_{\Phi}^{\rm NL}\) for the Hu-Sawicki model also appears as a form that cannot be described by \(W_{\alpha}\) and \(W_{\gamma}\), unlike Eq. (2.4). Such non-linearities in the density field specific to the Hu-Sawicki model have been studied by (Koyama et al., 2009; Taruya, 2016) in the context of perturbation theory, and the model has been tested by applying the theory to BOSS galaxy data (Song et al., 2015). 2. The effect of the relative velocity of baryons and CDM enters the galaxy density fluctuation quadratically together with the corresponding bias parameter (Dalal et al., 2010), thus modifying the shape of the measured bispectrum. In particular, as in the case of the \(\kappa_{\delta}\) parameter in DHOST theories, it corrects the term in \(F_{2}^{(g)}(\mathbf{p}_{1},\mathbf{p}_{2})\) that depends on \((\hat{p}_{1}\cdot\hat{p}_{2})\) called the shift term (Yoo et al., 2011). The relative velocity effect on galaxy clustering has been measured using the galaxy power spectrum (Yoo & Seljak, 2013; Beutler et al., 2016) and 3PCF (Slepian et al., 2018), but any signature has not yet been detected. Although massive neutrinos can also change the shape of the bispectrum, the results of simulations performed by Ruggeri et al. (2018) confirm that the CDM component in the bispectrum is dominant; Interestingly, Kamalinejad & Slepian (2020) has shown that the effect of neutrino corrections appears in the shift term as well as the growth and tidal terms in the second-order velocity field (3.16). Hence, the anisotropic 3PCF (or bispectrum) may help to constrain the neutrino masses (see e.g., Saito et al., 2009; Levi & Vlah, 2016; Yoshikawa et al., 2020). 3. 
The case of non-minimally coupled scalar fields with CDM has already been the subject of several studies in the context of cosmology (Kimura et al., 2018; Chibana et al., 2019; Kase & Tsujikawa, 2020; Chiba et al., 2020; Kase & Tsujikawa, 2020). For example, Kimura et al. (2018); Chibana et al. (2019) have shown that in this case, the continuity equation (2.2) is modified, and thus the relation between the density fluctuations in real and redshift spaces, i.e. the Kaiser formula in linear theory (Kaiser, 1987), is also modified. 4. The quasi-static approximation breaks when the scale of interest is close to the sound horizon scale. Even in GR, it is known that there are relativistic corrections to \(F_{2}^{(g)}\) when approaching the horizon size (Tram et al., 2016; Jolicoeur et al., 2017, 2018; Koyama et al., 2018; Castiblanco et al., 2019; Umeh et al., 2019; Calles et al., 2020; de Weerd et al., 2020). 5. Various possibilities have been proposed for how the initial conditions of cosmic fluctuations predicted by inflation theory could affect observables. One of the most critical examples relevant to this paper is the existence of primordial non-Gaussianity, which breaks the LSS consistency relation (Berezhiani and Khoury, 2014; Valageas et al., 2017; Esposito et al., 2019). * Fujita and Vlah (2020) proposed a bias expansion formalism dubbed "Monkey bias" based on the LSS consistency relation and showed that it is equivalent to the existing bias expansion framework. In other words, in DHOST theories, which violate the LSS consistency relation, the existing bias expansion we adopted (2.15) may not be valid, and a new bias in the shift term of non-linear galaxy density fluctuations, i.e., the shift bias parameter, may appear. Moreover, the shift bias may also induce velocity bias effects. In Section 9.12, we will discuss and clarify which parts of theories can be tested with the anisotropic 3PCF, even in the presence of the shift and velocity biases. ## 3 Theoretical models This section describes how to calculate the theoretical models of multipole 2PCFs and 3PCFs. Section 3.1 summarises the decomposition formalism for the anisotropic three-point statistics (bisectra and 3PCFs). Section 3.2 introduces the power and bispectrum models used to compute the 2PCF and 3PCF. Section 3.3 discusses what parameters should be varied to perform the cosmological analysis and shows the specific parameter dependence of the bispectrum model we use. Section 3.4 reviews new parameters helpful in testing DHOST theories proposed by Yamauchi and Sugiyama (2021) and their time evolution. Section 3.5 discusses the limits of applying our theoretical models of the 2PCF and 3PCF to the data analysis. ### Decomposition formalisms of the 2PCF and 3PCF We follow the decomposition formalism of redshift-space bispectra proposed by Sugiyama et al. (2019) using the tri-polar spherical harmonics (TripoSH) as a basis function. 
In that formalism, under statistical homogeneity, isotropy, and parity-symmetry assumptions, we define the base function to expand the bispectrum using three spherical harmonics \(Y_{\ell m}\) as \[\mathcal{S}_{\ell_{1}\ell_{2}\ell}(\hat{k}_{1},\hat{k}_{2},\hat{ n}) =\frac{4\pi}{h_{\ell_{1}\ell_{2}\ell}}\sum_{m_{1}m_{2}m}\left( \begin{smallmatrix}\ell_{1}&\ell_{2}&\ell\\ m_{1}&m_{2}&m\end{smallmatrix}\right)\] \[\times\ Y_{\ell_{1}m_{1}}(\hat{k}_{1})Y_{\ell_{2}m_{2}}(\hat{k}_ {2})Y_{\ell m}(\hat{n}), \tag{3.1}\] where \[h_{\ell_{1}\ell_{2}\ell}=\sqrt{\frac{(2\ell_{1}+1)(2\ell_{2}+1)(2\ell+1)}{4 \pi}}\left(\begin{smallmatrix}\ell_{1}&\ell_{2}&\ell\\ 0&0&0\end{smallmatrix}\right), \tag{3.2}\] and the circle bracket with \(6\) multipole indices, \((\dots)\), denotes the Wigner-3j symbol. The bispectrum is then expanded as \[B(\mathbf{k}_{1},\mathbf{k}_{2},\hat{n})= \sum_{\ell_{1}+\ell_{2}+\ell=\mathrm{even}}\!\!B_{\ell_{1}\ell_{2 }\ell}(k_{1},k_{2})\mathcal{S}_{\ell_{1}\ell_{2}\ell}(\hat{k}_{1},\hat{k}_{2},\hat{n}), \tag{3.3}\] and the corresponding multipole components are given by \[B_{\ell_{1}\ell_{2}\ell}(k_{1},k_{2}) =4\pi h_{\ell_{1}\ell_{2}\ell}^{2}\int\frac{d^{2}\hat{k}_{1}}{4 \pi}\int\frac{d^{2}\hat{k}_{2}}{4\pi}\int\frac{d^{2}\hat{k}_{2}}{4\pi}\] \[\times\ \mathcal{S}_{\ell_{1}\ell_{2}\ell}^{*}(\hat{k}_{1},\hat{k}_{2} )B(\mathbf{k}_{1},\mathbf{k}_{2}). \tag{3.4}\] Since the bispectrum multipoles defined here are independent of the coordinate system in which they are calculated, it is possible to compare theoretical calculations with observations in different coordinate systems. Specifically, we use the following coordinate system with \(\hat{k}_{1}\) as the \(z\)-axis for theoretical calculations: \[\hat{k}_{1} =\{0,0,1\}\] \[\hat{k}_{2} =\{\sin\theta_{k_{2}},0,\cos\theta_{k_{2}}\}\] \[\hat{n} =\{\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta\}. \tag{3.5}\] On the other hand, when measuring the bispectrum from galaxy data, we use the Cartesian coordinate and take the north pole as our \(z\)-axis (see Section 4.2). We perform the expansion of the 3PCF in the same way as for the bispectrum. The resulting 3PCF multipoles are related to \(B_{\ell_{1}\ell_{2}\ell}\) through a two-dimensional Hankel transform: \[\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2}) =i^{\ell_{1}+\ell_{2}}\int\frac{dk_{1}k_{1}^{2}}{2\pi^{2}}\int \frac{dk_{2}k_{2}^{2}}{2\pi^{2}}\] \[\times j_{\ell_{1}}(r_{1}k_{1})j_{\ell_{2}}(r_{2}k_{2})B_{\ell_{1} \ell_{2}\ell}(k_{1},k_{2}), \tag{3.6}\] where \(j_{\ell}\) is the spherical Bessel function at the \(\ell\)-th order. This relation means that \(\zeta_{\ell_{1}\ell_{2}\ell}\) have in principle the same information as \(B_{\ell_{1}\ell_{2}\ell}\), facilitating the comparison of the configuration-space and Fourier-space analyses. Note that \(B_{\ell_{1}\ell_{2}\ell}(k_{1},k_{2})=B_{\ell_{2}\ell_{1}\ell}(k_{2},k_{1})\) and \(\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2})=\zeta_{\ell_{2}\ell_{1}\ell}(r_{2},r _{1})\). From this relation, when \(\ell_{1}=\ell_{2}\), only \(k_{1}\geq k_{2}\) and \(r_{1}\geq r_{2}\) need to be computed for the bispectrum and 3PCF, respectively. Also, when \(\ell>0\), only \(\ell_{1}\geq\ell_{2}\) should be considered. 
In the case of the power spectrum, it is common to expand the power spectrum using Legendre polynomial functions \(\mathcal{L}_{\ell}\)(e.g., Hamilton, 1997): \[P(\mathbf{k})=\sum_{\ell}P_{\ell}(k)\mathcal{L}_{\ell}(\hat{k}\cdot\hat{n}), \tag{3.7}\] and the corresponding multipole components of the 2PCF are given by \[\xi_{\ell}(r)=i^{\ell}\int\frac{dkk^{2}}{2\pi^{2}}j_{\ell}(rk)P_{\ell}(k). \tag{3.8}\] This paper tests DHOST theories by measuring \(\xi_{\ell}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}\) from the BOSS galaxy data and comparing them with the corresponding theoretical models. The index \(\ell\) that is common for both \(\xi_{\ell}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}\) represents the decomposition related to the RSD or AP effect, where \(\ell=0\) means monopole, \(\ell=2\) quadrupole, and \(\ell=4\) hexadeep. Relativistic effects can generate \(\ell=\mathrm{odd}\) components (e.g., McDonald, 2009; Desjacques et al., 2018a; Clarkson et al., 2019), but we ignore them here. Furthermore, we also ignore the \(\ell=4\) modes; although the signal of the \(\ell=4\) modes is too small to be detected in the BOSS data, it should be taken into account in the future as it helps to improve the constraints on the cosmological parameters (Beutler et al., 2017; Sugiyama et al., 2019). Therefore, in this paper, we focus on only two modes, \(\ell=0\) and \(\ell=2\). In particular, for the 3PCF, we consider the first two terms of the monopole (\(\zeta_{000}\) and \(\zeta_{110}\)) and the first two terms of the quadrupole (\(\zeta_{202}\) and \(\zeta_{112}\)). Finally, we discuss the relation with the widely used decomposition formalism of the bispectrum proposed by Scoccimarro et al. (1999). As in Eq. (3.5), this formalism decomposes the bispectrum by choosing the coordinate system with \(k_{1}\) as the \(z\)-axis and using the spherical harmonic function for the LOS direction: \(B(\mathbf{k}_{1},\mathbf{k}_{2},\hat{n})=\sum_{LM}B_{LM}(\mathbf{k}_{1},\mathbf{k}_{2})Y_{LM}(\hat{n})\). The relation between Scoccimarro et al. (1999)'s decomposition formalism and our TripoSH decomposition has already been shown in Eq. (25) of Sugiyama et al. (2019). According to the relation, \(\zeta_{202}\) contains only \(M=0\) mode in Scoccimarro et al. (1999)'s formalism, while \(\zeta_{112}\) further contains the \(M\neq 0\) modes in addition to the \(M=0\) mode. The ability to handle the \(M\neq 0\) modes, including window function corrections (see Section 4.3), is one advantage of our TripoSH decomposition formalism. For example, studies of the quadrupole bispectrum using Scoccimarro et al. (1999)'s method have mainly dealt only with the \(M=0\) mode (D'Amico et al., 2022b). One reason is that the correction formula for the window function effect is only given for the \(M=0\) case (Pardede et al., 2022). Moreover, we show in Section 7 that \(\zeta_{112}\) gives additional cosmological information to \(\zeta_{202}\), pointing out the importance of the \(M\neq 0\) modes. ### IR-resummed power spectrum and bispectrum models In this paper, we focus on the 2PCF and 3PCF at scales above \(80\,h^{-1}\,{\rm Mpc}\) (Section 9), where we can ignore loop corrections arising from higher-order non-linear effects. The power spectrum and bispectrum shapes can be described at those scales by their leading solutions, the so-called tree-level solutions (2.22). However, we need to consider the non-linear damping effect of BAOs due to the linear gravity that shifts the position of galaxies. 
The non-linear damping of BAO can be described by a large-scale bulk flow that is position-independent in a given observed region (Eisenstein et al., 2007; Crocce & Scoccimarro, 2008; Matsubara, 2008; Sugiyama & Spergel, 2014; Baldauf et al., 2015), called the infra-red (IR) flow. In the limit where the IR flow does not correlate with small-scale density fluctuations, based on the Galilean invariance of the system of equations in the IR limit, all the effects of the IR flow are cancelled out in equal-time \(n\)-point statistics (Jain & Bertschinger, 1996; Scoccimarro & Frieman, 1996; Kehagias & Riotto, 2013; Peloso & Pietroni, 2013; Sugiyama & Futamase, 2013; Sugiyama & Spergel, 2014; Blas et al., 2013, 2016), Lewandowski & Senatore, 2017). However, when we deviate from such an extreme situation, we find a correlation between the IR flow and the small-scale density field. By extracting this correlation in the full perturbative order only for the BAO signal, it becomes possible to describe the non-linear effects of BAOs. This kind of construction of \(n\)-point statistics models is called the IR resummation method (Crocce & Scoccimarro, 2008; Matsubara, 2008; Sugiyama & Spergel, 2014; Senatore & Zaldarriaga, 2015; Baldauf et al., 2015; Blas et al., 2016; Senatore & Trevisan, 2018; Ivanov & Sibiryakov, 2018; Lewandowski & Senatore, 2020; Sugiyama et al., 2021). In this paper, we will use the IR resummed power and bispectrum models given in Eqs. (3.9) and (3.12), even in DHOST theories that break the IR cancellation, but we will mention the issues that may arise in this case in Section 3.5. For the power spectrum, we adopt the following IR-resummed model: \[P(\mathbf{k})=\left[Z_{1}(\mathbf{k})\right]^{2}\left[{\cal D}^{2}(\mathbf{k})P_{\rm w}(k )+P_{\rm now}(k)\right], \tag{3.9}\] where \(P_{\rm in}\) is decomposed into two parts: the "no-wigest (nw)" part \(P_{\rm w}\) that is a smooth version of \(P_{\rm in}\) with the baryon oscillations removed (Eisenstein & Hu, 1998), and the "wigest (w)" part defined as \(P_{\rm w}=P_{\rm in}-P_{\rm w}\). The non-linear BAO degradation is represented by the two-dimensional Gaussian damping factor derived from a differential motions of Lagrangian displacements (Eisenstein et al., 2007; Crocce & Scoccimarro, 2008; Matsubara, 2008): \[{\cal D}(\mathbf{k})=\exp\left(-\frac{k^{2}(1-\mu^{2})\sigma_{\perp}^{2}+k^{2}\mu ^{2}\sigma_{\parallel}^{2}}{2}\right), \tag{3.10}\] where \(\mu=\hat{k}\cdot\hat{n}\). We compute the radial and transverse components of smoothing parameters, \(\sigma_{\perp}\) and \(\sigma_{\parallel}\), using the Zel'dovich approximation (Zel'Dovich, 1970; Crocce & Scoccimarro, 2008; Matsubara, 2008): \[\sigma_{\perp}^{2}=\frac{1}{3}\int\frac{dp}{2\pi^{2}}P_{\rm in}(p),\] \[\sigma_{\parallel}^{2}=(1+f)^{2}\,\sigma_{\perp}^{2}. \tag{3.11}\] The power spectrum model in Eq. (3.9) was first proposed empirically by Eisenstein et al. (2007a). Subsequently, the damping factor \({\cal D}^{2}\) in front of \(P_{\rm in}\) was derived in the context of perturbation theory by Crocce & Scoccimarro (2008); Matsubara (2008); an additional term to recover a smooth linear power spectrum without BAOs, \((1-{\cal D}^{2})P_{\rm now}\), was derived using the IR resummation method (Sugiyama & Spergel, 2014; Baldauf et al., 2015; Blas et al., 2016; Ivanov & Sibiryakov, 2018; Sugiyama et al., 2021). 
For the bispectrum, we adopt the following IR-resummed model (Sugiyama et al., 2021): \[B(\mathbf{k}_{1},\mathbf{k}_{2}) = 2\,Z_{2}(\mathbf{k}_{1},\mathbf{k}_{2})Z_{1}(\mathbf{k}_{1})Z_{1}(\mathbf{k}_{2}) \tag{3.12}\] \[\times \Big{\{}{\cal D}(\mathbf{k}_{1}){\cal D}(\mathbf{k}_{2}){\cal D}(\mathbf{k}_ {3})P_{\rm w}(k_{1})P_{\rm w}(k_{2})\] \[+ {\cal D}^{2}(\mathbf{k}_{1})P_{\rm w}(k_{1})P_{\rm w}(k_{2})+{\cal D} ^{2}(\mathbf{k}_{2})P_{\rm now}(k_{1})P_{\rm w}(k_{2})\] \[+ {P_{\rm now}(k_{1})P_{\rm now}(k_{2})}\Big{\}}+2\ {\rm perms.},\] where \(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}=0\). As in the case of the power spectrum, this bispectrum model restores the tree-level solution (2.22) consisting of a smooth version (without BAOs) of the linear power spectrum after degrading the BAO signature 11. Footnote 11: Blas et al. (2016); Ivanov & Sibiryakov (2018) proposed a bispectrum model similar to Eq. (3.12). However, the authors ignore the \({\cal O}(P_{\rm w}^{2}/P_{\rm now}^{2})\) term, so their model does not include the second line term, \(D(\mathbf{k}_{1})D(\mathbf{k}_{2})D(\mathbf{k}_{3})P_{\rm w}(k_{1})P_{\rm w}(k_{2})\), in Eq. (3.12). This term added by Sugiyama et al. (2021) contains the full tree-level solution. ### Parameterization method for the bispectrum The non-linear kernel functions \(F_{2}^{(\rm m)}\) and \(G_{2}^{(\rm m)}\) can be decomposed into three terms using Legendre polynomial functions \({\cal L}_{\ell}\left(\hat{p}_{1}\cdot\hat{p}_{2}\right)\):i.e., monopole, dipole, and quadrupole components (Schmutfull et al., 2015). They are called the growth, shift, and tidal terms, and are understood in \(\Lambda\)CDM as follows: the growth term represents the spherical collapse of density fluctuations (Fosalba & Gaztanaga, 1998); the shift term appears in the form \(\Psi_{1}^{\dagger}\partial_{i}\delta_{1}\) or \(\Psi_{1}^{\dagger}\partial_{i}\theta_{1}\) as a coordinate transformation of \(\delta\) or \(\theta\) by the displacement vector \(\mathbf{\Psi}\); the last term represents the tidal force (2.16). Then, \(F_{2}^{(\rm m)}\) and \(G_{2}^{(\rm m)}\) (2.10) are rewritten as (e.g., Bouchet et al. 1992; Sherwin & Zaldarriaga, 2012; Baldauf et al., 2012; Schmutfull et al., 2015) \[F_{2}^{(\rm m)} = \left(\kappa_{\delta}-\frac{4}{21}\lambda_{\delta}\right)+\kappa_{ \delta}S(\mathbf{k}_{1},\mathbf{k}_{2})+\frac{2}{7}\lambda_{\delta}T(\mathbf{k}_{1},\mathbf{k}_ {2}),\] \[G_{2}^{(\rm m)} = \left(\kappa_{\theta}-\frac{8}{21}\lambda_{\theta}\right)+\kappa_{ \theta}S(\mathbf{k}_{1},\mathbf{k}_{2})+\frac{4}{7}\lambda_{\theta}T(\mathbf{k}_{1},\mathbf{k}_ {2}), \tag{3.13}\] where \(S\) and \(T\) are the scale-dependent functions characterising the shift and tidal terms: \[S(\mathbf{k}_{1},\mathbf{k}_{2}) = \frac{1}{2}(\hat{k}_{1}\cdot\hat{k}_{2})\left(\frac{k_{1}}{k_{2}}+ \frac{k_{2}}{k_{1}}\right),\] \[T(\mathbf{k}_{1},\mathbf{k}_{2}) = (\hat{k}_{1}\cdot\hat{k}_{2})^{2}-\frac{1}{3}. \tag{3.14}\] As mentioned in Section 2.1, the coefficients of the growth, shift, and tidal terms are not independent of each other but are related to under the condition that \(F_{2}^{(\rm m)}(\mathbf{p},-\mathbf{p})=G_{2}^{(\rm m)}(\mathbf{p},-\mathbf{p})=0\). Therefore, the coefficient of the growth term is determined from the coefficients of the shift and tidal terms. 
Considering the linear and non-linear bias effects, that the second-order fluctuations are proportional to \(\sigma_{8}^{2}\), and that \(G_{2}^{(m)}\) always appears with \(f\), we introduce the following parameterisation, \[F_{2}^{(8)}\sigma_{8}^{2} = (b_{1}\sigma_{8})\left[(F_{8}\sigma_{8})+(F_{\rm s}\sigma_{8})S+(F _{\rm t}\sigma_{8})T\right],\] \[fG_{2}^{(8)}\sigma_{8}^{2} = (f\sigma_{8})\left[(G_{\rm g}\sigma_{8})+(G_{\rm s}\sigma_{8})S+( G_{\rm t}\sigma_{8})T\right]. \tag{3.15}\] DHOST theories have \(G_{\rm g}=G_{s}-(2/3)G_{\rm t}\) from the condition \(G_{2}^{(8)}(\mathbf{p},-\mathbf{p})=0\); Horndeski theories further have \(F_{\rm s}=G_{\rm s}=1\). The specific form of each coefficient in DHOST theories is given by \[F_{\rm g} = \kappa_{\delta}-\frac{4}{21}\lambda_{\delta}+\frac{1}{2}\frac{b_ {2}}{b_{1}},\] \[F_{\rm s} = \kappa_{\delta},\] \[F_{\rm t} = \frac{2}{7}\lambda_{\delta}+\frac{b_{\rm s^{2}}}{b_{1}},\] \[G_{\rm g} = \kappa_{\theta}-\frac{8}{21}\lambda_{\theta},\] \[G_{\rm s} = \kappa_{\theta},\] \[G_{\rm t} = \frac{4}{7}\lambda_{\theta}. \tag{3.16}\] In Eq. (3.16), \(F_{\rm g}\) and \(F_{\rm t}\) do not contain any cosmological information because they are degenerate with the non-linear bias parameters, and \(G_{\rm g}\) is determined from \(G_{\rm t}\) and \(G_{\rm s}\). Thus, cosmologically meaningful parameters are \(F_{\rm s}\), \(G_{\rm s}\), and \(G_{\rm t}\). Following the method proposed by Sugiyama et al. (2021), we decompose the IR-resummed bispectrum model into \[B(\mathbf{k}_{1},\mathbf{k}_{2})=\sum_{p=1}^{22}X^{(p)}B^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}), \tag{3.17}\] with \[B^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}) = 2\,H^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}) \tag{3.18}\] \[\times \left\{\mathcal{D}(\mathbf{k}_{1})\mathcal{D}(\mathbf{k}_{2})\mathcal{D} (\mathbf{k}_{3})P_{\rm w}^{(n)}(k_{1})P_{\rm w}^{(n)}(k_{2})\right.\] \[+ \left.\mathcal{D}^{2}(\mathbf{k}_{1})P_{\rm w}^{(n)}(k_{1})P_{\rm w}^{ (n)}(k_{2})\right.\] \[+ \left.\mathcal{D}^{2}(\mathbf{k}_{2})P_{\rm w}^{(n)}(k_{1})P_{\rm w}^{ (n)}(k_{2})\right.\] \[+ \left.P_{\rm w}^{(n)}(k_{1})P_{\rm w}^{(n)}(k_{2})\right\}+2\,{ \rm{perms.}},\] where \(P_{\rm w}^{(n)}\) and \(P_{\rm w}^{(n)}\) are respectively the wiggle and no-wiggle linear matter power spectra normalized by \(\sigma_{8}^{2}\): \(P_{\rm w}^{(n)}=P_{\rm w}/\sigma_{8}^{2}\) and \(P_{\rm w}^{(n)}=P_{\rm w}/\sigma_{8}^{2}\). 
The functions \(X^{(p)}\) (\(p=1-22\)) represent the combinations of the parameters of interest and are given by \[X^{(1)} = (F_{\rm g}\sigma_{8})(b_{1}\sigma_{8})^{3},\] \[X^{(2)} = (F_{\rm t}\sigma_{8})(b_{1}\sigma_{8})^{3},\] \[X^{(3)} = (F_{\rm t}\sigma_{8})(b_{1}\sigma_{8})^{3},\] \[X^{(4)} = (F_{\rm g}\sigma_{8})(b_{1}\sigma_{8})^{2}(\sigma_{8}),\] \[X^{(5)} = (F_{\rm s}\sigma_{8})(b_{1}\sigma_{8})^{2}(f\sigma_{8}),\] \[X^{(6)} = (F_{\rm t}\sigma_{8})(b_{1}\sigma_{8})^{2}(f\sigma_{8}),\] \[X^{(7)} = (F_{\rm g}\sigma_{8})(b_{1}\sigma_{8})(f\sigma_{8})^{2},\] \[X^{(8)} = (F_{\rm s}\sigma_{8})(b_{1}\sigma_{8})(f\sigma_{8})^{2},\] \[X^{(9)} = (F_{\rm t}\sigma_{8})(b_{1}\sigma_{8})^{2}(f\sigma_{8})^{2},\] \[X^{(10)} = (G_{\rm g}\sigma_{8})(b_{1}\sigma_{8})^{2}(f\sigma_{8}),\] \[X^{(11)} = (G_{\rm s}\sigma_{8})(b_{1}\sigma_{8})^{2}(f\sigma_{8}),\] \[X^{(12)} = (G_{\rm t}\sigma_{8})(b_{1}\sigma_{8})^{2}(f\sigma_{8}),\] \[X^{(13)} = (G_{\rm g}\sigma_{8})(b_{1}\sigma_{8})(f\sigma_{8})^{2},\] \[X^{(14)} = (G_{\rm s}\sigma_{8})(b_{1}\sigma_{8})(f\sigma_{8})^{2},\] \[X^{(15)} = (G_{\rm t}\sigma_{8})(b_{1}\sigma_{8})(f\sigma_{8})^{2},\] \[X^{(16)} = (G_{\rm g}\sigma_{8})(f\sigma_{8})^{3},\] \[X^{(17)} = (G_{\rm s}\sigma_{8})(f\sigma_{8})^{3},\] \[X^{(18)} = (G_{\rm t}\sigma_{8})(f\sigma_{8})^{3},\] \[X^{(18)} = (b_{1}\sigma_{8})^{3}(f\sigma_{8}),\] \[X^{(20)} = (b_{1}\sigma_{8})^{2}(f\sigma_{8})^{2},\] \[X^{(21)} = (b_{1}\sigma_{8})(f\sigma_{8})^{3},\] \[X^{(22)} = (f\sigma_{8})^{4}. \tag{3.19}\] The scale-dependent functions \(H^{(p)}\) (\(p=1-22\)) are derived by decomposing the non-linear kernel functions \(Z_{1}Z_{1}Z_{2}\) in terms of the parameters, given by \[H^{(1)} = 1,\] \[H^{(2)} = S(\mathbf{k}_{1},\mathbf{k}_{2}),\] \[H^{(3)} = T(\mathbf{k}_{1},\mathbf{k}_{2}),\] \[H^{(4)} = (\mu_{1}^{2}+\mu_{2}^{2}),\] \[H^{(5)} = S(\mathbf{k}_{1},\mathbf{k}_{2})(\mu_{1}^{2}+\mu_{2}^{2}),\] \[H^{(6)} = T(\mathbf{k}_{1},\mathbf{k}_{2})(\mu_{1}^{2}+\mu_{2}^{2}),\] \[H^{(7)} = (\mu_{1}^{2}\mu_{2}^{2}),\] \[H^{(8)} = S(\mathbf{k}_{1},\mathbf{k}_{2})(\mu_{1}^{2}\mu_{2}^{2}),\] \[H^{(9)} = T(\mathbf{k}_{1},\mathbf{k}_{2})(\mu_{1}^{2}\mu_{2}^{2}),\] \[H^{(10)} = (\mu^{2}),\] \[H^{(11)} = S(\mathbf{k}_{1},\mathbf{k}_{2})(\mu^{2}),\] \[H^{(12)} = T(\mathbf{k}_{1},\mathbf{k}_{2})(\mu^{2}),\] \[H^{(13)} = (\mu^{2})(\mu_{1}^{2}+\mu_{2}^{2}),\] \[H^{(14)} = S(\mathbf{k}_{1},\mathbf{k}_{2})(\mu^{2})(\mu_{1}^{2}+\mu_{2}^{2}),\] \[H^{(15)} = T(\mathbf{k}_{1},\mathbf{k}_{2})(\mu^{2})(\mu_{1}^{2}+\mu_{2}^{2}),\] \[H^{(16)} = (\mu^{2})(\mu_{1}^{2}\mu_{2}^{2}),\] \[H^{(17)} = S(\mathbf{k}_{1},\mathbf{k}_{2})(\mu^{2})(\mu_{1}^{2}\mu_{2}^{2}),\] \[H^{(18)} = T(\mathbf{k}_{1},\mathbf{k}_{2})(\mu^{2})(\ bispectra: \[B_{\rm FG}(\mathbf{k}_{1},\mathbf{k}_{2}) =\sum_{p=1,4,7}X^{(p)}B^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}),\] \[B_{\rm FS}(\mathbf{k}_{1},\mathbf{k}_{2}) =\sum_{p=2,5,8}X^{(p)}B^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}),\] \[B_{\rm FT}(\mathbf{k}_{1},\mathbf{k}_{2}) =\sum_{p=3,6,9}X^{(p)}B^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}),\] \[B_{\rm GG}(\mathbf{k}_{1},\mathbf{k}_{2}) =\sum_{p=10,13,16}X^{(p)}B^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}),\] \[B_{\rm GS}(\mathbf{k}_{1},\mathbf{k}_{2}) =\sum_{p=11,14,17}X^{(p)}B^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}),\] \[B_{\rm GT}(\mathbf{k}_{1},\mathbf{k}_{2}) =\sum_{p=12,15,18}X^{(p)}B^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}),\] \[B_{\rm BF}(\mathbf{k}_{1},\mathbf{k}_{2}) =\sum_{p=19,20,21,22}X^{(p)}B^{(p)}(\mathbf{k}_{1},\mathbf{k}_{2}). 
\tag{3.22}\] where \(B_{\rm FG}\), \(B_{\rm FS}\), \(B_{\rm FT}\), \(B_{\rm GG}\), \(B_{\rm GS}\), and \(B_{\rm GT}\) are proportional to \((F_{\rm g}\sigma_{8})\), \((F_{\rm g}\sigma_{8})\), \((F_{\rm i}\sigma_{8})\), \((G_{\rm g}\sigma_{8})\), \((G_{\rm i}\sigma_{8})\), and \((G_{\rm i}\sigma_{8})\), respectively, and \(B_{\rm BF}\) depends only on \((b_{1}\sigma_{8})\) and \((f\sigma_{8})\). When computing the above seven bispectra, we assume the cosmological parameters in \(\Lambda\)CDM given in Section 1, the linear bias parameter \(b_{1}=2\), no non-linear bias, i.e. \(b_{2}=b_{s^{2}}=0\), and the redshift \(z=0.61\). Next, we decompose the seven bispectra using TripoSHs according to Section 3.1 and compute the 3PCF multipoles via the 2D Hankel transform (3.6). We plot the resulting 3PCF multipoles in Figures 1 and 2 as a function of \(r_{2}\) after fixing \(r_{1}\) to \(50\), \(80\), \(90\), \(100\), and \(130\,h^{-1}\,{\rm Mpc}\). As shown in Sugiyama et al. (2021), in the monopole component (Figure 1), the growth term ("FG") is positive for scales smaller than \(\sim 130\,h^{-1}\,{\rm Mpc}\) and has a peak at \(r_{1}=r_{2}\), while it goes from positive to negative and behaves like a trough for scales above \(\sim 130\,h^{-1}\,{\rm Mpc}\). On the other hand, the shift ("FS") and tidal ("FT") terms have troughs for any scale. Depending on the scale of interest, the shift term dominates for scales above \(\sim 30\,h^{-1}\,{\rm Mpc}\), and the total 3PCF ("total") has a trough. Around \(r_{1}\sim 100\,h^{-1}\,{\rm Mpc}\), the BAO peak appears and has a wavy shape as it cancels out the trough due to non-linear gravity effects (e.g., see the middle panels). At \(r_{1}=130\,h^{-1}\,{\rm Mpc}\) (the bottom panels), almost all the components have troughs, so the 3PCF has a more significant trough at \(r_{1}=r_{2}\). The quadrupole component (Figure 2) of the 3PCF only shows an overall trough behaviour because the BAO signal is sufficiently non-linearly damped. The most dominant term in the quadrupole 3PCF is the "BF" term, which does not depend on any non-linear coefficients such as \(F_{\rm g}\) or \(G_{\rm g}\). This "BF" term consists of two effects: first, a term expressed as the product of a linear density field and a linear velocity field, and second, a term expressed as the square of the linear velocity field. In particular, the former can be interpreted as a new shift term resulting from the coordinate transformation from real to redshift space (2.18), and it dominates the "BF" term. Therefore, it behaves similarly to the shift term in the monopole 3PCF and explains most of the trough structure in the quadrupole 3PCF. The growth ("GG"), shift ("GS"), and tidal ("GT") terms in the non-linear velocity field contribute to the quadrupole 3PCF comparably to those in the non-linear density field, and thus we can use the quadrupole 3PCF to determine the "GG", "GS", and "GT" terms. In contrast to the monopole case, the growth terms ("FG" and "GG") are negative and behave as troughs, while the shift terms ("FS" and "GS") are positive. ### Time-dependences of parameters We review the discussion by Yamauchi & Sugiyama (2021) on introducing new parameters to test DHOST theories and their time-dependences. Note that some previous works predict that constraining \(\sigma_{8}\) alone from the 3PCF can break the degeneracy between \(f\sigma_{8}\) and \(\sigma_{8}\), but this no longer happens in the framework of DHOST theories. 
To illustrate this fact in the context of our parameterisation, we can see from Eq. (3.15) that the coefficient of the shift term in the second-order density fluctuation in \(\Lambda\)CDM (\(F_{\rm s}=1\)) determines \(\sigma_{8}\) because both the growth and tidal terms are degenerate with the non-linear bias parameters (Schmittfull et al., 2015). However, in the case of DHOST theories, there appears the parameter \(\kappa_{\rm\delta}\) in the coefficient of the shift term, which makes it impossible to measure \(\sigma_{8}\) alone. Therefore, we introduce three new parameters that are not degenerate with \(\sigma_{8}\) following Yamauchi & Sugiyama (2021): \[E_{f} =\frac{f}{\kappa_{\rm\delta}}=\frac{f\sigma_{8}}{F_{\rm s}\sigma_{ 8}},\] \[E_{\rm s} =\frac{\kappa_{\rm\theta}}{\kappa_{\rm\delta}}=\frac{G_{\rm s} \sigma_{8}}{F_{\rm s}\sigma_{8}},\] \[E_{\rm t} =\frac{\lambda_{\rm\theta}}{\kappa_{\rm\delta}}=\frac{7}{4}\frac{ G_{\rm i}\sigma_{8}}{F_{\rm s}\sigma_{8}}. \tag{3.23}\] In GR or Horndeski theories, \(E_{f}=f\), \(E_{\rm s}=1\) and \(E_{\rm t}=\lambda_{\rm\theta}\), because \(\kappa_{\rm\delta}=\kappa_{\rm\theta}=1\). Horndeski theories differ from \(\Lambda\)CDM only in \(f\) and \(E_{\rm t}\) while keeping \(E_{\rm s}=1\). If \(E_{\rm s}\neq 1\), then the signal is specific to DHOST theories; \(E_{\rm s}\neq 1\) is a sufficient condition for detecting DHOST theories because there can be DHOST theories satisfying \(E_{\rm s}=1\). It has been known for a long time that the coefficient of the tidal term in the non-linear density field, \(\lambda_{\rm\delta}\), is time-dependent in GR (e.g., Bouchet et al., 1992), and in the case of \(\Lambda\)CDM, the following approximation holds well with an precision better than \(0.6\%\)(Bouchet et al., 1995; Yamauchi et al., 2017)1 Footnote 11: The original derivation of the equation was calculated in the Lagrangian picture and is given in the form (Bouchet et al., 1995) \[\frac{2}{7}\lambda_{\rm\delta}=\frac{1}{2}\left[1-\frac{3}{7}\Omega_{\rm m}^{-1/ 143}\right]. \tag{3.24}\] This equation can be rewritten to Eq. (3.25) under the condition \((1-\Omega_{\rm m})\ll 1\). Figure 1: The monopole 3PCFs, \(\zeta_{000}\) (left) and \(\zeta_{110}\) (right), calculated from the decomposed bispectra (3.22) according to the parameter dependence, are shown as a function of \(r_{2}\) after fixing \(r_{1}\) to \(50,80,90,100\), and \(130\,h^{-1}\,{\rm Mpc}\). The “FG”, “FS”, and “FT” terms arise from the growth, shift and tidal effects of the non-linear density fluctuation; the “GG”, “GS”, and “GT” terms arise from those of the non-linear velocity field; the “BF” term consists only of linear density and linear velocity fields; the “total” term is the sum of all the decomposed components. For these calculations, the \(\Lambda\)CDM model at \(z=0.61\), the linear bias \(b_{1}=2.0\), and no non-linear bias are assumed. Figure 2: Same as Figure 1, except that the quadrupole 3PCFs, \(\zeta_{202}\) and \(\zeta_{112}\), are shown. In summary, we parameterise the second-order kernel function of the velocity field (3.15) as \[fG_{2}^{(g)}\sigma_{8}^{2}=\Omega_{\rm m}^{\xi_{\rm r}}\left(F_{\rm s}\sigma_{8} \right)^{2}\Bigg{[}\left(G_{\rm g}\right)+\Omega_{\rm m}^{\xi_{\rm r}}S+\frac{4} {7}\Omega_{\rm m}^{\xi_{\rm t}}\,T\Bigg{]}, \tag{3.29}\] where \((G_{\rm g})=\Omega_{\rm m}^{\xi_{\rm r}}-(8/21)\Omega_{\rm m}^{\xi_{\rm r}}\), and the functions \(S\) and \(T\) are given in Eq. (3.14). 
We will test the theory of gravity by measuring the above three parameters, \(\xi_{f}\), \(\xi_{\rm s}\), and \(\xi_{\rm t}\), from the BOSS data in Section 9. In DHOST theories, the Planck mass is time-varying, and the time variation of the Hubble parameter is different from GR. Therefore, one may be concerned that the time dependence of \(\Omega_{\rm m}\) is different from \(\Omega_{\rm m}^{\rm GR}\) that is calculated assuming GR. However, Appendix \(C\) in Yamauchi & Sugiyama (2021) showed that the difference between DHOST theories and GR is suppressed by \((1-\Omega_{\rm m}^{\rm GR})\). Hence, we can replace \(\Omega_{\rm m}\) in Eq. (3.27) with \(\Omega_{\rm m}^{\rm GR}\) as an approximation and perform the analysis to constrain \(\xi_{f}\), \(\xi_{\rm s}\), and \(\xi_{\rm t}\). ### Limitations of our theoretical approach to the 2PCF and 3PCF In this subsection, we discuss the validity of the calculation methods of the 2PCF and 3PCF models described so far and the limitations of their application. First, we can use the TripoSH decomposed 3PCF (3.6) to constrain all the scale dependencies in the 3PCF, such as the shift and tidal terms, as shown Figures 1 and 2, because it does not focus only on specific scale dependencies such as the squeezed limit. However, our analysis that uses only some multipoles of the TripoSH decomposition does not fully utilize the information on the scale dependence of the 3PCF. The reason for restricting the multipole components used in this work is to keep the number of data bins much smaller than the number of mock simulations used to compute the covariance matrix (Section 5). Therefore, increasing the number of multipoles in the 3PCF to be considered will improve the results of this work when more mock catalogues are created in the future. Second, note that the power spectrum and bispectrum models in Eqs. (3.9) and (3.12) are valid for any theory in which the IR cancellation occurs based on the Galilean invariance of the system of equations in the IR limit: i.e., these models hold not only for \(\Lambda\)CDM but also for Horndeski theories (Crisostomi et al., 2020). On the other hand, as Lewandowski (2020) pointed out in the power spectrum case, additional terms arise when performing the IR resummation in DHOST theories because of the violation of the IR cancellation. Specifically, when one applies the IR limit to the one-loop solution of the power spectrum in DHOST theories, a term proportional to \(k^{2}P_{\rm lin}(k)\) appears, changing the shape of the power spectrum (Crisostomi et al., 2020; Lewandowski, 2020; Hirano et al., 2020). However, since this additional term in the IR limit is proportional to \(k^{2}P_{\rm lin}(k)\), it is considered to be negligible at the large scales of interest in this paper (\(\geq 80\,h^{-1}\,{\rm Mpc}\)). Assuming that the same should happen in the bispectrum, we directly use the power and bispectrum models in Eqs. (3.9) and (3.12) in the present analysis. In addition, it should be noted that Hirano et al. (2020) showed that in DHOST theories, a term consisting of the product of first- and third-order fluctuations in the one-loop power spectrum causes UV divergence. Further model development is thus needed to take advantage of smaller-scale information by solving these problems. 
Third, since the linear equation for density fluctuations is scale-independent (2.6), we assume that we can use the shape of the linear matter power spectrum determined in the high-\(z\) region, where the scalar field is expected to be sub-dominant. Thus, we can pre-compute the \(\sigma_{8}^{2}\)-normalised wiggle and no-wiggle linear power spectra appearing in the \(B^{(p)}\) terms (3.18) using a \(\Lambda\)CDM model.

Fourth, there is a concern about the pre-computation of \(\mathcal{D}(\mathbf{k})\) (3.10) appearing in the \(B^{(p)}\) terms (3.18). It is known that \(\sigma_{\perp}\) and \(\sigma_{\parallel}\), which characterise \(\mathcal{D}(\mathbf{k})\), can be calculated successfully using linear displacement vectors (e.g., Matsubara, 2008), and we adopt the same calculation in this paper (3.11). Since \(\sigma_{\perp}\) and \(\sigma_{\parallel}\) in the linear theory depend on \(f\) and \(\sigma_{8}\), their values should differ for different gravity theories. For this reason, it would be desirable to vary \(\sigma_{\perp}\) and \(\sigma_{\parallel}\) as free parameters in the data analysis. However, if we did so, the bispectrum decomposition method in Eq. (3.17) could no longer be applied, and the computation time of the bispectrum model would increase significantly, making it challenging to perform the cosmological analysis. Fortunately, the BAO signal does not significantly impact the shape of the 3PCF: the BAO signal is maximised when \(r_{1}\sim r_{2}\sim 100\,h^{-1}\,{\rm Mpc}\), while \(r_{1}\) and \(r_{2}\) can take various combinations in the 3PCF (Sugiyama et al., 2021). Therefore, in this paper, we ignore this concern about \(\mathcal{D}(\mathbf{k})\) and pre-compute \(\sigma_{\perp}\) and \(\sigma_{\parallel}\) using the linear theory in \(\Lambda\)CDM. Furthermore, to keep consistency with the 3PCF calculation, we also fix \(\sigma_{\perp}\) and \(\sigma_{\parallel}\) to the values calculated using the \(\Lambda\)CDM model in the 2PCF calculation.

Finally, to simplify the analysis, we ignore the AP effect (Alcock & Paczynski, 1979), which in principle allows a direct measurement of the Hubble parameter and the angular diameter distance at the redshift of the galaxy distribution of interest. Since DHOST theories change these parameter values, the AP effect is expected to provide additional constraining power on DHOST theories. Sugiyama et al. (2021) have performed a joint analysis of the anisotropic 2PCF and 3PCF to constrain the AP parameters under the GR assumption. Combining that method with the analysis method developed in this paper would allow for consistent DHOST theory constraints that simultaneously account for the AP and non-linear gravity effects, which is left as future work.

## 4 Measurements

This section summarises how to measure the multipole 2PCFs and 3PCFs from the BOSS galaxy data according to the method proposed by Sugiyama et al. (2021). First, Section 4.1 introduces the BOSS galaxy data used in this paper and the mock simulation data designed to reproduce it. Then, Section 4.2 describes the measurements of the multipole 2PCFs and 3PCFs. Finally, Section 4.3 explains how to correct for the window function effects on the measured 2PCF and 3PCF.

### Data

We use the final galaxy clustering data set, Data Release 12 (DR12; Alam et al., 2015), from the Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al., 2013).
The BOSS survey is part of the Sloan Digital Sky Survey III (SDSS-III; Eisenstein et al., 2011); it selected galaxies from multicolour SDSS imaging (Fukugita et al., 1996; Gunn et al., 1998; Smith et al., 2002; Gunn et al., 2006; Doi et al., 2010) and used the SDSS multi-fibre spectrograph (Bolton et al., 2012; Smee et al., 2013) to measure spectroscopic redshifts of the galaxies. As detailed in Reid et al. (2016), the BOSS survey has four samples, CMASS, LOWZ, LOWZ2 and LOWZ3, and those four samples are combined into one sample. In brief, the survey footprint, veto masks and survey-related systematics (such as fibre collisions and redshift failures) are considered to construct data and random catalogues for the DR12 BOSS galaxies. This DR12 combined sample comprises \(1.2\) million massive galaxies over an effective area of \(9329\,\mathrm{deg}^{2}\) and covers a redshift range of \(0.2-0.75\). In our analysis, we split this redshift range into two redshift bins defined by \(0.2<z<0.5\) and \(0.5<z<0.75\) with the effective redshifts \(z_{\mathrm{eff}}=0.38\) and \(0.61\), respectively, where the effective redshifts are calculated as the weighted average over all galaxies (see e.g. Eq. (67) in Beutler et al. 2014). The DR12 combined sample is observed across the two Galactic hemispheres, referred to as the Northern and Southern galactic caps (NGC and SGC, respectively), and the NGC and SGC samples probe slightly different galaxy populations in the low-redshift part of the combined sample (see Appendix A in Alam et al. 2015).

To derive the covariance matrices of the 2PCF and 3PCF and to test the validity of the 2PCF and 3PCF models given in Eqs. (3.9) and (3.12), we use the MultiDark-Patchy mock catalogues (Patchy mocks; Kitaura et al. 2016). The Patchy mocks have been calibrated to an \(N\)-body simulation-based reference sample using approximate galaxy solvers and analytical-statistical biasing models, and they incorporate observational effects including the survey geometry, veto mask and fibre collisions. The reference catalogue is extracted from one of the BigMultiDark simulations (Klypin et al. 2016), which was performed using GADGET-2 (Springel 2005) with \(3840^{3}\) particles in a volume of \((2.5\,h^{-1}\,\mathrm{Gpc})^{3}\). Halo abundance matching is used to reproduce the observed BOSS two- and three-point clustering measurements (Rodriguez-Torres et al. 2016). There are \(2048\) catalogues available for each of the NGC and SGC over the redshift range \(z=0.2-0.75\). The fiducial cosmology for these mocks assumes a \(\mathrm{\Lambda CDM}\) cosmology with \((\Omega_{\Lambda},\Omega_{\mathrm{m}},\Omega_{\mathrm{b}},\sigma_{8},h)=(0.692885,\,0.307115,\,0.048206,\,0.8288,\,0.6777)\). These fiducial parameters are slightly different from those used in our analysis of the BOSS galaxy data introduced in Section 1, but we expect that such differences do not significantly affect the covariance matrix estimations of the 2PCF and 3PCF.

We include three different incompleteness weights to account for shortcomings of the BOSS dataset: a fibre collision weight, \(w_{\mathrm{cp}}\), a redshift failure weight, \(w_{\mathrm{noz}}\), and a systematics weight, \(w_{\mathrm{sys}}\), which is a combination of a stellar density weight and a seeing condition weight. Each galaxy observed at position \(\mathbf{x}\) is counted with the following weight (Ross et al. 2012; Anderson et al. 2014; Reid et al.
2016): \[w_{\mathrm{c}}(\mathbf{x})=w_{\mathrm{sys}}(\mathbf{x})\,\left(w_{\mathrm{cp}}(\mathbf{x})+w_{\mathrm{noz}}(\mathbf{x})-1\right). \tag{4.1}\] In addition, we use a signal-to-noise weight, the so-called FKP weight, proposed by Feldman et al. (1994), \(w_{\mathrm{FKP}}(\mathbf{x})=1/[1+\bar{n}(\mathbf{x})\bar{P}]\), where \(\bar{P}=10^{4}\,(h^{-1}\,\mathrm{Mpc})^{3}\). The FKP weight function is effective not only for the power spectrum but also for the bispectrum when assuming Gaussian errors (Scoccimarro 2000), and bispectrum measurements from the Patchy mock catalogue confirm that the FKP weight improves the bispectrum signal-to-noise ratio even when non-Gaussian errors are included (see Appendix D in Sugiyama et al. 2019). We expect the validity of the FKP weight to hold for the 2PCF and 3PCF in configuration space because we measure the 2PCF and 3PCF as Fourier transforms of the power spectrum and bispectrum, respectively (Section 4.2). For the galaxy data, multiplying the completeness weights by the FKP weights yields the local weight function that is used in our analysis, while the random catalogues have only the FKP weights: \[w^{(\mathrm{gal})}(\mathbf{x}) =w_{\mathrm{c}}(\mathbf{x})\,w_{\mathrm{FKP}}(\mathbf{x}),\] \[w^{(\mathrm{ran})}(\mathbf{x}) =w_{\mathrm{FKP}}(\mathbf{x}), \tag{4.2}\] where the superscripts “(gal)” and “(ran)” stand for “galaxy” and “random”.

### Estimators of the 2PCF and 3PCF

We measure the number densities of both real and random galaxies weighted by the spherical harmonic function \(Y_{\ell m}\): \[D_{\ell m}(\mathbf{x}) =\sum_{i}^{N_{\mathrm{gal}}}w^{(\mathrm{gal})}(\mathbf{x}_{i})Y_{\ell m}^{*}\left(\hat{x}_{i}^{(\mathrm{gal})}\right)\delta_{\mathrm{D}}\left(\mathbf{x}-\mathbf{x}_{i}^{(\mathrm{gal})}\right),\] \[R_{\ell m}(\mathbf{x}) =\sum_{j}^{N_{\mathrm{ran}}}w^{(\mathrm{ran})}(\mathbf{x}_{j})Y_{\ell m}^{*}\left(\hat{x}_{j}^{(\mathrm{ran})}\right)\delta_{\mathrm{D}}\left(\mathbf{x}-\mathbf{x}_{j}^{(\mathrm{ran})}\right), \tag{4.3}\] where \(N_{\mathrm{gal}}\) and \(N_{\mathrm{ran}}\) are the total numbers of real and random galaxies, respectively, and the normal number densities are given by \(D(\mathbf{x})=\sqrt{4\pi}D_{00}(\mathbf{x})\) and \(R(\mathbf{x})=\sqrt{4\pi}R_{00}(\mathbf{x})\). Defining \(N_{\mathrm{gal}}^{\prime}\equiv\int d^{3}x\,D(\mathbf{x})\) and \(N_{\mathrm{ran}}^{\prime}\equiv\int d^{3}x\,R(\mathbf{x})\), we can estimate the survey volume as \[V=\frac{N_{\mathrm{ran}}^{\prime 2}}{\int d^{3}x\,[R(\mathbf{x})]^{2}}. \tag{4.4}\] Then, the observed density fluctuation weighted by \(Y_{\ell m}\) is \[\delta_{\mathrm{obs},\ell m}(\mathbf{x})=V\left[D_{\ell m}(\mathbf{x})/N_{\mathrm{gal}}^{\prime}-R_{\ell m}(\mathbf{x})/N_{\mathrm{ran}}^{\prime}\right], \tag{4.5}\] and \[\delta_{\mathrm{obs}}(\mathbf{x})=\sqrt{4\pi}\,\delta_{\mathrm{obs},00}(\mathbf{x}).
\tag{4.6}\] We use the fast Fourier transform (FFT) algorithm 13 to calculate Footnote 13: http://fftw.org/ \[\widetilde{\delta}_{\mathrm{obs},\ell m}(\mathbf{k})=\frac{1}{W_{\mathrm{mass}}(\mathbf{k})}\int d^{3}x\,e^{-i\mathbf{k}\cdot\mathbf{x}}\,\delta_{\mathrm{obs},\ell m}(\mathbf{x}), \tag{4.7}\] where the Fourier transform of the normal density fluctuation is given by \(\widetilde{\delta}_{\mathrm{obs}}(\mathbf{k})=\sqrt{4\pi}\,\widetilde{\delta}_{\mathrm{obs},00}(\mathbf{k})\), and \(W_{\mathrm{mass}}(\mathbf{k})\) is the mass assignment function that corrects for the effect arising when assigning particles to a regular grid in position space (Jing 2005). A popular family of mass assignment functions is given by (Hockney & Eastwood 1981) \[W_{\mathrm{mass}}(\mathbf{k})=\prod_{i=x,y,z}\left[\mathrm{sinc}\left(\frac{\pi k_{i}}{2k_{\mathrm{N},i}}\right)\right]^{p}, \tag{4.8}\] where \(k_{\mathrm{N},i}=\pi/H_{i}\) is the Nyquist frequency of the \(i\)-axis with grid spacing \(H_{i}\). The indices \(p=1\), \(p=2\), and \(p=3\) correspond to the nearest grid point (NGP), cloud-in-cell (CIC), and triangular-shaped cloud (TSC) assignment functions, respectively. The FFT-based estimator of the multipole 2PCFs is given by (Hand et al. 2017; Sugiyama et al. 2018) (see also Bianchi et al. 2015; Scoccimarro 2015) \[\widehat{\xi}_{\ell}(r) =\frac{(4\pi)}{V}\sum_{m}\int\frac{d^{2}\hat{r}}{4\pi}Y_{\ell m}(\hat{r})\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\mathbf{k}\cdot\mathbf{r}}\] \[\times\,\left[\widetilde{\delta}_{\mathrm{obs},\ell m}(\mathbf{k})\widetilde{\delta}_{\mathrm{obs}}^{*}(\mathbf{k})-S_{\ell m}(\mathbf{k})\right]. \tag{4.9}\] The shot-noise term \(S_{\ell m}(\mathbf{k})\) is given by \[S_{\ell m}(\mathbf{k}) = \frac{C_{\rm shot}(\mathbf{k})}{W_{\rm mass}^{2}(\mathbf{k})}\left(\frac{V}{N_{\rm gal}^{\prime}}\right)^{2} \times \Bigg[\sum_{i}^{N_{\rm gal}}\left[w^{(\rm gal)}(\mathbf{x}_{i})\right]^{2}Y_{\ell m}^{*}\left(\hat{x}_{i}^{(\rm gal)}\right) + \left(\frac{N_{\rm gal}^{\prime}}{N_{\rm ran}^{\prime}}\right)^{2}\sum_{j}^{N_{\rm ran}}\left[w^{(\rm ran)}(\mathbf{x}_{j})\right]^{2}Y_{\ell m}^{*}\left(\hat{x}_{j}^{(\rm ran)}\right)\Bigg], \tag{4.10}\] where \(C_{\rm shot}(\mathbf{k})\) represents the correction for the assignment effect on the shot-noise term, given by (Eq. (20) in Jing, 2005) \[C_{\rm shot}(\mathbf{k}) = \begin{cases}1,&\text{NGP};\\ \prod_{i}\left[1-\frac{2}{3}\sin^{2}\left(\frac{\pi k_{i}}{2k_{{\rm N},i}}\right)\right],&\text{CIC};\\ \prod_{i}\left[1-\sin^{2}\left(\frac{\pi k_{i}}{2k_{{\rm N},i}}\right)+\frac{2}{15}\sin^{4}\left(\frac{\pi k_{i}}{2k_{{\rm N},i}}\right)\right],&\text{TSC}.\end{cases} \tag{4.11}\] The angle integral \(\int d^{2}\hat{r}/(4\pi)\) in Eq. (4.9) can be rewritten as \[\int\frac{d^{2}\hat{r}}{4\pi}=\frac{1}{N_{r}(r)}\sum_{r-\Delta r/2<|\mathbf{r}|<r+\Delta r/2}, \tag{4.12}\] where \(\Delta r\) is the width of the \(r\)-bins, and \(N_{r}(r)\) is the number of three-dimensional grid points contained in each \(r\)-bin. From the expression of the shot-noise term in the 2PCF given in Eq. (4.10), we compute the weighted mean number density as \[\bar{n}=\left\{\frac{V}{N_{\rm gal}^{\prime 2}}\sum_{i}^{N_{\rm gal}}\left[w^{(\rm gal)}(\mathbf{x}_{i})\right]^{2}\right\}^{-1}.
\tag{4.13}\] The FFT-based estimator of the multipole 3PCFs is given by (Sugiyama et al., 2019) (see also Scoccimarro, 2015; Slepian & Eisenstein, 2016) \[\widehat{\zeta}_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2}) = \frac{(4\pi)^{2}h_{\ell_{1}\ell_{2}\ell}}{V}\sum_{m_{1}m_{2}m}\left(\begin{smallmatrix}\ell_{1}&\ell_{2}&\ell\\ m_{1}&m_{2}&m\end{smallmatrix}\right) \times \Bigg[\int d^{3}x\,F_{\ell_{1}m_{1}}(\mathbf{x};r_{1})F_{\ell_{2}m_{2}}(\mathbf{x};r_{2})G_{\ell m}(\mathbf{x}) -\delta_{r_{1}r_{2}}^{(\rm K)}S_{\ell_{1}m_{1};\ell_{2}m_{2};\ell m}(r_{1})\Bigg], \tag{4.14}\] where \[F_{\ell m}(\mathbf{x};r) = i^{\ell}\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\mathbf{k}\cdot\mathbf{x}}j_{\ell}(rk)Y_{\ell m}^{*}(\hat{k})\widetilde{\delta}_{\rm obs}(\mathbf{k}),\] \[G_{\ell m}(\mathbf{x}) = \int\frac{d^{3}k}{(2\pi)^{3}}e^{i\mathbf{k}\cdot\mathbf{x}}\,\widetilde{\delta}_{\rm obs,\ell m}(\mathbf{k}). \tag{4.15}\] Note that the shot-noise term only contributes to the 3PCF measurement for the \(r_{1}=r_{2}\) bins, as represented by the Kronecker delta \(\delta_{r_{1}r_{2}}^{(\rm K)}\) in Eq. (4.14). To calculate the shot-noise term in the 3PCF, we first measure the following density field \[N(\mathbf{x}) = \sum_{i}^{N_{\rm gal}}\left[w^{(\rm gal)}(\mathbf{x}_{i})\right]^{2}\delta_{\rm D}\left(\mathbf{x}-\mathbf{x}_{i}^{(\rm gal)}\right) + \left(\frac{N_{\rm gal}^{\prime}}{N_{\rm ran}^{\prime}}\right)^{2}\sum_{j}^{N_{\rm ran}}\left[w^{(\rm ran)}(\mathbf{x}_{j})\right]^{2}\delta_{\rm D}\left(\mathbf{x}-\mathbf{x}_{j}^{(\rm ran)}\right), \tag{4.16}\] and divide it by \((N_{\rm gal}^{\prime}/V)\) to obtain \[\delta_{\rm N}(\mathbf{x})=\left(V/N_{\rm gal}^{\prime}\right)N(\mathbf{x}). \tag{4.17}\] Then, we calculate the Fourier transform of \(\delta_{\rm N}(\mathbf{x})\) in the same manner as in Eq. (4.7) and denote it as \(\widetilde{\delta}_{\rm N}(\mathbf{k})\). Finally, we derive \(S_{\ell_{1}m_{1};\ell_{2}m_{2};\ell m}(r)\) by substituting \(\widetilde{\delta}_{\rm N}(\mathbf{k})\) into the following equation \[S_{\ell_{1}m_{1};\ell_{2}m_{2};\ell m}(r) = \left(\frac{1}{4\pi r^{2}\Delta r}\right)\left(\frac{V}{N_{\rm gal}^{\prime}}\right)(-1)^{\ell_{1}+\ell_{2}} \times \int\frac{d^{2}\hat{r}}{4\pi}Y_{\ell_{1}m_{1}}^{*}(\hat{r})Y_{\ell_{2}m_{2}}^{*}(\hat{r})\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\mathbf{k}\cdot\mathbf{r}} \times \left[\widetilde{\delta}_{\rm obs,\ell m}(\mathbf{k})\widetilde{\delta}_{\rm N}^{*}(\mathbf{k})-S_{\ell m}^{(\rm 3PCF)}(\mathbf{k})\right], \tag{4.18}\] where \[S_{\ell m}^{(\rm 3PCF)}(\mathbf{k}) = \frac{C_{\rm shot}(\mathbf{k})}{W_{\rm mass}^{2}(\mathbf{k})}\left(\frac{V}{N_{\rm gal}^{\prime}}\right)^{2} \times \left[\sum_{i}^{N_{\rm gal}}\left[w^{(\rm gal)}(\mathbf{x}_{i})\right]^{3}Y_{\ell m}^{*}\left(\hat{x}_{i}^{(\rm gal)}\right) - \left(\frac{N_{\rm gal}^{\prime}}{N_{\rm ran}^{\prime}}\right)^{3}\sum_{j}^{N_{\rm ran}}\left[w^{(\rm ran)}(\mathbf{x}_{j})\right]^{3}Y_{\ell m}^{*}\left(\hat{x}_{j}^{(\rm ran)}\right)\right]. \tag{4.19}\] The factor \(1/(4\pi r^{2}\Delta r)\) can be rewritten as \[\frac{1}{4\pi r^{2}\Delta r}=\frac{1}{N_{r}(r)}\frac{N_{\rm grid}}{V_{\rm FFT}}, \tag{4.20}\] where \(V_{\rm FFT}\) is the volume of the Cartesian box in which the galaxies are placed before the FFT is performed, and \(N_{\rm grid}\) is the number of FFT grid cells. In the scale range of \(80\leq r\leq 150\,h^{-1}\,{\rm Mpc}\), we choose \(\Delta r=5\,h^{-1}\,{\rm Mpc}\) for the 2PCF and \(\Delta r=10\,h^{-1}\,{\rm Mpc}\) for the 3PCF.
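To make the grid corrections above concrete, the following sketch (our own illustrative code, not the pipeline used in this paper) tabulates the mass-assignment window \(W_{\rm mass}(\mathbf{k})\) of Eq. (4.8) and the shot-noise correction \(C_{\rm shot}(\mathbf{k})\) of Eq. (4.11) on an FFT grid with NumPy; a gridded density field would then be deconvolved by dividing its Fourier modes by \(W_{\rm mass}(\mathbf{k})\), as in Eq. (4.7).

```python
import numpy as np

def mass_assignment_window(nmesh, boxsize, p=3):
    """W_mass(k) of Eq. (4.8) on an FFT grid; p = 1, 2, 3 for NGP, CIC, TSC.
    Keep nmesh small for illustration: the output array has nmesh**3 entries."""
    # Angular wavenumbers along one axis; the Nyquist frequency is k_N = pi/H
    # with grid spacing H = boxsize / nmesh.
    k1d = 2.0 * np.pi * np.fft.fftfreq(nmesh, d=boxsize / nmesh)
    kN = np.pi * nmesh / boxsize
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(pi k/(2 kN)) = np.sinc(k/(2 kN)).
    w1d = np.sinc(k1d / (2.0 * kN))
    wx, wy, wz = np.meshgrid(w1d, w1d, w1d, indexing="ij")
    return (wx * wy * wz) ** p

def shot_noise_correction(nmesh, boxsize, scheme="TSC"):
    """C_shot(k) of Eq. (4.11) for the NGP, CIC, and TSC schemes."""
    k1d = 2.0 * np.pi * np.fft.fftfreq(nmesh, d=boxsize / nmesh)
    kN = np.pi * nmesh / boxsize
    s1d = np.sin(np.pi * k1d / (2.0 * kN)) ** 2
    if scheme == "NGP":
        c1d = np.ones_like(s1d)
    elif scheme == "CIC":
        c1d = 1.0 - (2.0 / 3.0) * s1d
    else:  # TSC
        c1d = 1.0 - s1d + (2.0 / 15.0) * s1d ** 2
    cx, cy, cz = np.meshgrid(c1d, c1d, c1d, indexing="ij")
    return cx * cy * cz
```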
Considering \(\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2})=\zeta_{\ell_{2}\ell_{1}\ell}(r_{2},r_{1})\), the numbers of data bins for the 2PCF and 3PCF multipoles are \(15\), \(15\), \(36\), \(36\), \(64\), and \(36\) for \(\xi_{0}\), \(\xi_{2}\), \(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\), respectively. We use the Cartesian coordinates \(\mathbf{x}=\{x,y,z\}\) with the \(z\)-axis pointing to the north pole to define a cuboid of dimensions \(\mathbf{L}\,[h^{-1}\,{\rm Mpc}]=(L_{x},L_{y},L_{z})\) containing the galaxy sample; to perform the FFT, each axis of this cuboid is divided into \(\mathbf{N}=(N_{x},N_{y},N_{z})\) grid cells. We then distribute the galaxies on the FFT grid using the TSC assignment function. We adopt the same values for \(\mathbf{L}\) and \(\mathbf{N}\) that were used in the Fourier-space analysis of the two-point statistics performed by Beutler et al. (2017). They are chosen so that the width of each grid cell is \(\sim 5\,h^{-1}\,{\rm Mpc}\), which is well below the scales \(r\geq 80\,h^{-1}\,{\rm Mpc}\) that we are interested in. We summarise the specific values of \(\mathbf{L}\) and \(\mathbf{N}\) in Table 1.

### Window function corrections

For the 2PCF, we compute \[Q_{\ell}(r) = \frac{(4\pi)}{V}\sum_{m}\int\frac{d^{2}\hat{r}}{4\pi}Y_{\ell m}(\hat{r})\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\mathbf{k}\cdot\mathbf{r}} \times \bigg[\widetilde{W}_{\ell m}(\mathbf{k})\widetilde{W}^{*}(\mathbf{k})-S_{\ell m}^{(\mathrm{w})}(\mathbf{k})\bigg], \tag{4.21}\] where \(\widetilde{W}_{\ell m}(\mathbf{k})\) is the Fourier transform of \((V/N_{\mathrm{ran}}^{\prime})R_{\ell m}(\mathbf{x})\) computed in the same manner as in Eq. (4.7), \(\widetilde{W}(\mathbf{k})=\sqrt{4\pi}\,\widetilde{W}_{00}(\mathbf{k})\), and the shot-noise term is given by \[S_{\ell m}^{(\mathrm{w})}(\mathbf{k}) = \frac{C_{\mathrm{shot}}(\mathbf{k})}{W_{\mathrm{mass}}^{2}(\mathbf{k})}\left(\frac{V}{N_{\mathrm{ran}}^{\prime}}\right)^{2} \times \Bigg[\sum_{i}^{N_{\mathrm{ran}}}\left[w^{(\mathrm{ran})}(\mathbf{x}_{i})\right]^{2}Y_{\ell m}^{*}\left(\hat{x}_{i}^{(\mathrm{ran})}\right)\Bigg]. \tag{4.22}\] Then, we have the theoretical model of \(\xi_{\ell}(r)\) taking the survey window effect into account as follows (Wilson et al., 2017; Beutler et al., 2017): \[\xi_{\ell}^{(\mathrm{w})}(r)=(2\ell+1)\sum_{\ell_{1}\ell_{2}}\left(\begin{smallmatrix}\ell_{1}&\ell_{2}&\ell\\ 0&0&0\end{smallmatrix}\right)^{2}Q_{\ell_{1}}(r)\,\xi_{\ell_{2}}(r). \tag{4.23}\] For the 3PCF, we compute \[Q_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2}) = \frac{(4\pi)^{2}h_{\ell_{1}\ell_{2}\ell}}{V}\sum_{m_{1}m_{2}m}\left(\begin{smallmatrix}\ell_{1}&\ell_{2}&\ell\\ m_{1}&m_{2}&m\end{smallmatrix}\right) \times \Bigg[\int d^{3}x\,F_{\ell_{1}m_{1}}^{(\mathrm{w})}(\mathbf{x};r_{1})F_{\ell_{2}m_{2}}^{(\mathrm{w})}(\mathbf{x};r_{2})G_{\ell m}^{(\mathrm{w})}(\mathbf{x}) -\delta_{r_{1}r_{2}}^{(\mathrm{K})}S_{\ell_{1}m_{1};\ell_{2}m_{2};\ell m}^{(\mathrm{w})}(r_{1})\Bigg], \tag{4.24}\] where \[F_{\ell m}^{(\mathrm{w})}(\mathbf{x};r) = i^{\ell}\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\mathbf{k}\cdot\mathbf{x}}j_{\ell}(rk)Y_{\ell m}^{*}(\hat{k})\widetilde{W}(\mathbf{k}),\] \[G_{\ell m}^{(\mathrm{w})}(\mathbf{x}) = \int\frac{d^{3}k}{(2\pi)^{3}}\,e^{i\mathbf{k}\cdot\mathbf{x}}\,\widetilde{W}_{\ell m}(\mathbf{k}).
\tag{4.25}\] The shot-noise term is given by \[S_{\ell_{1}m_{1};\ell_{2}m_{2};\ell m}^{(\mathrm{w})}(r) = \left(\frac{1}{4\pi r^{2}\Delta r}\right)\left(\frac{V}{N_{\mathrm{ran}}^{\prime}}\right)(-1)^{\ell_{1}+\ell_{2}} \times \int\frac{d^{2}\hat{r}}{4\pi}Y_{\ell_{1}m_{1}}^{*}(\hat{r})Y_{\ell_{2}m_{2}}^{*}(\hat{r})\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\mathbf{k}\cdot\mathbf{r}} \times \left[\widetilde{W}_{\ell m}(\mathbf{k})\widetilde{\delta}_{\mathrm{N}}^{(\mathrm{w})*}(\mathbf{k})-S_{\ell m}^{(\mathrm{3PCF},\mathrm{w})}(\mathbf{k})\right], \tag{4.26}\] where \[S_{\ell m}^{(\mathrm{3PCF},\mathrm{w})}(\mathbf{k}) = \frac{C_{\mathrm{shot}}(\mathbf{k})}{W_{\mathrm{mass}}^{2}(\mathbf{k})}\left(\frac{V}{N_{\mathrm{ran}}^{\prime}}\right)^{2} \times \Bigg[\sum_{i}^{N_{\mathrm{ran}}}\left[w^{(\mathrm{ran})}(\mathbf{x}_{i})\right]^{3}Y_{\ell m}^{*}\left(\hat{x}_{i}^{(\mathrm{ran})}\right)\Bigg], \tag{4.27}\] and \(\widetilde{\delta}_{\mathrm{N}}^{(\mathrm{w})}(\mathbf{k})\) is the Fourier transform of \[\delta_{\mathrm{N}}^{(\mathrm{w})}(\mathbf{x})=\left(\frac{V}{N_{\mathrm{ran}}^{\prime}}\right)\sum_{i}^{N_{\mathrm{ran}}}\left[w^{(\mathrm{ran})}(\mathbf{x}_{i})\right]^{2}\delta_{\mathrm{D}}\left(\mathbf{x}-\mathbf{x}_{i}^{(\mathrm{ran})}\right). \tag{4.28}\] Then, we have the theoretical model of \(\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2})\) taking the survey window effect into account as follows (Sugiyama et al., 2019; Sugiyama et al., 2021): \[\zeta_{\ell_{1}\ell_{2}\ell}^{(\mathrm{w})}(r_{1},r_{2}) = (4\pi)\sum_{\ell_{1}^{\prime}\ell_{2}^{\prime}\ell^{\prime}}\sum_{\substack{\ell_{1}^{\prime\prime}+\ell_{1}^{\prime}+\ell_{1}=\mathrm{even}\\ \ell_{2}^{\prime\prime}+\ell_{2}^{\prime}+\ell_{2}=\mathrm{even}}}\begin{Bmatrix}\ell_{1}^{\prime\prime}&\ell_{2}^{\prime\prime}&\ell^{\prime\prime}\\ \ell_{1}^{\prime}&\ell_{2}^{\prime}&\ell^{\prime}\\ \ell_{1}&\ell_{2}&\ell\end{Bmatrix}\left[\frac{h_{\ell_{1}\ell_{2}\ell}\,h_{\ell_{1}^{\prime\prime}\ell_{1}^{\prime}\ell_{1}}\,h_{\ell_{2}^{\prime\prime}\ell_{2}^{\prime}\ell_{2}}\,h_{\ell^{\prime\prime}\ell^{\prime}\ell}}{h_{\ell_{1}^{\prime\prime}\ell_{2}^{\prime\prime}\ell^{\prime\prime}}\,h_{\ell_{1}^{\prime}\ell_{2}^{\prime}\ell^{\prime}}}\right] \times Q_{\ell_{1}^{\prime\prime}\ell_{2}^{\prime\prime}\ell^{\prime\prime}}(r_{1},r_{2})\,\zeta_{\ell_{1}^{\prime}\ell_{2}^{\prime}\ell^{\prime}}(r_{1},r_{2}), \tag{4.29}\] where the bracket with \(9\) multipole indices, \(\{\dots\}\), denotes the Wigner-\(9j\) symbol. In the likelihood fitting performed in Section 9, we use \(\xi_{\ell}^{(\mathrm{w})}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}^{(\mathrm{w})}\) to compare the measured multipole 2PCF and 3PCF estimators with the theoretical models given in Eqs. (3.9) and (3.12). In this paper, we ignore the contribution from the integral constraint (Peacock & Nicholson, 1991) for both the 2PCF and the 3PCF.

In the 2PCF case, the window correction formula in Eq. (4.23) includes only three multipole components for both \(Q_{\ell_{1}}\) and \(\xi_{\ell_{2}}\), i.e., \(\ell_{1},\ell_{2}=0,2,4\). The reason is that our analysis focuses only on large scales above \(80\,h^{-1}\,\mathrm{Mpc}\), where the linear theory is dominant, and the linear Kaiser effect gives only up to the hexadecapole \(\ell=4\). For the window correction formula of the 3PCF (4.29), Sugiyama et al.
(2021) examined in detail which multipole components contribute to the observed estimator (4.14), and to what extent, for the NGC sample at \(0.4<z<0.6\), and showed that a finite number of multipole components can correct for the window effect on the 3PCF with sufficiently good accuracy. Assuming that this result does not change significantly for the other BOSS samples, we calculate a total of \(14\) multipole components for both \(Q_{\ell_{1}^{\prime\prime}\ell_{2}^{\prime\prime}\ell^{\prime\prime}}\) and \(\zeta_{\ell_{1}^{\prime}\ell_{2}^{\prime}\ell^{\prime}}\) as follows: \((\ell_{1},\ell_{2},\ell)=(0,0,0)\), \((1,1,0)\), \((2,2,0)\), \((3,3,0)\), and \((4,4,0)\) for the monopole 3PCF (\(\ell=0\)), and \((\ell_{1},\ell_{2},\ell)=(0,2,2)\), \((1,1,2)\), \((2,0,2)\), \((1,3,2)\), \((2,2,2)\), \((3,1,2)\), \((4,2,2)\), \((3,3,2)\), and \((2,4,2)\) for the quadrupole 3PCF (\(\ell=2\)). Measuring these window 3PCF multipoles for the four BOSS samples (Figures 3 and 4), we find that the window 3PCF multipoles measured at different redshift bins in each sky region (NGC or SGC) behave similarly (see, for example, the solid blue and dashed orange lines). On the other hand, for the quadrupole component, we see that the four BOSS samples may behave differently. The first few terms of the monopole and quadrupole components, such as \(Q_{110}\), \(Q_{220}\), \(Q_{202}\), \(Q_{112}\), and \(Q_{022}\), have values of \(\mathcal{O}(0.01)-\mathcal{O}(0.1)\), while the higher-order terms have values of \(\mathcal{O}(0.01)\) or less. Therefore, we can conclude that the higher-order window 3PCF multipoles have no significant effect on the final \(\zeta^{(\rm w)}_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2})\), as long as we measure the first few terms of the monopole and quadrupole components, i.e., \(\zeta^{(\rm w)}_{000}(r_{1},r_{2})\), \(\zeta^{(\rm w)}_{110}(r_{1},r_{2})\), \(\zeta^{(\rm w)}_{202}(r_{1},r_{2})\), and \(\zeta^{(\rm w)}_{112}(r_{1},r_{2})\).

Figures 5 and 6 plot the theoretical predictions for the 3PCF multipoles, including window function effects, corresponding to the four BOSS samples. These calculations assume the \(\Lambda\)CDM model and the linear bias as in Figures 1 and 2, with redshifts of \(0.38\) and \(0.61\). As the value of \(r_{1}\) increases, the difference between NGC and SGC due to the window function effect becomes more considerable.
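Before quantifying the 3PCF window contributions, it is instructive to note that the simpler 2PCF correction (4.23) can be written in a few lines of code. The sketch below is our own illustration, not the analysis pipeline: the dictionary containers for \(Q_{\ell}\) and \(\xi_{\ell}\) are hypothetical, and only the multipoles \(\ell_{1},\ell_{2}=0,2,4\) enter, as described above. The exact Wigner \(3j\) symbols are taken from SymPy.

```python
import numpy as np
from sympy.physics.wigner import wigner_3j

def window_convolved_xi(xi, Q, ells=(0, 2, 4)):
    """Apply Eq. (4.23): xi^(w)_ell = (2 ell + 1) sum_{l1 l2} (l1 l2 ell; 0 0 0)^2 Q_l1 xi_l2.
    `xi` and `Q` map multipole order to arrays sampled on a common r-grid."""
    xi_w = {}
    for ell in ells:
        acc = np.zeros_like(next(iter(xi.values())), dtype=float)
        for l1 in ells:
            for l2 in ells:
                # wigner_3j vanishes unless l1 + l2 + ell is even and the
                # triangle condition holds, so forbidden pairs drop out.
                w3j = float(wigner_3j(l1, l2, ell, 0, 0, 0))
                if w3j != 0.0:
                    acc = acc + w3j**2 * Q[l1] * xi[l2]
        xi_w[ell] = (2 * ell + 1) * acc
    return xi_w
```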
To quantitatively estimate the extent to which the multipole component of interest, \(\zeta^{(\rm w)}_{\ell_{1}\ell_{2}\ell}\), is affected by the other multipole components, \(\zeta_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}\), through window function effects, we compute the following quantities (Sugiyama et al., 2021): \[\Delta\bar{\zeta}_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}=\frac{\text{Sum}\Big[\Delta\zeta^{\ell_{1}\ell_{2}\ell}_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}/Q_{000}\Big]}{\text{Sum}\Big[\zeta^{(\rm w)}_{\ell_{1}\ell_{2}\ell}/Q_{000}\Big]} \tag{4.30}\] with \[\Delta\zeta^{\ell_{1}\ell_{2}\ell}_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}(r_{1},r_{2}) =(4\pi)\sum_{\substack{\ell^{\prime\prime}_{1}+\ell^{\prime}_{1}+\ell_{1}=\text{even}\\ \ell^{\prime\prime}_{2}+\ell^{\prime}_{2}+\ell_{2}=\text{even}}}\begin{Bmatrix}\ell^{\prime\prime}_{1}&\ell^{\prime\prime}_{2}&\ell^{\prime\prime}\\ \ell^{\prime}_{1}&\ell^{\prime}_{2}&\ell^{\prime}\\ \ell_{1}&\ell_{2}&\ell\end{Bmatrix}\left[\frac{h_{\ell_{1}\ell_{2}\ell}\,h_{\ell^{\prime\prime}_{1}\ell^{\prime}_{1}\ell_{1}}\,h_{\ell^{\prime\prime}_{2}\ell^{\prime}_{2}\ell_{2}}\,h_{\ell^{\prime\prime}\ell^{\prime}\ell}}{h_{\ell^{\prime\prime}_{1}\ell^{\prime\prime}_{2}\ell^{\prime\prime}}\,h_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}}\right] \times Q_{\ell^{\prime\prime}_{1}\ell^{\prime\prime}_{2}\ell^{\prime\prime}}(r_{1},r_{2})\,\zeta_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}(r_{1},r_{2}) \tag{4.31}\] and \[\text{Sum}\left[\zeta_{\ell_{1}\ell_{2}\ell}\right]=\begin{cases}\sum_{r_{1}\geq r_{2}}\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2})&\text{for }\ell_{1}=\ell_{2};\\ \sum_{r_{1},r_{2}}\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2})&\text{for }\ell_{1}\neq\ell_{2},\end{cases} \tag{4.32}\] where \(\Delta\bar{\zeta}_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}\) satisfies \(\sum_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}\Delta\bar{\zeta}_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}=1\), and the summation is performed in the range of \(80\leq r\leq 150\,h^{-1}\,\mathrm{Mpc}\) that we use for our data analysis.

Table 2 summarises the \(\Delta\bar{\zeta}_{\ell^{\prime}_{1}\ell^{\prime}_{2}\ell^{\prime}}\) results calculated from Eq. (4.30) for the four BOSS samples. Naturally, the multipole component that is the same as the target one has the largest contribution. For example, for \(\zeta^{(\rm w)}_{000}\) at \(z_{\rm eff}=0.38\) in the NGC, \(95.34\%\) of the contribution comes from \(\zeta_{000}\). For all four samples, multipole components other than the measured one have positive or negative values, and their overall contribution is about \(5-10\%\). As expected, the contributions of higher-order components such as \(\zeta_{330}\), \(\zeta_{440}\), and \(\zeta_{332}\) are mostly below \(0.5\%\). Therefore, we conclude that the window function correction equation in Eq. (4.29) can account for the window function effect on the 3PCF in BOSS with sufficient accuracy, even when truncated at the \(14\) multipole components used in this work. We note here the importance of \(\Delta\bar{\zeta}_{112}\), which includes the \(M\neq 0\) modes of Scoccimarro et al.
(1999)'s decomposition method in the correction for window function effects: it gives a contribution comparable to \(\Delta\bar{\zeta}_{202}\) and \(\Delta\bar{\zeta}_{022}\), which include only the \(M=0\) mode, and tends to have the opposite sign to them. Therefore, failure to properly account for contributions such as \(\Delta\bar{\zeta}_{112}\) that include the \(M\neq 0\) modes may result in an error of \(\sim 5\%\) in the correction for the window function effect.

## 5 Covariance matrix

We estimate the covariance matrix from the \(2048\) Patchy mock catalogues described in Section 4.1. Let \(\mathbf{d}^{(r)}\) be the data vector measured from the \(r\)-th catalogue, and \(\mathbf{\overline{d}}=(1/N_{\rm s})\sum_{r=1}^{N_{\rm s}}\mathbf{d}^{(r)}\) be its mean value; then the covariance matrix of the data vector is given by \[\mathbf{C}=\frac{1}{N_{\rm s}-1}\sum_{r=1}^{N_{\rm s}}\left(\mathbf{d}^{(r)}-\mathbf{\overline{d}}\right)\left(\mathbf{d}^{(r)}-\mathbf{\overline{d}}\right)^{T}, \tag{5.1}\] where \(N_{\rm s}=2048\) is the number of the Patchy mock catalogues.

### Effects of a finite number of mocks

The covariance matrix \(\mathbf{C}\) inferred from the mock catalogues suffers from noise due to the finite number of mocks, which directly leads to an increase in the uncertainty of the cosmological parameters (Hartlap et al., 2007; Taylor et al., 2013; Dodelson & Schneider, 2013; Percival et al., 2014; Taylor & Joachimi, 2014). This effect is decomposed into two factors. First, the inverse covariance matrix, \(\mathbf{C}^{-1}\), provides a biased estimate of the true inverse covariance matrix. To correct this bias, we rescale the inverse covariance matrix as (Hartlap et al., 2007) \[\mathbf{C}^{-1}_{\rm Hartlap}=\left(\frac{N_{\rm s}-N_{\rm b}-2}{N_{\rm s}-1}\right)\mathbf{C}^{-1}, \tag{5.2}\] where the pre-factor on the right-hand side, \((N_{\rm s}-N_{\rm b}-2)/(N_{\rm s}-1)\), is the so-called “Hartlap” factor, and \(N_{\rm b}\) is the number of data bins. Second, we need to consider the propagation of the error in the covariance matrix to the error on the estimated parameters. This effect is corrected by multiplying the final result of the parameter errors by the following factor (Percival et al., 2014) \[M_{1}=\sqrt{\frac{1+B(N_{\rm b}-N_{\rm p})}{1+A+B(N_{\rm p}+1)}} \tag{5.3}\] with \[A =\frac{2}{(N_{\rm s}-N_{\rm b}-1)(N_{\rm s}-N_{\rm b}-4)},\] \[B =\frac{N_{\rm s}-N_{\rm b}-2}{(N_{\rm s}-N_{\rm b}-1)(N_{\rm s}-N_{\rm b}-4)}, \tag{5.4}\] where \(N_{\rm p}\) is the number of parameters.

The derivation of the Hartlap factor (5.2) assumes that the data vector follows a Gaussian distribution. On the other hand, Sellentin & Heavens (2016) show that in covariance matrix estimates from simulations, the data vector follows a multivariate \(t\)-distribution. When the number of simulations is sufficiently larger than the number of data bins, this \(t\)-distribution approaches a Gaussian distribution (Heavens et al., 2017), and the present analysis satisfies this condition. The reason is that the number of the Patchy mocks we use to estimate the covariance matrix is \(2048\), while the maximum number of data bins in our analysis is \(202\) (Section 6.2).
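The finite-mock corrections reduce to simple closed-form factors. The following sketch (function names are ours) evaluates Eqs. (5.2)-(5.4), together with the \(M_{2}\) factor defined in Eq. (5.5) below; for \(N_{\rm s}=2048\), \(N_{\rm b}=202\), and \(N_{\rm p}=17\) (the DHOST analysis with all four samples) it reproduces the values \(M_{1}\simeq 1.044\) and \(M_{2}\simeq 1.100\) quoted in Table 3.

```python
import numpy as np

def hartlap(Ns, Nb):
    """Hartlap factor multiplying the inverse covariance, Eq. (5.2)."""
    return (Ns - Nb - 2.0) / (Ns - 1.0)

def m1_factor(Ns, Nb, Np):
    """Error-propagation factor of Percival et al. (2014), Eqs. (5.3)-(5.4)."""
    A = 2.0 / ((Ns - Nb - 1.0) * (Ns - Nb - 4.0))
    B = (Ns - Nb - 2.0) / ((Ns - Nb - 1.0) * (Ns - Nb - 4.0))
    return np.sqrt((1.0 + B * (Nb - Np)) / (1.0 + A + B * (Np + 1.0)))

def m2_factor(Ns, Nb, Np):
    """Total finite-mock degradation of the parameter errors, Eq. (5.5)."""
    return np.sqrt((Ns - 1.0) / (Ns - Nb - 2.0)) * m1_factor(Ns, Nb, Np)

# DHOST analysis with all four samples: 2048 mocks, 202 bins, 17 parameters.
print(m1_factor(2048, 202, 17))  # ~1.044, as in Table 3
print(m2_factor(2048, 202, 17))  # ~1.100, as in Table 3
```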
In addition, the derivation of the \(M_{1}\) factor (5.3) also assumes the Gaussian distribution of the data vector, but there is no known value for the correction factor that corresponds to \(M_{1}\) in the Sellentin & Heavens case.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & \multicolumn{8}{c}{\(z_{\rm eff}=0.38\) (\(0.2<z<0.5\))} \\ \hline & & \multicolumn{4}{c}{NGC} & \multicolumn{4}{c}{SGC} \\ \hline & \(\Delta\bar{\zeta}_{\ell_{1}^{\prime}\ell_{2}^{\prime}\ell^{\prime}}\) [\%] & \(\zeta_{000}^{(\rm w)}\) & \(\zeta_{110}^{(\rm w)}\) & \(\zeta_{202}^{(\rm w)}\) & \(\zeta_{112}^{(\rm w)}\) & \(\zeta_{000}^{(\rm w)}\) & \(\zeta_{110}^{(\rm w)}\) & \(\zeta_{202}^{(\rm w)}\) & \(\zeta_{112}^{(\rm w)}\) \\ \hline monopole (\(\ell=0\)) & \(\Delta\bar{\zeta}_{000}\) & \(\bf{95.34}\) & \(\bf{1.59}\) & \(\bf{-1.18}\) & \(\bf{0.97}\) & \(\bf{86.41}\) & \(\bf{2.43}\) & \(\rm{-0.15}\) & \(\rm{0.19}\) \\ & \(\Delta\bar{\zeta}_{110}\) & \(\bf{9.39}\) & \(\bf{102.60}\) & \(\bf{1.41}\) & \(\bf{-3.77}\) & \(\bf{13.66}\) & \(\bf{98.60}\) & \(\rm{0.25}\) & \(\bf{-0.50}\) \\ & \(\Delta\bar{\zeta}_{220}\) & \(\rm{-0.09}\) & \(\bf{-0.53}\) & \(\bf{1.14}\) & \(\rm{0.09}\) & \(\rm{-0.12}\) & \(\bf{-0.79}\) & \(\rm{0.13}\) & \(\rm{0.01}\) \\ & \(\Delta\bar{\zeta}_{330}\) & \(\rm{0.26}\) & \(\rm{0.21}\) & \(\rm{0.08}\) & \(\rm{-0.01}\) & \(\rm{0.41}\) & \(\rm{0.35}\) & \(\rm{0.02}\) & \(\rm{-0.02}\) \\ & \(\Delta\bar{\zeta}_{440}\) & \(\rm{0.06}\) & \(\rm{0.00}\) & \(\rm{0.06}\) & \(\rm{-0.00}\) & \(\rm{0.09}\) & \(\rm{0.01}\) & \(\rm{-0.00}\) & \(\rm{-0.00}\) \\ \hline quadrupole (\(\ell=2\)) & \(\Delta\bar{\zeta}_{202}\) & \(\bf{-3.75}\) & \(\bf{0.59}\) & \(\bf{90.14}\) & \(\bf{2.07}\) & \(\rm{-0.43}\) & \(\rm{0.10}\) & \(\bf{89.28}\) & \(\bf{2.38}\) \\ & \(\Delta\bar{\zeta}_{112}\) & \(\bf{4.36}\) & \(\bf{-2.89}\) & \(\bf{4.21}\) & \(\bf{96.78}\) & \(\bf{0.86}\) & \(\rm{-0.38}\) & \(\bf{5.06}\) & \(\bf{92.64}\) \\ & \(\Delta\bar{\zeta}_{022}\) & \(\rm{-4.57}\) & \(\bf{0.81}\) & \(\bf{0.50}\) & \(\bf{2.75}\) & \(\rm{-0.48}\) & \(\rm{0.14}\) & \(\bf{0.51}\) & \(\bf{3.16}\) \\ & \(\Delta\bar{\zeta}_{312}\) & \(\rm{-0.10}\) & \(\rm{-0.38}\) & \(\bf{1.95}\) & \(\rm{0.13}\) & \(\rm{-0.05}\) & \(\rm{-0.05}\) & \(\bf{2.87}\) & \(\rm{0.26}\) \\ & \(\Delta\bar{\zeta}_{222}\) & \(\rm{0.04}\) & \(\rm{0.04}\) & \(\rm{0.38}\) & \(\rm{0.07}\) & \(\rm{-0.02}\) & \(\rm{-0.01}\) & \(\rm{-0.04}\) & \(\rm{0.12}\) \\ & \(\Delta\bar{\zeta}_{132}\) & \(\rm{-0.56}\) & \(\bf{-1.54}\) & \(\rm{0.14}\) & \(\rm{0.23}\) & \(\rm{-0.19}\) & \(\rm{-0.19}\) & \(\rm{0.18}\) & \(\bf{0.66}\) \\ & \(\Delta\bar{\zeta}_{422}\) & \(\rm{-0.21}\) & \(\rm{-0.23}\) & \(\bf{1.02}\) & \(\rm{0.21}\) & \(\rm{-0.02}\) & \(\rm{-0.08}\) & \(\bf{1.53}\) & \(\rm{0.31}\) \\ & \(\Delta\bar{\zeta}_{332}\) & \(\rm{0.10}\) & \(\rm{0.08}\) & \(\rm{0.02}\) & \(\rm{0.10}\) & \(\rm{-0.03}\) & \(\rm{-0.02}\) & \(\rm{0.10}\) & \(\rm{0.24}\) \\ & \(\Delta\bar{\zeta}_{242}\) & \(\rm{-0.29}\) & \(\rm{-0.34}\) & \(\rm{0.12}\) & \(\rm{0.38}\) & \(\rm{-0.08}\) & \(\rm{-0.14}\) & \(\rm{0.26}\) & \(\bf{0.55}\) \\ \hline \end{tabular} \end{table} Table 2: Contributions of other 3PCF multipole components to the observed 3PCF multipole components, as manifested through the effect of the window function, are shown for the four BOSS samples. When the contribution to the final result exceeds \(0.5\%\), it is written in bold.
The value of the same multipole component \(\Delta\bar{\zeta}_{\ell_{1}\ell_{2}\ell}\) (4.30) as the measured \(\zeta_{\ell_{1}\ell_{2}\ell}^{(\rm w)}\) (4.29) is larger (smaller) than \(100\%\) when the total contribution from all the other multipole components is negative (positive).

Figure 3: The monopole and quadrupole components of the window 3PCF (4.24), \(Q_{000}\), \(Q_{110}\), \(Q_{220}\), \(Q_{330}\), \(Q_{440}\), \(Q_{202}\), \(Q_{112}\), and \(Q_{022}\), measured from the four BOSS samples are shown as a function of \(r_{2}\) after fixing \(r_{1}\) to \(60\,h^{-1}\,{\rm Mpc}\) (left) and \(120\,h^{-1}\,{\rm Mpc}\) (right).

Figure 4: Same as Figure 3, except that the higher-order quadrupole components of the window 3PCF, \(Q_{312}\), \(Q_{222}\), \(Q_{132}\), \(Q_{422}\), \(Q_{332}\), and \(Q_{242}\), are shown.

Figure 5: The monopole 3PCFs, \(\zeta_{000}\) (left) and \(\zeta_{110}\) (right), that include the window function effect (4.29) are shown for the four BOSS samples. The results are plotted as a function of \(r_{2}\) after fixing \(r_{1}\) to \(50\), \(80\), \(90\), \(100\), and \(130\,h^{-1}\,{\rm Mpc}\) from the top to bottom panels. For these calculations, the \(\Lambda\)CDM model at \(z=0.61\), the linear bias \(b_{1}=2.0\), and no non-linear bias are assumed.

Figure 6: Same as Figure 5, except that the quadrupole 3PCFs, \(\zeta_{202}\) (left) and \(\zeta_{112}\) (right), are shown.

In practice, therefore, we use the Hartlap factor (5.2) and the \(M_{1}\) factor (5.3) to correct the uncertainty in parameter estimation due to a finite number of simulations (see also e.g., Percival et al., 2022). We can then evaluate the effect of a finite number of mocks on the final error estimation using the square root of the Hartlap factor multiplied by the \(M_{1}\) factor (Percival et al., 2014), \[M_{2}=\sqrt{\frac{N_{\rm s}-1}{N_{\rm s}-N_{\rm b}-2}}\,M_{1}. \tag{5.5}\] Note that this \(M_{2}\) factor is not used in the actual analysis. It is essential to increase the number of simulations and to reduce the number of data bins so as to keep the value of \(M_{2}\) as close to \(1\) as possible for a conservative analysis. The reason is that the Hartlap and \(M_{1}\) factors cannot always accurately correct the parameter errors for an arbitrary number of simulations. For example, using both monopole and quadrupole components of the 2PCF and 3PCF, as in this paper, Sugiyama et al. (2021) performed an anisotropic BAO analysis with the AP effect. By changing the number of simulations, they showed that the error in the angular diameter distance obtained with \(M_{2}=1.32\) is underestimated by about \(10\%\) compared to the case with \(M_{2}=1.06\). We will calculate the \(M_{2}\) factor in Section 6.4 and summarise the results in Table 3, where \(M_{2}\sim 1.1\), indicating that our analysis achieves \(M_{2}\) values sufficiently close to \(1\).

### Correlation matrix

The \((i,j)\) element of the correlation matrix is computed from the covariance matrix as \[r_{ij}=\frac{C_{ij}}{\sqrt{C_{ii}C_{jj}}}. \tag{5.6}\] Considering the data vector \(\mathbf{d}=\{\xi_{0},\xi_{2},\zeta_{000},\zeta_{110},\zeta_{202},\zeta_{112}\}\), we show the results of the correlation matrix for the four BOSS samples in Figure 7. To simplify the figure, we only plot the results for the diagonal components of the 3PCF multipoles, i.e., \(\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2}=r_{1})\). The range of scales shown in the figure is \(80\leq r\leq 150\,h^{-1}\,{\rm Mpc}\), and the width of the \(r\)-bin is \(\Delta r=10\,h^{-1}\,{\rm Mpc}\).
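A minimal sketch of how Eqs. (5.1) and (5.6) are evaluated from the mock catalogues is given below; `d_mocks` stands for the \(N_{\rm s}\times N_{\rm b}\) array of mock data vectors and is our own notation.

```python
import numpy as np

def correlation_matrix(d_mocks):
    """Estimate the covariance C of Eq. (5.1) from mock data vectors
    (shape: [N_s, N_b]) and normalise it to the correlation matrix r_ij of Eq. (5.6)."""
    d_mocks = np.asarray(d_mocks, dtype=float)
    dbar = d_mocks.mean(axis=0)
    diff = d_mocks - dbar
    C = diff.T @ diff / (d_mocks.shape[0] - 1.0)
    sig = np.sqrt(np.diag(C))
    return C / np.outer(sig, sig)
```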
The four samples show similar results, and we summarise the overall features below. First, the monopole 2PCF and the monopole 3PCFs have a moderate correlation (\(0.25<r_{ij}<0.5\)); the same is true for the quadrupole 2PCF and the quadrupole 3PCFs. Next, the first two terms of the monopole 3PCFs (\(\zeta_{000}\) and \(\zeta_{110}\)) are strongly correlated with each other (\(0.5<r_{ij}<0.75\)); on the other hand, the first two quadrupole 3PCFs (\(\zeta_{202}\) and \(\zeta_{112}\)) are weakly correlated (\(0.0<r_{ij}<0.25\)). This result indicates that \(\zeta_{202}\) and \(\zeta_{112}\) carry information that is largely independent of each other. These results are consistent with the results in the bispectrum presented by Sugiyama et al. (2019).

### Standard deviation

The standard deviation is given by the square root of the diagonal components of the covariance matrix, i.e., \(\sqrt{C_{ii}}\). Figure 8 shows the mean and standard deviation of \(\xi_{\ell}(r)\) and \(\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2})\) calculated from the \(2048\) Patchy mock simulations. The mock data used are the NGC samples at \(z=0.38\) and \(0.61\). For \(\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2})\), the measured values and the standard deviations are plotted as a function of the scale variable \(r_{1}=r_{2}=r\) to simplify the figure.

From this figure, it can be seen that the mean values of \(\xi_{\ell}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}\) do not differ much between the two redshifts, \(z=0.38\) and \(0.61\) (compare the magenta and blue lines). One may expect the amplitudes of \(\xi_{\ell}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}\) to be larger at lower redshifts, because the tree-level solutions (2.22) of \(\xi_{\ell}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}\) are proportional to \(D^{2}\) and \(D^{4}\), respectively, with \(D\) being the linear growth function. However, this is not the case in Figure 8. There are two possible reasons for this. The first is the bias effect. For halos with similar mass, the lower the redshift, the smaller the value of the linear bias \(b_{1}\) tends to be. Therefore, \(b_{1}D(z)\) is only weakly time-dependent and does not show significant differences between the redshifts, especially for the monopole components of \(\xi_{\ell}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}\). A similar effect to the linear bias is likely to occur for the non-linear bias included in the 3PCF. Second, the product of the linear growth rate \(f\) and the linear growth function \(D\) is also only weakly time-dependent. Therefore, the redshift dependence of \(\xi_{\ell}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}\) is not pronounced, even for the quadrupole component.

On the other hand, the standard deviations of \(\xi_{\ell}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}\) are significantly different at the different redshifts. In general, the so-called Gaussian terms in the covariance matrix depend only on the two-point statistic, while higher-order statistics such as the three-point statistic appear in the non-Gaussian terms. It is also known that the covariance matrix is inversely proportional to the survey volume and that the higher the number density of observed galaxies, the smaller the covariance matrix. Therefore, the fact that the \(\xi_{\ell}\) and \(\zeta_{\ell_{1}\ell_{2}\ell}\) signals measured from the Patchy mocks do not differ significantly at the different redshifts suggests that the redshift dependence of the standard deviation may be due to the survey volume and galaxy number density.
In Figure 8, the standard deviation at \(z=0.61\) (blue) multiplied by \(\sqrt{V_{z=0.61}/V_{z=0.38}}\) is plotted as magenta dashed lines, with the survey volumes at \(z=0.38\) and \(z=0.61\) denoted as \(V_{z=0.38}\) and \(V_{z=0.61}\), respectively. In the case of the 2PCF, the magenta dashed line is similar to the result at \(z=0.38\) (magenta), indicating that the difference in the standard deviation of the 2PCF due to differences in redshift can be explained mainly by differences in the survey volume. However, this is not the case for the 3PCF, where the standard deviation of the 3PCF at \(z=0.38\) is smaller than the magenta dashed line. This fact suggests that the effect of the galaxy number density on the covariance matrix is more significant for the 3PCF than for the 2PCF. In other words, as can be seen from Table 1, the sample at \(z=0.38\) has a higher galaxy number density than the sample at \(z=0.61\), even though the survey volume is smaller. Therefore, the standard deviation at \(z=0.38\) is smaller than the standard deviation at \(z=0.61\) normalised to the survey volume at \(z=0.38\). This result is consistent with the finding of Sugiyama et al. (2020) that the galaxy number density plays an essential role in the covariance matrix of the bispectrum, even on large scales.

### Cumulative signal-to-noise ratio

The covariance matrix is a two-dimensional quantity in the 2PCF case and a four-dimensional quantity in the 3PCF case. Therefore, a useful way of compressing and quantifying this multi-dimensional information in the covariance matrix is to estimate the cumulative signal-to-noise \(({\rm S/N})\) ratios, given by \[\left(\frac{{\rm S}}{{\rm N}}\right)=\left(\overline{\mathbf{d}}^{T}\cdot\mathbf{C}_{\rm Hartlap}^{-1}\cdot\overline{\mathbf{d}}\right)^{1/2}. \tag{5.7}\] We calculate the cumulative \({\rm S/N}\) for each multipole component of the 2PCF and 3PCF: i.e., \(\overline{\mathbf{d}}=\xi_{0}\), \(\xi_{2}\), \(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), or \(\zeta_{112}\). We also fix the maximum scale to \(r_{\rm max}=150\,h^{-1}\,{\rm Mpc}\), vary the minimum scale \(r_{\rm min}\) from \(150\,h^{-1}\,{\rm Mpc}\) to \(30\,h^{-1}\,{\rm Mpc}\), and calculate the \({\rm S/N}\) as a function of \(r_{\rm min}\). In Figure 9, we plot the \({\rm S/N}\) for the four BOSS samples, NGC and SGC at \(z=0.38\) and \(0.61\). Note that we do not consider cross-covariance matrices between different multipole components, e.g., between \(\xi_{0}\) and \(\zeta_{000}\). How the information in the covariance matrix, including all cross-covariance matrices, ultimately propagates to the errors in the cosmological parameters of interest will be discussed through the Fisher analysis in Section 7.

Figure 7: The correlation matrices of the monopole and quadrupole 2PCFs (\(\xi_{0}\) and \(\xi_{2}\)), the first two monopole 3PCFs (\(\zeta_{000}\) and \(\zeta_{110}\)), and the first two quadrupole 3PCFs (\(\zeta_{202}\) and \(\zeta_{112}\)) are shown for the four BOSS samples. For simplicity of the figure, only the results for the \(r_{1}=r_{2}\) case of the multipole 3PCFs, i.e. \(\zeta(r_{1},r_{2}=r_{1})\), are plotted, but in the actual analysis (Section 9), the data bins for the \(r_{1}\neq r_{2}\) case are also used. The plotted scale range is \(80\leq r\leq 150\,h^{-1}\,{\rm Mpc}\), and the \(r\)-bin width is \(\Delta r=10\,h^{-1}\,{\rm Mpc}\).

Figure 8: The mean values and standard deviations of \(\xi_{0}\), \(\xi_{2}\), \(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\) calculated from the \(2048\) Patchy mock catalogues. The results are plotted at the two redshifts, \(z=0.38\) (magenta) and \(0.61\) (blue), for the NGC sample.
The magenta dashed lines are the standard deviation at \(z=0.61\) multiplied by \(\sqrt{V_{z=0.61}/V_{z=0.38}}\), i.e., normalised to the same survey volume as the sample at \(z=0.38\), where the survey volumes at \(z=0.38\) and \(z=0.61\), \(V_{z=0.38}\) and \(V_{z=0.61}\), respectively, are given in Table 1. For simplicity of the figure, only the results in the \(r_{1}=r_{2}\) case are plotted for the 3PCF.

Figure 9: Cumulative signal-to-noise ratios \(({\rm S/N})\) for the multipole components of the 2PCF and 3PCF, where both the signal and the noise (covariance matrix) are computed from the \(2048\) Patchy mock catalogues. The maximum scale used for the \({\rm S/N}\) calculation is fixed at \(r_{\rm max}=150\,h^{-1}\,{\rm Mpc}\), and the \({\rm S/N}\) values are plotted as a function of the minimum scale \(r_{\rm min}\). The blue and magenta solid lines show the results for the samples at \(z=0.61\) and \(z=0.38\), respectively. The magenta dashed lines are the \({\rm S/N}\) values in the sample at \(z=0.61\) multiplied by \(\sqrt{V_{z=0.38}/V_{z=0.61}}\).

The top two panels of Figure 9 show the \(\mathrm{S/N}\) of \(\xi_{0}\) and \(\xi_{2}\). In all cases shown in the panels, the \(\mathrm{S/N}\) at \(z=0.61\) (blue line) is larger than the \(\mathrm{S/N}\) at \(z=0.38\) (magenta line). The difference arises because the \(\mathrm{S/N}\) of the 2PCF is proportional to the square root of the survey volume \(V\), and the survey volume at \(z=0.61\), denoted \(V_{z=0.61}\), is larger than the survey volume at \(z=0.38\), \(V_{z=0.38}\). Therefore, multiplying the \(\mathrm{S/N}\) at \(z=0.61\) by \(\sqrt{V_{z=0.38}/V_{z=0.61}}\) approximately reproduces the \(\mathrm{S/N}\) at \(z=0.38\) (see the magenta dashed lines). This result is consistent with the findings for the signal and standard deviation of the 2PCF in Figure 8.

The middle and bottom panels show the results for \(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\). These results show that, in contrast to the 2PCF case, the \(\mathrm{S/N}\) at \(z=0.38\) is comparable to the \(\mathrm{S/N}\) at \(z=0.61\). The difference between the \(\mathrm{S/N}\) at \(z=0.38\) and \(z=0.61\) in the 3PCF case cannot be explained by the difference in the survey volumes (see the magenta dashed lines). This behaviour of the \(\mathrm{S/N}\) of the 3PCF can be explained by the finding shown in Figure 8 that the galaxy number density strongly influences the standard deviation of the 3PCF. In particular, in the present case, the effect of the galaxy number density is more pronounced when considering correlations between different scales, resulting in an \(\mathrm{S/N}\) at \(z=0.38\) that is comparable to the \(\mathrm{S/N}\) at \(z=0.61\). This result shows that a higher galaxy number density is as important for obtaining cosmological information from the 3PCF as increasing the survey volume.
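For reference, Eq. (5.7) amounts to the following short computation (a sketch under our own naming; in practice \(\overline{\mathbf{d}}\) and \(\mathbf{C}\) are first restricted to the bins with \(r_{\rm min}\leq r\leq r_{\rm max}\) before inverting).

```python
import numpy as np

def cumulative_sn(dbar, C, Ns):
    """Cumulative S/N of Eq. (5.7), using the Hartlap-corrected inverse covariance.
    `dbar` is the mean mock data vector, `C` its covariance, `Ns` the number of mocks."""
    Nb = dbar.size
    Cinv = ((Ns - Nb - 2.0) / (Ns - 1.0)) * np.linalg.inv(C)  # Eq. (5.2)
    return np.sqrt(dbar @ Cinv @ dbar)
```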
## 6 Analysis Settings

### Likelihoods

We assume that the likelihood of the data compared to the model predictions follows a multivariate Gaussian distribution: \[\ln\mathcal{L}(\mathbf{d}|\mathbf{\theta})=-\frac{1}{2}\left[\mathbf{d}-\mathbf{t}(\mathbf{\theta})\right]\mathbf{C}_{\mathrm{Hartlap}}^{-1}\left[\mathbf{d}-\mathbf{t}(\mathbf{\theta})\right]^{T}, \tag{6.1}\] where \(\mathbf{d}\) is the data vector, \(\mathbf{t}\) is the model prediction of the data vector given the model parameters \(\mathbf{\theta}\), and \(\mathbf{C}_{\mathrm{Hartlap}}^{-1}\) is the inverse of the covariance matrix after correction by the Hartlap factor (5.2). We can then obtain the posterior distribution of the model parameters given the data by performing Bayesian inference: \[\mathcal{P}(\mathbf{\theta}|\mathbf{d})\propto\mathcal{L}(\mathbf{d}|\mathbf{\theta})\,\Pi(\mathbf{\theta}), \tag{6.2}\] where \(\mathcal{P}(\mathbf{\theta}|\mathbf{d})\) is the posterior distribution of \(\mathbf{\theta}\) given the data vector \(\mathbf{d}\), and \(\Pi(\mathbf{\theta})\) is the prior distribution.

We assume that the four BOSS galaxy samples (Table 1) are far enough apart that they each carry independent cosmological information. Then, when constraining the model parameters common to the galaxy samples, we add up the log-likelihood functions of the individual galaxy samples. For example, when using all four galaxy samples, the total likelihood function is given by \[\ln\mathcal{L}_{\mathrm{total}} =\ln\mathcal{L}_{\mathrm{NGC\,at\,}z=0.38}+\ln\mathcal{L}_{\mathrm{SGC\,at\,}z=0.38} +\ln\mathcal{L}_{\mathrm{NGC\,at\,}z=0.61}+\ln\mathcal{L}_{\mathrm{SGC\,at\,}z=0.61}. \tag{6.3}\]

### Multipoles used, scale range, and number of bins

To repeat what was explained in Section 4.2, the scale range used for parameter estimation in Section 9 is \(80\,h^{-1}\,\mathrm{Mpc}\leq r\leq 150\,h^{-1}\,\mathrm{Mpc}\), and we choose \(\Delta r=5\,h^{-1}\,\mathrm{Mpc}\) and \(10\,h^{-1}\,\mathrm{Mpc}\) for the 2PCF and 3PCF bin widths, respectively. The 2PCF and 3PCF multipoles used are the monopole and quadrupole 2PCFs (\(\xi_{0}\) and \(\xi_{2}\)), the two monopole 3PCFs (\(\zeta_{000}\) and \(\zeta_{110}\)), and the two quadrupole 3PCFs (\(\zeta_{202}\) and \(\zeta_{112}\)). Considering \(\zeta_{\ell_{1}\ell_{2}\ell}(r_{1},r_{2})=\zeta_{\ell_{2}\ell_{1}\ell}(r_{2},r_{1})\), the numbers of data bins for the 2PCF and 3PCF multipoles are \(15\), \(15\), \(36\), \(36\), \(64\), and \(36\) for \(\xi_{0}\), \(\xi_{2}\), \(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\), respectively. The reason why the bin width for the 3PCF is wider than for the 2PCF is to reduce the number of data bins and thus to estimate the inverse covariance matrix of the 2PCF and 3PCF conservatively. The total number of data bins is then \(202\), which is small enough compared to the \(2048\) Patchy mock simulations (Section 5).
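The likelihood setup of Section 6.1 translates directly into code; the sketch below uses our own illustrative functions, with hypothetical containers for the per-sample data vectors and covariances, to implement the Gaussian log-likelihood (6.1) and the sum over independent samples (6.3).

```python
import numpy as np

def ln_likelihood(d, t, Cinv_hartlap):
    """Gaussian log-likelihood of Eq. (6.1) for one galaxy sample."""
    resid = d - t
    return -0.5 * resid @ Cinv_hartlap @ resid

def ln_likelihood_total(samples, model):
    """Sum of independent per-sample log-likelihoods as in Eq. (6.3).
    `samples` is a hypothetical list of (d, Cinv_hartlap) pairs, and
    `model(i)` returns the model data vector for the i-th sample."""
    return sum(ln_likelihood(d, model(i), Cinv)
               for i, (d, Cinv) in enumerate(samples))
```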
### Parameter setting

The parameters we constrain are as follows: \[\mathbf{\theta}=\mathbf{\theta}_{\mathrm{phys}}+\mathbf{\theta}_{\mathrm{bias}}, \tag{6.4}\] where \[\mathbf{\theta}_{\mathrm{bias}}=\{(b_{1}\sigma_{8}),(F_{\mathrm{g}}\sigma_{8}),(F_{\mathrm{t}}\sigma_{8})\}, \tag{6.5}\] and \[\mathbf{\theta}_{\mathrm{phys}}=\begin{cases}\{f\sigma_{8},\sigma_{8}\},&\text{GR};\\ \{\sigma_{8},\xi_{f},\xi_{\mathrm{t}}\},&\text{Horndeski};\\ \{F_{\mathrm{s}}\sigma_{8},\xi_{f},\xi_{\mathrm{s}},\xi_{\mathrm{t}}\},&\text{DHOST}.\end{cases} \tag{6.6}\] We assume that the bias parameters \(\mathbf{\theta}_{\mathrm{bias}}\) take different values in all four BOSS samples. \(f\sigma_{8}\), \(\sigma_{8}\), and \(F_{\mathrm{s}}\sigma_{8}\) have common values in the NGC and SGC. \(\xi_{f,\mathrm{s},\mathrm{t}}\) are common to all four BOSS samples. For the 2PCF analysis, we only consider \(b_{1}\sigma_{8}\) and \(f\sigma_{8}\). For example, if all four BOSS samples are used to constrain DHOST theories, the total number of parameters is \(17\). Once again, note that the AP parameters are not varied in this analysis.

### \(M_{1}\) and \(M_{2}\) factors

As mentioned in Section 5, the number of the Patchy mock simulations used to calculate the covariance matrices of the 2PCF and 3PCF is finite, so the inverse of the covariance matrix must be multiplied by the Hartlap factor and the final parameter error by \(M_{1}\). The \(M_{1}\) factor (5.3) is derived assuming that all parameters are constrained from a single data set. However, when constraining the common parameters \(\xi_{f}\), \(\xi_{\mathrm{s}}\), and \(\xi_{\mathrm{t}}\) (3.27) from the four independent BOSS samples, as in the present analysis, the \(M_{1}\) factor is expected to take a different form, but we do not know the correct correction factor corresponding to the \(M_{1}\) factor in such a case. Therefore, when we use different galaxy samples simultaneously, we first count up all common and non-common parameters in the galaxy samples. Then, we calculate the \(M_{1}\) factor using the number of data bins computed from a single galaxy sample and the number of the Patchy mocks, \(2048\), corresponding to that galaxy sample, and multiply it by the final parameter error.

\begin{table} \begin{tabular}{l c c c} \hline \hline & \(M_{1}\) & \(M_{2}\) & \# of params. \\ \hline 2PCF only (\(z_{\mathrm{eff}}=0.38\)) & 1.006 & 1.013 & 3 \\ 2PCF only (\(z_{\mathrm{eff}}=0.61\)) & 1.006 & 1.013 & 3 \\ \hline GR (\(z_{\mathrm{eff}}=0.38\)) & 1.049 & 1.105 & 8 \\ GR (\(z_{\mathrm{eff}}=0.61\)) & 1.049 & 1.105 & 8 \\ \hline Horndeski (\(z_{\mathrm{eff}}=0.38\)) & 1.048 & 1.104 & 9 \\ Horndeski (\(z_{\mathrm{eff}}=0.61\)) & 1.048 & 1.104 & 9 \\ Horndeski (\(z_{\mathrm{eff}}=0.38,\,0.61\)) & 1.044 & 1.100 & 16 \\ \hline DHOST (\(z_{\mathrm{eff}}=0.38\)) & 1.048 & 1.104 & 10 \\ DHOST (\(z_{\mathrm{eff}}=0.61\)) & 1.048 & 1.104 & 10 \\ DHOST (\(z_{\mathrm{eff}}=0.38,\,0.61\)) & 1.044 & 1.100 & 17 \\ \hline \end{tabular} \end{table} Table 3: A summary of the \(M_{1}\) (5.3) and \(M_{2}\) (5.5) factor values used in our analysis. These values are calculated from the number of the Patchy mock simulations, \(2048\) (Section 4.1), the number of data bins, \(30\) for the 2PCF-only case and \(202\) for the 2+3PCF case (Section 6.2), and the number of parameters summarised in the rightmost column (Section 6.3).
Specifically, the multipole components of the 2PCF and 3PCF measured from a single galaxy sample are \(\xi_{0}\), \(\xi_{2}\), \(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\), for a total data bin number of \(202\) (Section 6.2). The number of parameters depends on the type of analysis; for example, we need \(17\) parameters to test DHOST theories using all four galaxy samples (Section 6.3). Table 3 summarises the values of the \(M_{1}\) and \(M_{2}\) factors calculated in our analysis, leading to \(M_{2}\sim 1.1\) for all the 2+3PCF joint analyses. Thus, even without considering the Hartlap and \(M_{1}\) factors in our analysis, the effect of finite mocks would be at most \(10\%\). In other words, since our analysis correctly takes these factors into account, the error due to the finite-mock effect in the estimated parameter error is guaranteed to be \(\lesssim 10\%\).

### MCMC

We apply the Metropolis-Hastings (MH) algorithm, a Markov chain Monte Carlo (MCMC) method, implemented in the publicly available software package Monte Python (Audren et al., 2013; Brinckmann & Lesgourgues, 2019) to estimate the posterior distribution of parameters in a multi-dimensional parameter space. In doing so, we set the super-update parameter to \(20\), as recommended by the developers. In order to improve the convergence of the posterior distributions obtained by MCMC, we first perform an MCMC analysis with the number of steps set to \(N_{\rm step}=200\,000\) and calculate the best-fit values and covariance matrix of the parameters. Then, providing this information on the best-fit values and covariance matrix as input, we perform an MCMC analysis again with the same number of steps. We ensure convergence of each MCMC chain by imposing \(R-1\lesssim\mathcal{O}(10^{-4})\), where \(R\) is the standard Gelman-Rubin criterion (Gelman & Rubin, 1992). Furthermore, the convergence of the results is also checked through the following method. First, we create eight independent MCMC chains and compute the mean and standard deviation of the parameters from each chain. Then, from the eight means and standard deviations, we compute the standard deviation of the means and the mean of the standard deviations, and we check that the ratio of the former to the latter is less than about \(20\%\) for all the results.

### Mock tests

We perform MCMC analyses on \(100\) Patchy mock catalogues (Kitaura et al., 2016) using the same set of cosmological and nuisance parameters as in the actual BOSS galaxy data analysis. We then verify that our analysis can correctly return the values of the non-linear parameters predicted by GR for the Patchy mock catalogues, which are designed under the assumption of a \(\Lambda\)CDM model. We also discuss the statistical scatter of the \(100\) values of the parameters to be estimated.

## 7 Fisher analysis

Before proceeding to the MCMC analysis using actual galaxies in Sections 8 and 9, in this section we use the Fisher analysis to understand how the 3PCF contains cosmological information. There are several reasons for performing the Fisher analysis before the MCMC analysis. First, calculating the Fisher matrix in Section 7.1 is less computationally intensive than performing the MCMC analysis, making it easier to compare the analysis results in various settings that would take too much time in the MCMC analysis.
Taking advantage of this, Section 7.2 examines how the constraints on the parameters of interest change for various combinations of the multipole components of the 3PCF; in doing so, we focus only on the NGC sample at \(z=0.38\) as a representative example. Section 7.3 also discusses the relation among the predicted parameter errors for the four BOSS samples, NGC and SGC at \(z=0.38,\ 0.61\), and how the combination of the four BOSS samples affects the final results. In Section 9.11, we compare the results obtained from the above Fisher analysis with those obtained from the MCMC parameter estimation and check their consistency to confirm the validity of the final results in this paper. In Section 7.4, the Fisher analysis also allows us to estimate the cosmological information available at scales smaller than the scale range used in the MCMC analysis. The results are expected to motivate the construction of theoretical models applicable to smaller scales. Finally, in Section 7.5, we use the results of the Fisher analysis to determine the range of a flat prior used when performing the MCMC analysis.

### Fisher matrix

From the likelihood function given in Eq. (6.1), we calculate the Fisher matrix as \[F_{ij} =-\left\langle\frac{\partial}{\partial\theta_{i}}\frac{\partial}{\partial\theta_{j}}\ln\mathcal{L}\right\rangle =\frac{\partial\mathbf{t}(\mathbf{\theta})}{\partial\theta_{i}}\mathbf{C}_{\rm Hartlap}^{-1}\frac{\partial\mathbf{t}^{T}(\mathbf{\theta})}{\partial\theta_{j}}, \tag{7.1}\] where we assume that the covariance matrix \(\mathbf{C}\) is independent of the parameters. The indices \(i\) and \(j\) run over the parameters of interest. In the limit of a Gaussian likelihood surface, the Cramer-Rao inequality shows that the Fisher matrix provides the minimum standard deviation on the parameters, marginalised over all the other parameters: \(\sigma(\theta_{i})\geq\sigma_{\rm fisher}(\theta_{i})=\left(F^{-1}\right)_{ii}^{1/2}\). We note that we adopt the inverse covariance matrix, \(\mathbf{C}_{\rm Hartlap}^{-1}\), estimated from the Patchy mock simulations, which therefore includes non-Gaussian contributions. We consider three parameter vectors, depending on the gravity theory of interest: \[\mathbf{\theta}=\{(b_{1}\sigma_{8}),(f\sigma_{8})\}+\mathbf{\theta}_{\rm 3PCF}, \tag{7.2}\] with \[\mathbf{\theta}_{\rm 3PCF}=\begin{cases}\{(F_{\rm g}\sigma_{8}),(F_{\rm s}\sigma_{8}),(F_{\rm t}\sigma_{8})\},&\text{GR};\\ \{(F_{\rm g}\sigma_{8}),(F_{\rm s}\sigma_{8}),(F_{\rm t}\sigma_{8}),(G_{\rm t}\sigma_{8})\},&\text{Horndeski};\\ \{(F_{\rm g}\sigma_{8}),(F_{\rm s}\sigma_{8}),(F_{\rm t}\sigma_{8}),(G_{\rm s}\sigma_{8}),(G_{\rm t}\sigma_{8})\},&\text{DHOST},\end{cases} \tag{7.3}\] where \(F_{\rm s}\sigma_{8}=\sigma_{8}\) for GR and Horndeski theories. We obtain the results for \(E_{f,\mathrm{s,t}}\) and \(\xi_{f,\mathrm{s,t}}\) using the variable transformations in Eqs. (3.23) and (3.27). In particular, the results including \(\xi_{f,\mathrm{s,t}}\) correspond to the parameter set (6.6) used in the MCMC analysis performed in Section 9.
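To make Eq. (7.1) concrete, the sketch below evaluates the Fisher matrix with finite-difference derivatives of a theoretical template; the function `model`, standing in for the IR-resummed tree-level 2PCF+3PCF template, and the step size are hypothetical placeholders.

```python
import numpy as np

def fisher_matrix(model, theta0, inv_cov, eps=1e-4):
    """Eq. (7.1): F_ij = (dt/dtheta_i) C^{-1} (dt/dtheta_j)^T, with the model
    derivatives evaluated at the fiducial point by central finite differences."""
    theta0 = np.asarray(theta0, dtype=float)
    derivs = []
    for i in range(theta0.size):
        h = np.zeros_like(theta0)
        h[i] = eps * max(1.0, abs(theta0[i]))
        derivs.append((model(theta0 + h) - model(theta0 - h)) / (2.0 * h[i]))
    dt = np.vstack(derivs)               # shape (n_params, n_bins)
    return dt @ inv_cov @ dt.T

# Marginalised forecasts (Cramer-Rao bound): sigma_fisher(theta_i) = sqrt((F^{-1})_ii)
# sigma = np.sqrt(np.diag(np.linalg.inv(fisher_matrix(model, theta0, inv_cov))))
```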
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \((b_{1}\sigma_{8})_{\rm fid}\) & \((f\sigma_{8})_{\rm fid}\) & \((F_{\rm g}\sigma_{8})_{\rm fid}\) & \((F_{\rm s}\sigma_{8})_{\rm fid}\) & \((F_{\rm t}\sigma_{8})_{\rm fid}\) & \((G_{\rm s}\sigma_{8})_{\rm fid}\) & \((G_{\rm t}\sigma_{8})_{\rm fid}\) \\ \hline & \(1.362\) & \(0.485\) & \(0.552\) & \(0.681\) & \(0.194\) & \(0.681\) & \(0.386\) \\ \hline & \(\sigma_{\rm fisher}(b_{1}\sigma_{8})\) & \(\sigma_{\rm fisher}(f\sigma_{8})\) & \(\sigma_{\rm fisher}(F_{\rm g}\sigma_{8})\) & \(\sigma_{\rm fisher}(F_{\rm s}\sigma_{8})\) & \(\sigma_{\rm fisher}(F_{\rm t}\sigma_{8})\) & \(\sigma_{\rm fisher}(G_{\rm s}\sigma_{8})\) & \(\sigma_{\rm fisher}(G_{\rm t}\sigma_{8})\) \\ \hline & \multicolumn{7}{c}{GR} \\ \hline Case 1 & \(0.159\) & \(0.093\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ Case 2 & \(0.154\) & \(0.093\) & \(0.418\) & \(0.472\) & \(0.312\) & \(-\) & \(-\) \\ Case 3 & \(0.159\) & \(0.092\) & \(0.802\) & \(1.170\) & \(2.387\) & \(-\) & \(-\) \\ Case 4 & \(0.155\) & \(0.091\) & \(0.420\) & \(0.643\) & \(0.426\) & \(-\) & \(-\) \\ Case 5 & \(0.151\) & \(0.090\) & \(0.330\) & \(0.450\) & \(0.291\) & \(-\) & \(-\) \\ Case 6 & \(0.155\) & \(0.089\) & \(0.396\) & \(0.619\) & \(0.411\) & \(-\) & \(-\) \\ Case 7 & \(0.151\) & \(0.089\) & \(0.315\) & \(0.442\) & \(0.283\) & \(-\) & \(-\) \\ Case 8 & \(0.153\) & \(0.091\) & \(0.363\) & \(0.492\) & \(0.324\) & \(-\) & \(-\) \\ \hline & \multicolumn{7}{c}{Horndeski} \\ \hline Case 2 & \(0.154\) & \(0.093\) & \(2.361\) & \(0.479\) & \(3.156\) & \(-\) & \(29.25\) \\ Case 3 & \(0.159\) & \(0.092\) & \(0.816\) & \(2.433\) & \(2.633\) & \(-\) & \(1.312\) \\ Case 4 & \(0.155\) & \(0.092\) & \(0.430\) & \(0.726\) & \(0.431\) & \(-\) & \(1.111\) \\ Case 5 & \(0.151\) & \(0.091\) & \(0.331\) & \(0.463\) & \(0.295\) & \(-\) & \(1.044\) \\ Case 6 & \(0.155\) & \(0.091\) & \(0.409\) & \(0.715\) & \(0.419\) & \(-\) & \(1.015\) \\ Case 7 & \(0.151\) & \(0.091\) & \(0.316\) & \(0.459\) & \(0.285\) & \(-\) & \(0.946\) \\ Case 8 & \(0.153\) & \(0.091\) & \(0.366\) & \(0.498\) & \(0.344\) & \(-\) & \(1.383\) \\ \hline & \multicolumn{7}{c}{DHOST} \\ \hline Case 2 & \(0.154\) & \(0.093\) & \(2.361\) & \(0.479\) & & & \\ \hline \end{tabular} \end{table} Table 4: The standard deviations of the parameters as predicted by the Fisher analysis, denoted as \(\sigma_{\rm fisher}(\mathbf{\theta})\), are shown. These results are for the NGC at \(z_{\rm eff}=0.38\). The parameter vectors of interest, \(\mathbf{\theta}\), are different for each of the three gravity theories, GR, Horndeski, and DHOST theories (Eq. 7.3). The classification of the data vectors used is as shown in Eq. (7.4). The fiducial values of the parameters are calculated under the assumption of GR and are denoted as \((\mathbf{\theta})_{\rm fid}\). The scale range used for this Fisher analysis is \(80\,h^{-1}\,{\rm Mpc}\leq r\leq 150\,h^{-1}\,{\rm Mpc}\).
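The \(\xi_{f,\mathrm{s,t}}\) columns of Table 5 below follow from the \(E_{f,\mathrm{s,t}}\) columns. Assuming that the transformation reduces to \(\xi=\log_{\Omega_{\rm m}}E\), as used in Sections 7.3 and 7.5, a first-order error propagation with \(\Omega_{\rm m}(z=0.38)=0.54\) approximately reproduces the tabulated values:

```python
import numpy as np

Om = 0.54                                     # Omega_m(z=0.38), fiducial model (Section 7.3)
E_fid = {'f': 0.713, 's': 1.000, 't': 0.992}  # (E)_fid from Table 5
sig_E = {'f': 0.483, 't': 2.522}              # sigma_fisher(E), Case 7, Horndeski (Table 5)

for key, E in E_fid.items():
    xi = np.log(E) / np.log(Om)               # xi = log_{Omega_m} E
    line = f"xi_{key} = {xi:.3f}"
    if key in sig_E:                          # sigma(xi) = sigma(E) / |E ln Omega_m|
        line += f", sigma(xi_{key}) ~ {sig_E[key] / (E * abs(np.log(Om))):.2f}"
    print(line)
# -> xi_f ~ 0.55 with sigma(xi_f) ~ 1.10 and sigma(xi_t) ~ 4.13, close to the
#    tabulated 0.545, 1.103, and 4.163
```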
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \((E_{f})_{\rm fid}\) & \((E_{\rm s})_{\rm fid}\) & \((E_{\rm t})_{\rm fid}\) & \((\xi_{f})_{\rm fid}\) & \((\xi_{\rm s})_{\rm fid}\) & \((\xi_{\rm t})_{\rm fid}\) \\ \hline & \(0.713\) & \(1.000\) & \(0.992\) & \(0.545\) & \(0.000\) & \(0.013\) \\ \hline & \(\sigma_{\rm fisher}(E_{f})\) & \(\sigma_{\rm fisher}(E_{\rm s})\) & \(\sigma_{\rm fisher}(E_{\rm t})\) & \(\sigma_{\rm fisher}(\xi_{f})\) & \(\sigma_{\rm fisher}(\xi_{\rm s})\) & \(\sigma_{\rm fisher}(\xi_{\rm t})\) \\ \hline & \multicolumn{6}{c}{Horndeski} \\ \hline Case 2 & \(0.502\) & \(-\) & \(75.28\) & \(1.146\) & \(-\) & \(124.3\) \\ Case 3 & \(2.558\) & \(-\) & \(6.018\) & \(5.844\) & \(-\) & \(9.934\) \\ Case 4 & \(0.760\) & \(-\) & \(3.189\) & \(1.737\) & \(-\) & \(5.264\) \\ Case 5 & \(0.488\) & \(-\) & \(2.767\) & \(1.114\) & \(-\) & \(4.567\) \\ Case 6 & \(0.750\) & \(-\) & \(2.969\) & \(1.714\) & \(-\) & \(4.901\) \\ Case 7 & \(0.483\) & \(-\) & \(2.522\) & \(1.103\) & \(-\) & \(4.163\) \\ Case 8 & \(0.523\) & \(-\) & \(3.597\) & \(1.194\) & \(-\) & \(5.938\) \\ \hline & \multicolumn{6}{c}{DHOST} \\ \hline Case 2 & \(4.018\) & \(57.31\) & \(103.8\) & \(9.180\) & \(93.85\) & \(171.3\) \\ Case 3 & \(3.824\) & \(6.519\) & \(9.837\) & \(8.737\) & \(10.68\) & \(16.24\) \\ Case 4 & \(0.785\) & \(2.835\) & \(3.213\) & \(1.794\) & \(4.642\) & \(5.304\) \\ Case 5 & \(0.503\) & \(2.460\) & \(2.771\) & \(1.150\) & \(4.029\) & \(4.574\) \\ Case 6 & \(0.777\) & \(2.633\) & \(2.998\) & \(1.776\) & \(4.311\) & \(4.950\) \\ Case 7 & \(0.495\) & \(2.260\) & \(2.527\) & \(1.130\) & \(3.701\) & \(4.172\) \\ Case 8 & \(0.558\) & \(3.542\) & \(3.981\) & \(1.274\) & \(5.801\) & \(6.572\) \\ \hline \end{tabular} \end{table} Table 5: Same as Table 4, but for \(E_{f}\), \(E_{\rm s}\), \(E_{\rm t}\), \(\xi_{f}\), \(\xi_{\rm s}\), and \(\xi_{\rm t}\), obtained through the variable transformations in Eqs. (3.23) and (3.27).

The fiducial values of the cosmological parameters needed to compute the Fisher matrix are those of the \(\Lambda\)CDM model presented in Section 1. In doing so, we assume that the linear bias is \(b_{1}=2\), and the values of the non-linear biases are zero: i.e., \(b_{2}=b_{\rm s2}=0\).

### Information contained in 3PCF multipoles

For the NGC sample at \(z=0.38\), we perform Fisher analyses on the following eight data vectors consisting of combinations of the 2PCF and 3PCF multipole components, using the same settings as the MCMC analysis performed in Section 9, to investigate which components affect the parameter estimates and how. \[\text{Case 1} \mathbf{d} =\{\xi_{0},\xi_{2}\};\] \[\text{Case 2} \mathbf{d} =\{\xi_{0},\xi_{2},\zeta_{000},\zeta_{110}\};\] \[\text{Case 3} \mathbf{d} =\{\xi_{0},\xi_{2},\zeta_{202},\zeta_{112}\};\] \[\text{Case 4} \mathbf{d} =\{\xi_{0},\xi_{2},\zeta_{000},\zeta_{202}\};\] \[\text{Case 5} \mathbf{d} =\{\xi_{0},\xi_{2},\zeta_{000},\zeta_{202},\zeta_{110}\};\] \[\text{Case 6} \mathbf{d} =\{\xi_{0},\xi_{2},\zeta_{000},\zeta_{202},\zeta_{112}\};\] \[\text{Case 7} \mathbf{d} =\{\xi_{0},\xi_{2},\zeta_{000},\zeta_{202},\zeta_{110},\zeta_{112}\};\] \[\text{Case 8} \mathbf{d} =\{\xi_{0},\xi_{2},\zeta_{110},\zeta_{112}\}.
\tag{7.4}\] Case \(1\) constrains \(f\sigma_{8}\) using only the monopole and quadrupole 2PCFs. Cases \(2\), \(3\), and \(4\) add to Case \(1\) the two monopole 3PCFs (\(\zeta_{000}\) and \(\zeta_{110}\)), the two quadrupole 3PCFs (\(\zeta_{202}\) and \(\zeta_{112}\)), and the first terms of the monopole and quadrupole 3PCFs (\(\zeta_{000}\) and \(\zeta_{202}\)), respectively. These three cases will highlight the importance of simultaneously considering both monopoles and quadrupoles in the 3PCF. Moreover, Cases \(5\), \(6\), and \(7\) reveal the extent to which the final results can be improved by adding higher-order multipole components to Case \(4\). Finally, Case \(8\) only uses the higher-order multipoles, \(\zeta_{110}\) and \(\zeta_{112}\), of the monopole and quadrupole components. We summarise the results of the Fisher analysis in Table 4. In Horndeski and DHOST theories, the Case \(2\) results show that using only the monopole 3PCFs very weakly constrains the non-linear velocity parameters \(G_{\rm s}\sigma_{8}\) and \(G_{\rm t}\sigma_{8}\). On the other hand, in Case \(3\), using only the quadrupole 3PCFs, we can mildly constrain the non-linear coefficients of both the density field and the velocity field. The reason is that the density and velocity fluctuations contribute to the quadrupole 3PCFs to the same extent (Figure 2). Moreover, Cases \(4\), \(5\), \(6\), \(7\), and \(8\), using both the monopole and quadrupole components, can constrain the non-linear coefficients more strongly than Cases \(2\) and \(3\). In particular, for the \(G_{\rm s}\sigma_{8}\) and \(G_{\rm t}\sigma_{8}\) constraints in DHOST theories, Case \(7\) is \(\sim 20\) and \(\sim 40\) times better than Case \(2\), respectively: \[\sigma_{\rm fisher}(G_{\rm s}\sigma_{8}) =35.22\quad\text{for Case 2,}\] \[\sigma_{\rm fisher}(G_{\rm t}\sigma_{8}) =38.8\quad\text{for Case 2,}\] \[\sigma_{\rm fisher}(G_{\rm s}\sigma_{8}) =1.519\quad\text{for Case 7,}\] \[\sigma_{\rm fisher}(G_{\rm t}\sigma_{8}) =0.953\quad\text{for Case 7.} \tag{7.5}\] These results support the argument of this paper that we should use both monopole and quadrupole 3PCFs to study the non-linearity of the velocity field. Case \(7\), which uses all components of \(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\), provides the best constraints on \(F_{\rm s}\sigma_{8}\), \(G_{\rm s}\sigma_{8}\), and \(G_{\rm t}\sigma_{8}\), as expected. Therefore, we can conclude that all these multipole components should be used in the MCMC analysis in Section 9. Case \(7\) yields results that are about \(10\%\) better than Case \(5\), which uses \(\zeta_{000}\), \(\zeta_{110}\), and \(\zeta_{202}\). This result indicates that while \(\zeta_{202}\) contains the main cosmological information, \(\zeta_{112}\) adds information beyond \(\zeta_{202}\). Existing studies using the decomposition method of the bispectrum by Scoccimarro et al. (1999) tend to ignore the \(M\neq 0\) modes of the quadrupole component as not containing much cosmological information (e.g., Gagrani & Samushia, 2017; Rizzo et al., 2022; D'Amico et al., 2022). However, our results show the importance of the \(M\neq 0\) modes because \(\zeta_{202}\) contains only the \(M=0\) mode, while \(\zeta_{112}\) further contains the \(M\neq 0\) modes in addition to the \(M=0\) mode (see also Section 3.1).
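A sketch of the bookkeeping behind the Cases in Eq. (7.4) is given below. The per-multipole bin counts are inferred from Section 6.2, assuming equal splits (\(30\) bins for \(\{\xi_{0},\xi_{2}\}\) and \(202\) bins in total, so \(15\) bins per 2PCF multipole and \(43\) bins per 3PCF multipole); the contiguous block layout itself is an assumption for illustration.

```python
import numpy as np

# 2 * 15 + 4 * 43 = 202 bins in total (Section 6.2), assuming equal splits
sizes = {'xi0': 15, 'xi2': 15, 'z000': 43, 'z110': 43, 'z202': 43, 'z112': 43}
edges = np.cumsum([0] + list(sizes.values()))
slices = {name: slice(edges[i], edges[i + 1]) for i, name in enumerate(sizes)}

def case_mask(components):
    """Boolean mask selecting one Case of Eq. (7.4) out of the full data vector."""
    mask = np.zeros(edges[-1], dtype=bool)
    for name in components:
        mask[slices[name]] = True
    return mask

# Case 7 uses all six multipoles; slice the data vector and covariance with it:
m = case_mask(['xi0', 'xi2', 'z000', 'z110', 'z202', 'z112'])
# d_case = d[m];  C_case = C[np.ix_(m, m)]
```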
By comparing the results of Case \(4\), consisting of \(\zeta_{000}\) and \(\zeta_{202}\), with those of Case \(8\), consisting of \(\zeta_{110}\) and \(\zeta_{112}\), we can find another viewpoint on the importance of higher-order multipole components. For example, the \((F_{\rm s}\sigma_{8})\) constraint is better in Case \(8\), and the \((G_{\rm s}\sigma_{8})\) and \((G_{\rm t}\sigma_{8})\) constraints are better in Case \(4\). Also, the \((G_{\rm s,t}\sigma_{8})\) result in Case \(4\) is only about \(30\%\) better than in Case \(8\). Thus, although \(\zeta_{202}\) is more informative than \(\zeta_{112}\), we interpret the information on both sides as overlapping to some extent. We further calculate \(\sigma_{\rm fisher}(\mathbf{\theta})\) for \(\theta=E_{f}\), \(E_{\rm s}\), \(E_{\rm t}\), \(\xi_{f}\), \(\xi_{\rm s}\), and \(\xi_{\rm t}\) through the variable transformations in Eqs. (3.23) and (3.27), summarising the results in Table 5. We find that both the monopole and quadrupole components of the 3PCF are needed to better constrain \(E_{\rm s}\), \(E_{\rm t}\), \(\xi_{\rm s}\), and \(\xi_{\rm t}\). In Case \(7\), the standard deviations of \(E_{\rm s}\) and \(E_{\rm t}\) are more than twice as large as the fiducial values of \(E_{\rm s}\) and \(E_{\rm t}\), i.e., \(\sigma_{\rm fisher}(E_{\rm s,t})/(E_{\rm s,t})_{\rm fid}>2\), indicating that it is impossible to detect the \(E_{\rm s}\) and \(E_{\rm t}\) signals in the BOSS data. We can also confirm that for each of the \(\xi_{\rm s}\) and \(\xi_{\rm t}\) constraints in DHOST theories, the results of Case \(7\) are \(\sim 25\) and \(\sim 40\) times stronger than those of Case \(2\), respectively: \[\sigma_{\rm fisher}(\xi_{\rm s}) =93.85\quad\text{for Case 2,}\] \[\sigma_{\rm fisher}(\xi_{\rm t}) =171.3\quad\text{for Case 2,}\] \[\sigma_{\rm fisher}(\xi_{\rm s}) =3.701\quad\text{for Case 7,}\] \[\sigma_{\rm fisher}(\xi_{\rm t}) =4.172\quad\text{for Case 7.} \tag{7.6}\] In GR, adding any multipole component of the 3PCF can only improve the \(b_{1}\sigma_{8}\) and \(f\sigma_{8}\) constraints by a few per cent. This result is consistent with the MCMC analysis of Sugiyama et al. (2021) on the Patchy mock catalogues. Furthermore, the 3PCF-specific information, \(\sigma_{8}\), is also uninformative compared to \(f\sigma_{8}\). Specifically, \(f\sigma_{8}\) can be determined with a precision of \(\sim 20\%\), while \(\sigma_{8}\) can only reach a precision of \(\sim 60\%\). These results are for large scales (\(r\geq 80\,h^{-1}\,{\rm Mpc}\)); what happens when even smaller scales are used will be discussed in Section 7.4.

### Fisher forecasts with all four BOSS samples

In this subsection, we repeat the analysis of Case \(7\) in DHOST theories, performed in Section 7.2, for the other three BOSS samples, NGC at \(z=0.61\) and SGC at \(z=0.38,\ 0.61\), and summarise the results in Tables 6 and 7. Table 6 shows that the results for \((b_{1}\sigma_{8})\) and \((f\sigma_{8})\), which are mainly determined by the 2PCF, are slightly better for the sample at \(z=0.61\) than for the sample at \(z=0.38\) for both NGC and SGC. On the other hand, for the 3PCF-specific parameters, \((F_{\rm s,t}\sigma_{8})\) and \((G_{\rm s,t}\sigma_{8})\), the error is smaller for the \(z=0.38\) sample than for the \(z=0.61\) sample. This result reflects the different characteristics of the cumulative \(\rm S/N\) between the 2PCF and the 3PCF, as discussed in Section 5.4.
In other words, it suggests that higher number densities are more favourable than larger survey volumes for constraining the non-linear parameters \((F_{\rm s,t}\sigma_{8})\) and \((G_{\rm s,t}\sigma_{8})\). Turning to \(E_{f,\mathrm{s,t}}\) and \(\xi_{f,\mathrm{s,t}}\), the \(z=0.38\) sample gives a smaller error than the \(z=0.61\) sample for both. In particular, in the case of \(\xi_{f,\mathrm{s,t}}\), the error at \(z=0.38\) is almost twice as small as that at \(z=0.61\), which is extremely favourable for the \(z=0.38\) sample. For example, the \(\xi_{\rm s}\) results for the NGC samples are \[\sigma_{\mathrm{fisher}}(\xi_{\mathrm{s}}) =3.701\quad\text{for NGC at $z=0.38$},\] \[\sigma_{\mathrm{fisher}}(\xi_{\mathrm{s}}) =6.890\quad\text{for NGC at $z=0.61$}. \tag{7.7}\] This is because we parameterise the time evolution of \(E_{f,\mathrm{s,t}}\) as \(E_{f,\mathrm{s,t}}=\Omega_{\mathrm{m}}^{\xi_{f,\mathrm{s,t}}}\). That is, because \(d\xi_{f,\mathrm{s,t}}=d\ln E_{f,\mathrm{s,t}}/(\ln\Omega_{\mathrm{m}})\), the errors in \(\xi_{f,\mathrm{s,t}}\) are smaller at lower redshifts, where \(\Omega_{\mathrm{m}}\) takes smaller values. Specifically, in the \(\Lambda\)CDM model introduced in Section 1, \(\Omega_{\mathrm{m}}(z=0.38)=0.54\) and \(\Omega_{\mathrm{m}}(z=0.61)=0.65\), so \(1/|\ln\Omega_{\mathrm{m}}(z=0.38)|=1.62\) and \(1/|\ln\Omega_{\mathrm{m}}(z=0.61)|=2.32\). Even if \(\sigma_{\mathrm{fisher}}(E_{f,\mathrm{s,t}})/(E_{f,\mathrm{s,t}})_{\mathrm{fid}}\) has the same value at the two redshifts of \(z=0.38\) and \(z=0.61\), the value of \(\sigma_{\mathrm{fisher}}(\xi_{f,\mathrm{s,t}})\) at \(z=0.38\) is \(2.32/1.62=1.42\) times smaller than at \(z=0.61\).

### Fisher forecasts using smaller scales

So far, we have performed the Fisher analysis in the same setting as the MCMC analysis that will be performed in Section 9; there, we dealt only with large scales, \(80\,h^{-1}\,\mathrm{Mpc}\leq r\leq 150\,h^{-1}\,\mathrm{Mpc}\). However, seeing how the parameter constraints improve when the minimum scale used, \(r_{\mathrm{min}}\), is varied should be an excellent motivation for the future development of theoretical models. Figure 10 plots \(\sigma_{\mathrm{fisher}}(\theta)/(\theta)_{\mathrm{fid}}\) as a function of \(r_{\mathrm{min}}\) for the three gravity theories, GR, Horndeski, and DHOST, at the two redshifts of \(z=0.38\) (magenta lines) and \(z=0.61\) (blue lines). The multipole components of the 2PCF and 3PCF used here are those of Case 7 (Eq. (7.4)). First, even on smaller scales, adding the 3PCF hardly improves the \(f\sigma_{8}\) constraint compared to the case where only the 2PCF is used (compare the solid and dashed lines in the top left panel of Figure 10). On the other hand, at \(r_{\mathrm{min}}=30\,h^{-1}\,\mathrm{Mpc}\), the \(\sigma_{8}\) constraint reaches a precision of \(\sim 10\%\), from which useful cosmological information may be extracted: e.g., \(f=(f\sigma_{8})/\sigma_{8}\) can be determined with a precision of \(\sim 10\%\). In addition, the non-linear velocity parameters, \(G_{\rm s}\sigma_{8}\) and \(G_{\mathrm{t}}\sigma_{8}\), can be determined with \(30\)-\(50\%\) precision at \(r_{\mathrm{min}}=30\,h^{-1}\,\mathrm{Mpc}\). Thus, future galaxy surveys with even larger volumes than the BOSS survey, such as DESI, Euclid, and PFS, may detect such non-linear coefficients of the velocity field. Note that we obtained the Fisher analysis results using the IR-resummed tree-level solutions of the 2PCF and 3PCF given in Eqs. (3.9) and (3.12).
Although these models accurately describe the non-linear damping behaviour of the BAO on large scales, they cannot predict the 2PCF and 3PCF on small scales with high accuracy. Therefore, to apply these models to smaller scales, it is necessary to account for non-linear effects, the so-called loop correction terms. We leave the investigation of how the results change when such loop corrections are added to future research.

Figure 10: The standard deviations computed by the Fisher analysis divided by the fiducial values of the parameters, \(\sigma_{\rm fisher}(\theta)/(\theta)_{\rm fid}\), are shown as a function of the minimum scale used, \(r_{\rm min}\). These results are for the NGC at \(z_{\rm eff}=0.38,\,0.61\). The solid lines show the results using the multipole components of the 2PCF and 3PCF given in Case 7 (7.4), and the dashed lines are for the 2PCF-only analysis, Case 1. The points at \(r_{\rm min}=80\,h^{-1}\,{\rm Mpc}\) in the right panels correspond to the results in Table 6.

### Flat priors

As shown by the results of the Fisher analysis in Section 7, the constraints on the non-linear parameters \(\xi_{f,\mathrm{s,t}}\) from the 3PCF measured from BOSS are weak. Therefore, we need to set appropriate priors to perform the MCMC analysis efficiently. We use the Fisher analysis results of Case 7 in DHOST theories for the four BOSS galaxy samples, performed in Section 7.3. Then, we adopt a flat prior of \(\theta_{\mathrm{fid}}\pm 5\sigma_{\mathrm{fisher}}(\theta)\) as the base setting for all parameters. When several samples are used to constrain a common parameter, we adopt the narrower of the prior ranges for those samples; for example, at \(z_{\mathrm{eff}}=0.38\), when constraining \(f\sigma_{8}\) using both the NGC and SGC samples, we adopt the prior computed for the NGC. After this basic setting, we set stronger priors based on the further physical considerations below.

The linear bias \(b_{1}\), the linear growth rate \(f\), and \(\sigma_{8}\) are always positive by definition: i.e., \(b_{1}\sigma_{8}\geq 0\) and \(f\sigma_{8}\geq 0\). In the case of GR, the non-linear parameters to be constrained are \(F_{\mathrm{g}}\sigma_{8}\), \(\sigma_{8}\), and \(F_{\mathrm{t}}\sigma_{8}\). The non-linear local bias parameter \((1/2)(b_{2}/b_{1})\) appearing in \(F_{\mathrm{g}}\) is calculated to be \(-0.02\) for \(b_{1}=2.0\) using the fitting formula given by Lazeyras et al. (2016), which is sufficiently small compared to \(17/21\). The tidal bias parameter \((b_{\rm s2}/b_{1})\) appearing in \(F_{\mathrm{t}}\) is also calculated to be \(b_{\rm s2}/b_{1}=(-2/7)(1-1/b_{1})=-0.14\) for the linear Lagrangian bias model (e.g., Desjacques et al., 2018), and its value is also smaller than \(2/7\). Therefore, even if the non-linear bias parameters are present, \(F_{\rm g}\sigma_{8}\) and \(F_{\rm t}\sigma_{8}\) are expected to be larger than zero: i.e., \(F_{\rm g}\sigma_{8}\geq 0\) and \(F_{\rm t}\sigma_{8}\geq 0\). We will discuss the validity of the analysis results when these conditions are imposed in Section 9.5 by comparing them with the results when \(F_{\rm g}\sigma_{8}\) and \(F_{\rm t}\sigma_{8}\) can take negative values. In the cases of Horndeski and DHOST theories, the parameterisation we adopt describes the time evolution of the coefficients of the tidal and shift terms as powers of \(\Omega_{\rm m}\) (Section 3.4), implicitly assuming that these coefficients are always positive: i.e., \(F_{\rm g}\sigma_{8}\geq 0\), \(G_{\rm s}\sigma_{8}\geq 0\), and \(G_{\rm t}\sigma_{8}\geq 0\).
For \(F_{\rm g}\sigma_{8}\) and \(F_{\rm t}\sigma_{8}\), assuming that Horndeski and DHOST theories are not far from GR, we adopt \(F_{\rm g}\sigma_{8}\geq 0\) and \(F_{\rm t}\sigma_{8}\geq 0\), just as in GR. The Fisher analysis shows that the BOSS data cannot detect the \(E_{\rm s,t}\) signals and only gives upper limits on them (Section 7). This fact means that as \(E_{\rm s,t}\) approach zero, the parameters \(\xi_{\rm s,t}=\log_{\Omega_{\rm m}}E_{\rm s,t}\) can become arbitrarily large because \(\Omega_{\rm m}<1\). Therefore, in this analysis, we set the upper limit of \(\xi_{\rm s,t}\) to \((\xi_{\rm s,t})_{\rm fid}+3\sigma_{\rm fisher}(\xi_{\rm s,t})\), which is narrower than the basic setting. If \(\xi_{\rm s,t}\) reach the upper bounds set here, we report only the lower bounds for those parameters as the final results. We summarise the results of the above discussion in Table 8.

## 8 Goodness of fit

In this section, we examine the extent to which our analysis can give good fits to the 2PCF and 3PCF measurements from the BOSS data or the Patchy mocks for a variety of cases, before presenting specific parameter constraints in Section 9. For this purpose, we calculate the minimum of \(\chi^{2}=-2\ln\mathcal{L}\) (6.1), denoted \(\chi^{2}_{\rm min}\), from the best-fit parameter values obtained from the joint analysis of the 2PCF and 3PCF. We use two multipole 2PCFs (\(\xi_{0}\) and \(\xi_{2}\)), two monopole 3PCFs (\(\zeta_{000}\) and \(\zeta_{110}\)), and two quadrupole 3PCFs (\(\zeta_{202}\) and \(\zeta_{112}\)) in this analysis; the assumed gravity theories are GR, Horndeski, and DHOST theories.

Tables 9-14 show the \(\chi^{2}_{\rm min}\) divided by the degrees of freedom (DoF), i.e., the reduced \(\chi^{2}_{\rm min}\), and the corresponding one-tailed \(p\)-values. At the two redshift bins, \(z=0.38\) and \(z=0.61\), results are presented for NGC only, SGC only, and both NGC and SGC. In Horndeski and DHOST theories, we constrain the common parameters \(\xi_{f,\rm s,t}\) among different redshift bins using the samples at both redshift bins. Finally, we also include the results of the analysis using only the 2PCF.

If the theoretical model fits the measurements well, the \(p\)-value should be close to \(0.5\). A \(p\)-value close to \(1\) does not mean that the theoretical model is correct, but rather that the theoretical model can explain the measurements within the error range, owing to the large statistical errors in the measurements. On the other hand, a \(p\)-value close to \(0\) indicates that the theoretical model cannot explain the measurements. In this paper, we decide that if \(p<0.05\), attention should be paid to the consistency between the theoretical model and the measurements, and if \(p<0.01\), there is an apparent discrepancy between them. We write in bold the \(\chi^{2}\) and \(p\) values shown in Tables 9-14 if \(p<0.01\).

Finally, we comment on the behaviour of \(p\)-values when combining different galaxy samples. For example, suppose that the reduced \(\chi^{2}_{\rm min}\) is larger than \(1\): i.e., \(\chi^{2}_{\rm min}/{\rm DoF}>1\). In this case, if we increase the values of \(\chi^{2}_{\rm min}\) and DoF by an equal factor while keeping the value of the reduced \(\chi^{2}_{\rm min}\), the resulting \(p\)-value will be smaller than the original value; conversely, if \(\chi^{2}_{\rm min}/{\rm DoF}<1\), it will be larger than the original value.
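For reference, the following sketch computes the one-tailed \(p\)-value from \(\chi^{2}_{\rm min}\) and the DoF, and illustrates the scaling behaviour just described:

```python
from scipy.stats import chi2

def p_value(chi2_min, dof):
    """One-tailed p-value: probability of chi^2 >= chi2_min for `dof` degrees of freedom."""
    return chi2.sf(chi2_min, dof)

print(p_value(248.36, 195))          # ~0.006, as in Table 9 (SGC, z=0.38, DHOST)
# Doubling chi2_min and DoF keeps the reduced chi2 fixed but lowers the p-value:
print(p_value(2 * 248.36, 2 * 195))  # < 0.006, since chi2_min/DoF > 1
```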
Since we treat the different galaxy samples as statistically independent, a similar situation occurs in analyses with multiple galaxy samples. Thus, if the \(p\)-value obtained from each galaxy sample is small, combining galaxy samples will yield a smaller \(p\)-value.

Section 8.1 reports an unexplained discrepancy between the 3PCF measured from the BOSS galaxy data at \(z=0.38\) and our theoretical model on large scales, even considering DHOST theories, which go beyond GR. Section 8.2 shows that this discrepancy between the data and the theoretical model arises from the monopole 3PCF. Section 8.3 shows that the discrepancy still appears even when the parameter priors introduced in Section 7.5 are removed. Section 8.4 confirms that the discrepancy does not appear in the analysis using the Patchy mocks. Finally, as a temporary measure, we rescale the covariance matrix of the 3PCF at \(z=0.38\) to generate acceptable \(p\)-values in Section 8.5. Section 9 will report the parameter estimation results with and without rescaling the covariance matrix.

### BOSS galaxies

Table 9 shows the results from the analysis method described in this paper. We have performed the MCMC analysis (Section 6) using \(\xi_{0}\), \(\xi_{2}\), \(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\) measured from the BOSS galaxy data (Section 4), the covariance matrix computed from the \(2048\) Patchy mocks (Section 5), and the flat priors on the parameter ranges (Section 7.5).

\begin{table} \begin{tabular}{l|l} \hline \hline \multicolumn{2}{c}{Prior range} \\ \hline \((b_{1}\sigma_{8})_{\rm NGC,\,z=0.38}\) & \([0.60,2.13]\) \\ \((b_{1}\sigma_{8})_{\rm SGC,\,z=0.38}\) & \([0.08,2.64]\) \\ \((b_{1}\sigma_{8})_{\rm NGC,\,z=0.61}\) & \([0.54,1.88]\) \\ \((b_{1}\sigma_{8})_{\rm SGC,\,z=0.61}\) & \([0.07,2.36]\) \\ \hline \((f\sigma_{8})_{\rm NGC,\,z=0.38}\) & \([0.03,0.94]\) \\ \((f\sigma_{8})_{\rm SGC,\,z=0.38}\) & \([0.00,1.24]\) \\ \((f\sigma_{8})_{\rm NGC,\,z=0.61}\) & \([0.05,0.91]\) \\ \((f\sigma_{8})_{\rm SGC,\,z=0.61}\) & \([0.00,1.21]\) \\ \hline \((F_{\rm g}\sigma_{8})_{\rm NGC,\,z=0.38}\) & \([0.00,2.99]\) \\ \((F_{\rm g}\sigma_{8})_{\rm SGC,\,z=0.38}\) & \([0.00,4.54]\) \\ \((F_{\rm g}\sigma_{8})_{\rm NGC,\,z=0.61}\) & \([0.00,3.44]\) \\ \((F_{\rm g}\sigma_{8})_{\rm SGC,\,z=0.61}\) & \([0.00,5.54]\) \\ \hline \((\sigma_{8})_{\rm NGC,\,z=0.38}\) & \([0.00,3.02]\) \\ \((\sigma_{8})_{\rm SGC,\,z=0.38}\) & \([0.00,4.57]\) \\ \((\sigma_{8})_{\rm NGC,\,z=0.61}\) & \([0.00,3.27]\) \\ \((\sigma_{8})_{\rm SGC,\,z=0.61}\) & \([0.00,4.99]\) \\ \hline \((F_{\rm t}\sigma_{8})_{\rm NGC,\,z=0.38}\) & \([0.00,1.66]\) \\ \((F_{\rm t}\sigma_{8})_{\rm SGC,\,z=0.38}\) & \([0.00,2.72]\) \\ \((F_{\rm t}\sigma_{8})_{\rm NGC,\,z=0.61}\) & \([0.00,1.97]\) \\ \((F_{\rm t}\sigma_{8})_{\rm SGC,\,z=0.61}\) & \([0.00,3.29]\) \\ \hline \((\xi_{f})_{\rm NGC,\,z=0.38}\) & \([-5.11,6.20]\) \\ \((\xi_{f})_{\rm SGC,\,z=0.38}\) & \([-8.79,9.88]\) \\ \((\xi_{f})_{\rm NGC,\,z=0.61}\) & \([-9.88,10.97]\) \\ \((\xi_{f})_{\rm SGC,\,z=0.61}\) & \([-16.65,17.74]\) \\ \hline \((\xi_{\rm s})_{\rm NGC,\,z=0.38}\) & \([-18.51,11.10]\) \\ \((\xi_{\rm s})_{\rm SGC,\,z=0.38}\) & \([-29.91,17.95]\) \\ \((\xi_{\rm s})_{\rm NGC,\,z=0.61}\) & \([-34.45,20.67]\) \\ \((\xi_{\rm s})_{\rm SGC,\,z=0.61}\) & \([-57.34,34.40]\) \\ \hline \end{tabular} \end{table} Table 8: The flat prior ranges of the parameters, determined as described in Section 7.5.

First, we focus on the analysis case using only the 2PCF. For the NGC+SGC sample, the obtained \(p\)-values are \(p=0.511\) at \(z=0.38\) and \(p=0.023\) at \(z=0.61\).
This \(p=0.023\) indicates a mildly poor fit between the model and the measurements, but we do not consider it problematic. Next, turning to the joint analysis results of the 2PCF and 3PCF assuming GR, we find that the \(p\)-value at \(z=0.38\) obtained for the NGC+SGC sample is extremely small, \(0.001\). At \(z=0.38\), the results for only NGC and only SGC are \(p=0.024\) and \(p=0.007\), indicating that the SGC sample is more problematic than the NGC. On the other hand, the \(p\)-value at \(z=0.61\) for the NGC+SGC sample is \(p=0.125\), indicating that our model explains the measured values without problems. Finally, for Horndeski and DHOST theories, we find results similar to the GR case: the \(p\)-value is \(0.001\) at \(z=0.38\) and \(p\sim 0.1\) at \(z=0.61\) for the NGC+SGC sample. Thus, we conclude that there is an unexplained discrepancy between the 3PCF measurement from the BOSS sample at \(z=0.38\) and the theoretical model we are using. Even Horndeski and DHOST theories, which are modified gravity theories beyond GR, cannot explain this discrepancy.

### Monopole- or Quadrupole-only 3PCF

We investigate whether the discrepancy between the 3PCF measurement from the galaxy sample at \(z=0.38\) and the theoretical model, shown in Table 9, originates from the monopole or quadrupole component. For this purpose, Tables 10 and 11 show the joint analysis results using only monopole 3PCFs or only quadrupole 3PCFs in addition to the monopole and quadrupole 2PCFs. For a fair comparison with Table 9, the prior distributions of the parameters used here are those given in Table 8. For the NGC+SGC at \(z=0.38\), the \(p\)-value obtained using the monopole 3PCFs is less than \(0.01\) in all three gravity theories, whereas the \(p\)-value obtained using the quadrupole 3PCFs is \(p\sim 0.1\). Therefore, we can conclude that the monopole component of the 3PCF measurement at \(z=0.38\) is inconsistent with the theoretical model.

### No prior in DHOST theories

As an attempt to explain the discrepancy between the 3PCF measurement from the galaxy sample at \(z=0.38\) and the theoretical model, we remove all the flat priors for the non-linear parameters, \(F_{\rm g}\sigma_{8}\), \(F_{\rm t}\sigma_{8}\), \(\sigma_{8}\), \(\xi_{f}\), \(\xi_{\rm s}\), and \(\xi_{\rm t}\), set in Table 8 and perform parameter fitting without imposing any prior. In particular, we investigate the possibility that imposing the conditions \(F_{\rm g}\geq 0\) and \(F_{\rm t}\geq 0\) on the parameters containing the non-linear bias may have caused problems in fitting the monopole 3PCF. This subsection focuses on DHOST theories because they have the largest number of parameters to be varied. Table 12 summarises the results of the calculations and confirms that the \(p\)-value obtained from the NGC+SGC sample at \(z=0.38\) is \(0.001\), even if we assume no prior for the non-linear parameters. Therefore, we can conclude that the discrepancy between the galaxy data and the theoretical model at \(z=0.38\) is not due to the priors imposed in Table 8.

### Patchy mocks

Table 13 shows the means and standard deviations of the \(\chi^{2}_{\rm min}\) and the corresponding means and \(1\sigma\) errors of the \(p\)-values obtained from the \(100\) Patchy mock catalogues. The setup for the data analysis is the same as that performed in Table 9. In all \(30\) cases shown in Table 13, the mean \(p\)-values obtained are almost always \(p\gtrsim 0.5\), both in the analysis using only the 2PCF and in the joint analysis with the 3PCF.
This result means that our 2PCF and 3PCF theoretical templates fit the Patchy mock simulation data well, indicating that the small \(p\)-values found in Table 9 are a peculiar property of the BOSS galaxies.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{4}{c}{BOSS DR12} \\ \hline & \multicolumn{3}{c}{\(\chi^{2}_{\rm min}/{\rm DoF}\) (\(p\)-value)} \\ \hline & NGC + SGC & NGC & SGC \\ \hline 2PCF only (\(z_{\rm eff}=0.38\)) & 56.04/57 (0.511) & 32.18/28 (0.267) & 23.36/28 (0.715) \\ 2PCF only (\(z_{\rm eff}=0.61\)) & 80.24/57 (0.023) & 42.08/28 (0.043) & 36.94/28 (0.120) \\ \hline GR (\(z_{\rm eff}=0.38\)) & **488.38/396 (0.001)** & 238.22/197 (0.024) & **248.72/197 (0.007)** \\ GR (\(z_{\rm eff}=0.61\)) & 428.60/396 (0.125) & 218.48/197 (0.140) & 209.24/197 (0.262) \\ \hline Horndeski (\(z_{\rm eff}=0.38\)) & **488.24/395 (0.001)** & 236.38/196 (0.026) & **251.62/196 (0.004)** \\ Horndeski (\(z_{\rm eff}=0.61\)) & 427.56/395 (0.125) & 218.06/196 (0.134) & 209.16/196 (0.247) \\ Horndeski (\(z_{\rm eff}=0.38\), 0.61) & **918.76/792 (0.001)** & 456.50/394 (0.016) & 458.94/394 (0.013) \\ \hline DHOST (\(z_{\rm eff}=0.38\)) & **487.48/394 (0.001)** & 235.80/195 (0.024) & **248.36/195 (0.006)** \\ DHOST (\(z_{\rm eff}=0.61\)) & 427.62/394 (0.117) & 217.94/195 (0.125) & 209.06/195 (0.233) \\ DHOST (\(z_{\rm eff}=0.38\), 0.61) & **918.04/791 (0.001)** & 455.96/393 (0.015) & 458.88/393 (0.012) \\ \hline \end{tabular} \end{table} Table 9: The reduced \(\chi^{2}\) and \(p\)-values (in round brackets) obtained from the joint analysis of the 2PCF and 3PCF are shown. These values are written in bold if \(p<0.01\). The minimum \(\chi^{2}\), denoted \(\chi^{2}_{\rm min}\), is calculated from the best-fit parameters. The data used are the BOSS DR12 galaxies, split into two sky regions, NGC and SGC, and two redshift bins, \(z=0.38\) and \(z=0.61\). The joint analysis shows the results for three gravity theories, i.e., GR, Horndeski, and DHOST theories; for Horndeski and DHOST theories, also shown are the results using the two redshift bins to constrain the parameters \(\xi_{f,\rm s,t}\), which characterise the time evolution of the linear and non-linear effects of the velocity field. Furthermore, the results for the 2PCF-only analysis are shown. The combinations of the 2PCF and 3PCF multipole components used in this analysis correspond to Case \(1\) and Case \(7\) in Eq. (7.4).

As two representative examples, the rest of this subsection focuses on the DHOST theory analyses using only the SGC sample at \(z=0.38\) and using all four galaxy samples (NGC+SGC at \(z=0.38,\ 0.61\)). The reasons are as follows: (1) our primary goal is to test DHOST theories; (2) the analysis of the SGC at \(z=0.38\) in the BOSS data gives a \(p\)-value of \(0.006\), which is the most significant discrepancy from the theoretical model among the four galaxy samples; (3) the analysis using all four BOSS galaxy samples gives our final results in Section 9.

For the SGC sample at \(z=0.38\), the \(\chi^{2}_{\rm min}\) values for the BOSS sample and the Patchy mocks are \[\chi^{2}_{\rm min}\,({\rm BOSS}) = 248.36,\] \[\chi^{2}_{\rm min}\,({\rm Patchy\,mocks}) = 180.39\pm 21.43, \tag{8.1}\] where \({\rm DoF}=195\). This result means that, assuming that \(\chi^{2}_{\rm min}\) follows a Gaussian distribution, the BOSS galaxy sample deviates from the Patchy mocks at the \(3.2\sigma\) significance level.
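The quoted significance is simple Gaussian arithmetic on Eq. (8.1):

```python
# Gaussian significance implied by Eq. (8.1): the BOSS chi^2_min relative to the
# mean and scatter of the Patchy-mock chi^2_min distribution.
significance = (248.36 - 180.39) / 21.43
print(f"{significance:.1f} sigma")   # -> 3.2 sigma
```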
For the NGC+SGC sample at \(z=0.38,\ 0.61\), we have \[\chi^{2}_{\rm min}\,({\rm BOSS}) = 918.04,\] \[\chi^{2}_{\rm min}\,({\rm Patchy\,mocks}) = 728.23\pm 37.03, \tag{8.2}\] where \({\rm DoF}=791\). This result implies a discrepancy between the BOSS galaxy sample and the Patchy mocks at the \(5.1\sigma\) level. Thus, we conclude that the discrepancy with the theoretical model in the BOSS galaxies cannot be explained by the statistical scatter of the Patchy mocks.

Although Table 13 shows the results obtained from \(100\) Patchy mocks, for a more detailed exploration, we perform the MCMC analysis on all \(2048\) publicly available Patchy mocks for the two examples above to see if it is possible to find realizations that return \(p\)-values similar to those of the BOSS galaxy sample.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{4}{c}{Joint analysis with monopole 3PCFs (\(\zeta_{000}\) and \(\zeta_{110}\)) only} \\ \hline & \multicolumn{3}{c}{\(\chi^{2}_{\rm min}/{\rm DoF}\) (\(p\)-value)} \\ \hline & NGC + SGC & NGC & SGC \\ \hline GR (\(z_{\rm eff}=0.38\)) & \(\mathbf{252.08/196\,(0.004)}\) & \(122.56/97\,(0.041)\) & \(127.32/97\,(0.021)\) \\ GR (\(z_{\rm eff}=0.61\)) & \(216.54/196\,(0.150)\) & \(109.00/97\,(0.191)\) & \(106.94/97\,(0.230)\) \\ \hline Horndeski (\(z_{\rm eff}=0.38\)) & \(\mathbf{252.02/195\,(0.004)}\) & \(122.54/96\,(0.035)\) & \(127.30/96\,(0.018)\) \\ Horndeski (\(z_{\rm eff}=0.61\)) & \(216.40/195\,(0.140)\) & \(109.02/96\,(0.172)\) & \(106.90/96\,(0.210)\) \\ Horndeski (\(z_{\rm eff}=0.38,\,0.61\)) & \(\mathbf{469.58/392\,(0.004)}\) & \(231.70/194\,(0.033)\) & \(234.68/194\,(0.024)\) \\ \hline \end{tabular} \end{table} Table 10: Same as Table 9, except that only the monopole component of the 3PCF is used in the joint analysis of the 2PCF and 3PCF. The combination of the 2PCF and 3PCF multipole components used in this analysis corresponds to Case 2 in Eq. (7.4).

\begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{4}{c}{Joint analysis with quadrupole 3PCFs (\(\zeta_{202}\) and \(\zeta_{112}\)) only} \\ \hline & \multicolumn{3}{c}{\(\chi^{2}_{\rm min}/{\rm DoF}\) (\(p\)-value)} \\ \hline & NGC + SGC & NGC & SGC \\ \hline GR (\(z_{\rm eff}=0.38\)) & \(280.18/252\,(0.107)\) & \(146.56/125\,(0.091)\) & \(132.68/125\,(0.302)\) \\ GR (\(z_{\rm eff}=0.61\)) & \(281.58/252\,(0.097)\) & \(148.04/125\,(0.078)\) & \(132.48/125\,(0.306)\) \\ \hline Horndeski (\(z_{\rm eff}=0.38\)) & \(279.66/251\,(0.103)\) & \(144.68/124\,(0.099)\) & \(132.38/124\,(0.287)\) \\ Horndeski (\(z_{\rm eff}=0.61\)) & \(281.68/251\,(0.089)\) & \(148.14/124\,(0.069)\) & \(132.48/124\,(0.285)\) \\ Horndeski (\(z_{\rm eff}=0.38,\,0.61\)) & \(563.66/504\,(0.034)\) & \(294.48/250\,(0.028)\) & \(265.08/250\,(0.245)\) \\ \hline \end{tabular} \end{table} Table 11: Same as Table 9, except that only the quadrupole component of the 3PCF is used in the joint analysis of the 2PCF and 3PCF. The combination of the 2PCF and 3PCF multipole components used in this analysis corresponds to Case 3 in Eq. (7.4).
\begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{4}{c}{Joint analysis with quadrupole 3PCFs (\(\zeta_{202}\) and \(\zeta_{112}\)) only} \\ \hline & \multicolumn{2}{c}{\(\chi^{2}_{\rm min}/{\rm DoF}\) (\(p\)-value)} \\ \hline & NGC + SGC & NGC & SGC \\ \hline GR (\(z_{\rm eff}=0.38\)) & \(280.18/252\,(0.107)\) & \(146.56/125\,(0.091)\) & \(132.68/125\,(0.302)\) \\ GR (\(z_{\rm eff}=0.61\)) & \(281.58/252\,(0.097)\) & \(148.04/125\,(0.078)\) & \(132.48/125\,(0.306)\) \\ \hline Horndeski (\(z_{\rm eff}=0.38\)) & \(279.66/251\,(0.103)\) & \(144.68/124\,(0.099)\) & \(132.38/124\,(0.287)\) \\ Horndeski (\(z_{\rm eff}=0.61\)) & \(281.68/251\,(0.089)\) & \(148.14/124\,(0.069)\) & \(132.48/124\,(0.285)\) \\ Horndeski (\(z_{\rm eff}=0.38,\,0.61\)) & \(563.66/504\,(0.034)\) & \(294.48/250\,(0.028)\) & \(265.08/250\,(0.245)\) \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c c} \hline \hline & NGC + SGC & NGC & SGC \\ \hline & NGC + SGC & NGC & SGC \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & NGC + SGC & NGC & SGC \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & NGC + SGC & NGC & SGC \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & NGC + SGC & NGC & SGC \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & NGC + SGC & NGC & SGC \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & \multicolumn{1}{c}{} & & \\ \hline & \multicolumn{1}{c}{} & & \\ \end{tabular} \begin{tabular}{l c} \hline & \multicolumn{1}{c}{} & & & \\ \end{tabular} \begin{tabular}{l c} \hline & \multicolumn{1}{c}{} & & & \\ \hline & \multicolumn{1}{c}{} ilar \(p\)-values to the BOSS galaxy sample. For the SGC sample of \(z=0.38\), only one Patchy mock catalogue gives \(p=0.005\), close to the BOSS result. In this case, the Patchy mocks have a probability of \(100\times(1/2048)=0.0488\%\) to reproduce the BOSS galaxy results. On the other hand, using all four galaxy samples, not a single catalogue among the \(2048\) Patchy mocks reproduced the BOSS results. 
This result means that the BOSS result has less than a \(0.0488\%\) probability of appearing in the Patchy mocks. These results are consistent with the \(3.2\sigma\) and \(5.1\sigma\) discrepancies between the BOSS and Patchy mock data presented in Eqs. (8.1) and (8.2). Figure 11 visualizes the results for the DHOST theory analysis in Tables 9 and 13. As expected, the histogram of \(\chi^{2}_{\rm min}\) computed from the Patchy mocks (blue bars) can be well approximated by a Gaussian function (orange line) whose mean and standard deviation are those of the \(\chi^{2}_{\rm min}\) values computed from the Patchy mocks. In the cases of SGC at \(z=0.38\) (top right panel) and NGC+SGC at \(z=0.38\), \(0.61\) (bottom left panel), we compute the histograms from the \(2048\) Patchy mocks; otherwise, we compute them from \(100\) Patchy mocks. Also, we plot the \(\chi^{2}_{\rm min}\) values obtained from the BOSS data in magenta.

### Rescaling of the covariance matrix

We have discussed the discrepancy between the 3PCF measured from the BOSS data at \(z=0.38\) and the corresponding theoretical model. Unfortunately, this paper cannot provide a definitive explanation for it. There are three possible reasons for this discrepancy. The first concern is the calculation of the covariance matrix. There may be physical effects that the Patchy mocks used to calculate the covariance matrix do not fully account for. For example, it is necessary to verify to what extent non-linear galaxy bias effects (Desjacques et al., 2018) and super-sample covariance effects (Takada & Hu, 2013) are correctly included in the Patchy mocks. The second concern is the theoretical model. For example, the theoretical model may be missing new physical effects that dominate on large scales at low redshifts. If so, we also need to account for such effects in the covariance matrix simultaneously. Finally, we are concerned with the observed galaxy data. There may be unknown observational effects that the weight function in Eq. (4.1) cannot explain. In any case, the findings in this section indicate the importance of discussing the validity of cosmological analyses that consider the 2PCF and 3PCF simultaneously.

This paper assumes that the discrepancy between the BOSS galaxy sample and the theoretical model is due to an improper covariance matrix for the 3PCF calculated with the Patchy mocks. Therefore, as a temporary measure, we decided to rescale the 3PCF covariance matrix at \(z=0.38\) to increase the obtained \(p\)-value to an acceptable value. Specifically, we rescale the 3PCF covariance matrix at \(z=0.38\) as follows: \[{\rm Cov}[3{\rm PCF}]_{\rm rescaled}=A\,{\rm Cov}[3{\rm PCF}], \tag{8.3}\] where the rescaling factor \(A\) is \(A=1.15\) and \(A=1.25\) for NGC and SGC, respectively. The values of \(A\) are determined so that the resulting \(p\)-values at \(z=0.38\) become similar to those at \(z=0.61\). Table 14 summarises the results of repeating the same analysis as Table 9 using the rescaled covariance matrix; Figure 11 visualizes the results for the DHOST theory analysis in Table 14. As expected, \(p\gtrsim 0.1\) for the NGC+SGC sample at \(z=0.38\). Thus, if the discrepancy between the galaxy data and the theoretical model in the 3PCF measurement is due to the covariance matrix computed by the Patchy mocks, we find that we can solve this problem by increasing the resulting covariance matrix by \(15\)-\(25\%\).
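A minimal sketch of the rescaling in Eq. (8.3) is given below. How the 2PCF-3PCF cross-covariance blocks are treated is not specified in the text, so we scale them by \(\sqrt{A}\), equivalent to the congruence \(\mathbf{C}\to\mathbf{D}\mathbf{C}\mathbf{D}\) with \(\mathbf{D}={\rm diag}(1,\ldots,1,\sqrt{A},\ldots,\sqrt{A})\), which keeps the rescaled matrix positive definite; this choice is our assumption.

```python
import numpy as np

def rescale_3pcf_cov(cov, n_2pcf, A):
    """Eq. (8.3): multiply the 3PCF auto-covariance block of the joint
    (2PCF+3PCF) covariance by A. Implemented as cov -> D cov D, with
    D = diag(1, ..., 1, sqrt(A), ..., sqrt(A)), so the 3PCF block is scaled
    by A and the cross blocks by sqrt(A) (an assumption; only the 3PCF block
    is specified in the text)."""
    d = np.ones(cov.shape[0])
    d[n_2pcf:] = np.sqrt(A)
    return cov * np.outer(d, d)

# z = 0.38: A = 1.15 for NGC and A = 1.25 for SGC (Section 8.5)
```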
We will give the results using this rescaled covariance matrix as the final result of this paper when we perform parameter estimation in Section 9 using the galaxy data at \(z=0.38\).

## 9 Results

This section calculates the mean, standard deviation, \(\pm 1\sigma\) errors, and \(95\%\) upper and lower bounds for the parameters computed from the likelihoods, where we perform parameter estimation for each of the BOSS DR12 galaxy and Patchy mock data sets. When using the Patchy mock data, we compute the mean, standard deviation, \(\pm 1\sigma\) errors, and \(95\%\) limits for the parameters from each of the \(100\) Patchy mocks; then, we calculate the means and standard deviations of these quantities. All results here take into account both the NGC and SGC samples.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{4}{c}{MultiDark-Patchy mocks} \\ \hline & \multicolumn{3}{c}{\(\chi^{2}_{\rm min}/{\rm DoF}\) (\(p\)-value)} \\ \hline & NGC + SGC & NGC & SGC \\ \hline 2PCF only (\(z_{\rm eff}=0.38\)) & \((57.76\pm 12.25)/57\) (\(0.447^{+0.446}_{-0.331}\)) & \((28.84\pm 7.92)/28\) (\(0.421^{+0.408}_{-0.297}\)) & \((27.75\pm 8.15)/28\) (\(0.478^{+0.401}_{-0.332}\)) \\ 2PCF only (\(z_{\rm eff}=0.61\)) & \((56.50\pm 9.54)/57\) (\(0.494^{+0.332}_{-0.301}\)) & \((27.99\pm 6.79)/28\) (\(0.465^{+0.352}_{-0.289}\)) & \((27.52\pm 6.76)/28\) (\(0.490^{+0.345}_{-0.298}\)) \\ \hline GR (\(z_{\rm eff}=0.38\)) & \((364.08\pm 29.31)/396\) (\(0.873^{+0.115}_{-0.346}\)) & \((181.04\pm 20.14)/197\) (\(0.786^{+0.186}_{-0.382}\)) & \((180.91\pm 21.55)/197\) (\(0.788^{+0.189}_{-0.408}\)) \\ GR (\(z_{\rm eff}=0.61\)) & \((362.41\pm 24.97)/396\) (\(0.886^{+0.099}_{-0.274}\)) & \((179.10\pm 18.41)/197\) (\(0.815^{+0.158}_{-0.335}\)) & \((181.45\pm 18.02)/197\) (\(0.780^{+0.181}_{-0.342}\)) \\ \hline Horndeski (\(z_{\rm eff}=0.38\)) & \((363.90\pm 29.45)/395\) (\(0.867^{+0.121}_{-0.353}\)) & \((180.68\pm 20.20)/196\) (\(0.777^{+0.193}_{-0.386}\)) & \((180.62\pm 21.48)/196\) (\(0.778^{+0.197}_{-0.410}\)) \\ Horndeski (\(z_{\rm eff}=0.61\)) & \((361.99\pm 25.00)/395\) (\(0.882^{+0.102}_{-0.278}\)) & \((178.48\pm 18.49)/196\) (\(0.810^{+0.162}_{-0.343}\)) & \((181.21\pm 17.92)/196\) (\(0.768^{+0.189}_{-0.344}\)) \\ Horndeski (\(z_{\rm eff}=0.38\), \(0.61\)) & \((728.47\pm 36.96)/792\) (\(0.948^{+0.048}_{-0.203}\)) & \((360.88\pm 25.46)/394\) (\(0.883^{+0.102}_{-0.284}\)) & \((363.77\pm 26.65)/394\) (\(0.860^{+0.122}_{-0.319}\)) \\ \hline DHOST (\(z_{\rm eff}=0.38\)) & \((363.26\pm 29.37)/394\) (\(0.865^{+0.123}_{-0.355}\)) & \((180.14\pm 20.18)/195\) (\(0.770^{+0.199}_{-0.388}\)) & \((180.39\pm 21.43)/195\) (\(0.766^{+0.207}_{-0.412}\)) \\ DHOST (\(z_{\rm eff}=0.61\)) & \((361.46\pm 25.03)/394\) (\(0.879^{+0.105}_{-0.282}\)) & \((178.00\pm 18.40)/195\) (\(0.803^{+0.167}_{-0.345}\)) & \((180.95\pm 17.98)/195\) (\(0.757^{+0.197}_{-0.348}\)) \\ DHOST (\(z_{\rm eff}=0.38\), \(0.61\)) & \((728.23\pm 37.03)/791\) (\(0.946^{+0.050}_{-0.208}\)) & \((360.45\pm 25.33)/393\) (\(0.879^{+0.105}_{-0.286}\)) & \((363.47\pm 26.59)/393\) (\(0.855^{+0.137}_{-0.322}\)) \\ \hline \end{tabular} \end{table} Table 13: Same as Table 9, except that the \(\chi^{2}_{\rm min}\) is calculated from each of \(100\) Patchy mocks, showing their mean values and standard deviations, and the means and \(1\sigma\) errors of the corresponding \(p\)-values.

Figure 11: Visualizations of the DHOST theory analysis results in Tables 9, 13 and 14. The histograms of \(\chi^{2}_{\rm min}\) computed from the Patchy mocks are shown. In the cases of SGC at \(z=0.38\) (top right panel) and NGC+SGC at \(z=0.38,0.61\) (bottom left panel), the histograms are computed from \(2048\) Patchy mocks; otherwise, they are computed from \(100\) Patchy mocks. Also, Gaussian functions (orange lines) with input values for the mean and standard deviation of \(\chi^{2}_{\rm min}\) computed from the Patchy mocks are shown. The \(\chi^{2}_{\rm min}\) values obtained from the BOSS data are plotted in magenta. Also shown are the results for the BOSS data using the rescaled 3PCF covariance matrix at \(z=0.38\) (Section 8.5) in dashed red lines.
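A panel of Figure 11 can be reproduced with a few lines of matplotlib; the binning and styling below are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def chi2_histogram(chi2_mocks, chi2_boss, chi2_boss_rescaled=None):
    """Sketch of one Figure 11 panel: histogram of chi^2_min from the Patchy
    mocks, a Gaussian with the mocks' mean and standard deviation, and the
    BOSS value(s) as vertical lines."""
    mu, sd = np.mean(chi2_mocks), np.std(chi2_mocks)
    plt.hist(chi2_mocks, bins=30, density=True, color='tab:blue', alpha=0.6)
    x = np.linspace(mu - 4 * sd, mu + 4 * sd, 200)
    gauss = np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    plt.plot(x, gauss, color='tab:orange')
    plt.axvline(chi2_boss, color='magenta')
    if chi2_boss_rescaled is not None:
        plt.axvline(chi2_boss_rescaled, color='red', ls='--')
    plt.xlabel(r'$\chi^2_{\rm min}$')
    plt.show()
```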
We have already given the \(\chi^{2}_{\rm min}\) and \(p\)-values calculated from the best-fit values of the parameters in the NGC+SGC columns of Tables 9, 13, and 14. The main results of this paper are Eqs. (9.7)-(9.10), which provide constraints on \(\xi_{f}\) and \(\xi_{\rm t}\). In Figure 21, we plot the one- and two-dimensional likelihood distributions corresponding to these results. Finally, we summarise the measurement results for the 3PCF multipole components (\(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\)) from the BOSS galaxies used in this analysis in Figures 12-19. The combination of the 2PCF and 3PCF multipoles used in the joint analysis performed in this section corresponds to Case 7 in Eq. (7.4); the analysis using only the 2PCF corresponds to Case 1. In Section 9.10, the results of the joint analysis with only the monopole 3PCF, which corresponds to Case \(2\), are also presented and compared with the final results obtained from Case 7.

### Measurements

Figures 12-19 plot the measurements of the monopole 3PCFs (\(\zeta_{000}\) and \(\zeta_{110}\)) and the quadrupole 3PCFs (\(\zeta_{202}\) and \(\zeta_{112}\)) from the BOSS galaxies as a function of \(r_{2}\) with \(r_{1}\) fixed at \(50\,h^{-1}\,{\rm Mpc}\), \(80\,h^{-1}\,{\rm Mpc}\), \(90\,h^{-1}\,{\rm Mpc}\), \(100\,h^{-1}\,{\rm Mpc}\), and \(130\,h^{-1}\,{\rm Mpc}\) from top to bottom; they are shown by blue circled points with \(1\sigma\) error bars. Also plotted are the 3PCF measurements from \(100\) Patchy mocks (grey) and the mean of the 3PCF measurements from \(2048\) Patchy mocks (black). Finally, the theoretical models computed from the best-fit parameter values obtained from the DHOST theory analysis using all four BOSS samples are plotted with magenta lines; they are shown as solid lines on the scales \(r_{1},r_{2}\geq 80\,h^{-1}\,{\rm Mpc}\) used in the MCMC analysis and as dashed lines on smaller scales. Note that the theoretical model shown by the magenta dashed lines is not required to reproduce the measurements from the galaxy data. As can be seen from the lower left of Figure 14, the \(\zeta_{000}\) measured from the SGC sample at \(z=0.38\) shows a significant discrepancy with the theoretical model on large scales, which is to be expected from the results presented in Section 8.2.

Theoretical predictions from Figures 1 and 2 indicate that the monopole and quadrupole 3PCFs have trough-shaped signals at \(r_{1}=r_{2}\). For example, this characteristic trough signal is seen in the blue data points for \(\zeta_{112}\) measured in the NGC sample at \(z=0.38\), shown in the first and second panels from the top in the right column of Figure 13. However, due to the significant statistical scatter in the galaxy data, the trough signal is not necessarily found in the blue points of all panels in Figures 12-19.
In particular, for the monopole 3PCF, the BAO peak appears at \(r_{1}\simeq r_{2}\simeq 100\,h^{-1}\,{\rm Mpc}\). It is therefore expected to cancel out the trough signal, resulting in a smooth line with no irregularities when plotting the 3PCF as a function of \(r_{2}\) after fixing \(r_{1}=100\,h^{-1}\,{\rm Mpc}\). For example, as seen from the second panel from the bottom in the right column of Figure 12, the \(\zeta_{110}\) measured from the NGC sample at \(z=0.38\) shows that the trough-shaped signal disappears from the data points. Conversely, this is evidence of a BAO signal in the monopole 3PCF. Although plotting the 3PCF as a function of \(r_{1}=r_{2}=r\) makes it easier to see the BAO signal from the galaxy data points (e.g., see Figure 8 and Figure 11 in Sugiyama et al., 2021), we do not plot such a figure because the subject of this paper is not the BAO signal.

### \(f\sigma_{8}\) constraints from the Patchy mocks in GR

Table 15 shows the \(f\sigma_{8}\) results obtained from the analysis of \(100\) Patchy mocks, assuming GR. The standard deviations of \(f\sigma_{8}\) from the 2PCF-only analysis are almost identical to those obtained from the joint analysis with the 3PCF. This result is consistent with the results of the Fisher analysis in Section 7. Therefore, we can conclude that neither the monopole nor the quadrupole 3PCF contributes to reducing the \(f\sigma_{8}\) error. Nevertheless, note that we can constrain the growth rate \(f\) through the joint analysis with the 3PCF by combining the \(\sigma_{8}\) constraint in Section 9.6. Furthermore, in the context of modified gravity theories, \(f\) is extended to \(E_{f}\), and Section 9.7 will constrain the parameter \(\xi_{f}\) characterising its time evolution.

Looking at the mean of \(f\sigma_{8}\), the results obtained in the joint analysis with the 3PCF (\(0.498\) at \(z=0.38\) and \(0.504\) at \(z=0.61\)) are slightly closer to the values input to the Patchy mocks (\(0.491\) and \(0.485\)) than those obtained with the 2PCF alone (\(0.445\) and \(0.457\)). Thus, the 3PCF information helps reduce the bias in the \(f\sigma_{8}\) mean values.

### \(f\sigma_{8}\) constraints from the BOSS DR12 galaxies in GR

Table 16 summarises the results of the \(f\sigma_{8}\) constraints obtained from the BOSS galaxies under the assumption of GR. "GR (\(z=0.38\)) [rescaled]" means the results using the rescaled covariance matrix (Section 8.5). Note that the standard deviation of \(f\sigma_{8}\) does not change with or without rescaling the 3PCF covariance matrix at \(z=0.38\): i.e., \((f\sigma_{8})_{\rm std}=0.108\) in both cases. Thus, the \(15\)-\(25\%\) difference in the 3PCF covariance matrix due to rescaling (Section 8.5) does not propagate significantly to the final \(f\sigma_{8}\) error, presumably mainly because of parameter degeneracies and other factors. However, because of the decisively different \(p\)-values obtained (see Tables 9 and 14), we adopt the rescaled result at \(z=0.38\) as the final result.
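As noted above, combining the \(f\sigma_{8}\) and \(\sigma_{8}\) constraints yields the growth rate \(f=(f\sigma_{8})/\sigma_{8}\). A first-order error-propagation sketch, which neglects the covariance between the two estimates (an assumption; the correlation can be read off the joint posterior in Figure 20), reads:

```python
import numpy as np

def growth_rate(fs8, sig_fs8, s8, sig_s8):
    """f = (f sigma8) / sigma8, with first-order error propagation that
    neglects the covariance between the two estimates (an assumption)."""
    f = fs8 / s8
    sig_f = f * np.hypot(sig_fs8 / fs8, sig_s8 / s8)
    return f, sig_f
```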
\begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{\(\chi^{2}_{\rm min}/{\rm DoF}\) (\(p\)-value)} \\ \hline & NGC + SGC & NGC & SGC \\ \hline GR (\(z_{\rm eff}=0.38\,[{\rm rescaled}]\)) & 413.86/396 (0.258) & 209.98/197 (0.250) & 202.56/197 (0.378) \\ \hline Horndeski (\(z_{\rm eff}=0.38\,[{\rm rescaled}]\)) & 413.66/395 (0.249) & 208.24/196 (0.261) & 202.36/196 (0.363) \\ Horndeski (\(z_{\rm eff}=0.38\,[{\rm rescaled}]\), \(0.61\)) & 844.18/792 (0.097) & 428.14/394 (0.114) & 412.46/394 (0.251) \\ \hline DHOST (\(z_{\rm eff}=0.38\,[{\rm rescaled}]\)) & 412.88/394 (0.246) & 207.82/195 (0.252) & 202.34/195 (0.344) \\ DHOST (\(z_{\rm eff}=0.38\,[{\rm rescaled}]\), \(0.61\)) & 843.66/791 (0.095) & 428.00/393 (0.108) & 412.36/393 (0.241) \\ \hline \end{tabular} \end{table} Table 14: Same as Table 9, except that the covariance matrix of the 3PCF at \(z=0.38\) is rescaled as in Eq. (8.3).

Figure 12: Monopole 3PCFs (\(\zeta_{000}\) and \(\zeta_{110}\)) measured from the NGC sample at \(z=0.38\) (blue points). These plots are shown as a function of \(r_{2}\), with \(r_{1}\) fixed from the top to \(50\,h^{-1}\,{\rm Mpc}\), \(80\,h^{-1}\,{\rm Mpc}\), \(90\,h^{-1}\,{\rm Mpc}\), \(100\,h^{-1}\,{\rm Mpc}\), and \(130\,h^{-1}\,{\rm Mpc}\). The error bars are the standard deviation of the 3PCF measurements computed from \(2048\) Patchy mocks. Also plotted are the 3PCF measurements from \(100\) Patchy mocks (grey) and the mean of the 3PCF measurements from \(2048\) Patchy mocks (black). Finally, the results of the theoretical model calculated from the best-fit parameter values obtained from the DHOST theory analysis using all four BOSS samples (Sections 9.7-9.9) are shown by the magenta lines; they are shown as solid lines on the scales \(r_{1},r_{2}\geq 80\,h^{-1}\,{\rm Mpc}\) used in the analysis and as dashed lines on smaller scales.

Figure 13: Same as Figure 12, except that the quadrupole 3PCF results (\(\zeta_{202}\) and \(\zeta_{112}\)) measured from the NGC sample at \(z=0.38\) are shown.

Figure 14: Same as Figure 12, except that the monopole 3PCF results (\(\zeta_{000}\) and \(\zeta_{110}\)) measured from the SGC sample at \(z=0.38\) are shown.

Figure 15: Same as Figure 12, except that the quadrupole 3PCF results (\(\zeta_{202}\) and \(\zeta_{112}\)) measured from the SGC sample at \(z=0.38\) are shown.

Figure 16: Same as Figure 12, except that the monopole 3PCF results (\(\zeta_{000}\) and \(\zeta_{110}\)) measured from the NGC sample at \(z=0.61\) are shown.

Figure 17: Same as Figure 12, except that the quadrupole 3PCF results (\(\zeta_{202}\) and \(\zeta_{112}\)) measured from the NGC sample at \(z=0.61\) are shown.

Figure 18: Same as Figure 12, except that the monopole 3PCF results (\(\zeta_{000}\) and \(\zeta_{110}\)) measured from the SGC sample at \(z=0.61\) are shown.

Figure 19: Same as Figure 12, except that the quadrupole 3PCF results (\(\zeta_{202}\) and \(\zeta_{112}\)) measured from the SGC sample at \(z=0.61\) are shown.

Comparing the results of the joint analysis with the 3PCF and the 2PCF-only analysis, the former has a larger \(f\sigma_{8}\) error: i.e., \((f\sigma_{8})_{\rm std}=0.108,\ 0.091\) at \(z=0.38,\ 0.61\) for the joint analysis with the 3PCF, and \((f\sigma_{8})_{\rm std}=0.086,\ 0.086\) at \(z=0.38,\ 0.61\) for the 2PCF-only analysis. Therefore, one may think that adding the 3PCF information has weakened the constraint on \(f\sigma_{8}\).
However, since Table 15 shows that the statistical uncertainty of \((f\sigma_{8})_{\rm std}\) is \(\sim 0.01\), the both results are statistically consistent at the \(\lesssim 2\sigma\) level. Our final results for the \(f\sigma_{8}\) constraints are as follows. The 2PCF-only analysis gives, at the \(1\sigma\) level, \[f\sigma_{8} =0.446^{+0.084}_{-0.009}\quad{\rm at}\;z=0.38\] \[f\sigma_{8} =0.408^{+0.084}_{-0.005}\quad{\rm at}\;z=0.61, \tag{11}\] and the joint analysis with the 3PCF presents \[f\sigma_{8} =0.549^{+0.097}_{-0.122}\quad{\rm at}\;z=0.38\] \[f\sigma_{8} =0.394^{+0.088}_{-0.099}\quad{\rm at}\;z=0.61. \tag{12}\] These \(f\sigma_{8}\) constraints are consistent with the \(f\sigma_{8}\) values (\(f\sigma_{8}=0.485\), \(0.479\) at \(z=0.38\), \(0.61\)) calculated from the cosmological parameters in a flat \(\Lambda\)CDM model (Section 1) given by Planck 2018 (Aghanim et al., 2020). However, the \(f\sigma_{8}\) result in this analysis, which constrains \(f\sigma_{8}\) with a \(\sim 20\%\) precision, is not as competitive as existing constraints (e.g., Alam et al., 2017; Ivanov et al., 2020; Lange et al., 2022; Kobayashi et al., 2022) because we only use large-scale information (\(r\geq 80\,h^{-1}\,{\rm Mpc}\)). ### \(\sigma_{8}\) constraints from the Patchy mocks in GR Table 17 summarises the results for \(\sigma_{8}\) from \(100\) Patchy mocks. The mean values for \(\sigma_{8}\) are \(0.741\) and \(0.612\) for \(z=0.38\) and \(z=0.61\), respectively, in good agreement with the mock input values (\(0.691\) and \(0.615\)). Specifically, they agree to an accuracy of \(7\%\) and \(0.5\%\), respectively. Since \(\sigma_{8}\) is the only physical parameter unique to the 3PCF in GR, the fact that we can estimate the \(\sigma_{8}\) value with high accuracy guarantees the validity of our analysis. On the other hand, the \(95\%\) lower limit of \(\sigma_{8}\) is consistent with zero, so we cannot detect a statistically significant signal for \(\sigma_{8}\) in our analysis. (\sigma_{8}\) constraints from the Patchy mocks in GR with negative \(F_{\rm g}\sigma_{8}\) and \(F_{\rm i}\sigma_{8}\) allowed This subsection discusses the validity of the priors set in Section 7.5 for the parameters \(F_{\rm g}\sigma_{8}\) and \(F_{\rm i}\sigma_{8}\), which include non-linear bias parameters. We impose the assumption that \(F_{\rm g}\) and \(F_{\rm i}\) are positive, but there is no theoretical requirement that this assumption is correct since the values of the non-linear biases are uncertain. Therefore, as a test, we perform parameter estimation for \(\sigma_{8}\) using a prior with negative \(F_{\rm g}\) and \(F_{\rm i}\) allowed to check if it returns the input values of the Patchy mocks. Specifically, the upper bounds of \(F_{\rm g}\sigma_{8}\) and \(F_{\rm i}\sigma_{8}\) given in Table 8 are multiplied by \((-1)\) to set the lower bounds of \(F_{\rm g}\sigma_{8}\) and \(F_{\rm i}\sigma_{8}\). For example, we set \(-2.99\leq(F_{\rm g}\sigma_{8})_{\rm NGC,\,z=0.38}\leq 2.99\). We summarise the results of this analysis in Table 18. This table shows that the mean values for \(\sigma_{8}\) are \(1.204\) at \(z=0.38\) and \(1.004\) at \(z=0.61\), which are about \(1.5\) times larger than the input values, \(0.691\) at \(z=0.38\) and \(0.615\) at \(z=0.61\), in the Patchy mocks. Thus, if we allow negative values of \(F_{\rm g}\) and \(F_{\rm i}\), we cannot estimate the correct value of \(\sigma_{8}\). 
We have no theoretical basis for explaining this fact, but as a result of numerical experiments, we conclude that it is reasonable to impose the conditions \(F_{\rm g}\geq 0\) and \(F_{\rm i}\geq 0\) in our analysis. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{6}{c}{BOSS DR12} \\ \hline & \((f\sigma_{8})_{\rm mean}\) & \((f\sigma_{8})_{\rm std}\) & \((f\sigma_{8})_{-1\sigma}\) & \((f\sigma_{8})_{+1\sigma}\) & \((f\sigma_{8})_{>95\%}\) & \((f\sigma_{8})_{<95\%}\) \\ \hline 2PCF only (\(z_{\rm eff}=0.38\)) & \(0.446\) & \(0.086\) & \(-0.096\) & \(0.084\) & \(0.273\) & \(0.620\) \\ 2PCF only (\(z_{\rm eff}=0.61\)) & \(0.408\) & \(0.086\) & \(-0.095\) & \(0.084\) & \(0.236\) & \(0.580\) \\ \hline GR (\(z_{\rm eff}=0.38\)) & \(0.561\) & \(0.108\) & \(-0.122\) & \(0.098\) & \(0.348\) & \(0.785\) \\ GR (\(z_{\rm eff}=0.61\)) & \(0.394\) & \(0.091\) & \(-0.099\) & \(0.088\) & \(0.208\) & \(0.580\) \\ \hline GR (\(z_{\rm eff}=0.38\) [rescaled]) & \(0.549\) & \(0.108\) & \(-0.122\) & \(0.097\) & \(0.337\) & \(0.776\) \\ \hline \end{tabular} \end{table} Table 16: Means, standard deviations, \(\pm 1\sigma\) errors, and \(95\%\) upper and lower bounds for \(f\sigma_{8}\) obtained in the 2PCF-only analysis and the joint analysis with the 3PCF using the BOSS DR12 galaxies, assuming GR for the joint analysis with the 3PCF. Results are shown for two redshifts at \(z=0.38\) and \(0.61\) using the NGC and SGC samples. Also shown are the results at \(z=0.38\) for the analysis using the rescaled covariance matrix (8.3) to give an acceptable \(p\)-value. The \(\chi^{2}_{\rm min}\) and \(p\) values corresponding to this table are shown in the NGC+SGC column of Tables 9 and 14. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{6}{c}{MultiDark-Patchy mocks} \\ \hline & \(\langle f\sigma_{8}\rangle_{\rm mean}\) & \(\langle f\sigma_{8}\rangle_{\rm std}\) & \(\langle f\sigma_{8}\rangle_{-1\sigma}\) & \(\langle f\sigma_{8}\rangle_{+1\sigma}\) & \(\langle f\sigma_{8}\rangle_{>95\%}\) & \(\langle f\sigma_{8}\rangle_{<95\%}\) \\ \hline 2PCF only (\(z_{\rm eff}=0.38\)) & \(0.445\,(0.491)\pm 0.078\) & \(0.094\pm 0.010\) & \(-0.106\pm 0.013\) & \(0.091\pm 0.009\) & \(0.256\pm 0.074\) & \(0.635\pm 0.090\) \\ 2PCF only (\(z_{\rm eff}=0.61\)) & \(0.457\,(0.485)\pm 0.068\) & \(0.083\pm 0.009\) & \(-0.091\pm 0.011\) & \(0.082\pm 0.008\) & \(0.291\pm 0.066\) & \(0.624\pm 0.077\) \\ \hline GR (\(z_{\rm eff}=0.38\)) & \(0.498(0.491)\pm 0.085\) & \(0.107\pm 0.008\) & \(-0.118\pm 0.009\) & \(0.105\pm 0.012\) & \(0.283\pm 0.082\) & \(0.718\pm 0.093\) \\ GR (\(z_{\rm eff}=0.61\)) & \(0.504(0.485)\pm 0.070\) & \(0.095\pm 0.010\) & \(-0.104\pm 0.010\) & \(0.092\pm 0.011\) & \(0.314\pm 0.065\) & \(0.698\pm 0.082\) \\ \hline \end{tabular} \end{table} Table 15: Constraint results for \(f\sigma_{8}\) obtained from the \(100\ ### \(\sigma_{8}\) constraints from the BOSS DR12 galaxies in GR Table 19 summarises the results for the \(\sigma_{8}\) constraints obtained from the BOSS galaxies under the GR assumption. The "GR (\(z=0.38\)) [rescaled]" refers to the results obtained using the rescaled covariance matrix (Section 8.5). Figure 20 plots the marginalized one- and two-dimensional posteriors of \(f\sigma_{8}\) and \(\sigma_{8}\). Similar to the results for the \(f\sigma_{8}\) constraint in Section 9.3, the results for the \(\sigma_{8}\) constraint remain almost the same whether the covariance matrix is rescaled or not. 
Adopting the result using the rescaled covariance matrix as the final result, the \(\sigma_{8}\) constraints at the \(1\sigma\) level are \[\sigma_{8}=0.692^{+0.209}_{-0.591} \quad\mathrm{at}\;z=0.38,\] \[\sigma_{8}=0.568^{+0.144}_{-0.547} \quad\mathrm{at}\;z=0.61, \tag{11}\] Also, as expected from the results of the Patchy mocks, the \(95\%\) lower bounds for \(\sigma_{8}\) reach \(0\), so at the \(95\%\) level, we get only the upper bounds: \[\sigma_{8}<1.568\;(95\%\;\mathrm{CL}) \quad\mathrm{at}\;z=0.38,\] \[\sigma_{8}<1.323\;(95\%\;\mathrm{CL}) \quad\mathrm{at}\;z=0.61. \tag{12}\] These results are consistent with the \(\sigma_{8}\) values, (\(\sigma_{8}=0.681,\;0.606\) at \(z=0.38,\;0.61\)), calculated from the cosmological parameters in a flat \(\Lambda\)CDM model given by Planck 2018 (Section 1). The ratio of the standard deviation to the mean for \(\sigma_{8}\) is \((\sigma_{8})_{\mathrm{std}}/(\sigma_{8})_{\mathrm{mean}}=0.66\) at \(z=0.38\) and \(0.71\) at \(z=0.61\), indicating that the galaxy sample at \(z=0.38\) provides a better constraint on \(\sigma_{8}\). This result is consistent with the Fisher analysis in Section 7.3. ### \(\xi_{f}\) constraints from the BOSS DR12 galaxies in Horndeski and DHOST theories Table 20 summarises the constraint results for the parameter \(\xi_{f}\), defined as \(\xi_{f}=\ln_{\Omega_{\mathrm{m}}}\left(E_{f}\right)=\ln_{\Omega_{\mathrm{m}}} \left(f/\kappa_{\delta}\right)\) (3.27), characterising the time evolution of the amplitude of the linear velocity field. In GR and Horndeski theories, \(\xi_{f}\) corresponds to the well-known parameter \(\gamma\) since \(\kappa_{\delta}=1\); in GR, \(\xi_{f}=\gamma=6/11\) (3.28). Using all the four galaxy samples, at the \(1\sigma\) level, we obtain \[\gamma=0.485^{+0.967}_{-0.708} \quad\mathrm{in}\;\mathrm{Horndeski},\] \[\xi_{f}=0.791^{+0.963}_{-0.691} \quad\mathrm{in}\;\mathrm{DHOST}, \tag{13}\] and at the \(95\%\) confidence level, we have \[-1.216<\gamma<2.175\;(95\%\mathrm{CL}) \quad\mathrm{in}\;\mathrm{Horndeski},\] \[-0.907<\xi_{f}<2.447\;(95\%\mathrm{CL}) \quad\mathrm{in}\;\mathrm{DHOST}. \tag{14}\] All results in Table 20 are consistent with GR within the \(1\sigma\) level. Note that the \(\gamma\) constraints in Horndeski theories obtained here are not directly comparable to those obtained from existing studies by, \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{MultiDark-Patchy mocks} \\ \hline & \(\langle\sigma_{8}\rangle_{\mathrm{mean}}\) & \(\langle\sigma_{8}\rangle_{\mathrm{std}}\) & \(\langle\sigma_{8}\rangle_{-1\sigma}\) & \(\langle\sigma_{8}\rangle_{+1\sigma}\) & \(\langle\sigma_{8}\rangle_{-95\%}\) & \(\langle\sigma_{8}\rangle_{<95\%}\) \\ \hline GR (\(z_{\mathrm{eff}}=0.38\)) & \(1.204\;(0.691)\pm 0.429\) & \(0.628\pm 0.109\) & \(-0.771\pm 0.184\) & \(0.499\pm 0.263\) & \(0.151\pm 0.269\) & \(2.409\pm 0.568\) \\ GR (\(z_{\mathrm{eff}}=0.61\)) & \(1.004\;(0.615)\pm 0.441\) & \(0.584\pm 0.156\) & \(-0.750\pm 0.198\) & \(0.374\pm 0.260\) & \(0.072\pm 0.176\) & \(2.140\pm 0.719\) \\ \hline \end{tabular} \end{table} Table 18: Same as Table 17, except that a prior with negative \(F_{8}\) and \(F_{1}\) allowed is adopted. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{BOSS DR12} \\ \hline & \((\sigma_{8})_{\mathrm{mean}}\) & \((\sigma_{8})_{\mathrm{std}}\) & \((\sigma_{8})_{-1\sigma}\) & \((\sigma_{8})_{+1\sigma}\) & \((\sigma_{8})_{>95\%}\) & \((\sigma_{8})_{<95\%}\) \\ \hline GR (\(z_{\mathrm{eff}}=0.38\)) & \(0.702\) & \(0.451\) & \(-0.576\) & \(0.221\) & \(0.000\) & \(1.563\) \\ GR (\(z_{\mathrm{eff}}=0.61\)) & \(0.568\) & \(0.404\) & \(-0.547\) & \(0.144\) & \(0.000\) & \(1.323\) \\ \hline GR (\(z_{\mathrm{eff}}=0.38\;[\mathrm{rescaled}]\)) & \(0.692\) & \(0.459\) & \(-0.591\) & \(0.209\) & \(0.000\) & \(1.568\) \\ \hline \end{tabular} \end{table} Table 19: Means, standard deviations, \(\pm 1\sigma\) errors, and \(95\%\) upper and lower bounds for \(\sigma_{8}\) obtained in the joint analysis of the 2PCF and 3PCF using the BOSS DR12 galaxies, assuming GR. Results are shown for two redshifts at \(z=0.38\) and \(0.61\) using the NGC and SGC samples. Also shown are the results at \(z=0.38\) for the analysis using the rescaled covariance matrix (8.3) to give an acceptable \(p\)-value. The \(\chi^{2}_{\mathrm{min}}\) and \(p\) values corresponding to this table are shown in the NGC+SGC column of Tables 9 and 14. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{MultiDark-Patchy mocks} \\ \hline & \(\langle\sigma_{8}\rangle_{\mathrm{mean}}\) & \(\langle\sigma_{8}\rangle_{\mathrm{std}}\) & \(\langle\sigma_{8}\rangle_{-1\sigma}\) & \(\langle\sigma_{8}\rangle_{+1\sigma}\) & \(\langle\sigma_{8}\rangle_{-95\%}\) & \(\langle\sigma_{8}\rangle_{<95\%}\) \\ \hline GR (\(z_{\mathrm{eff}}=0.38\)) & \(0.741\;(0.691)\pm 0.347\) & \(0.476\pm 0.142\) & \(-0.626\pm 0.184\) & \(0.235\pm 0.191\) & \(0.024\pm 0.097\) & \(1.668\pm 0.617\) \\ GR (\(z_{\mathrm{eff}}=0.61\)) & \(0.612\;(0.615)\pm 0.319\) & \(0.415\pm 0.165\) & \(-0.550\pm 0.220\) & \(0.176\pm 0.156\) & \(0.003\pm 0.023\) & \(1.410\pm 0.636\) \\ \hline \end{tabular} \end{table} Table 17: Constraint results for \(\sigma_{8}\) obtained from the \(100\) Patchy mocks. One hundred means, standard deviations, \(\pm 1\sigma\) errors, and \(95\%\) upper and lower bounds are computed from the \(100\) Patchy mocks; then, the means and standard deviations of them are shown. Values in parentheses are the input values for the Patchy mocks. Results are shown for two redshift bins at \(z=0.38\) and \(0.61\) in combination with the NGC and SGC samples. Also shown are the results for the joint analysis with the 3PCF assuming GR. The \(\chi^{2}_{\mathrm{min}}\) and \(p\) values corresponding to this table are shown in the NGC+SGC column of Table 13. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{6}{c}{BOSS DR12} \\ \hline & \((\xi_{\rm f})_{\rm mean}\) & \((\xi_{\rm f})_{\rm std}\) & \((\xi_{\rm f})_{-1\sigma}\) & \((\xi_{\rm f})_{+1\sigma}\) & \((\xi_{\rm f})_{>95\%}\) & \((\xi_{\rm f})_{<95\%}\) \\ \hline Horndeski (\(z_{\rm eff}=0.38\)) & \(0.206\) & \(1.016\) & \(-0.777\) & \(1.176\) & \(-1.906\) & \(2.201\) \\ Horndeski (\(z_{\rm eff}=0.61\)) & \(1.142\) & \(1.671\) & \(-1.431\) & \(1.862\) & \(-2.302\) & \(4.480\) \\ Horndeski (\(z_{\rm eff}=0.38\), 0.61) & \(0.562\) & \(0.818\) & \(-0.703\) & \(0.913\) & \(-1.079\) & \(2.226\) \\ \hline \hline Horndeski (\(z_{\rm eff}=0.38\) [rescaled]) & \(0.202\) & \(1.043\) & \(-0.833\) & \(1.201\) & \(-1.921\) & \(2.287\) \\ Horndeski (\(z_{\rm eff}=0.38\) [rescaled], 0.61) & \(0.485\) & \(0.839\) & \(-0.708\) & \(0.967\) & \(-1.216\) & \(2.175\) \\ \hline \hline DHOST (\(z_{\rm eff}=0.38\)) & \(0.458\) & \(1.013\) & \(-0.790\) & \(1.188\) & \(-1.564\) & \(2.474\) \\ DHOST (\(z_{\rm eff}=0.61\)) & \(1.248\) & \(1.722\) & \(-1.372\) & \(1.981\) & \(-2.318\) & \(4.630\) \\ DHOST (\(z_{\rm eff}=0.38\), 0.61) & \(0.834\) & \(0.829\) & \(-0.686\) & \(0.963\) & \(-0.814\) & \(2.484\) \\ \hline \hline DHOST (\(z_{\rm eff}=0.38\) [rescaled]) & \(0.129\) & \(1.131\) & \(-0.895\) & \(1.078\) & \(-2.096\) & \(2.473\) \\ DHOST (\(z_{\rm eff}=0.38\) [rescaled], 0.61) & \(0.791\) & \(0.830\) & \(-0.691\) & \(0.963\) & \(-0.907\) & \(2.447\) \\ \hline \hline \end{tabular} \end{table} Table 20: Means, standard deviations, \(\pm 1\sigma\) errors, and \(95\%\) upper and lower bounds for \(\xi_{f}\) obtained in the joint analysis of the 2PCF and 3PCF using the BOSS DR12 galaxies, assuming Horndeski or DHOST theories. The results for the two redshifts, \(z=0.38\) and \(0.61\), and their combined case are shown. Both NGC and SGC samples are used for all cases. Also shown are the results at \(z=0.38\) for the analysis using the rescaled covariance matrix (8.3) to give acceptable \(p\)-values. The \(\chi^{2}_{\rm min}\) and \(p\) values corresponding to this table are shown in the NGC+SGC column of Tables 9 and 14. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{6}{c}{BOSS DR12} \\ \hline & \((\xi_{\rm s})_{\rm mean}\) & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm f})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm mean}\) & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm mean}\) & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm mean}\) & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm mean}\) & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm mean}\) & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm mean}\) & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm mean}\) & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm mean}\) & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{>95\%}\) & \((\xi_{\rm s})_{<95\%}\) \\ \hline & \((\xi_{\rm s})_{\rm std}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{+1\sigma}\) & \((\xi_{\rm s})_{-1\sigma}\) & \((\xi_{\rm s})_{-1\sigma}\) \\ \hline & \(( e.g., Gil-Marin et al. (2017b). The reason is that we simultaneously vary the \(\xi_{\rm t}\) parameter characterising the tidal term in the non-linear velocity field in Horndeski theories, while Gil-Marin et al. (2017b) use the bispectrum model assuming GR. ### \(\xi_{\rm t}\) constraints from the BOSS DR12 galaxies in Horndeski and DHOST theories Table 21 summarises the constraint results for the \(\xi_{\rm t}\) parameter, defined as \(\xi_{\rm t}=\ln_{\Omega_{m}}\left(E_{\rm t}\right)=\ln_{\Omega_{m}}\left( \lambda_{\theta}/\kappa_{\delta}\right)\) (3.27), characterising the time evolution of the tidal term in the second-order velocity field. In GR, \(\xi_{\rm t}=15/1144\) (3.28), and if \(\xi_{\rm t}\) deviates from the GR value, it is evidence for Horndeski or DHOST theories. 
Using all the four galaxy samples, at the \(1\sigma\) level, we obtain \[\xi_{\rm t} = 5.151^{+6.112}_{-4.016}\quad\mbox{in Horndeski},\] \[\xi_{\rm t} = 5.414^{+6.007}_{-3.734}\quad\mbox{in DHOST}, \tag{9.7}\] and at the \(95\%\) confidence level, we have \[-2.098<\xi_{\rm t}\left(95\%{\rm CL}\right)\qquad\mbox{in Horndeski},\] \[-1.655<\xi_{\rm t}\left(95\%{\rm CL}\right)\qquad\mbox{in DHOST}. \tag{9.8}\] Eqs. (9.7) and (9.8) are one of the main results in this paper. Since the \(95\%\) upper bounds of \(\xi_{\rm t}\) obtained in this analysis reach the upper bounds set by the flat prior distribution (Section 7.5), we present only the \(95\%\) lower bounds here. All results in Table 21 are consistent with GR within the \(95\%\) level. ### \(\xi_{\rm s}\) constraints from the BOSS DR12 galaxies in DHOST theories Table 22 summarises the constraint results for \(\xi_{\rm s}\), defined as \(\xi_{\rm s}=\ln_{\Omega_{m}}\left(E_{\rm s}\right)=\ln_{\Omega_{m}}\left( \kappa_{\theta}/\kappa_{\delta}\right)\) (3.27), characterising the time evolution of the shift term in the second-order velocity field. In GR or Horndeski theories, \(\xi_{\rm s}=0\) (3.28) because \(\kappa_{\delta}=\kappa_{\theta}=1\). If \(\xi_{\rm s}\neq 0\), then it is the specific signal appearing in DHOST theories. Note that \(\xi_{\rm s}\neq 0\) is a sufficient condition for detecting DHOST theories because there can be DHOST theories satisfying \(\kappa_{\delta}=\kappa_{\theta}\) (see Section 3.4). Using all the four galaxy samples, at the \(1\sigma\) level, we obtain \[\xi_{\rm s}=5.378^{+4.993}_{-2.777}, \tag{9.9}\] and at the \(95\%\) confidence level, we have \[-0.504<\xi_{\rm s}\left(95\%{\rm CL}\right), \tag{9.10}\] where we show only the lower limit of \(\xi_{\rm s}\) for the same reason as for \(\xi_{\rm t}\). Eqs. (9.9) and (9.10) are the other main results of this paper in addition to Eqs. (9.7) and (9.8). All results in Table 22 are consistent with GR within the \(95\%\) level. For all the results obtained from Tables 20, 21, and 22, the standard deviations of \(\xi_{f,\rm s,t}\) obtained by combining the samples \(z=0.38\) and \(z=0.61\) are smaller than those obtained at \(z=0.38\) and \(z=0.61\), respectively. Therefore, future galaxy surveys with more redshift bins should improve our \(\xi_{f,\rm s,t}\) constraints. Similar to the \(f\sigma_{\rm g}\) and \(\sigma_{\rm g}\) results in GR, we confirm that the constraints of \(\xi_{f,\rm s,t}\) are hardly affected by rescaling the covariance matrix by \(15\%-25\%\) at \(z=0.38\). This finding indicates that the results for \(\xi_{f,\rm s,t}\) presented here will not change significantly even if future re-analysis from a better mock simulation gives acceptable \(p\)-values. ### Joint analysis with the monopole 3PCF only This subsection presents the results of a joint analysis of the monopole and quadrupole 2PCFs (\(\xi_{0}\) and \(\xi_{2}\)) with only the monopole 3PCFs (\(\zeta_{000}\) and \(\zeta_{110}\)) and compares them with our main results, revealing the importance of the information in the quadrupole 3PCFs (\(\zeta_{202}\) and \(\zeta_{112}\)). In other words, we compare the results corresponding to Case 2 and Case 7 in Eq. 7.4. For simplicity, we focus here on the case where all four BOSS galaxy samples are used assuming DHOST theories and present a comparison of the results for \(\xi_{f}\), \(\xi_{\rm t}\), and \(\xi_{\rm s}\). 
For Case 2, as in Case 7, we determine the parameter priors according to the method described in Section 7.5, based on the results of the Fisher analysis. The results of the joint analysis with the monopole 3PCFs are as follows: \[(\xi_{\rm t})_{\rm mean}\pm(\xi_{\rm t})_{\rm std}=257.451\pm 145.68,\] \[(\xi_{\rm s})_{\rm mean}\pm(\xi_{\rm s})_{\rm std}=137.396\mp 79.215. \tag{9.11}\] On the other hand, adding the quadrupole 3PCFs presents \((\xi_{\rm t})_{\rm std}=4.211\) (Table 21) and \((\xi_{\rm s})_{\rm std}=3.438\) (Table 22). The addition of the quadrupole 3PCFs reduces the values of \((\xi_{\rm t})_{\rm std}\) and \((\xi_{\rm s})_{\rm std}\) by a factor of \(\sim 35\) and \(\sim 20\), respectively. This improvement is consistent with the Fisher analysis result in Section 7 (see Eq. (7.6) and Table 5). Therefore, we conclude that the quadrupole component of the 3PCF should always be used to constrain \(\xi_{\rm t}\) and \(\xi_{\rm s}\). Finally, the same should hold for testing other modified gravity theories through non-linear velocity fields. ### Consistency check with the Fisher analysis This subsection discusses the consistency between the Fisher analysis results in Section 7 and our final results from MCMC in this section. For this purpose, We compare the standard deviation of a parameter \(\theta\) computed from the Fisher analysis, \(\sigma_{\rm Fisher}(\theta)\), with that \begin{table} \begin{tabular}{l c c} \hline \hline \multicolumn{3}{c}{DHOST} \\ \hline & \((\xi_{f})_{\rm std}\) & \(\sigma_{\rm Fisher}(\xi_{f})\) \\ \hline \(z_{\rm eff}=0.38\) & \(1.131\) & \(0.967\) \\ \(z_{\rm eff}=0.61\) & \(1.722\) & \(1.782\) \\ \(z_{\rm eff}=0.38\), \(0.61\) & \(0.830\) & \(0.850\) \\ \hline & \((\xi_{\rm t})_{\rm std}\) & \(\sigma_{\rm Fisher}(\xi_{\rm t})\) \\ \hline \(z_{\rm eff}=0.38\) & \(4.732\) & \(3.577\) \\ \(z_{\rm eff}=0.61\) & \(7.387\) & \(6.834\) \\ \(z_{\rm eff}=0.38\), \(0.61\) & \(4.211\) & \(3.169\) \\ \hline & \((\xi_{\rm s})_{\rm std}\) & \(\sigma_{\rm Fisher}(\xi_{\rm s})\) \\ \hline \(z_{\rm eff}=0.38\) & \(3.519\) & \(3.147\) \\ \(z_{\rm eff}=0.61\) & \(6.841\) & \(5.906\) \\ \(z_{\rm eff}=0.38\), \(0.61\) & \(3.438\) & \(2.778\) \\ \hline \end{tabular} \end{table} Table 23: Comparison of the standard deviations, \(\sigma_{\rm Fisher}(\theta)\) and \((\theta)_{\rm std}\), obtained from the Fisher analysis and MCMC for \(\theta=\xi_{f},\ \xi_{\rm t},\ \xi_{\rm s}\). The results are shown for each redshift of \(z=0.38\) and \(z=0.61\) and the combined case of the two redshifts. In all cases shown here, the NGC and SGC samples are used; the MCMC results at \(z=0.38\) use the rescaled covariance matrix (Section 8.5). All the values summarised here are those already given in Tables 7, 20, 21, and 22. When combining the results for the different galaxy samples in Table 7, we use the standard error composition formula, assuming that each galaxy sample is independent. Figure 21: Marginalized two- and one-dimensional posteriors of \(\xi_{f}\), \(\xi_{\rm t}\), and \(\xi_{\rm s}\) for BOSS DR12. DHOST theories (red) vary these all three parameters, while Horndeski theories (blue) fix \(\xi_{\rm t}\) to \(\xi_{\rm s}=0\). The contours indicate \(68.27\%\) and \(95.45\%\) confidence levels. Asterisks indicate predictions by GR: \(\xi_{f}=6/11\), \(\xi_{\rm t}=15/1144\), and \(\xi_{\rm s}=0\). The NGC and SGC samples at \(z=0.38\) and \(0.61\) are combined to obtain this result. The rescaled 3PCF covariance matrix (Section 8.5) at \(z=0.38\) is used. 
Figure 20: Marginalized two- and one-dimensional posteriors of \(f\sigma_{8}\) and \(\sigma_{8}\) for BOSS DR12. The contours indicate \(68.27\%\) and \(95.45\%\) confidence levels. Asterisks indicate predictions by Planck. The left and right panels show the cases at \(z=0.38\) and \(z=0.61\). The NGC and SGC samples are always combined to obtain this result. The rescaled 3PCF covariance matrix (Section 8.5) at \(z=0.38\) is used. estimated from MCMC, \((\theta)_{\rm std}\), where the parameters of interest are \(\theta=\xi_{f}\), \(\xi_{t}\), \(\xi_{t}\), which are the main target of this paper. Table 23 summarises the cases for each redshift bin of \(z=0.38\) and \(z=0.61\) and for using both redshift bins, assuming DHOST theories. The values shown in this table are given from Tables 7, 20, 21, and 22. When combining the results for the different galaxy samples in Table 7, we use the standard error combination formula, assuming that each galaxy sample is independent. Table 23 shows that the MCMC results satisfy \((\theta)_{\rm std}\gtrsim\sigma_{\rm Fisher}(\theta)\), indicating that the MCMC results are consistent with the Fisher analysis results, as expected. This result reinforces the validity of our main results shown in Tables 20, 21, and 22. ### Comments on bias effects on shift terms DHOST theories change the shift term of the non-linear density fluctuation from GR, which may introduce a new bias effect in the shift term, i.e., the shift bias parameter. Since \(E_{f,s,t}\) are the parameters that cancel the \(\sigma_{8}\)-dependence using the coefficients of the shift term of the density fluctuation, when the shift bias appears, \(E_{f,s,t}\) will also be contaminated by the bias effect. Furthermore, the shift bias may induce bias effects in linear and non-linear velocity fields. In such cases, we cannot use the parameterisation \(E_{f,s,t}=\Omega_{\rm R}^{2}/\sigma^{,s}\) adopted in this paper to characterise the time dependence of \(E_{f,s,t}\) because the time dependence of the bias parameter is uncertain. If we assume the presence of the shift bias effect, we propose simultaneously constraining all the six parameters \((F_{\rm g}\sigma_{8})\), \((F_{\rm g}\sigma_{8})\), \((F_{\rm t}\sigma_{8})\), \((G_{\rm g}\sigma_{8})\), \((G_{\rm g}\sigma_{8})\) and \((G_{\rm t}\sigma_{8})\) (3.15) that characterise the growth, shift, and tidal terms in the density and velocity fields in each galaxy sample as a more general test of modified gravity theories. In such an analysis, we should remove the relation \(G_{\rm g}=G_{\rm s}-(2/3)G_{\rm t}\) imposed in DHOST theories. In particular, the \(E_{\rm s}\) parameter, which represents the ratio of the coefficients of the shift terms of the non-linear density and velocity fields: \(E_{\rm s}=(G_{\rm s}\sigma_{8})/(F_{\rm s}\sigma_{8})\), is always \(E_{\rm s}=1\) in GR and Horndeski theories. Therefore, testing whether \(E_{\rm s}=1\) in each galaxy sample verifies the theory of varying the shift term, such as DHOST-like theories. In other words, it should provide a means to test the LSS consistency relation, which DHOST-like theories violate (Section 1), using the galaxy 3PCF (or bispectrum). ## 10 Conclusions This paper presents a joint analysis of the anisotropic two-point and three-point correlation functions measured from the publicly available BOSS DR12 galaxy data to test cosmological modified gravity theories. This paper has two important implications. 
First, it is the first work to extract cosmological information from actual galaxy data using the anisotropic component of the galaxy three-point correlation function induced by the RSD effect. Second, this analysis is the first attempt to constrain the non-linear effects of modified gravity theories from the galaxy three-point statistics. We consider DHOST theories and their subclass, Horndeski theories, which are the candidates for modified gravity theories (see Section 2.1). They are quite general theoretical frameworks of scalar-tensor theories. Since the time evolution equation of the linear density fluctuations in these theories is scale-independent (2.6), the difference with GR appears only in the linear growth rate \(f\) in the linear theory (Hirano et al., 2019). On the other hand, the non-linear gravitational effect causes a difference in the scale-dependence of the density fluctuation, which allows us to examine the deviation from GR more clearly. Specifically, Horndeski theories change the tidal term of the second-order density fluctuation from GR, while DHOST theories change both the shift and tidal terms (2.10 and 3.16) (Hirano et al., 2018). However, since non-linear bias parameters contaminate the density fluctuations, Yamauchi and Sugiyama (2021) have pointed out that one should investigate supposedly unbiased non-linear velocity fields induced by the RSD effect (see Section 3.4 for a review). Specifically, they have suggested that one should constrain the parameters \(\xi_{\rm t}\) and \(\xi_{\rm s}\), which characterise the time evolution of the tidal and shift terms of the second-order velocity field: \(\xi_{\rm t}=15/1144\) in GR and \(\xi_{\rm s}=0\) in GR and Horndeski theories. Therefore, if \(\xi_{\rm s}\neq 0\), then it is the signal specific to DHOST theories; they have also pointed out that in DHOST theories, the parameter \(\gamma=\ln_{\Omega_{m}}(f)\), which characterises the time dependence of the linear growth rate \(f\), is extended to \(\xi_{f}=\ln_{\Omega_{m}}(f/\kappa)\) with \(\kappa\) being the time-dependent function appearing in the shift term of the density fluctuation. To this end, we test DHOST and Horndeski theories by constraining these parameters \(\xi_{f}\), \(\xi_{\rm t}\), and \(\xi_{\rm s}\) using the joint analysis method of the anisotropic 2PCF and 3PCF, established by Sugiyama et al. (2021). The following is a summary of the details of the analysis methodology and the findings obtained. 1. Following Sugiyama et al. (2019), we apply the TripoSH decomposition method to the 3PCF to extract information about the anisotropic, i.e., _quadrupole_, component of the 3PCF (see Sections 3.1 and 4.2). To simplify the data analysis, we then use only two monopole components (\(\zeta_{000}\) and \(\zeta_{110}\)) and two quadrupole components (\(\zeta_{020}\) and \(\zeta_{112}\)) from the decomposed 3PCF. For the 2PCF, we adopt the commonly used Legendre decomposition method and use the monopole and quadrupole components: i.e., \(\xi_{0}\) and \(\xi_{2}\). It is worth noting that \(\zeta_{020}\) includes only the \(M=0\) mode that appears in Scoccimarro et al. (1999)'s decomposition formalism, while \(\zeta_{112}\) includes \(M\neq 0\) modes in addition to the \(M=0\) mode. Furthermore, the TripoSH-decomposed 3PCF allows a quantitative evaluation and detailed study of the survey window effect present in the measured 3PCFs (see Section 4.3). 
Thus, this work is the first to extract information on the \(M\neq 0\) modes from actual galaxy data, taking into account the window effect. 2. We only use data at large scales of \(80\,h^{-1}\,{\rm Mpc}\leq r\leq 150\,h^{-1}\,{\rm Mpc}\), where higher-order non-linear corrections, called loop corrections, are not expected to contribute much to the 2PCF and 3PCF. In order to test modified gravity theories consistently using smaller scales, it is necessary to construct a model that includes the non-linear effects of modified gravity theories so that they are also included in the loop corrections. To our knowledge, only one such analysis has been performed so far for the case of the power spectrum in \(f(R)\) gravity (Song et al., 2015). However, it is known that various uncertainties arise in the non-linear power spectrum in DHOST theories, such as IR cancellation breaking (Crisostomi et al., 2020; Lewandowski, 2020) and UV divergence (Hirano et al., 2020). These theoretical uncertainties should also appear in the bispectrum. Therefore, focusing only on large scales is necessary to remove the theoretical uncertainties and safely constrain the non-linear effects of modified gravity theories. Our analysis is thus the second example of a consistent analysis incorporating the non-linear effects of modified gravity from spectroscopic galaxy surveys, and the first to use the galaxy three-point statistic. 3. As a theoretical model for the 3PCF, we use the IR-resummed model (3.12) proposed by Sugiyama et al. (2021) (see Section 3.2). This model can describe the BAO damping effect while keeping the shape of the 3PCF in the tree-level solution. For this model, we have investigated how the three decomposed non-linear effects, i.e., the growth, shift, and tidal terms, affect the 3PCF multipoles (see Figures 1 and 2 in Section 3.3). For example, in the quadrupole components (\(\zeta_{202}\) and \(\zeta_{112}\)), the dominant term is the product of the linear density fluctuation and the linear velocity field that appears during the coordinate transformation from real space to redshift space; otherwise, the non-linear effects of the density and velocity fields contribute to the quadrupole component to the same extent. Figures 12-19 in Section 9.1 show the \(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\) measured from the four BOSS galaxy samples and the corresponding theoretical models calculated using the best-fit parameters. * We have used the \(2048\) publicly available Patchy mocks to compute the covariance matrices of the 2PCF and 3PCF in Section 5. In our analysis, we ensure that the number of data bins in the 2PCF and 3PCF is sufficiently smaller than the number of the \(2048\) mocks. In particular, the parameter \(M_{2}\) (5.5), which represents the impact of a finite number of mocks on the final parameter error, is at most \(M_{2}\sim 1.1\) (see Section 6.4). * To understand the nature of the covariance matrix, we have calculated the cumulative \(\mathrm{S/N}\) of the 2PCF and the 3PCF in Section 5.4. The results show that the cumulative \(\mathrm{S/N}\) of the 3PCF has different characteristics from that of the 2PCF. In the case of the 2PCF, the galaxy sample at \(z=0.61\) with a larger volume has a smaller \(\mathrm{S/N}\) at \(z=0.61\). On the other hand, for the 3PCF, the \(\mathrm{S/N}\) at \(z=0.38\) is comparable to the \(\mathrm{S/N}\) at \(z=0.61\). 
Therefore, the difference in survey volume cannot explain the relationship between the \(\mathrm{S/N}\) of the 3PCF at \(z=0.38\) and \(0.61\). A possible explanation for this 3PCF \(\mathrm{S/N}\) behaviour is that the covariance matrix of the 3PCF depends strongly on the number density of the galaxies (see Sugiyama et al., 2020): the BOSS sample at \(z=0.38\) has a higher number density than the sample at \(z=0.61\), even with a smaller survey volume (Table 1). We interpret this higher number density as why the \(\mathrm{S/N}\) at \(z=0.38\) is as high as that at \(z=0.61\). * We have investigated the extent to which higher-order terms in the TripoSH decomposition of the 3PCF contain cosmological information by Fisher analysis (see Section 7.2). The results show that \(\zeta_{202}\) is the main cosmological information in the quadrupole 3PCF, while other information is contained in the higher-order term \(\zeta_{112}\) in addition to \(\zeta_{202}\). Since \(\zeta_{112}\) contains the \(M\neq 0\) modes in Scoccimarro et al. (1999)'s decomposition formalism but not in \(\zeta_{202}\), this result indicates the importance of the \(M\neq 0\) modes. * In Section 8, we have reported that at large scales (\(\geq 80\,h^{-1}\,\mathrm{Mpc}\)), there can be statistically significant differences between the 3PCFs measured from the BOSS galaxies and the corresponding theoretical models, regardless of whether we assume GR, Horndeski or DHOST theories. For example, the \(p\)-value obtained from the SGC sample at \(z=0.38\) is less than \(0.01\), and the \(p\)-value obtained from the combined sample of the four BOSS samples is \(0.001\) (see Section 8.1). This result means that the discrepancies between the galaxy data and the theoretical models cannot be explained within the framework of scalar-tensor theory, even if they are due to unknown physical effects. Other results show that the discrepancy is mainly due to the monopole component of the 3PCF rather than the quadrupole component (see Section 8.2) and that this discrepancy cannot be explained even if the prior distribution of the parameters is changed (see Section 8.3). Finally, we have repeated the same analysis for the \(100\) Patchy mocks as for the BOSS sample in Section 8.4. The results show a statistically significant difference of more than \(5\sigma\) between the \(p\)-values of the Patchy mocks and the BOSS galaxies. Therefore, the statistical variability of the Patchy mock galaxies cannot explain the low \(p\)-values (\(p\sim 0.001\)) obtained from the BOSS galaxies. * In this paper, we assume that the discrepancy between the BOSS galaxy sample and the theoretical model is due to an inappropriate 3PCF covariance matrix computed from the Patch mocks. We then take a conservative approach by artificially rescaling the 3PCF covariance matrix at \(z=0.38\) by \(15\%\) for NGC and \(25\%\) for SGC, resulting in acceptable \(p\)-values (see Section 8.5). To confirm the validity of this method, we have presented in Section 9 the results of constraining the parameters of interest with and without rescaling the covariance matrix and have confirmed that there is no significant difference in the final results obtained in these two cases. We interpret this result as being due to a more significant degeneracy effect between the parameters than the \(\sim 20\%\) difference in the 3PCF covariance matrix. 
Therefore, we do not expect that calculating the covariance matrix from simulation data that better reproduces the distribution of the BOSS galaxies will significantly change the results of the present paper. * We have constrained \(f\sigma_{8}\) from the BOSS galaxies assuming GR in Section 9.3. There, we have shown that adding isotropic and anisotropic 3PCF components (\(\zeta_{000}\), \(\zeta_{110}\), \(\zeta_{202}\), and \(\zeta_{112}\)) does little to improve the results compared to the 2PCF-only analysis. Nevertheless, the analysis using the Patchy mocks shows that the 3PCF information does help to reduce the bias of the mean value of \(f\sigma_{8}\) (see Section 9.2). Finally, we obtain \(f\sigma_{8}=0.549^{+0.097}_{-0.122}\) at \(z=0.38\) and \(f\sigma_{8}=0.394^{+0.088}_{-0.099}\) at \(z=0.61\) in the joint analysis of the anisotropic 2PCF and 3PCF assuming GR (9.2). These \(f\sigma_{8}\) results are not as competitive as existing constraints (e.g., Alam et al., 2017; Ivanov et al., 2020; Lange et al., 2022; Kobayashi et al., 2022) because we only use large-scale information (\(r\geq 80\,h^{-1}\,\mathrm{Mpc}\)). One may think that adding the 3PCF information does not improve the \(f\sigma_{8}\) results due to the focus on large scales only (\(r\geq 80\,h^{-1}\,\mathrm{Mpc}\)). To test this concern, we have performed a Fisher analysis that includes small scales (\(30\,h^{-1}\,\mathrm{Mpc}\leq r\leq 150\,h^{-1}\,\mathrm{Mpc}\)) and find that even if we extend the used scales to \(30\,h^{-1}\,\mathrm{Mpc}\), there is no improvement in the \(f\sigma_{8}\) results (see Section 7.4). However, note that we use the IR-resummed tree-level model of the 3PCF in this Fisher analysis. Therefore, if we use a theoretical model with various loop corrections applicable down to small scales, parameter degeneracy may break, and it may still be possible to obtain improved \(f\sigma_{8}\) constraints through a joint analysis of the 2PCF and 3PCF. * We have constrained \(\sigma_{8}\) from the BOSS galaxies assuming GR in Section 9.6. Thus, while the 3PCF information does not improve the \(f\sigma_{8}\) constraints, it helps to break the degeneracy between parameters by providing information on \(\sigma_{8}\): e.g., it allows us to constrain \(f\). We have obtained \(\sigma_{8}=0.692^{+0.209}_{-0.591}\) at \(z=0.38\) and \(\sigma_{8}=0.568^{+0.144}_{-0.547}\) at \(z=0.61\) at the \(1\sigma\) level. These results are consistent with \(\sigma_{8}=0.681\), \(0.606\) at \(z=0.38\), \(0.61\) calculated from the cosmological parameters in a flat \(\Lambda\)CDM model given by Planck 2018. The ratio of the standard deviation to the mean for \(\sigma_{8}\) is \((\sigma_{8})_{\mathrm{std}}/(\sigma_{8})_{\mathrm{mean}}=0.66\) at \(z=0.38\) and \(0.71\) at \(z=0.61\), indicating that the galaxy sample at \(z=0.38\) provides a better constraint on \(\sigma_{8}\). This result can be attributed to the higher number density of the sample at \(z=0.38\) compared to that at \(z=0.61\), similar to the argument of the cumulative \(\mathrm{S/N}\) in Section 5.4. * Our main results, the constraints on the \(\xi_{f}\), \(\xi_{\mathrm{t}}\), and \(\xi_{\mathrm{s}}\) parameters in DHOST theories, are summarised in Sections 9.7, 9.8, and 9.9. 
There, we obtain \(\xi_{f}=0.791^{+0.963}_{-0.691}\) (9.5), \(\xi_{\rm t}=5.414^{+6.007}_{-3.734}\) (9.7), and \(\xi_{\rm s}=5.378^{+4.993}_{-2.777}\) (9.9) at the \(1\sigma\) level; we also have \(-0.907<\xi_{f}<2.447\) (9.6), \(-1.655<\xi_{\rm t}\) (9.8), and \(-0.504<\xi_{\rm s}\) (9.10) at the \(95\%\) confidence level. Since we cannot detect the signal of the tidal and shift terms in the second-order velocity field in the present analysis, we can only present the \(95\%\) lower bounds of the \(\xi_{\rm t}\) and \(\xi_{\rm s}\) parameters. These results are consistent with the GR predictions \(\xi_{f}=\gamma=6/11\), \(\xi_{\rm t}=15/1144\), and \(\xi_{\rm s}=0\) (see Figure 21). Moreover, we have checked the consistency of the estimated results from the BOSS galaxy sample with the Fisher analysis for the constraints on the \(\xi_{f,{\rm t},{\rm s}}\) parameters in DHOST theories in Section 9.11. In Horndeski theories, we obtain \(\xi_{f}=\gamma=0.485^{+0.967}_{-0.708}\) and \(\xi_{\rm t}=5.151^{+6.112}_{-4.016}\) at the \(1\sigma\) level, and \(-1.216<\gamma<2.175\) and \(-2.098<\xi_{\rm t}\) at the \(95\%\) confidence level. The \(\gamma\) constraint in Horndeski theories obtained here is not directly comparable to those obtained from existing studies by,e.g., Gil-Marin et al.(2017) because we simultaneously vary the \(\xi_{\rm t}\) parameter in Horndeski theories. We have shown that the anisotropic component of the 3PCF contributes significantly to the constraints on the shape of the non-linear velocity field in Section 9.10. In particular, the constraints on the parameters \(\xi_{\rm t}\) and \(\xi_{\rm s}\) are \(\sim 35\) and \(\sim 20\) times better when the anisotropic component is added than when only the isotropic component is considered. This result strongly supports the main claim of this paper that the anisotropic three-point statistics should be considered to test the non-linearity of modified gravity theories. Below is a summary of some of the concerns and future enhancements to the results of this paper. * In order to encourage the future development of the anisotropic 3PCF analysis, we comment on the situation beyond the assumptions used to derive the non-linear effects of DHOST theories that we focus on in this paper (see Section 2.2). First, our analysis can be applied to other modified gravity theories, such as \(f(R)\) gravity models and brane-world models. In addition, it should also be possible to constrain effects such as the CDM-baryon relative velocity and massive neutrinos, which give rise to characteristic non-linear behaviour. The calculations of DHOST theories in this paper assume minimal coupling between the metric field and the scalar field, Gaussianity of the initial conditions, and the quasi-static limit, but we need additional correction terms if these assumptions are removed. In addition, since DHOST theories modify the shift term from GR, we cannot exclude the possibility of shift bias, which we do not consider in a \(\Lambda\)CDM model. In the presence of shift bias, we cannot use the \(\xi_{\rm s}\) and \(\xi_{\rm t}\) parameters to constrain DHOST theories, but we expect the \(E_{\rm s}\) and \(E_{\rm t}\) parameters constrained at each redshift to remain valid (Section 9.12). * We also comment on some improvements in our analysis of the anisotropic 3PCF (see Section 3.5). 
First, as more mock catalogues are created in the future, increasing the number of multipoles in the 3PCF to be considered should improve the results of this work (e.g., Byun & Krause 2022). Second, as shown in Figure 10, we can dramatically improve the current parameter constraints by using the theoretical model of the 3PCF, which is applicable to small scales (see Section 7.4). Third, although we have used the shape of the linear power spectrum calculated by an \(\Lambda\)CDM model in a high-\(z\) region in this work, it needs to be calculated in the framework of DHOST theories in the future (e.g., Hiramatsu & Yamauchi 2020). Fourth, we have calculated the Gaussian function describing the damping effect of the BAO signal for a \(\Lambda\)CDM model, but we also need to constrain this function itself. Finally, we have neglected the Alcock-Paczynski (AP) effect in this work; the analysis method of the anisotropic 3PCF that includes the AP effect has been established by Sugiyama et al. (2021) using the Patchy mock and should be straightforward to apply to actual galaxy data. We hope that addressing these issues will further improve our results. Finally, in Section A we provide the software package that can reproduce all the results obtained in this paper, HITOMI. The aim of HITOM is to make available all the programs we have used to complete the anisotropic 3PCF analysis, from downloading the SDSS DR12 galaxy data, measuring the 2PCFs and 3PCFs, computing the theoretical models, calculating the covariance matrices, the window function corrections, MCMC analysis, and producing figures and tables. This makes it easier for any user to see how partial improvements to HITOMI, e.g. improved 3PCF model calculations, feed through to the final parameter constraints. Furthermore, by replacing the BOSS galaxy data used in HITOMI, our analysis can be easily applied to future galaxy surveys such as DESI (DESI Collaboration et al. 2016), Euclid (Laureijs et al. 2011), and PFS (Takada et al. 2014). ## Acknowledgements NSS acknowledges financial support from JSPS KAKENHI Grant Number 19K14703. Numerical computations were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan. The work of SH was supported by JSPS KAKENHI Grants No. JP21H01080. The work of TK was supported by JSPS KAKENHI Grant No. JP20K03936 and MEXT-JSPS Grant-in-Aid for Transformative Research Areas (A) "Extreme Universe", No. JP21H05182 and No. JP21H05189. The work of DY was supported in part by JSPS KAKENHI Grants No. 19H01891, No. 22K03627. SS acknowledges the support for this work from NSF-2219212. SS is supported in part by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. H-JS is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under DE-SC0019091 and DE-SC0023241. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement 853291). FB is a University Research Fellow.
2306.12434
Using Internal Bar Strength as a Key Indicator for Trading Country ETFs
This report aims to investigate the effectiveness of using internal bar strength (IBS) as a key indicator for trading country exchange-traded funds (ETFs). The study uses a quantitative approach to analyze historical price data for a bucket of country ETFs over a period of 10 years and uses the idea of Mean Reversion to create a profitable trading strategy. Our findings suggest that IBS can be a useful technical indicator for predicting short-term price movements in this basket of ETFs.
Aditya Pandey, Kunal Joshi
2023-06-14T15:59:21Z
http://arxiv.org/abs/2306.12434v1
# Using Internal Bar Strength as a Key Indicator for Trading Country ETFs ###### Abstract In this report, we aim to investigate the effectiveness of using internal bar strength (IBS) as a key indicator for trading country exchange-traded funds (ETFs). The study uses a quantitative approach to analyze historical price data for a bucket of country ETFs over a period of 10 years and use the idea of Mean Reversion to create a profitable trading strategy. Our findings suggest that IBS can be a useful technical indicator for predicting short-term price movements in this basket of ETFs. **Keywords:** Systematic Trading, Internal Bar Strength, Exchange Traded Funds (ETFs), Quantitative Analysis, Country Selection Strategies, Mean Reversion Contributing authors: [email protected]; [email protected]; \({}^{\dagger}\)These authors contributed equally to this work. ## 1 Introduction The Internal Bar Strength Indicator measures the position current trading session's close relative to the session's high-low range. It is generally used as an indicator of mean-reversion. Pagonidis [1] proposed that stocks that close with an IBS below 0.2, will rise in price the next day, while stocks that close with an IBS above 0.8, will often decline in value in the following session. These findings were on daily data of various ETFs covering the period from the inception for each ETF to 5/12/2013. We tested this on more recent data to find similar results. Our approach is similar to the above mentioned one in which if the IBS of a country ETF is low, it suggests that the price has closed near the low of its daily range, which may indicate that the market is oversold and due for a bounce back up. This could be an opportunity to enter a long position with the expectation that the price will rise. Conversely, if the IBS of a country ETF is high, it suggests that the price has closed near the high of its daily range, which may indicate that the market is oversought and due for a correction or pullback. This could be an opportunity to enter a short position with the expectation that the price will fall. We present a strategy based on this that aims to be always in and hedge across multiple ETFs. We also present modifications to this strategy that show how changes in holding periods and number of ETFs invested in affects the performance of the strategy. ## 2 Data and Methodology ### Data Our analysis uses ETF data from 2009/1/1 to 2019/12/31. ETF data is sourced from Yahoo Finance [2]. The descriptions are in **Table 1.** ### Methodology IBS over \(n\) days is calculated as: \[IBS_{n}=\frac{Close-Low_{n}}{High_{n}-Low_{n}} \tag{1}\] where the \(Low_{n}\) and \(High_{n}\) could be calculated over \(n\)-days (\(n=1\) being the default) \(IBS\) takes values from 0 (close is lowest) to 1 (close is highest). We analysed our data in Python using Numpy and Pandas. Charts were created using Seaborn. 
\begin{table} \begin{tabular}{|c|c|c|} \hline **Country** & **Ticker** & **ETF** \\ \hline \hline India & PIN & Invesco India ETF \\ \hline China & FXI & iShares MSCI China Large-Cap ETF \\ \hline South Korea & EWI & iShares MSCI Italy ETF \\ \hline Mexico & EWW & iShares MSCI Mexico ETF \\ \hline South Africa & EZA & iShares MSCI South Africa ETF \\ \hline Taiwan & EWT & iShares MSCI Taiwan ETF \\ \hline Japan & EWJ & iShares MSCI Japan ETF \\ \hline USA & IVV & iShares Core S\&P 500 ETF \\ \hline UK & EWU & iShares MSCI United Kingdom ETF \\ \hline EU & EZU & iShares MSCI Eurozone ETF \\ \hline Australia & EWA & iShares MSCI Australia ETF \\ \hline Singapore & EWS & iShares MSCI Singapore ETF \\ \hline Canada & EWC & iShares MSCI Canada ETF \\ \hline Israel & EIS & iShares MSCI Israel ETF \\ \hline Brazil & EWZ & iShares MSCI Brazil ETF \\ \hline \end{tabular} \end{table} Table 1: ETFs ## 3 The Strategy The threshold strategy is used on individual ETFs and has varying performance. We show this performance in **Table 2**. However, the time in for these strategies is low which hurts the Sharpe Ratio. Our strategy takes a basket of multiple ETFs and considers daily IBS values for all. We also include a the probabilities of getting a positive return on following a threshold bases strategy in **Table 3**. These probabilities are similar to the ones seen in [1] for the probability of an Up day split by IBS quintiles. We see that the distribution of minimum and maximum IBS values is heavy-tailed. The minimum values skew towards 0 and the maximum values skew towards 1. \begin{table} \begin{tabular}{|c|c|c|} \hline **Strategy/ETF** & **Sharpe Ratio** & **Time In** \\ \hline IBS Min-Max & **2.907858** & **100\%** \\ \hline PIN & 2.166918 & 54.463\% \\ \hline EWJ & 1.558597 & 41.489\% \\ \hline EWI & 1.475513 & 46.91\% \\ \hline EIS & 1.259877 & 47.488\% \\ \hline EZA & 1.080578 & 45.573\% \\ \hline FXI & 1.051496 & 39.357\% \\ \hline EWA & 0.974023 & 45.356\% \\ \hline EWT & 0.956515 & 42.031\% \\ \hline EZU & 0.708747 & 48.103\% \\ \hline EWS & 0.665158 & 43.188\% \\ \hline EWZ & 0.48236 & 44.958\% \\ \hline EWU & 0.288471 & 43.802\% \\ \hline IVV & 0.200902 & 48.066\% \\ \hline EWW & -0.192655 & 46.765\% \\ \hline EWC & -0.451279 & 44.814\% \\ \hline \end{tabular} \end{table} Table 2: Sharpe Ratios of Min-Max IBS Strategy and Single ETF Threshold Strategy \begin{table} \begin{tabular}{|c|c|c|} \hline **Ticker** & **Long on IBS \textless{}0.2** & **Short on IBS \textless{}0.8** \\ \hline EWJ & 0.606061 & 0.514364 \\ \hline EIS & 0.587719 & 0.513981 \\ \hline PIN & 0.580692 & 0.578240 \\ \hline EWT & 0.572383 & 0.502770 \\ \hline FXI & 0.567686 & 0.523052 \\ \hline IVV & 0.565502 & 0.460481 \\ \hline EZU & 0.552209 & 0.476647 \\ \hline EWS & 0.550308 & 0.514563 \\ \hline EWI & 0.548000 & 0.511222 \\ \hline EWZ & 0.543672 & 0.508006 \\ \hline EZA & 0.540835 & 0.509859 \\ \hline \end{tabular} \end{table} Table 3: Probabilities of Positive Returns We then implement a **min-max** strategy which goes **long on the ETF which has the minimum IBS** of that day and goes **short on the ETF which has the maximum IBS** of that day. We enter at close of the current day and exit at the close of the next day. This gives us much better results overall. We have included a snippet of our strategy performance when we randomize the basket size and choice of ETFs in the basket. In our opinion, choosing from a basket of country ETFs based on their IBS offers advantages compared to just using a single pair of ETFs. 
Using a basket of ETFs can provide diversification benefits by spreading the risk across multiple countries, rather than constantly re-using the same pair. This helps reduce the overall risk. Using a basket of ETFs also allows for more opportunities for trading. With multiple countries in the basket, there are more frequent opportunities for trades based on IBS signals as there is a much higher probability of having the \(min(IBS)\) and \(max(IBS)\) of the ETFs in the desired thresholds or having these values sufficiently low/high. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **ETF Basket** & **SR** & **Long Only SR** & **Short Only SR** & **N (No. of ETFs)** \\ \hline india, tw, can, israel, uk, spore, aus & 3.904 & 1.705 & 0.839 & 2 \\ \hline india, tw, can, israel, uk, spore, aus & 3.733 & 1.997 & 1.246 & 1 \\ \hline israel, tw, japan, spore, sk, india & 3.424 & 1.541 & 0.968 & 2 \\ \hline spore, india, usa, usa, israel, uk & 3.354 & 1.562 & 0.621 & 2 \\ \hline uk, india, sk, spore, china, eu & 3.311 & 1.475 & 1.200 & 1 \\ \hline china, uk, india, sk, spore, can & 3.279 & 1.472 & 1.234 & 1 \\ \hline china, uk, india, sk, spore, can & 3.201 & 1.310 & 0.772 & 2 \\ \hline uk, india, sk, spore, china, eu & 3.184 & 1.257 & 0.791 & 2 \\ \hline spore, india, usa, aus, israel, uk & 3.153 & 1.742 & 1.015 & 1 \\ \hline brazil, aus, tw, china, uk, sa, india & 3.089 & 1.338 & 0.724 & 2 \\ \hline \end{tabular} \end{table} Table 4: Best Performing Buckets Figure 1: Distribution of Min and Max IBS values ## 4 Results ### Comparing the performance to buy and hold strategy The strategy significantly outperforms a buy and hold strategy for all ETFs that we considered. **Fig. 2**: Equity graphs comparing MinMax Strategy to Buy-N-Hold Strategies ### Comparing the performance across basket sizes We ran our strategy on random combinations ranging from size 2 to size 14 of the ETFs in **Table 1**. We see a trend of increasing Sharpe ratios with an increase in basket size. This can be alluded to a higher likelihood of the minimum and maximum IBS values being closer to 0 and 1 respectively (i.e. if. **Fig. 3**: MinMax Strategy performance across basket sizes ### Comparing the performance between long only and short only strategies We see that long only strategies have a relatively higher Sharpe Ratio than short only strategies (see **Table 4**) and generally tend to perform better equity graphs. ### Comparing the performance across multi-ETF trades We modify our strategy slightly by sorting the IBS values and pick the **top-N** and **bottom-N** ETFs to go short and long on respectively. An increase in **N**, increases the Sharpe ratio. As we go long (or short) on multiple ETFs we hedge against an ETF that performs against expectations. ### Comparing the performance across multi-day holding periods All the previous strategies were over a 1-day holding period. The strategy below varies the holding period between 1 and 4 days. We see a general drop in performance as holding period increases. The 1 day IBS indicator does not seem to have an effect beyond horizons of one day. Figure 6: Strategy Performance across number of ETFs held Figure 7: Strategy Performance across Holding Periods and ETFs held ### Comparing the performance across multi-day IBS calculations We tried to calculated IBS over a 2-day period. This strategy also sees a drop in performance compared to a 1-day IBS metric. ### Comparing Close-to-Close MinMax with Open-to-Open MinMax All the strategies so far have been Close-to-Close trades. 
We compare this with an Open-to-Open strategy to see the importance of getting in at Close. As seen from **Table 5** the Sharpe Ratios for a Close-to-Close strategy are not good underlying the **importance of getting in at close**. ### Comparing MinMax strategy to a Threshold based strategy over ETF baskets We go back to the threshold strategy. Instead of implementing it on a single ETF, we implement it over our ETF baskets. It goes long and short on ETFs if their respective IBS values exceed predefined thresholds. **Note**: The strategy does not enter a trade unless both long and short thresholds are crossed. We compare the above strategy with our MinMax strategy across holding periods as well as multi-ETF trades. The threshold strategies universally have a worse performance than the MinMax strategies. ## 5 Effect of Trading Costs and Slippage IBS is calculated at the Close. Our strategy also enters at the Close price. In practice, this is difficult to setup. Realistically, we can calculate the IBS taking the price just before close and then entering (or not) right after. This will result in slippage and may affect performance Shorting ETFs also has an additional borrow cost. We have assumed a borrow rate of 0.01% daily. However, this rate is not constant and varies across ETFs. We performed an analysis which compares strategy performs across a range of interest rates which can be seen in **Table 6**. The strategy performs reasonably well till a borrow rate of around 0.15% daily (approx. 55.75% annual) after which borrow costs negate performance. In situations where borrow rates are high, a long-only strategy would still perform well (See **Table 4**). ETFs trades are subject to commissions depending on the trading platform used. Interactive Brokers [3] for example charges $0 commissions for trade volumes under 300,000 shares. However, commissions apply in a tiered manner for volumes higher than that. We have not factored in costs of commissions in our analysis. Figure 11: Borrow Rate (in %) vs. Sharpe Ratio ## 6 Performance of Emerging vs Developed Economies ETFs As the ETFs we picked are all country specific, we applied our strategy by splitting our "master" basket into an emerging economies basket and developed economies basket. We used MSCI's market classification [4] to split these ETFs as follows - **Emerging Economies** - Brazil, India, South Korea, China, Mexico, South Africa and Taiwan **Developed Economies** - Japan, USA, UK, EU, Australia, Canada, Singapore and Israel. We see that while emerging market ETFs had a higher mean and median Sharpe ratio for single and multi-ETF strategies, developed market ETFs had the best performing bucket (EU, Australia, Japan, Singapore, Israel, UK at 2.9). 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline **ETF Basket vs Daily Borrowing Costs** & **0.01\%** & **0.05\%** & **0.1\%** & **0.15\%** & **0.2\%** & **0.25\%** & **0.3\%** \\ \hline **india, tw, can, Israel, uk, spore, aus** & 3.733 & 3.197 & 2.526 & 1.856 & 1.185 & 0.515 & -0.155 \\ \hline **uk, india, sk, spore, china, eu** & 3.311 & 2.808 & 2.178 & 1.548 & 0.919 & 0.289 & -0.339 \\ \hline **china, uk, india, sk, spore, can** & 3.279 & 2.770 & 2.134 & 1.499 & 0.863 & 0.227 & -0.408 \\ \hline **spore, india, us, assi, rael, uk** & 3.153 & 2.622 & 1.957 & 1.292 & 0.627 & -0.037 & -0.702 \\ \hline **brazil, india, sa, Japan, tw, spore** & 2.873 & 2.451 & 1.924 & 1.397 & 0.870 & 0.342 & -0.184 \\ \hline **aus, brazil, sa, india, eu, tw, spore** & 2.869 & 2.426 & 1.872 & 1.317 & 0.763 & 0.209 & -0.345 \\ \hline **sa, can, china, uk, india** & 2.822 & 2.360 & 1.782 & 1.203 & 0.625 & 0.047 & -0.530 \\ \hline **brazil, aus, tw, china, uk, sa, india** & 2.789 & 2.360 & 1.824 & 1.288 & 0.752 & 0.216 & -0.319 \\ \hline **israel, tw, japan, spore, sk, india** & 2.735 & 2.263 & 1.673 & 1.083 & 0.493 & -0.097 & -0.687 \\ \hline **brazil, spore, can, sa, china, india, mex** & 2.733 & 2.271 & 1.695 & 1.119 & 0.542 & -0.033 & -0.610 \\ \hline \end{tabular} \end{table} Table 6: How Borrowing Costs affect Sharpe Ratio Figure 12: Sharpe Ratios of Emerging vs Developed market buckets However, the overall performance is worse compared to mixed buckets. As seen in Table 7, the ETFs when chosen from **only** Emerging or **only** Developed bucket do worse than when there is no such restriction on picking ETFs (as seen in Table 8). \begin{table} \begin{tabular}{|l|l|l|} \hline **ETF_Basket** & **Sharpe_Ratio** & **Type** \\ \hline eu, aus, japan, spore, israel, uk & 2.942 & Developed \\ \hline spore, uk, israel, eu, can, aus & 2.830 & Developed \\ \hline tw, sa, china, india & 2.819 & Emerging \\ \hline spore, uk, usa, israel, japan, aus & 2.733 & Developed \\ \hline india, tw, sa, china, sk & 2.724 & Emerging \\ \hline \end{tabular} \end{table} Table 7: Top 5 ETF Buckets (From Emerging + Developed Buckets Only) Figure 13: Sharpe Ratios of Buckets when subjected to Constraints (Developed vs Emerging vs Mixed) ## 7 Conclusion We attempt to build on the Pagonidis' findings [1]. Initially, we corroborate their findings on likelihood of mean reversion based on IBS thresholds in **Table 3** on recent data. The analysis reiterates that IBS is a strong predictor of close-to-close returns on equity ETFs. We present a new strategy that aims to maximise the IBS effect by combining ETFs in a basket and going long and short on ETFs with maximum and minimum IBS values. The analysis includes modifications of the above strategy where aspects such as - holding period, number of held ETFs, IBS calculation periods - among others are varied. The presented strategy also outperforms the threshold-based strategy on multiple metrics largely due to having a 100% time-in. Our approach to use a bucket of ETFs also reduces the risk of being over-reliant on any one ETF as the signal is calculated daily and by picking more than one ETF per day, the returns are averaged between them. We must note that while our proposed strategy does well by going long and short on various ETFs, any trading costs including Commissions, Borrow Rate and Slippage could reduce the performance of the strategy - especially on the Short Side [5]. 
While we have tried to include some of these costs while calculating and presenting our results, real world costs might vary.
2301.01389
A collapsar origin for GRB 211211A is (just barely) possible
Gamma-ray bursts (GRBs) have historically been divided into two classes. Short-duration GRBs are associated with binary neutron-star mergers (NSMs), while long-duration bursts are connected to a subset of core-collapse supernovae (SNe). GRB 211211A recently made headlines as the first long-duration burst purportedly generated by an NSM. The evidence for an NSM origin was excess optical and near-infrared emission consistent with the kilonova observed after the gravitational wave-detected NSM GW170817. Kilonovae derive their unique electromagnetic signatures from the properties of the heavy elements synthesized by rapid neutron capture (the r-process) following the merger. Recent simulations suggest that the "collapsar" SNe that trigger long GRBs may also produce r-process elements. While observations of GRB 211211A and its afterglow ruled out an SN typical of those that follow long GRBs, an unusual collapsar could explain both the duration of GRB 211211A and the r-process-powered excess in its afterglow. We use semianalytic radiation transport modeling to evaluate low-mass collapsars as the progenitors of GRB 211211A-like events. We compare a suite of collapsar models to the afterglow-subtracted emission that followed GRB 211211A, and find the best agreement for models with high kinetic energies and an unexpected pattern of Ni-56 enrichment. We discuss how core-collapse explosions could produce such ejecta, and how distinct our predictions are from those generated by more straightforward kilonova models. We also show that radio observations can distinguish between kilonovae and the more massive collapsar ejecta we consider here.
Jennifer Barnes, Brian D. Metzger
2023-01-03T23:36:19Z
http://arxiv.org/abs/2301.01389v1
# A collapsar origin for GRB 211211A is (just barely) possible ###### Abstract Gamma-ray bursts (GRBs) have historically been divided into two classes. Short-duration GRBs are associated with binary neutron-star mergers (NSMs), while long-duration bursts are connected to a subset of core-collapse supernovae (SNe). GRB 211211A recently made headlines as the first long-duration burst purportedly generated by an NSM. The evidence for an NSM origin was excess optical and near-infrared emission consistent with the kilonova observed after the gravitational wave-detected NSM GW170817. Kilonovae derive their unique electromagnetic signatures from the properties of the heavy elements synthesized by rapid neutron capture (the \(r\)-process) following the merger. Recent simulations suggest that the "collapsar" SNe that trigger long GRBs may also produce \(r\)-process elements. While observations of GRB 211211A and its afterglow ruled out an SN typical of those that follow long GRBs, an unusual collapsar could explain both the duration of GRB 211211A and the \(r\)-process-powered excess in its afterglow. We use semianalytic radiation transport modeling to evaluate low-mass collapsars as the progenitors of GRB 211211A-like events. We compare a suite of collapsar models to the afterglow-subtracted emission that followed GRB 211211A, and find the best agreement for models with high kinetic energies and an unexpected pattern of \({}^{56}\)Ni enrichment. We discuss how core-collapse explosions could produce such ejecta, and how distinct our predictions are from those generated by more straightforward kilonova models. We also show that radio observations can distinguish between kilonovae and the more massive collapsar ejecta we consider here. Supernovae: core-collapse supernovae -- Nucleosynthesis: \(r\)-process -- Gamma-ray bursts 0000-0002-4000-0002]Jennifer Barnes 0000-0002-2886-7088]Brian D. Metzger ## 1 Introduction The durations of gamma-ray bursts (GRBs) follow a bimodal distribution, with short (sGRB) and long (lGRB) varieties (Kouveliotou et al., 1993). Observations have tied these two classes of ultrarelativistic jets to distinct progenitors, with lGRBs arising from a subset of highly kinetic core-collapse supernovae (CCSNe; e.g. Galama et al., 1998) and sGRBs originating in compact binary mergers (Abbott et al., 2017). However, analyses of GRB populations (e.g. Zhang & Choi, 2008; Tarnopolski, 2015) indicate overlap between the distributions of the durations that characterize each class, raising the spectre of GRBs whose timescales are outliers among bursts triggered by the same progenitor (e.g. Bromberg et al., 2013). While a few lGRBs with no obvious associated SNe have been tentatively attributed to a non-SN progenitor (Della Valle et al., 2006; Gal-Yam et al., 2002; Fynbo et al., 2006), the uncertain nature of the electromagnetic (EM) counterparts to compact binary mergers impeded the definitive association of these bursts with mergers. Nevertheless, it was suggested that these "hybrid" sGRB/lGRB events were related to a subclass of bursts whose light curves exhibited sGRB-like prompt spikes followed by temporally extended variable X-ray emission lasting tens or hundreds of seconds (e.g. Norris & Bonnell, 2006; Perley et al., 2009). 
Multi-messenger observations of the binary neutron-star merger (NSM) GW170817 improved this situation dramatically by confirming (Goldstein et al., 2017) the theorized (Paczynski, 1986; Eichler et al., 1989; Narayan et al., 1992) association between mergers and sGRBs and providing a detailed look at the merger's "kilonova" counterpart (Arcavi et al., 2017; Chornock et al., 2017; Coulter et al., 2017; Drout et al., 2017; Evans et al., 2017; Kasliwal et al., 2017; Kilpatrick et al., 2017; McCully et al., 2017; Nicholl et al., 2017; Shappee et al., 2017; Smartt et al., 2017; Soares-Santos et al., 2017; Tanvir et al., 2017; Valenti et al., 2017). This allowed Rastinejad et al. (2022) (henceforth R22) to connect the recent GRB 211211A to an NSM (see also Troja et al., 2022) despite its long duration: a \(T_{90}\) of \(\sim\)34 s according to the _Fermi_ Gamma-ray Burst Monitor (Mangan et al., 2021), or \(\sim\)51 s, as measured by _Swift_'s Burst Alert Telescope (Stamatikos et al., 2021). The association was based on the similarity of the optical and near-infrared (NIR) transient that emerged after the burst to the kilonova that arose following GW170817, as well as on the GRB's extended emission, whose duration and spectral evolution mimicked those observed to follow some sGRBs (e.g., Gompertz et al., 2022). In a variation on that theme, Yang et al. (2022) proposed that the progenitor of GRB 211211A was the merger of a white dwarf with a NS or stellar-mass black hole (BH), which produces an accretion disk as disrupted white dwarf material circularizes around the central remnant (Fryer and Woosley, 1998). However, this interpretation is in tension with semianalytic (Metzger, 2012; Margalit and Metzger, 2016; Kaltenborn et al., 2022) and numerical (Fernandez et al., 2019; Zenati et al., 2019) simulations of these disks, which cast doubt on their ability to effectively neutronize, a precondition for \(r\)-production. The LIGO-Virgo gravitational-wave (GW) detector network was offline at the time of GRB 211211A, so no GW data were available to confirm a compact object merger coincident with the burst. However, the position of the burst, offset 7.91 kpc from the center of its putative host galaxy (R22), supports the merger theory, as compact object binaries receive kicks during the SN explosions of their component stars, and often travel far from their hosts' centers before they merge (e.g Kalogera et al., 1998). Some authors (primarily Waxman et al., 2022, who propose an alternate, dust-based explanation for the NIR emission) have cast doubt on the host identification. However, since a distance is required to determine the luminosity of the transient and make comparison to our models, we are unable to engage with the undiscovered-host hypothesis in this work. Kilonovae are distinguishable by their uniquely red spectra, a hallmark imparted by the high opacities of select elements burned by rapid neutron capture (the \(r\)-process), a nucleosynthesis channel that operates in the neutron-rich gas formed from NS material unbound during the merger. However, kilonovae may not be the only explosions in which the \(r\)-process occurs. General relativistic magnetohydrodynamic (GRMHD) simulations of the accretion disks that form in the CCSN explosions of rapidly rotating massive stars ("collapsars") suggest that conditions in these disks can become neutron-rich (Siegel et al., 2019), allowing the \(r\)-process to synthesize heavy elements in winds blown off the disk. 
While not all simulations of collapsar disks predict a robust \(r\)-process in disk outflows (Miller et al., 2020; Just et al., 2022; Fujibayashi et al., 2022), the \(r\)-process collapsar hypothesis is also supported by patterns in Galactic chemical evolution that seem to require an \(r\)-process source that tracks star formation (Cote et al., 2019; Naidu et al., 2022). (A short delay time characterizes CCSNe in general, but is harder to square with NSMs, which represent the endpoint of an evolutionary track that unfolds over hundreds of millions or even billions of years (e.g. Belczynski et al., 2002).) Collapsars were originally proposed to explain lGRBs and the high-velocity, broad-lined Type Ic (Ic-BL) SNe that often accompany them. The implication then is that \(r\)-production may coincide with GRBs regardless of their duration. We investigate here the possibility that GRB 211211A was triggered by a collapsar, and that its optical and NIR counterpart, which we label as a transient of undetermined classification, T211211A, is the emission from an \(r\)-process-enriched SN, albeit a unique one. We describe our semi-analytic radiation transport scheme, and the models to which we apply it, in SS2. In SS3, we present the models that best reproduce the emission of T211211A, and discuss their properties. We explore in SS4 what subclass of collapsars might be able to produce these properties, but ultimately fail to convince ourselves that such explosions represent a superior explanation for T211211A. We also outline how radio observations can distinguish between the low-mass collapsar progenitors we focus on and the more conventional kilonova explanation for T211211A. We leave our parting thoughts in SS5. ## 2 Methods We use a semianalytic radiation transport model to predict the emission from \(r\)-process-enriched collapsars with a variety of parameters, which we compare to observations of T211211A. ### Radiation Transport Model We repurpose the radiation transport framework developed by Barnes and Metzger (2022) (hereafter BM22), in which the SN ejecta is divided into concentric shells whose internal energies evolve in response to radioactive heating, adiabatic expansion, and the diffusion and free-streaming of radiation. A full discussion of the implementation can be found in BM22. Here, we highlight minor adjustments we have made to our previous models and methods, which better position us to study the apparently low-mass and high-velocity explosion (R22) that produced T211211A. First, we no longer assume that \({}^{56}\)Ni is evenly distributed in the ejecta. The ejecta configurations we consider are described in more detail in SS2.2. For consistency, when calculating the \(\gamma\)-ray opacity to determine the deposition of \({}^{56}\)Ni/Co decay energy (a la Colgate et al., 1980), we now include only the ejecta layers that contain \({}^{56}\)Ni. We also now explicitly account for the thermalization of \(r\)-process decay products beyond \(\gamma\)-rays. Our current models have lower masses and higher velocities than the \(r\)-process-enriched SNe of BM22. The resulting lower densities reduce the optical depth for thermalizing interactions (Barnes et al., 2016), rendering suspect the assumption of efficient thermalization of \(\beta^{-}\)- and \(\alpha\)-particles and fission fragments. We adopt the approximate analytic formula for thermalization efficiency \(f_{\rm th}^{\rm rp}\) from Barnes et al. 
(2016), \[f_{\rm th}^{\rm rp}=0.36\left(\exp[-0.55t_{\rm d}]+\frac{\ln[1+0.26t_{\rm d}^{ 0.9}]}{0.26t_{\rm d}^{0.9}}\right),\] where \(t_{\rm d}\) is the time in days, and we have chosen coefficients corresponding to kilonovae with \(r\)-process masses and velocities most similar to those of our low-mass collapsar models. This factor is applied to a baseline \(r\)-process heating rate \(\dot{Q}_{\rm rp}=2.0\times 10^{10}\;t_{\rm d}^{-1.3}\;{\rm erg}\;{\rm s}^{-1}\;{ \rm g}^{-1}\)(e.g. Metzger et al., 2010; Korobkin et al., 2012). Finally, the short rise time of T211211A motivates an explicit accounting of the thermal energy deposited in the ejecta during the explosion. (In typical GRB-SNe, which rise to peak \(\sim\)1-2 weeks after explosion, and which burn larger quantities of \({}^{56}\)Ni (Prentice et al., 2016; Taddia et al., 2019; Perley et al., 2020), energy from \({}^{56}\)Ni decay rapidly dominates the adiabatically degrading initial thermal energy, preventing the thermal component from influencing the light curve.) We assume there is a characteristic time, \(t_{\rm eq}\), at which the thermal and kinetic energy in a given ejecta layer are in equipartition. The subsequent conversion of the former to the latter accelerates each layer to its final kinetic energy. By the time the SN light curve becomes visible, this conversion is effectively complete; though thermal energy remains, it is insufficient to alter the ejecta's velocity structure. Thus, it is valid to approximate the initial thermal energy as equal to half the final kinetic energy in ejecta shell \(i\), \(E_{\rm k,i}\). The residual energy at \(t_{0}\), the start time of the simulation, is then \[E_{\rm th,i}=\frac{1}{2}E_{\rm k,i}\left(\frac{t_{0}}{t_{\rm eq}}\right)^{-1}. \tag{1}\] The models of R22 also include a thermal component, which they attribute to a cocoon created by the GRB jet as it burrows through the ejecta. In the collapsar scenario, \(E_{\rm th,i}\) could be the product of an initial supernova explosion. It could also result from a shock interaction that occurs when the eventual accretion disk wind collides with either the SN ejecta or (in the case of an temporally accelerating disk outflow; see Sec. 4.1) with itself. In the interest of limiting the dimensionality of our model suite, we do not treat \(t_{\rm eq}\) as a free parameter. However, preliminary explorations found that \(t_{\rm eq}=1\;{\rm s}\) allowed us to fit the early blue and ultraviolet (UV) emission. This value should be treated as a rough indicator--the exact balance that is achieved between thermal and kinetic energy and whether that balance is uniform over the entire ejecta, for example, are open questions. Nevertheless, it points to heating timescales that could be compatible with either jet breakout or a prompt explosion. ### Model Suite Our model suite is summarized in Table 1. Based on the arguments of R22, we focus on collapsar models with low masses and high velocities. We consider total ejecta masses in the range \(0.5M_{\odot}\leq M_{\rm ej}\leq 1.0M_{\odot}\), and average ejecta velocities \(v_{\rm ej}\) of 0.1-0.35\(c\), where \(v_{\rm ej}=\sqrt{2E_{\rm k}/M_{\rm ej}}\), with \(E_{\rm k}\) the ejecta's kinetic energy. In all our models, mass density follows a broken power law, \(\rho(v)\propto v^{-d}\), with \(d=1\) (10) in the inner (outer) parts of the ejecta. 
The low luminosities of T211211A, relative to the GRB-SN population, suggest lower quantities of \({}^{56}\)Ni, so we restrict our exploration to models with \(0.01M_{\odot}\leq M_{56}\leq 0.1M_{\odot}\). We consider \(r\)-process masses \(M_{\rm rp}/M_{\odot}\) of 0.0, 0.02, 0.05, and 0.08. These values were motivated by the luminosity of T211211A, which constrains the total radioactive mass to be low. That they are lower than what was suggested by Siegel et al. (2019) for typical collapsar \(r\)-process yields (\(\lesssim\)1\(M_{\odot}\)) also reflects the overall lower ejecta masses in this work. (In contrast, Siegel et al. (2019) focused on the more massive progenitors proposed by Heger et al. (2000) to explain CCSNe with higher \(M_{\rm ej}\).) As mentioned in SS2.1, our ejecta structure is more complex than in BM22, since the \({}^{56}\)Ni mass fraction is no longer required to be uniform. Instead, we extend \({}^{56}\)Ni from some inner normalized mass coordinate \(\psi_{56}\) to the edge of the ejecta. Such a configuration might be realized if _r_-process winds fail to mix completely with the earlier ejecta containing whatever \({}^{56}\)Ni is burned by the prompt explosion. As in BM22, the _r_-process material is mixed from the center of the ejecta out to a normalized mass coordinate \(\psi_{r\rm p}\). Given \(M_{\rm ej}\), \(M_{56}\), and \(M_{\rm rp}\), the quantities \(\psi_{56}\) (\(\psi_{\rm rp}\)) can take on values from 0 to \([1-M_{56}/M_{\rm ej}]\) (\(M_{\rm rp}/M_{\rm ej}\) to 1). For each parameter combination, we choose five values of \(\psi_{56}\) and \(\psi_{r\rm p}\) that are spaced uniformly within the ranges defined above. We assume that \({}^{56}\)Ni (_r_-process material) is evenly distributed over \(m_{\rm enc}\geq\psi_{56}\) (\(m_{\rm enc}\leq\psi_{r\rm p}\)), and consider all combinations of \(\psi_{56}\) and \(\psi_{r\rm p}\) for which the sum of \({}^{56}\)Ni and _r_-process mass fractions is less than or equal to unity everywhere in the ejecta. As in BM22, the opacity of an ejecta shell is determined by its composition. Ejecta lacking both \({}^{56}\)Ni and _r_-process elements is assumed to have a baseline opacity of 0.05 cm\({}^{2}\) g\({}^{-1}\). ### Model Evaluation We calculate the broadband evolution of our model in \(ugriz\), \(B\), \(J\), and \(K\) bands for every combination of the parameters delineated in Table 1, and compare the results to the afterglow-subtracted photometry of T21121A published in R22, for times \(\geq\)0.05 days. We quantify the agreement between the data and each instantiation of the model using a simple chi-square metric, \[\chi^{2} =\sum_{i}\frac{(F_{\rm obs,}i-F_{\rm pred,}i)^{2}}{\sigma_{i}^{2}}\] \[+\sum_{j}\frac{\left[\max(F_{\rm pred,}j-F_{\rm ul,}j,0)\right]^ {2}}{\sigma_{\rm est}},\] where \(F_{\rm obs,}i\) (\(F_{\rm pred,}i\)) is the observed (predicted) flux corresponding to measurement \(i\), which we derive from reported magnitudes, and \(\sigma_{\rm i}\) is the uncertainty on the \(i\)th measurement. The second sum runs over reported upper limits, \(\{F_{\rm ul,}j\}\). Its terms contribute to \(\chi^{2}\) only when the model's predicted flux exceeds the upper limit. The variable \(\sigma_{\rm est}\) is an estimated uncertainty on the upper limit, which we set to 0.1 mag. 
## 3 Results We perform a grid search to locate the model in the suite with the lowest \(\chi^{2}\), and find that the best match to the data (with \(\chi^{2}\approx 32\)) is achieved by the parameters \(M_{\rm ej}=1.0M_{\odot}\), \(v_{\rm ej}=0.26c\), \(M_{56}=0.01M_{\odot}\), \(M_{r\rm p}=0.05M_{\odot}\), \(\psi_{56}=0.99\), and \(\psi_{r\rm p}=0.76\). The light curve for this model is compared to data in Fig. 1. While this model agrees well with the data, degeneracies among the parameters and the simplicity of the semianalytic model motivate us to investigate additional ejecta models. Furthermore, our procedure does not circumscribe the distribution of \({}^{56}\)Ni in the ejecta beyond the physical requirement that \((1-\psi_{56})M_{\rm ej}\geq M_{56}\). The model above, which features an outer shell composed \begin{table} \begin{tabular}{l l r} \hline _Symbol_ & _Definition_ & _Values_ \\ \hline \(M_{\rm ej}\) & Total ejecta mass & \(0.5M_{\odot}\) – \(1.0M_{\odot}\), \\ & & \(\Delta M_{\rm ej}/M_{\rm ej}=0.08\) \\ \(v_{\rm ej}\) & Average ejecta & \(0.1c\) – \(0.35c\), \\ & velocity & \(\Delta v_{\rm ej}/v_{\rm ej}=0.18\) \\ \(M_{56}\) & \({}^{56}\)Ni mass & \(0.01M_{\odot}\) – \(0.1M_{\odot}\), \\ & & \(\Delta M_{56}=0.01\) \\ \(M_{r\rm p}\) & _R_-process mass & (0.0, 0.02, 0.05, 0.08)\(M_{\odot}\) \\ \(\psi_{56}\) & Lowest mass & \(0\) – \((1-M_{56}/M_{\rm ej})\), \\ & coordinate with \({}^{56}\)Ni & \(\Delta\psi_{56}=(1-M_{56}/M_{\rm ej})/5\) \\ \(\psi_{r\rm p}\) & Highest mass & \((M_{\rm rp}/M_{\rm ej})\) – \(1\), \\ & coordinate with & \(\Delta\psi_{r\rm p}=(1-M_{\rm rp}/M_{\rm ej})/5\) \\ & _r_-process matter & \\ \hline \end{tabular} \end{table} Table 1: Parameters of the model suite of pure \({}^{56}\)Ni, is allowed within our framework. However, it is worth determining whether less extreme ejecta configurations can reproduce the data with comparable fidelity. In SS3.1, we zoom out and identify larger populations of models with a range of parameters that nonetheless provide good matches to the photometry of T211211A. ### Properties of successful models Before presenting predictions generated by particular parameter combinations, we briefly survey the landscape of all models that provide a satisfactory fit to the observations. We define a satisfactory fit as one for which \(\chi^{2}\leq 100\). Since our model has six degrees of freedom (\(N_{\rm dof}=6\)) and is fit against 40 observations and upper limits, this translates to a reduced chi-square metric \(\chi^{2}_{\rm red}\equiv\chi^{2}/N_{\rm dof}\lesssim 2.5\). This filter selects \(\sim\)1600 models, or just over 2% of the full suite. Fig. 2 shows how the six model parameters are distributed within the good-fitting model set. Models with good fit scores draw from the full range of \(M_{\rm ej}\) we consider, though they evince a slight preference for lower ejecta masses. The range of velocities is narrower; agreement with the data is easier to achieve for \(v_{\rm ej}\gtrsim 0.2c\). While such velocities are similar to those inferred for the kilonova model of R22, when combined with low-mass collapsars' larger ejecta masses (vis-a-vis kilonova), they imply kinetic energies near or beyond the upper limit of what has historically been considered possible for SNe (Thompson et al., 2004; Mazzali et al., 2014; Chen et al., 2017). The parameters governing \(r\)-process and \({}^{56}\)Ni production and distribution complete the picture. 
As the third panel shows, all \(r\)-process masses we consider (except \(M_{\rm rp}=0\), which cannot produce the observed NIR excess) can yield photometry more or less consistent with observations. Masses of \({}^{56}\)Ni are more tightly constrained; none of the good-fitting models have \(M_{56}>0.05M_{\odot}\). As indicated in the final panel, the majority of the good-fitting models feature a particular mixing pattern, in which \(r\)-process material is mixed out from the center to fairly high normalized mass coordinates \(m_{\rm enc}\), while \({}^{56}\)Ni is concentrated in the outermost layers of the ejecta. We will discuss in SS4 if this configuration is strictly necessary to reproduce the photometry of T211211A, and whether an outflow with such a radially stratified composition could be produced in nature. ### Successful model clusters To better understand how successful models are situated within the six-dimensional parameter space in Figure 2: The distribution of model parameters for models with \(\chi^{2}\leq 100\). The good-fitting models span the full range of \(M_{\rm ej}\) in our model suite (_first panel_), but draw primarily from the upper end of our \(v_{\rm ej}\) range (\(v_{\rm ej}\gtrsim 0.2c\); _second panel_). While various \(r\)-process masses, \(0.01M_{\odot}\leq M_{rp}\leq 0.08M_{\odot}\), can be compatible with the observations, lower \({}^{56}\)Ni masses (\(M_{56}\lesssim 0.05M_{\odot}\)) are preferred (_third panel_). The majority of the successful models (_fourth panel_) feature well-mixed \(r\)-process material, but concentrate their \({}^{56}\)Ni in a thin shell at the outer edge of the ejecta. In the top two panels, the variable widths of the bars reflect the logarithmic spacing of the model parameters. which our suite is defined, we use the Agglomerative Clustering routine of Python's scikit-learn package (Pedregosa et al., 2011) to sort them into five groups. The hierarchical clustering algorithm in the SciPy library guided our choice of the number of clusters. The coordinates of the cluster centroids are reported in Table 2, along with the percentage of good-fitting models belonging to each cluster. These data provide additional insight into the combinations of parameters capable of reproducing the photometry of T211211A. While some of the cluster centroids share the combination of high \(E_{\rm k}\) and extreme \(\psi_{56}\) suggested by Fig. 2, Table 2 shows that these characteristics are not required to reproduce the data within our error tolerance. In fact, aside from centroid 5, all the centroids differ from the best-fit model in at least one significant way. Of particular interest are centroid 1, which has barely half the kinetic energy of the best-fit model; centroid 2, which has both lower \(E_{\rm k}\) and a lower \(\psi_{\rm rp}\); and centroid 3, which has more extensive \({}^{56}\)Ni mixing. Still however, Table 2 suggests some trade-off between \(\psi_{56}\) and \(v_{\rm ej}\). Successful models with more extensive \({}^{56}\)Ni mixing have higher average velocities. This is required to reproduce the light curves' rapid evolution; a spatially extended emitting region must expand faster to yield a similar light-curve time scale. In Fig. 3, we show the light curves produced by the centroids (1, 2, and 3) highlighted above. While the agreement with observations is by definition poorer than for the best-fit model, each set of parameters reproduces the fundamental characteristics of T211211A. 
Given the simplicity of our radiation transport method, the only slightly poorer fits are not sufficient reason to discard these models. ## 4 Discussion As explained in SS3.2, due to degeneracies among parameters, low-mass collapsar models with varying physical properties reproduce the photometry of T211211A with comparable fidelity. However, even these degeneracies do not allow infinite flexibility; all of the models have very high velocities and/or poorly mixed \({}^{56}\)Ni that would render them outliers among observed GRB-SNe and SNe Ic-BL. We next discuss two possible interpretations of these results, and outline how radio observations can distinguish low-mass collapsars from standard kilonovae. ### A low-mass collapsar? The low ejecta masses we explore here, which are necessitated by the swift evolution of T211211A, are already a departure from the standard collapsar picture, in which a few solar masses of stellar material are ejected (e.g. Cano et al., 2017). The formation of an accretion disk--the defining feature of the collapsar model--is enabled by the rapid rotation of the pre-explosion star. Processes that remove mass from the star earlier in its evolution (e.g., line-driven winds or stripping by a companion) also siphon away the angular momentum that allows disk formation. Our low-\(M_{\rm ej}\) models thus correspond more naturally to a scenario in which a large fraction of the pre-explosion mass is captured by the NS or BH formed during the explosion than one in which the progenitor mass is unusually low at the point of collapse. The low masses and modest \({}^{56}\)Ni production that characterize our good-fitting models could plausibly arise from the explosion of a star with slightly less angular momentum than in more typical collapsars (e.g. Janiuk and Proga, 2008; Murguia-Berthier et al., 2020). The proto-neutron star produced when such a progenitor collapses (e.g., Dessart et al., 2008) would initially rotate relatively slowly. This, coupled with the delay between the initial collapse and the circularization of the outer layers into an accretion disk, may preclude the kind of prompt (\(\lesssim\)1 second post-collapse) MHD jetted explosion (e.g., Mosta et al., 2014; Varma et al., 2021) invoked to explain the copious \({}^{56}\)Ni production in more typical SNe Ic-BL (e.g. Barnes et al., 2018, though see Zenati et al. (2020) for an alternative \({}^{56}\)Ni production site). A weaker explosion could nonetheless launch a low-mass outflow enriched with \({}^{56}\)Ni burned in the inner layers (Maeda and Nomoto, 2003), thus forming the outer layers of the SN ejecta. Subsequent material would be ejected once the infalling material had coalesced into an accretion disk. While most of the disk mass would accrete onto the central remnant, powering a relativistic jet (e.g. Bromberg and Tchekhovskoy, 2016), a fraction would become gravitationally unbound and expand outward at mildly relativistic velocities (e.g. Siegel and Metzger, 2017). The accretion rate onto the disk will decline with time, with consequences for nucleosynthesis in the winds. 
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline _Index_ (\%) & \(M_{\rm ej}^{\dagger}\) & \(v_{\rm ej}^{\dagger}\) & \(M_{\rm 56}^{\dagger}\) & \(M_{\rm rp}^{\dagger}\) & \(\psi_{56}\) & \(\psi_{\rm rp}\) \\ \hline 1 & (19) & 0.61 & 0.25 & 0.036 & 0.053 & 0.94 & 0.68 \\ 2 & (27) & 0.65 & 0.23 & 0.022 & 0.067 & 0.97 & 0.33 \\ 3 & (19) & 0.60 & 0.31 & 0.012 & 0.053 & 0.60 & 0.75 \\ 4 & (19) & 0.88 & 0.22 & 0.024 & 0.053 & 0.97 & 0.66 \\ 5 & (15) & 0.63 & 0.30 & 0.016 & 0.049 & 0.97 & 0.64 \\ \hline \hline \end{tabular} \({}^{\dagger}\) Values in units of \(M_{\odot}\). \({}^{\ddagger}\) Values in units of \(c\). \end{table} Table 2: Cluster centroids of the successful models Early high accretion rates through the disk support cooling by neutrino emission (De & Siegel, 2021), followed by the neutronization of the disk mid-plane (Siegel et al., 2019; Just et al., 2022; Fujibayashi et al., 2022). If the newly neutron-rich matter from the mid-plane escapes the disk without re-protonizing, an _r_-process can occur as it decompresses upon ejection. As the accretion rate drops, neutronization of the infalling material ceases, truncating _r_-production in disk outflows. Thereafter, disk winds are composed of He and, to a much lesser degree, iron-peak elements formed in the disk-wind outflows when the electron fraction \(Y_{\rm e}\approx 0.5\)(Siegel et al., 2019; Zenati et al., 2020). These later ejections account for the non-radioactive mass in our ejecta models. The time-dependent disk-outflow properties also depend on the strength and structure of the magnetic field feeding the BH. The early, _r_-process-rich winds are likely ejected with velocities \(\sim\)0.1\(c\), corresponding to a weak poloidal magnetic field (Siegel & Metzger, 2017). However, subsequent outflows may be launched at increasingly high velocities, as continual accretion strengthens the magnetic field in the disk (e.g. Tchekhovskoy & Giannios, 2015; Gottlieb et al., 2022). For a sufficiently strong and ordered poloidal magnetic flux, wind velocities could reach \(\approx\)0.3\(c\)(Christie et al., 2019). The higher velocity of the later-stage ejecta would induce mixing between disk-wind outflows launched at different times and substantively increase the ejecta's total kinetic energy. Such velocity evolution can therefore account both for the high average velocities and the compositional profiles of the good-fitting models. However, the general lack of evidence for \({}^{56}\)Ni-mixing means that the earliest mass ejection must occur at velocities high enough to avoid mixing with the disk-wind matter. We see that with a modest degree of fine-tuning, this scenario can explain the fundamental features of our favored ejecta models. We emphasize that this ejecta configuration is likely to differ from a garden-variety (higher angular momentum) collapsar, for which the overall ejecta mass is larger and a greater fraction of the disk outflows may be _r_-process-enriched, due to the higher accretion rates at early times. ### A collapsar in kilonova clothing? Figure 3: Due to degeneracies among model inputs, diverse sets of parameters produce comparable light curves. The panels above show the broadband light curves for some of the centroids defined in Table 2, which differ from the best fit model either in their level of \({}^{56}\)Ni or _r_-process mixing, or in their kinetic energy. Data from R22 are shown for comparison. 
While very large \(E_{\rm k}\) and minimal \({}^{56}\)Ni mixing are common to many of the good-fitting models, they are apparently not required to reproduce the data. While we argued in SS4.1 that nature may produce ejecta similar to those described in SS3.1, a more skeptical reading of our analysis is that it selects models whose emission is fundamentally similar to a kilonova. The two traits that distinguish our low-mass collapsars from kilonovae are the production, albeit limited, of \({}^{56}\)Ni and the significant quantities of non-\(r\)-process ejecta. However, our good-fitting models have ejecta configurations that dampen the effects of these attributes on their emission, relative to comparable kilonova models. The low mass of \({}^{56}\)Ni, combined with its position at high velocities, limits its impact on the resulting SN. Of the relatively little energy produced by \({}^{56}\)Ni decay, only a small fraction is thermalized, due to the low densities near the outer edge of the ejecta where the \({}^{56}\)Ni is located (Colgate et al., 1980). What energy does thermalize diffuses rapidly through the low-optical-depth layers at the ejecta's edge. Its effects are ephemeral, and easily overpowered by the signal from the ejecta's residual thermal energy (Eq. 1). The relative invisibility of \({}^{56}\)Ni in our models is illustrated in the top panel of Fig. 4, which shows the impact of removing \({}^{56}\)Ni on the light curves of the centroid 3 model (Table 2 and Fig. 3). We selected centroid 3 because its \({}^{56}\)Ni is mixed more thoroughly into the ejecta than in other centroids, which should increase the sensitivity of the emission to \({}^{56}\)Ni decay. Although the model including \({}^{56}\)Ni, whose light curves form the upper bounds of the shaded curves in Fig. 4's top panel, is brighter than the model without, whose light curves constitute the lower bounds, these differences become most significant at later times, when the data are less constraining, and modeling efforts face more uncertainties (e.g., the nature of optically thin emission; see BM22). The effect at \(t\lesssim 1\) day is minimal, because at these times the radiation of residual thermal energy dominates. To test whether our assumption of an initial thermal component biases our analysis against models with larger \(M_{56}\) or lower \(\psi_{56}\), we run a separate model grid with the same parameter ranges defined in Table 1, but which omits \(E_{\rm th,i}\) as defined by Eq. 1. Instead, we initialize the internal energies of the ejecta shells by estimating the combined effects of radioactive heating and adiabatic expansion for \(t\leq t_{0}\), which results in much lower internal energies. The bottom panel of Fig. 4 shows the light curves of the best-fit model from this grid, which has \(M_{\rm ej}=0.58M_{\odot}\), \(v_{\rm ej}=0.3c\), \(M_{56}=0.06M_{\odot}\), \(M_{r\rm p}=0.08M_{\odot}\), \(\psi_{56}=0.90\), and \(\psi_{r\rm p}=0.78\). Its \(\chi^{2}\) is 86, higher than the best-fit model in our original suite, but comparable to the models in our good-fitting subset. While \(M_{56}\) is slightly higher than in the original good-fitting model subset (see Fig. 2), the \({}^{56}\)Ni is again concentrated in the ejecta's exterior. 
This suggests that the \({}^{56}\)Ni in our original suite was not forced to the edge of the ejecta by our adopted model for \(E_{\rm th,i}\), but rather that significant and/or well-mixed \({}^{56}\)Ni decreases agreement with obser Figure 4: Modified low-mass collapsar models probe the effects of \({}^{56}\)Ni and on the emission. In both panels, we compare to data from R22. _Top panel_: Broadband light curves for an unaltered centroid 3 model and a version with \(M_{56}=0\), which form the upper and lower bounds of the filled curves, respectively. Removing \({}^{56}\)Ni does not fundamentally change the emission; the apparent differences at \(t\gtrsim 1\) day are due mainly to our assumptions about emission from optically thin ejecta. _Bottom panel:_ The best-fit model (with \(M_{56}=0.06M_{\odot}\) and \(\psi_{56}=0.9\)) from a suite in which heating is due solely to radioactivity fails to match the early signal, suggesting that \({}^{56}\)Ni-heating is not a substitute for \(E_{\rm th,i}\). The minor role of \({}^{56}\)Ni in our original good-fitting models is not due to our inclusion of an initial thermal energy reservoir (Eq. 1), but instead reflects the incompatibility of the early data with copious, well-mixed \({}^{56}\)Ni. vations. In particular, \({}^{56}\)Ni, on its own, cannot explain the earliest emission, particularly in bluer bands. Given that \({}^{56}\)Ni is not necessary to explain the late-time signal (see top panel) and appears to be insufficient to explain the earlier parts of the light curves, we conclude that \({}^{56}\)Ni is _allowed_ but not _required_ by the data. The position of \({}^{56}\)Ni in our good-fitting models also calls into question the import of the ejecta's non-radioactive material. With \({}^{56}\)Ni restricted to the outermost layers, the outward diffusion of the energy from \({}^{56}\)Ni-decay is effectively independent of \(M_{\rm ej}\). While energy from \(r\)-process decay must diffuse through a larger fraction of the ejecta, the opacity it encounters is dominated by \(r\)-process elements; the low opacity of the inert material means its effect on diffusion times is minimal. Thus, though non-radioactive matter dominates \(M_{\rm ej}\), its influence on the emission may be subtle. To explore the role of non-radioactive material, we transform the centroid 3 collapsar model into a kilonova by excising all of its non-\(r\)-process ejecta. (I.e, this model has \(M_{\rm ej}=M_{\rm rp}=0.053M_{\odot}\) and a reduced \(v_{\rm ej}=0.26c\) on account of its lower mass.) Our adopted \(r\)-process opacity (\(\kappa_{\rm rp}=10\) cm\({}^{2}\) g\({}^{-1}\)) means this pure \(r\)-process model corresponds to a kilonova that originated in low-\(Y_{\rm e}\) conditions and is rich in lanthanides and actinides. In other words, its composition is akin to that of a "red" kilonova (e.g Barnes & Kasen, 2013). The model's light curves are displayed in Fig. 5. The agreement in \(J\) and \(K\) remains decent, confirming that non-radioactive matter has only a small impact on radiation from the \(r\)-process-enriched layers. The largest effect is on the early signal, particularly in bluer bands, which suffers because of a reduction in the initial internal energy resulting from the reduced mass of the kilonova ejecta. (\(E_{\rm th,\it i}\) scales with shell mass in our model; see Eq. 1.) 
While we do not attempt to optimize a kilonova model here, our current method for determining \(E_{\rm th,\it i}\) suggests such disagreement would be robust across a broad range of kilonova parameters, owing to the vastly different mass scales of kilonovae and even low-mass collapsars. However, kilonova models with added complexity can avoid the early-time disagreement. R22 achieved a good fit to the observations by incorporating two additional, lower-opacity (hence bluer) kilonova components, as well as a shock-heated cocoon. In an echo of our earlier discussion of \({}^{56}\)Ni, we conclude that large quantities of non-radioactive mass are neither ruled out by the data nor necessary to explain them. ### Tie-breaker: radio emission Our analysis does not conclusively favor a low-mass collapsar origin for T211211A. However, the possibility remains that it, or a future transient with similar properties, could be generated by a collapsar explosion with the combination of parameters detailed in SS2.2. In the event that nature conspires to produce such an explosion, its late-time radio signal could offer a way to distinguish it from a kilonova born of an NSM. Both collapsars and kilonovae generate synchrotron radio emission as their ejecta collide with material surrounding the explosion site and decelerate. The rise of the resulting radio light curve, which takes anywhere from a few to several years, is related to the distribution of the fastest material, and therefore sensitive to assumptions about the density profile at the edge of the ejecta. In contrast, the eventual light-curve peak reflects the total kinetic energy contained in the ejecta, which is greater for energetic low-mass collapsars than for mergers by more than an order of magnitude, due principally to the higher masses of the former. Because of the recentness of GRB 211211A, radio non-detections obtained since the burst, like those of R22, most strongly constrain the high-velocity tail of the ejected matter. Continued observations will be invaluable for probing the total kinetic energy of the explosion. Following Nakar & Piran (2011) and Kathirgamaraju et al. (2019), we estimate the properties of the radio Figure 5: Broadband light curves for a kilonova model containing only the \(r\)-process ejecta from centroid 3, compared to photometry from R22. Non-\(r\)-process material is not required to explain the NIR emission of T211211A. However (see text), the lower masses of kilonovae compared to collapsars require different assumptions about the initial thermal energy in order to match the earliest and bluest observations. signals from collapsars and kilonovae. The time at which the radio light curve peaks is \[t_{\rm pk}\approx 2.7\ {\rm yr}\left(\frac{E_{51}}{\beta_{0}^{2}}\right)^{1/3} \left(\frac{3}{5\beta_{0}}-1\right), \tag{2}\] where \(E_{51}\) is the kinetic energy of the explosion in foe, \(\beta_{0}\) is the velocity of the slowest ejecta layer, and we have eliminated the dependence on the circumburst number density by fixing \(n\) to the value reported in R22 (\(n=0.54\)). If we additionally adopt the values R22 derived for the fractions of energy in electrons (\(\epsilon_{\rm e}=3.28\times 10^{-2}\)) and magnetic fields (\(\epsilon_{\rm B}=1.52\times 10^{-4}\)), we can estimate the peak flux at a given radio frequency \(\nu\) as \[F_{\nu,{\rm pk}}\approx 26\ {\rm\mu Jy}\ E_{51}\beta_{0}^{\frac{5p-7}{2}}\nu_{ 9.5}^{\frac{1-p}{2}}. \tag{3}\] In Eq. 
3, \(\nu_{9.5}\) is \(\nu\) normalized to \(10^{9.5}\) Hz and R22's value of \(p=2.014\) is used to calculate the prefactor (and for consistency should be adopted when evaluating the exponents). We have also converted from luminosity to flux assuming the distance to T211211A is 350 Mpc (R22). Fig. 6 shows the peak time and peak flux at 6 GHz of our best-fit collapsar model and our five centroids, calculated according to Eqs. 2 and 3 and assuming that \(\beta_{0}=v_{\rm ej}/c\) for each model. (The exact value of \(\beta_{0}\) is difficult to define for realistic ejecta density profiles, but since the fastest moving layers of the ejecta carry the majority of the kinetic energy, this choice is reasonable.) The pink shaded region in Fig. 6 shows the range of peak properties for collapsars with parameters that this work suggested might produce emission consistent with T211211A: \(0.5M_{\odot}\leq M_{\rm ej}\leq 1.0M_{\odot}\) and \(0.2c\leq v_{\rm ej}\leq 0.35c\). For comparison, we also plot the peak properties of a kilonova with \(m_{\rm ej,k}=0.05M_{\odot}\) (the total \(r\)-process mass suggested by R22) and a range of velocities \(0.1c\leq v_{\rm ej,k}\leq 0.3\) as a dashed black line. (We consider a range of velocities because the minimum velocity is nontrivial to define in the case of a multi-component model like the one constructed in R22.) Due to their greater kinetic energies, the collapsar models have much higher fluxes at peak than a kilonova would when the parameters beyond \(E_{51}\) and \(\beta_{0}\) are held constant. Ground-based radio telescopes (e.g., the Very Large Array) could easily distinguish between these cases with long-term monitoring of the radio signal. ## 5 Conclusion We have used semianalytic radiation transport modeling to investigate the possibility that the ambiguous GRB 211211A originated not in a compact object merger, as proposed by R22, Troja et al. (2022) and Yang et al. (2022), but rather in the CCSN explosion of a star with less angular momentum than a typical lGRB progenitor. According to this theory, the \(r\)-process elements that provide the NIR excess observed in the GRB afterglow were synthesized not from neutron-rich material expelled during the coalescence of a neutron-star binary, but from ordinary stellar material that became neutron-rich in an accretion disk mid-plane as a result of weak interactions in the presence of electron degeneracy (Siegel et al., 2019). Our model assumes that the expulsion of this material from the disk enriches the central core of the SN ejecta with \(r\)-process elements. We find that certain regions of our parameter space produce emission that broadly agrees with observations of the afterglow-subtracted light curves of T211211A. However, the particular constellations of parameters required to achieve a reasonable fit--namely very high velocities and the presence of \({}^{56}\)Ni only at the outer edges of the ejecta--point to an explosion distinct from the standard picture of collapsars. Further bedeviling the interpretation of T211211A is the fact that \({}^{56}\)Ni--at least when restricted to the Figure 6: Even low-mass collapsars generate much brighter radio afterglow emission than kilonovae. The time-to-peak and peak flux at 6 GHz for our best-fit collapsar model and the centroid parameters of Table 2 are plotted as pink diamonds. 
The pink shaded region shows the expected peak properties for collapsars with masses (velocities) in the range \(0.5M_{\odot}\)–\(1.0M_{\odot}\) (\(0.2c\)–\(0.35c\)), which typify the properties of our good-fitting models. We show as a dashed black line the peak properties of kilonovae with \(m_{\rm ej,k}=0.05M_{\odot}\) and \(0.1c\leq v_{\rm ej,k}\leq 0.3c\). ejecta's edge--has only a minor impact on the emission. The large quantity of non-radioactive material (the other feature that distinguishes our low-mass collapsars from the merger-driven models of R22 and Yang et al. (2022)) plays a larger role, but its importance is contingent on our assumptions about how internal energy is generated in the earliest phases of the explosion. Equally plausible treatments put forward by R22 are able to account for the early blue emission without appealing to mass beyond the \(r\)-process material required to explain the NIR excess. Thus we conclude that although a collapsar could explain T211211A, nothing about T211211A's emission serves as a smoking gun for a collapsar progenitor. Fortunately, the lack of clarity surrounding GRB 211211A and its afterglow will itself be transient. We have shown that radio observations can easily distinguish signals produced by collapsar ejecta from those generated by the much less massive outflows produced by merging compact objects. Furthermore, in the future, gravitational-wave detectors will definitively settle the question of a merger v. collapsar trigger for difficult-to-classify GRBs. In the multi-messenger era, we can hope to understand the full diversity of GRB emission and progenitors. ## 6 Acknowledgments The authors thank A. Polin, J. Rastinejad, G. Schroeder, and A. V. Villar for helpful conversations. J.B. gratefully acknowledges support from the Gordon and Betty Moore Foundation through Grant GBMF5076 B.D.M. is supported in part by the National Science Foundation (Grants AST-2009255, AST-2002577). This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation PHY-1607611, as well as the Kavli Institute for Theoretical Physics at the University of California at Santa Barbara, which receives funding from the National Science Foundation though Grant PHY-1748958.
2305.04706
A new construction of an MDS convolutional code of rate 1/2
Maximum distance separable convolutional codes are characterized by the property that the free distance reaches the generalized Singleton bound, which makes them optimal for error correction. However, the existing constructions of such codes are available over fields of large size. In this paper, we present the unique construction of MDS convolutional codes of rate 1/2 and degree 5 over the field F_{11}.
Zita Abreu, Raquel Pinto, Rita Simões
2023-05-08T13:38:57Z
http://arxiv.org/abs/2305.04706v2
# A new construction of an MDS convolutional code of rate \(1/2\) ###### Abstract Maximum distance separable convolutional codes are characterized by the property that the free distance reaches the generalized Singleton bound, which makes them optimal for error correction. However, the existing constructions of such codes are available over fields of large size. In this paper, we present the unique construction of MDS convolutional codes of rate \(1/2\) and degree \(5\) over the field \(\mathbb{F}_{11}\). **Keywords:** Convolutional codes; free distance; generalized Singleton bound; maximum distance separable (MDS) codes. 2000 Mathematics Subject Classification: 94B10, 11T71 ## 1 Introduction Nowadays, all communication systems that work with digitally represented data require the use of error correction codes because all real channels are noisy. One type of error-correcting codes is the convolutional codes. The class of these classical codes is extensively investigated in the literature [1, 2]. One of the main objectives at the moment is to build codes of a certain rate and degree having as large distance as possible. The distance of a convolutional code measures the robustness of the code since it provides a means to assess its capability to protect data from errors. Codes with longer distance are better because they allow to correct more errors. One type of distance for convolutional codes is the free distance, which is considered for decoding (the process of error correction) when the codeword is fully received. Convolutional codes with maximal free distance (with a certain rate and degree) are called Maximum Distance Separable Codes (MDS). These codes are the ones that present the best performance in error correction among all convolutional codes with fixed rate. Up to now, there are not many known constructions of MDS convolutional codes. The first construction was obtained by Justesen in [6] for codes of rate \(1/n\) and restricted degrees. In [7] Smarandache and Rosenthal presented constructions of convolutional codes of rate \(1/n\) and arbitrary degree \(\delta\). However, these constructions require a larger field size than the constructions obtained in [6]. Later, Gluesing-Luerssen and Langfeld presented in [8] a novel construction of convolutional codes of rate \(1/n\) with the same field size as the ones obtained in [6] but also with a restriction on the degree of the code. After that, Gluesing-Luerssen, Smarandache and Rosenthal [3] constructed MDS convolutional codes for arbitrary parameters. Lieb and Pinto [4] defined a new construction of convolutional codes of any degree and sufficiently low rate using superregular matrices with a specific property. In code constructions, the size of the field is very important for practical implementations since it is directly connected with the computational efficiency of the encoding and decoding algorithms and the complexity of the decoding algorithm, which grows as the size of the field does. In this paper, we present the unique construction of an MDS \((2,1,5)\) convolutional codes over the field \(\mathbb{F}_{11}\). The interest of this construction lies in the fact that up to date there is no constructions for convolutional codes with the same parameters over any field smaller than \(\mathbb{F}_{11}\). 
## 2 Preliminaries A **convolutional code** \(\mathcal{C}\) of rate \(k/n\) is an \(\mathbb{F}_{q}[D]\)-submodule of \(\mathbb{F}_{q}[D]^{n}\) of rank \(k\), where \(\mathbb{F}_{q}[D]\) is the ring of polynomials with coefficients in the field \(\mathbb{F}_{q}\). A \(k\times n\) matrix \(G(D)\) with entries in \(\mathbb{F}_{q}[D]\) whose rows constitute a basis of \(\mathcal{C}\) is called a **generator matrix** for \(\mathcal{C}\). This matrix is a full row rank matrix such that \[\mathcal{C} = \mathrm{Im}_{\mathbb{F}_{q}[D]}\ G(D) = \{v(D)\in\mathbb{F}_{q}[D]^{n}:v(D)=u(D)G(D)\ \mbox{with}\ u(D)\in\mathbb{F}_{q}[D]^{k}\}.\] Two generator matrices \(G_{1}(D),\ G_{2}(D)\in\mathbb{F}_{q}[D]^{k\times n}\) are said to be **equivalent generator matrices** if \(\mathrm{Im}_{\mathbb{F}_{q}[D]}\ G_{1}(D)=\mathrm{Im}_{\mathbb{F}_{q}[D]}\ G_{2}(D)\), which happens if and only if \(G_{1}(D)=U(D)G_{2}(D)\) for some unimodular matrix (square polynomial matrix with determinant in \(\mathbb{F}_{q}\setminus\{0\}\)) \(U(D)\in\mathbb{F}_{q}[D]^{k\times k}\). Since two equivalent generator matrices differ by left multiplication by a unimodular matrix, they have equal \(k\times k\) full-size minors, up to multiplication by a nonzero constant. The maximum degree of the full-size minors of a generator matrix of \(\mathcal{C}\) is called the **degree** of \(\mathcal{C}\), and it is normally denoted by \(\delta\). A convolutional code of rate \(k/n\) and degree \(\delta\) is also called an \((n,k,\delta)\) convolutional code. A matrix \(G(D)\in\mathbb{F}[D]^{k\times n}\) is said to be **left prime** if, whenever \(G(D)=X(D)\tilde{G}(D)\) for some \(X(D)\in\mathbb{F}[D]^{k\times k}\) and \(\tilde{G}(D)\in\mathbb{F}[D]^{k\times n}\), the matrix \(X(D)\) is unimodular. Since two equivalent generator matrices differ by left multiplication by a unimodular matrix, if a convolutional code admits a left prime generator matrix then all its generator matrices are left prime, and the code is said to be **non-catastrophic**. A convolutional code that does not admit a left prime generator matrix is said to be **catastrophic**. The **free distance** of a convolutional code measures its capability of detecting and correcting errors introduced during information transmission through a noisy channel, and it is defined as \[d_{free}(\mathcal{C})=\min\{wt(v(D))\,|\,v(D)\in\mathcal{C},v(D)\neq 0\},\] where \(wt(v(D))\) is the Hamming weight of \(v(D)=\sum_{t=0}^{\deg(v(D))}v_{t}D^{t}\in\mathbb{F}_{q}[D]^{n}\), defined as \(wt(v(D))=\sum_{t=0}^{\deg(v(D))}wt(v_{t})\), where the weight \(wt(v)\) of \(v\in\mathbb{F}_{q}^{n}\) is the number of nonzero components of \(v\). Once channel transmission is complete, \(\mathcal{C}\) can detect up to \(s\) errors in any received word \(w(D)\) if \(d_{free}(\mathcal{C})\geq s+1\), and can correct up to \(t\) errors in \(w(D)\) if \(d_{free}(\mathcal{C})\geq 2t+1\). In [5] Smarandache and Rosenthal obtained an upper bound for the free distance of an \((n,k,\delta)\) convolutional code \(\mathcal{C}\), given by \[d_{free}(\mathcal{C})\leq(n-k)\Big{(}\Big{\lfloor}\frac{\delta}{k}\Big{\rfloor}+1\Big{)}+\delta+1.\] This bound is called the **generalized Singleton bound**. An \((n,k,\delta)\) convolutional code with free distance equal to the generalized Singleton bound is called a **Maximum Distance Separable (MDS)** convolutional code. Note that an MDS \((n,1,\delta)\) convolutional code has free distance equal to \(n(\delta+1)\). 
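To make these definitions concrete, the following minimal sketch (an illustration, not part of the paper) evaluates the generalized Singleton bound and the Hamming weight \(wt(v(D))\) of a polynomial codeword given by its coefficient vectors:

```python
# Illustrative sketch: generalized Singleton bound and Hamming weight.
def singleton_bound(n: int, k: int, delta: int) -> int:
    # d_free(C) <= (n - k) * (floor(delta / k) + 1) + delta + 1
    return (n - k) * (delta // k + 1) + delta + 1

def wt(v) -> int:
    # v = [v_0, v_1, ...], each v_t a length-n tuple over F_q;
    # wt(v(D)) is the total number of nonzero components.
    return sum(c != 0 for v_t in v for c in v_t)

print(singleton_bound(2, 1, 5))      # -> 12, the target free distance of Section 3
print(wt([(8, 8), (5, 6), (1, 1)]))  # -> 6
```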
## 3 A construction of MDS Convolutional Codes In this section we will give a construction of an MDS \((2,1,5)\) convolutional code over \(\mathbb{F}_{11}\). First, we state the following trivial result, which will be used recurrently in the proof of the next theorem. **Lemma 1**: _Let \(G_{0}=\begin{bmatrix}8&8\end{bmatrix}\in\mathbb{F}_{11}^{1\times 2}\), \(G_{1}=\begin{bmatrix}5&6\end{bmatrix}\in\mathbb{F}_{11}^{1\times 2}\), \(G_{2}=\begin{bmatrix}1&1\end{bmatrix}\in\mathbb{F}_{11}^{1\times 2}\) and \(a,b,c\in\mathbb{F}_{11}\). Then_ 1. _if_ \(a\) _or_ \(b\) _is different from zero, then_ \(wt(aG_{0}+bG_{1})\geq 1\) _and_ \(wt(aG_{1}+bG_{2})\geq 1\)_;_ 2. _if_ \(b\) _is different from zero, then_ \(wt(aG_{0}+bG_{1}+cG_{2})\geq 1\)_._ **Proof 1**: _It follows immediately from the fact that \(G_{0}\) is a multiple of \(G_{2}\) and that \(G_{1}\) and \(G_{2}\) are linearly independent._ In the next theorem, we present the first construction to date of an MDS \((2,1,5)\) convolutional code over the field \(\mathbb{F}_{11}\). This is the first construction in the literature of an MDS \((2,1,5)\) convolutional code, with a relatively high degree over a small field; see [3], [4], [6] and [8]. **Theorem 1**: _Let_ \[G(D)=G_{0}+G_{1}D+G_{2}D^{2}+G_{2}D^{3}+G_{1}D^{4}+G_{0}D^{5},\] _with \(G_{0}=\begin{bmatrix}8&8\end{bmatrix}\in\mathbb{F}_{11}^{1\times 2}\), \(G_{1}=\begin{bmatrix}5&6\end{bmatrix}\in\mathbb{F}_{11}^{1\times 2}\) and \(G_{2}=\begin{bmatrix}1&1\end{bmatrix}\in\mathbb{F}_{11}^{1\times 2}\). The \((2,1,5)\) convolutional code \(\mathcal{C}=\text{Im}_{\mathbb{F}_{11}[D]}\ G(D)\) is MDS._ **Proof 2**: _To prove that \(\mathcal{C}\) is MDS we have to show that \(d_{free}(\mathcal{C})=12\). Note that \(v(D)=u_{0}G(D)\) has weight \(12\) for every \(u_{0}\in\mathbb{F}_{11}\setminus\{0\}\). We will show next that \(wt(v(D))\geq 12\) for every \(v(D)=\sum_{i\in\mathbb{N}_{0}}v_{i}D^{i}=u(D)G(D)\) with \(u(D)=\sum_{i\in\mathbb{N}_{0}}u_{i}D^{i}\in\mathbb{F}_{11}[D]\setminus\{0\}\) of degree greater than or equal to \(1\). We assume without loss of generality that \(u_{0}\neq 0\) and consider several cases depending on the degree of \(u(D)\)._ _Case 1: If \(u(D)=u_{0}+u_{1}D\), with \(u_{0},u_{1}\neq 0\), then \(v_{0}=u_{0}G_{0}\), \(v_{1}=u_{0}G_{1}+u_{1}G_{0}\), \(v_{2}=u_{0}G_{2}+u_{1}G_{1}\), \(v_{3}=(u_{0}+u_{1})G_{2}\), \(v_{4}=u_{0}G_{1}+u_{1}G_{2}\), \(v_{5}=u_{0}G_{0}+u_{1}G_{1}\) and \(v_{6}=u_{1}G_{0}\). It is clear that \(wt(v_{i})=2\) for \(i=0,6\) and, by Lemma 1, \(wt(v_{i})\geq 1\) for \(i=1,2,4,5\). It remains to study the weight of \(v_{3}=\begin{bmatrix}u_{0}+u_{1}&u_{0}+u_{1}\end{bmatrix}\)._ _Case 1.1: If \(u_{0}+u_{1}\neq 0\), then \(wt(v_{3})=2\). Moreover, \(v_{1}=\begin{bmatrix}5u_{0}+8u_{1}&6u_{0}+8u_{1}\end{bmatrix}\) and \(v_{2}=\begin{bmatrix}u_{0}+5u_{1}&u_{0}+6u_{1}\end{bmatrix}\)._ _Case 1.1.1: If \(u_{0}\neq 5u_{1}\) and \(u_{0}\neq 6u_{1}\) then \(wt(v_{i})=2\), \(i=1,2\), and therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 1.1.2: If \(u_{0}=5u_{1}\), then \(wt\big{(}v(D)\big{)}=12\) since \(v_{1}=u_{1}\begin{bmatrix}0&5\end{bmatrix}\), \(v_{2}=u_{1}\begin{bmatrix}10&0\end{bmatrix}\), \(v_{3}=u_{1}\begin{bmatrix}6&6\end{bmatrix}\), \(v_{4}=u_{1}\begin{bmatrix}4&9\end{bmatrix}\) and \(v_{5}=u_{1}\begin{bmatrix}1&2\end{bmatrix}\). 
If \(u_{0}=6u_{1}\), we also have \(wt\big{(}v(D)\big{)}=12\) since \(v_{1}=u_{1}\begin{bmatrix}5&0\end{bmatrix}\), \(v_{2}=u_{1}\begin{bmatrix}0&1\end{bmatrix}\), \(v_{3}=u_{1}\begin{bmatrix}7&7\end{bmatrix}\), \(v_{4}=u_{1}\begin{bmatrix}9&4\end{bmatrix}\) and \(v_{5}=u_{1}\begin{bmatrix}9&10\end{bmatrix}\)._ _Case 1.2: If \(u_{0}+u_{1}=0\) (i.e. \(u_{1}=10u_{0}\)) then \(v_{1}=u_{0}\begin{bmatrix}8&9\end{bmatrix}\), \(v_{2}=u_{0}\begin{bmatrix}7&6\end{bmatrix}\), \(v_{3}=u_{0}\begin{bmatrix}0&0\end{bmatrix}\), \(v_{4}=u_{0}\begin{bmatrix}4&5\end{bmatrix}\) and \(v_{5}=u_{0}\begin{bmatrix}3&2\end{bmatrix}\). Therefore \(wt\big{(}v(D)\big{)}=12\)._ _Case 2: If \(u(D)=u_{0}+u_{1}D+u_{2}D^{2}\), with \(u_{0},u_{2}\neq 0\), then: \(v_{0}=u_{0}G_{0}\), \(v_{1}=u_{0}G_{1}+u_{1}G_{0}\), \(v_{2}=u_{0}G_{2}+u_{1}G_{1}+u_{2}G_{0}\), \(v_{3}=(u_{0}+u_{1})G_{2}+u_{2}G_{1}\), \(v_{4}=u_{0}G_{1}+(u_{1}+u_{2})G_{2}\), \(v_{5}=u_{0}G_{0}+u_{1}G_{1}+u_{2}G_{2}\), \(v_{6}=u_{1}G_{0}+u_{2}G_{1}\) and \(v_{7}=u_{2}G_{0}\). Note that \(wt(v_{i})=2\) for \(i=0,7\)._ _Case 2.1: If \(u_{1}=0\) then, since \(G_{0}=8G_{2}\), \(v_{1}=u_{0}G_{1}\), \(v_{2}=(u_{0}+8u_{2})G_{2}\), \(v_{3}=u_{0}G_{2}+u_{2}G_{1}\), \(v_{4}=u_{0}G_{1}+u_{2}G_{2}\), \(v_{5}=(8u_{0}+u_{2})G_{2}\), and \(v_{6}=u_{2}G_{1}\). It is clear that \(wt(v_{i})=2\) for \(i=1,6\) and, by Lemma 1, \(wt(v_{i})\geq 1\) when \(i=3,4\)._ _If \(u_{0}+8u_{2}\neq 0\) (i.e. \(u_{0}\neq 3u_{2}\)) then \(wt(v_{2})=2\) and then \(wt\big{(}v(D)\big{)}\geq 12.\) If \(u_{0}=3u_{2}\) it is easy to see that \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 2.2: If \(u_{1}\neq 0\), by Lemma 1 we have that \(wt(v_{i})\geq 1\) for \(i=1,2,3,4,5,6\). Note that \(v_{1}=\begin{bmatrix}5u_{0}+8u_{1}&6u_{0}+8u_{1}\end{bmatrix}\) and \(v_{6}=\begin{bmatrix}8u_{1}+5u_{2}&8u_{1}+6u_{2}\end{bmatrix}\). Thus if \(5u_{0}+8u_{1}\neq 0\) (i.e., \(u_{1}\neq 9u_{0}\)), \(6u_{0}+8u_{1}\neq 0\) (i.e., \(u_{1}\neq 2u_{0}\)), \(8u_{1}+5u_{2}\neq 0\) (i.e., \(u_{2}\neq 5u_{1}\)) and \(8u_{1}+6u_{2}\neq 0\) (i.e., \(u_{2}\neq 6u_{1}\)), then \(wt(v_{i})=2\) for \(i=1,6\) and therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _If \(u_{1}=9u_{0}\) then \(v_{2}=\begin{bmatrix}2u_{0}+8u_{2}&8u_{2}\end{bmatrix}\) and \(v_{6}=\begin{bmatrix}6u_{0}+5u_{2}&6u_{0}+6u_{2}\end{bmatrix}\). Therefore, if \(2u_{0}+8u_{2}\neq 0\) (i.e., \(u_{0}\neq 7u_{2}\)), \(6u_{0}+5u_{2}\neq 0\) (i.e., \(u_{0}\neq u_{2}\)) and \(6u_{0}+6u_{2}\neq 0\) (i.e., \(u_{0}\neq 10u_{2}\)) then \(wt(v_{2})=wt(v_{6})=2\) and consequently \(wt\big{(}v(D)\big{)}\geq 12\)._ _If \(u_{0}=7u_{2}\) then \(v_{3}=u_{2}\begin{bmatrix}9&10\end{bmatrix}\), \(v_{5}=u_{2}\begin{bmatrix}9&6\end{bmatrix}\) and \(v_{6}=u_{2}\begin{bmatrix}3&4\end{bmatrix}\). Therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _If \(u_{0}=u_{2}\) then \(v_{2}=u_{2}\big{[}10\quad 8\big{]}\), \(v_{3}=u_{2}\big{[}4\quad 5\big{]}\), \(v_{4}=u_{2}\big{[}4\quad 5\big{]}\) and \(v_{5}=u_{2}\big{[}10\quad 8\big{]}\). Therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _Finally, if \(u_{0}=10u_{2}\) then \(v_{2}=u_{2}\big{[}6\quad 8\big{]}\), \(v_{3}=u_{2}\big{[}6\quad 7\big{]}\), \(v_{4}=u_{2}\big{[}9\quad 8\big{]}\) and \(v_{5}=u_{2}\big{[}3\quad 5\big{]}\). Therefore \(wt\big{(}v(D)\big{)}\geq 12\). 
In the same way, \(wt\big{(}v(D)\big{)}\geq 12\) for \(u_{1}=2u_{0}\), \(u_{2}=5u_{1}\) and \(u_{2}=6u_{1}\)._ _Case 3: If \(u(D)=u_{0}+u_{1}D+u_{2}D^{2}+u_{3}D^{3}\), with \(u_{0},u_{3}\neq 0\), then \(v_{0}=u_{0}G_{0}\), \(v_{1}=u_{0}G_{1}+u_{1}G_{0}\), \(v_{2}=u_{0}G_{2}+u_{1}G_{1}+u_{2}G_{0}\), \(v_{3}=(u_{0}+u_{1})G_{2}+u_{2}G_{1}+u_{3}G_{0}\), \(v_{4}=(u_{0}+u_{3})G_{1}+(u_{1}+u_{2})G_{2}\), \(v_{5}=u_{0}G_{0}+u_{1}G_{1}+(u_{2}+u_{3})G_{2}\), \(v_{6}=u_{1}G_{0}+u_{2}G_{1}+u_{3}G_{2}\), \(v_{7}=u_{2}G_{0}+u_{3}G_{1}\) and \(v_{8}=u_{3}G_{0}\). Clearly \(wt(v_{i})=2\) for \(i=0,8\)._ _Case 3.1: If \(u_{1}=0\) and \(u_{2}=0\) then: \(v_{1}=u_{0}G_{1}\), \(v_{2}=u_{0}G_{2}\), \(v_{3}=u_{0}G_{2}+u_{3}G_{0}\), \(v_{4}=(u_{0}+u_{3})G_{1}\), \(v_{5}=u_{0}G_{0}+u_{3}G_{2}\), \(v_{6}=u_{3}G_{2}\), and \(v_{7}=u_{3}G_{1}\). Since \(wt(v_{i})=2\), for \(i=1,2,6,7\), it follows that \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 3.2: If \(u_{1}=0\) and \(u_{2}\neq 0\) then \(v_{1}=u_{0}G_{1}\), \(v_{2}=u_{0}G_{2}+u_{2}G_{0}\), \(v_{3}=u_{0}G_{2}+u_{2}G_{1}+u_{3}G_{0}\), \(v_{4}=(u_{0}+u_{3})G_{1}+u_{2}G_{2}\), \(v_{5}=u_{0}G_{0}+(u_{2}+u_{3})G_{2}\), \(v_{6}=u_{2}G_{1}+u_{3}G_{2}\) and \(v_{7}=u_{2}G_{0}+u_{3}G_{1}\). Clearly \(wt(v_{1})=2\) and \(wt(v_{i})\geq 1\), for \(i=3,4,6,7\). Since \(G_{0}=8G_{2}\) we have that \(v_{2}=(u_{0}+8u_{2})G_{2}\). Thus, if \(u_{0}+8u_{2}\neq 0\), \(wt(v_{2})=2\) and therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _If \(u_{0}+8u_{2}=0\), i.e., \(u_{0}=3u_{2}\), we have that \(v_{3}=(3u_{2}+8u_{3})G_{2}+u_{2}G_{1}\), \(v_{4}=(3u_{2}+u_{3})G_{1}+u_{2}G_{2}\) and \(v_{5}=(3u_{2}+u_{3})G_{2}\). If \(3u_{2}+u_{3}\neq 0\) (i.e., \(u_{3}\neq 8u_{2}\)) then \(wt(v_{5})=2\) and it follows that \(wt\big{(}v(D)\big{)}\geq 12\). If \(u_{3}=8u_{2}\) then \(v_{3}=u_{2}\big{[}6\quad 7\big{]}\), \(v_{6}=u_{2}\big{[}2\quad 3\big{]}\) and \(v_{7}=u_{2}\big{[}4\quad 1\big{]}\). Therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 3.3: Using the same reasoning as in Case 3.2, it follows that \(wt\big{(}v(D)\big{)}\geq 12\) if \(u_{1}\neq 0\) and \(u_{2}=0\)._ _Case 3.4: If \(u_{1}\neq 0\) and \(u_{2}\neq 0\) then, by Lemma 1, \(wt(v_{i})\geq 1\), for \(i=1,2,3,5,6,7\)._ _Case 3.4.1: Since \(v_{1}=\big{[}5u_{0}+8u_{1}\quad 6u_{0}+8u_{1}\big{]}\) and \(v_{4}=(u_{0}+u_{3})G_{1}+(u_{1}+u_{2})G_{2}\), if \(5u_{0}+8u_{1}\neq 0\), \(6u_{0}+8u_{1}\neq 0\) and \(u_{0}+u_{3}\neq 0\) then \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 3.4.2: If \(5u_{0}+8u_{1}=0\), i.e., \(u_{0}=5u_{1}\), then \(v_{2}=\big{[}10u_{1}+8u_{2}\quad 8u_{2}\big{]}\) and \(v_{4}=(5u_{1}+u_{3})G_{1}+(u_{1}+u_{2})G_{2}\)._ _Case 3.4.2.1: Therefore if \(10u_{1}+8u_{2}\neq 0\) and \(5u_{1}+u_{3}\neq 0\) then \(wt(v_{2})=2\) and \(wt(v_{4})\geq 1\) and consequently \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 3.4.2.2: If \(10u_{1}+8u_{2}=0\) (i.e., \(u_{1}=8u_{2}\)) then \(v_{4}=\big{[}5u_{3}\quad 7u_{2}+6u_{3}\big{]}\) has weight \(2\) if \(7u_{2}+6u_{3}\neq 0\) and therefore \(wt\big{(}v(D)\big{)}\geq 12\). If \(7u_{2}+6u_{3}=0\) (i.e., \(u_{2}=7u_{3}\)) then \(v_{5}=u_{3}\big{[}9\quad 10\big{]}\) and therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 3.4.2.3: If \(5u_{1}+u_{3}=0\) (i.e., \(u_{3}=6u_{1}\)) then \(v_{4}=(u_{1}+u_{2})G_{2}\) and therefore, if \(u_{1}+u_{2}\neq 0\), \(wt(v_{4})=2\) and then \(wt\big{(}v(D)\big{)}\geq 12\). If \(u_{1}+u_{2}=0\), then \(v_{2}=u_{2}\big{[}9\quad 8\big{]}\) and \(v_{6}=u_{2}\big{[}2\quad 3\big{]}\). 
Therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 3.4.3: In the same way, we prove that \(wt\big{(}v(D)\big{)}\geq 12\) if \(6u_{0}+8u_{1}=0\) or if \(u_{0}+u_{3}=0\)._ _Case 4: If \(u(D)=u_{0}+u_{1}D+u_{2}D^{2}+u_{3}D^{3}+u_{4}D^{4}\), with \(u_{0},u_{4}\neq 0\), then \(v_{0}=u_{0}G_{0}\), \(v_{1}=u_{0}G_{1}+u_{1}G_{0}\), \(v_{2}=u_{0}G_{2}+u_{1}G_{1}+u_{2}G_{0}\), \(v_{3}=(u_{0}+u_{1})G_{2}+u_{2}G_{1}+u_{3}G_{0}\), \(v_{4}=(u_{0}+u_{3})G_{1}+(u_{1}+u_{2})G_{2}+u_{4}G_{0}\), \(v_{5}=u_{0}G_{0}+(u_{1}+u_{4})G_{1}+(u_{2}+u_{3})G_{2}\), \(v_{6}=u_{0}G_{0}+u_{2}G_{1}+(u_{3}+u_{4})G_{2}\), \(v_{7}=u_{2}G_{0}+u_{3}G_{1}+u_{4}G_{2}\), \(v_{8}=u_{3}G_{0}+u_{4}G_{1}\) and \(v_{9}=u_{4}G_{0}\). Clearly \(wt(v_{i})=2\) for \(i=0,9\)._ _Case 4.1: If \(u_{1}=0\), \(u_{2}=0\) and \(u_{3}=0\) then \(v_{1}=u_{0}G_{1}\), \(v_{2}=u_{0}G_{2}\), \(v_{3}=u_{0}G_{2}\), \(v_{7}=u_{4}G_{2}\) and \(v_{8}=u_{4}G_{1}\). Since \(wt(v_{i})=2\), for \(i=1,2,3,7,8\), then \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.2: If \(u_{1}=0\), \(u_{2}=0\) and \(u_{3}\neq 0\) then \(v_{1}=u_{0}G_{1}\), \(v_{2}=u_{0}G_{2}\), \(v_{3}=u_{0}G_{2}+u_{3}G_{0}\), \(v_{4}=(u_{0}+u_{3})G_{1}+u_{4}G_{0}\), \(v_{5}=u_{0}G_{0}+u_{4}G_{1}+u_{3}G_{2}\), \(v_{6}=u_{0}G_{0}+(u_{3}+u_{4})G_{2}\), \(v_{7}=u_{3}G_{1}+u_{4}G_{2}\) and \(v_{8}=u_{3}G_{0}+u_{4}G_{1}\). We have that \(wt(v_{i})=2\), for \(i=1,2\), and by Lemma 1, \(wt(v_{i})\geq 1\) when \(i=5,7,8\). Since \(G_{0}=8G_{2}\), \(v_{3}=(8u_{3}+u_{0})G_{2}\) has weight \(2\) if \(u_{0}\neq 3u_{3}\), and consequently \(wt\big{(}v(D)\big{)}\geq 12\). If \(u_{0}=3u_{3}\) then \(v_{4}=4u_{3}G_{1}+u_{4}G_{0}\), which, by Lemma 1, has weight greater than or equal to \(1\), and therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.3: Using a similar reasoning as in Case 4.2, if \(u_{1}=0\), \(u_{2}\neq 0\) and \(u_{3}=0\), we have \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.4: In the same way as Case 4.2, if \(u_{1}\neq 0\), \(u_{2}=0\) and \(u_{3}=0\), we have \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.5: If \(u_{1}=0\), \(u_{2}\neq 0\) and \(u_{3}\neq 0\) then \(v_{1}=u_{0}G_{1}\), \(v_{2}=u_{0}G_{2}+u_{2}G_{0}\), \(v_{3}=u_{0}G_{2}+u_{2}G_{1}+u_{3}G_{0}\), \(v_{4}=(u_{0}+u_{3})G_{1}+u_{2}G_{2}+u_{4}G_{0}\), \(v_{5}=u_{0}G_{0}+u_{4}G_{1}+(u_{2}+u_{3})G_{2}\), \(v_{6}=u_{0}G_{0}+u_{2}G_{1}+(u_{3}+u_{4})G_{2}\), \(v_{7}=u_{2}G_{0}+u_{3}G_{1}+u_{4}G_{2}\) and \(v_{8}=u_{3}G_{0}+u_{4}G_{1}\). Note that \(wt(v_{1})=2\) and, by Lemma 1, \(wt(v_{i})\geq 1\) for \(i=3,5,6,7,8\). Since \(v_{2}=(u_{0}+8u_{2})G_{2}\), it follows that \(wt\big{(}v(D)\big{)}\geq 12\) if \(u_{0}+8u_{2}\neq 0\). If \(u_{0}+8u_{2}=0\) (i.e. \(u_{0}=3u_{2}\)) then \(v_{4}=(3u_{2}+u_{3})G_{1}+u_{2}G_{2}+u_{4}G_{0}\). By Lemma 1, if \(3u_{2}+u_{3}\neq 0\) (i.e. \(u_{3}\neq 8u_{2}\)) then \(wt(v_{4})\geq 1\) and therefore \(wt\big{(}v(D)\big{)}\geq 12\). If \(u_{3}=8u_{2}\) then \(v_{3}=u_{2}(3G_{2}+G_{1})+8u_{2}G_{0}=u_{2}\big{[}6\quad 7\big{]}\) and we get \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.6: Analogously to Case 4.5, for \(u_{1}\neq 0\), \(u_{2}\neq 0\) and \(u_{3}=0\), we have \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7: If \(u_{1}\neq 0\), \(u_{2}=0\) and \(u_{3}\neq 0\) then \(v_{1}=u_{0}G_{1}+u_{1}G_{0}\), \(v_{2}=u_{0}G_{2}+u_{1}G_{1}\), \(v_{3}=(u_{0}+u_{1})G_{2}+u_{3}G_{0}\), \(v_{4}=(u_{0}+u_{3})G_{1}+u_{1}G_{2}+u_{4}G_{0}\), \(v_{5}=u_{0}G_{0}+(u_{1}+u_{4})G_{1}+u_{3}G_{2}\), \(v_{6}=u_{0}G_{0}+(u_{3}+u_{4})G_{2}\), \(v_{7}=u_{3}G_{1}+u_{4}G_{2}\) and \(v_{8}=u_{3}G_{0}+u_{4}G_{1}\). 
By Lemma 1, \(wt(v_{i})\geq 1\) for \(i=1,2,7,8\)._ _Case 4.7.1: If \(u_{0}\neq 5u_{1}\) and \(u_{0}\neq 6u_{1}\), and since \(v_{1}=\big{[}5u_{0}+8u_{1}\quad 6u_{0}+8u_{1}\big{]}\) and \(v_{2}=\big{[}u_{0}+5u_{1}\quad u_{0}+6u_{1}\big{]}\), then \(wt(v_{i})=2\), for \(i=1,2\). Additionally, since \(v_{7}=\big{[}5u_{3}+u_{4}\quad 6u_{3}+u_{4}\big{]}\) and \(v_{8}=\big{[}8u_{3}+5u_{4}\quad 8u_{3}+6u_{4}\big{]}\), if \(u_{4}\neq 6u_{3}\) and \(u_{4}\neq 5u_{3}\), \(wt(v_{i})=2\), for \(i=7,8\), and therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7.2: If \(u_{0}=5u_{1}\), then \(v_{3}=(6u_{1}+8u_{3})G_{2}\), \(v_{4}=(5u_{1}+u_{3})G_{1}+(u_{1}+8u_{4})G_{2}\), \(v_{5}=(7u_{1}+u_{3})G_{2}+(u_{1}+u_{4})G_{1}\), \(v_{6}=(7u_{1}+u_{3}+u_{4})G_{2}\), \(v_{7}=u_{3}G_{1}+u_{4}G_{2}\) and \(v_{8}=u_{3}G_{0}+u_{4}G_{1}\)._ _Case 4.7.2.1: If \(5u_{1}+u_{3}\neq 0\) (i.e. \(u_{3}\neq 6u_{1}\)), \(6u_{1}+8u_{3}\neq 0\) (i.e. \(u_{3}\neq 2u_{1}\)) and \(u_{1}+u_{4}\neq 0\) (i.e. \(u_{4}\neq 10u_{1}\)), then \(wt(v_{i})=2\) for \(i=6,7\), \(wt(v_{5})\geq 1\), and consequently \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7.2.3: Similar to Case 4.7.2.2. If \(u_{3}=2u_{1}\), then \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7.2.4: Similar to Case 4.7.2.2. If \(u_{4}=10u_{1}\) then \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7.3: Similar to Case 4.7.2. If \(u_{0}=6u_{1}\) then \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7.4: If \(u_{4}=6u_{3}\) then \(v_{1}=u_{0}G_{1}+u_{1}G_{0}\), \(v_{2}=u_{0}G_{2}+u_{1}G_{1}\), \(v_{3}=(u_{0}+u_{1}+8u_{3})G_{2}\), \(v_{4}=(u_{0}+u_{3})G_{1}+(u_{1}+4u_{3})G_{2}\), \(v_{5}=(u_{1}+6u_{3})G_{1}+(8u_{0}+u_{3})G_{2}\), \(v_{6}=(8u_{0}+7u_{3})G_{2}\), \(v_{7}=u_{3}\big{[}0\quad 1\big{]}\) and \(v_{8}=u_{3}\big{[}5\quad 0\big{]}\)._ _Case 4.7.4.1: If \(u_{0}+u_{3}\neq 0\), \(8u_{0}+7u_{3}\neq 0\) and \(u_{1}+6u_{3}\neq 0\) then \(wt(v_{i})\geq 1\), for \(i=4,5\), and \(wt(v_{6})=2\), and therefore \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7.4.2: If \(u_{0}+u_{3}=0\), i.e., \(u_{0}=10u_{3}\), then \(v_{4}=(u_{1}+4u_{3})G_{2}\) and \(v_{6}=u_{3}\big{[}10\quad 10\big{]}\). So \(wt(v_{6})=2\), and \(wt(v_{4})=2\) if \(u_{1}+4u_{3}\neq 0\), and therefore \(wt\big{(}v(D)\big{)}\geq 12\). If \(u_{1}+4u_{3}=0\), i.e. \(u_{1}=7u_{3}\), then \(v_{5}=u_{3}\big{[}3\quad 5\big{]}\) and \(v_{3}=u_{3}\big{[}3\quad 3\big{]}\). So \(wt(v_{i})=2\), for \(i=3,5\), and consequently \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7.4.3: Similarly to Case 4.7.4.2, it is possible to conclude that if \(8u_{0}+7u_{3}=0\), i.e., \(u_{0}=6u_{3}\), then \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7.4.4: If \(u_{1}+6u_{3}=0\), i.e., \(u_{1}=5u_{3}\), then \(v_{4}=(u_{0}+u_{3})G_{1}+9u_{3}G_{2}\), \(v_{5}=(8u_{0}+u_{3})G_{2}\) and \(v_{3}=(u_{0}+2u_{3})G_{2}\). Then \(wt(v_{i})=2\), for \(i=3,5\), if \(8u_{0}+u_{3}\neq 0\) and \(u_{0}+2u_{3}\neq 0\). If \(8u_{0}+u_{3}=0\) (i.e. \(u_{3}=3u_{0}\)) then \(v_{4}=u_{0}\big{[}5\quad 9\big{]}\) and \(v_{3}=u_{0}\big{[}7\quad 7\big{]}\). So \(wt(v_{i})=2\), for \(i=3,4\), and therefore \(wt\big{(}v(D)\big{)}\geq 12\). If \(u_{0}+2u_{3}=0\) (i.e. \(u_{3}=5u_{0}\)) then \(v_{4}=u_{0}\big{[}9\quad 4\big{]}\) and \(v_{5}=u_{0}\big{[}2\quad 2\big{]}\). So \(wt(v_{i})=2\), for \(i=4,5\), and consequently \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.7.5: Similar to Case 4.7.4. 
If \(u_{4}=5u_{3}\) then \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.8: If \(u_{1}\neq 0\), \(u_{2}\neq 0\) and \(u_{3}\neq 0\) then, since \(G_{0}=8G_{2}\), \(v_{1}=u_{0}G_{1}+u_{1}G_{0}\), \(v_{2}=(u_{0}+8u_{2})G_{2}+u_{1}G_{1}\), \(v_{3}=(u_{0}+u_{1}+8u_{3})G_{2}+u_{2}G_{1}\), \(v_{4}=(u_{0}+u_{3})G_{1}+(u_{1}+u_{2}+8u_{4})G_{2}\), \(v_{5}=(8u_{0}+u_{2}+u_{3})G_{2}+(u_{1}+u_{4})G_{1}\), \(v_{6}=u_{2}G_{1}+(u_{3}+u_{4}+8u_{0})G_{2}\), \(v_{7}=u_{3}G_{1}+(8u_{2}+u_{4})G_{2}\) and \(v_{8}=u_{3}G_{0}+u_{4}G_{1}\). By Lemma 1, \(wt(v_{i})\geq 1\) for \(i=1,2,3,6,7,8\). Note that \(v_{1}=\big{[}5u_{0}+8u_{1}\quad 6u_{0}+8u_{1}\big{]}\) and \(v_{8}=\big{[}8u_{3}+5u_{4}\quad 8u_{3}+6u_{4}\big{]}\)._ _Case 4.8.1: If \(5u_{0}+8u_{1}\neq 0\), \(6u_{0}+8u_{1}\neq 0\), \(8u_{3}+5u_{4}\neq 0\) and \(8u_{3}+6u_{4}\neq 0\) then \(wt(v_{i})=2\) for \(i=1,8\) and consequently \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.8.2: If \(5u_{0}+8u_{1}=0\) (i.e. \(u_{0}=5u_{1}\)) then \(v_{1}=u_{1}\big{[}0\quad 5\big{]}\), \(v_{2}=\big{[}10u_{1}+8u_{2}\quad 8u_{2}\big{]}\) and \(v_{8}=\big{[}8u_{3}+5u_{4}\quad 8u_{3}+6u_{4}\big{]}\)._ _Case 4.8.2.1: If \(10u_{1}+8u_{2}\neq 0\), \(8u_{3}+5u_{4}\neq 0\) and \(8u_{3}+6u_{4}\neq 0\) then \(wt(v_{i})=2\) for \(i=2,8\) and thus \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.8.2.2: If \(10u_{1}+8u_{2}=0\) (i.e. \(u_{2}=7u_{1}\)) then \(v_{5}=(3u_{1}+u_{3})G_{2}+(u_{1}+u_{4})G_{1}\) and \(v_{4}=(5u_{1}+u_{3})G_{1}+(8u_{1}+8u_{4})G_{2}\)._ _Case 4.8.2.2.1: If \(u_{1}+u_{4}\neq 0\) (i.e. \(u_{4}\neq 10u_{1}\)) and \(5u_{1}+u_{3}\neq 0\) (i.e. \(u_{3}\neq 6u_{1}\)) then \(wt(v_{i})\geq 1\) for \(i=4,5\) and consequently \(wt\big{(}v(D)\big{)}\geq 12\)._ _Case 4.8.2.2.2: If \(u_{4}=10u_{1}\) then \(v_{4}=(5u_{1}+u_{3})G_{1}\) and \(v_{5}=(3u_{1}+u_{3})G_{2}\)._ _Case 4.8.2.2.2.1: If \(5u_{1}+u_{3}\neq 0\) (i.e. \(u_{3}\neq 6u_{1}\)) then \(wt(v_{4})=2\) and consequently \(wt\big{(}v(D)\big{)}\geq 12\)._ _If \(u_{3}=6u_{1}\) then \(v_{5}=u_{1}\big{[}9\quad 9\big{]}\). Therefore \(wt(v_{5})=2\) and \(wt\big{(}v(D)\big{)}\geq 12\)._ **Case 4.8.2.2.3:** Similar to Case 4.8.2.2.2. If \(5u_{1}+u_{3}=0\) then \(wt\big{(}v(D)\big{)}\geq 12\). **Case 4.8.2.3:** Similar to Case 4.8.2.2. If \(8u_{3}+5u_{4}=0\) then \(wt\big{(}v(D)\big{)}\geq 12\). **Case 4.8.2.4:** Similar to Case 4.8.2.2. If \(8u_{3}+6u_{4}=0\) then \(wt\big{(}v(D)\big{)}\geq 12\). **Case 4.8.3:** Similar to Case 4.8.2, if \(6u_{0}+8u_{1}=0\) or \(8u_{3}+5u_{4}=0\) or \(8u_{3}+6u_{4}=0\) then \(wt\big{(}v(D)\big{)}\geq 12\). **Case 5:** Using the same reasoning as before, it can be proved that \(wt\big{(}v(D)\big{)}\geq 12\) if \(u(D)=u_{0}+u_{1}D+u_{2}D^{2}+u_{3}D^{3}+u_{4}D^{4}+u_{5}D^{5}\), with \(u_{0},u_{5}\neq 0\). _Finally, let us consider the last case._ **Case 6:** _If \(u(D)=u_{0}+u_{1}D+\cdots+u_{n}D^{n}\), with \(u_{0},u_{n}\neq 0\) and \(n>5\), then \(v_{0}=u_{0}G_{0}\), \(v_{1}=u_{0}G_{1}+u_{1}G_{0}\), \(v_{2}=u_{0}G_{2}+u_{1}G_{1}+u_{2}G_{0}\), \(v_{3}=u_{0}G_{2}+u_{1}G_{2}+u_{2}G_{1}+u_{3}G_{0}\), \(v_{4}=u_{0}G_{1}+u_{1}G_{2}+u_{2}G_{2}+u_{3}G_{1}+u_{4}G_{0}\), \(v_{5}=u_{0}G_{0}+u_{1}G_{1}+u_{2}G_{2}+u_{3}G_{2}+u_{4}G_{1}+u_{5}G_{0}\), \(\cdots\), \(v_{n+1}=u_{n}G_{1}+u_{n-1}G_{2}+u_{n-2}G_{2}+u_{n-3}G_{1}+u_{n-4}G_{0}\), \(v_{n+2}=u_{n}G_{2}+u_{n-1}G_{2}+u_{n-2}G_{1}+u_{n-3}G_{0}\), \(v_{n+3}=u_{n}G_{2}+u_{n-1}G_{1}+u_{n-2}G_{0}\), \(v_{n+4}=u_{n}G_{1}+u_{n-1}G_{0}\) and \(v_{n+5}=u_{n}G_{0}\). 
As before, \(wt(v_{0})=wt(v_{n+5})=2\)._ _Let us first show that \(v(D)|_{[0,5]}:=v_{0}+v_{1}D+v_{2}D^{2}+v_{3}D^{3}+v_{4}D^{4}+v_{5}D^{5}\) has weight greater than or equal to \(6\) for all \(u_{0},u_{1},u_{2},u_{3},u_{4},u_{5}\in\mathbb{F}_{11}\) with \(u_{0}\neq 0\)._ **Case 6.1:** _Let us consider first the case \(u_{1}=0\). Then \(v_{1}=u_{0}G_{1}\) has weight 2, \(v_{2}=(u_{0}+8u_{2})G_{2}\), \(v_{3}=(u_{0}+8u_{3})G_{2}+u_{2}G_{1}\) and \(v_{4}=(u_{2}+8u_{4})G_{2}+(u_{0}+u_{3})G_{1}\). In this case, if \(u_{2}\neq 0\) and \(u_{0}+u_{3}\neq 0\), then, by Lemma 1, \(wt(v_{3})\geq 1\) and \(wt(v_{4})\geq 1\), and \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\)._ _If \(u_{2}=0\), then \(v_{2}=u_{0}G_{2}\) and therefore \(wt(v_{2})=2\), which implies that \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\)._ _If \(u_{0}+u_{3}=0\), i.e., \(u_{0}=10u_{3}\), then \(v_{2}=(10u_{3}+8u_{2})G_{2}\) and \(v_{3}=7u_{3}G_{2}+u_{2}G_{1}\). In this case, if \(10u_{3}+8u_{2}\neq 0\) then \(wt(v_{2})=2\) and therefore \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\). If \(10u_{3}+8u_{2}=0\), i.e., \(u_{3}=8u_{2}\), then \(v_{3}=u_{2}\big{[}6\quad 7\big{]}\) and therefore \(wt(v_{3})=2\), which implies that \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\)._ **Case 6.2:** _Let us consider now that \(u_{1}\neq 0\). Then \(v_{1}=\big{[}5u_{0}+8u_{1}\quad 6u_{0}+8u_{1}\big{]}\)._ **Case 6.2.1:** _If \(u_{0}\neq 6u_{1}\) and \(u_{0}\neq 5u_{1}\) then \(wt(v_{1})=2\) and \(wt(v_{2})\geq 1\), by Lemma 1._ _If \(u_{2}\neq 0\) then also \(wt(v_{3})\geq 1\) and therefore \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\)._ _If \(u_{2}=0\) then \(v_{3}=(u_{0}+u_{1}+8u_{3})G_{2}\) and \(v_{4}=(u_{1}+8u_{4})G_{2}+(u_{0}+u_{3})G_{1}\). Note that \(v_{4}=\big{[}0\quad 0\big{]}\) only if \(u_{1}=3u_{4}\) and \(u_{0}=10u_{3}\). If \(u_{1}=3u_{4}\) and \(u_{0}=10u_{3}\), it follows that \(v_{3}=(7u_{3}+3u_{4})G_{2}\) and \(v_{2}=10u_{3}G_{2}+3u_{4}G_{1}\)._ _If \(7u_{3}+3u_{4}=0\), i.e. \(u_{3}=9u_{4}\), then \(v_{2}=u_{4}\big{[}6\quad 9\big{]}\), which has weight \(2\)._ _If \(7u_{3}+3u_{4}\neq 0\) then \(wt(v_{3})=2\). Thus \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\)._ **Case 6.2.2:** _Let us consider now that \(u_{0}=6u_{1}\). Then \(v_{1}=u_{1}\big{[}5\quad 0\big{]}\), \(v_{2}=6u_{1}G_{2}+u_{1}G_{1}+u_{2}G_{0}\), \(v_{3}=7u_{1}G_{2}+u_{2}G_{1}+u_{3}G_{0}\) and \(v_{4}=(6u_{1}+u_{3})G_{1}+(u_{1}+u_{2})G_{2}+u_{4}G_{0}\). Then we have that \(wt(v_{1})=1\) and, by Lemma 1, \(wt(v_{2})\geq 1\)._ **Case 6.2.2.1:** If \(u_{2}\neq 0\) and \(6u_{1}+u_{3}\neq 0\) (i.e. \(u_{3}\neq 5u_{1}\)) then \(wt(v_{3})\geq 1\) and \(wt(v_{4})\geq 1\), and therefore \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\). **Case 6.2.2.2:** If \(u_{2}=0\) then \(v_{2}=u_{1}\big{[}0\quad 1\big{]}\), \(v_{3}=(7u_{1}+8u_{3})G_{2}\), \(v_{4}=(6u_{1}+u_{3})G_{1}+u_{1}G_{2}+u_{4}G_{0}\). **Case 6.2.2.2.1:** If \(7u_{1}+8u_{3}\neq 0\) (i.e. \(u_{1}\neq 2u_{3}\)) then \(wt(v_{3})=2\) and \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\). **Case 6.2.2.2.2:** If \(u_{1}=2u_{3}\) then \(v_{4}=\big{[}u_{3}+8u_{4}\quad 3u_{3}+8u_{4}\big{]}\). Thus, if \(u_{3}+8u_{4}\neq 0\) and \(3u_{3}+8u_{4}\neq 0\) (i.e. \(u_{3}\neq 3u_{4}\) and \(u_{3}\neq u_{4}\)) then \(wt(v_{4})=2\) and \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\). _If \(u_{3}=3u_{4}\) then \(wt(v_{4})=1\) and \(v_{5}=(3u_{4}+u_{5})G_{0}+7u_{4}G_{1}+3u_{4}G_{2}\) is such that \(wt(v_{5})\geq 1\) and \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\)._ _If \(u_{3}=u_{4}\) then \(wt(v_{4})=1\) and \(v_{5}=(u_{4}+u_{5})G_{0}+3u_{4}G_{1}+u_{4}G_{2}\) also has weight greater than or equal to \(1\). 
So we conclude that \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\)._ **Case 6.2.2.3:** On the other hand, if \(u_{3}=5u_{1}\) then \(v_{1}=u_{1}\big{[}5\quad 0\big{]}\), \(v_{2}=6u_{1}G_{2}+u_{1}G_{1}+u_{2}G_{0}\), \(v_{3}=\big{[}3u_{1}+5u_{2}\quad 3u_{1}+6u_{2}\big{]}\). Thus \(wt(v_{1})=1\) and \(wt(v_{2})\geq 1\) by Lemma 1. **Case 6.2.2.3.1:** If \(3u_{1}+5u_{2}\neq 0\) and \(3u_{1}+6u_{2}\neq 0\) (i.e. \(u_{1}\neq 2u_{2}\) and \(u_{1}\neq 9u_{2}\)) then \(wt(v_{3})=2\) and therefore \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\)._ **Case 6.2.2.3.2:** If \(u_{1}=2u_{2}\) then \(wt(v_{3})=1\) and \(v_{4}=(7u_{1}+8u_{4})G_{2}\). **Case 6.2.2.3.2.1:** If \(7u_{1}+8u_{4}\neq 0\), then \(wt(v_{4})=2\) and therefore \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\). **Case 6.2.2.3.2.2:** If \(7u_{1}+8u_{4}=0\), i.e., \(u_{1}=2u_{4}\), then \(v_{2}=u_{1}\big{[}4\quad 5\big{]}\), which has weight \(2\), and then \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\). **Case 6.2.2.3.3:** Similar to Case 6.2.2.3.2. If \(u_{1}=9u_{2}\) then \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\). **Case 6.2.3:** In the same way as Case 6.2.2, it is possible to conclude that if \(u_{0}=5u_{1}\) then \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\). Thus, we have proven that \(wt\big{(}v(D)|_{[0,5]}\big{)}\geq 6\). Note that \(v_{n}=\hat{v}_{5}\), \(v_{n+1}=\hat{v}_{4}\), \(v_{n+2}=\hat{v}_{3}\), \(v_{n+3}=\hat{v}_{2}\), \(v_{n+4}=\hat{v}_{1}\) and \(v_{n+5}=\hat{v}_{0}\) for \[\hat{v}(D)=(u_{n}+u_{n-1}D+u_{n-2}D^{2}+u_{n-3}D^{3}+u_{n-4}D^{4}+u_{n-5}D^{5})G(D).\] _Since \(wt(\hat{v}_{0}+\hat{v}_{1}D+\hat{v}_{2}D^{2}+\hat{v}_{3}D^{3}+\hat{v}_{4}D^{4}+\hat{v}_{5}D^{5})\geq 6\), then_ \[wt(v_{n}D^{n}+v_{n+1}D^{n+1}+v_{n+2}D^{n+2}+v_{n+3}D^{n+3}+v_{n+4}D^{n+4}+v_{n+5}D^{n+5})\geq 6\] _for all \(u_{n-5},u_{n-4},u_{n-3},u_{n-2},u_{n-1},u_{n}\in\mathbb{F}_{11}\) with \(u_{n}\neq 0\), and thus \(wt\big{(}v(D)\big{)}\geq 12\)._ Note that the convolutional code defined in Theorem 1 is catastrophic, since \[G(D)=(1+D)[(1+10D+D^{2}+10D^{3}+D^{4})G_{0}+(D+10D^{2}+D^{3})G_{1}+D^{2}G_{2}]\] and \(1+D\) is not a unimodular factor. A noncatastrophic convolutional code can be constructed in a similar way, by slightly changing the last coefficients of \(G(D)\). For example, \[G(D)=G_{0}+G_{1}D+G_{2}D^{2}+G_{2}D^{3}+aG_{1}D^{4}+bG_{0}D^{5}\] with \(G_{0}=\begin{bmatrix}8&8\end{bmatrix}\in\mathbb{F}_{11}^{1\times 2}\), \(G_{1}=\begin{bmatrix}5&6\end{bmatrix}\in\mathbb{F}_{11}^{1\times 2}\) and \(G_{2}=\begin{bmatrix}1&1\end{bmatrix}\in\mathbb{F}_{11}^{1\times 2}\) is a generator matrix of a noncatastrophic \((2,1,5)\) convolutional code for every \((a,b)\in\mathbb{F}_{11}^{2}\setminus\{(1,1)\}\). It is still an open problem to determine whether this code is MDS for some \((a,b)\in\mathbb{F}_{11}^{2}\setminus\{(1,1)\}\). **Remark 1**: _The coefficients of the generator matrix \(G(D)\) defined in Theorem 1, \((G_{0},G_{1},G_{2},G_{2},G_{1},G_{0})\), form a palindrome and are such that the generator matrix defined by the first three coefficients,_ \[\tilde{G}(D)=G_{0}+G_{1}D+G_{2}D^{2}, \tag{1}\] _generates an MDS \((2,1,2)\) convolutional code over \(\mathbb{F}_{11}\), defined by Justesen in [6] (see also [2]). Justesen gave (the first) construction of MDS convolutional codes of rate \(1/n\). In particular, for \(n=2\), the proposed construction is given by the following theorem._ **Theorem 2** ( [6]): _For \(n=2\) and \(|\mathbb{F}_{q}|\geq 3\), set \(s_{2}:=\left\lceil\frac{|\mathbb{F}_{q}|-1}{2}\right\rceil\) and \(\delta:=\left\lfloor\frac{2}{9}|\mathbb{F}_{q}|\right\rfloor\). 
Moreover, let \(\alpha\) be a primitive element of \(\mathbb{F}_{q}\), and set \(g_{1}(x):=(x-\alpha)(x-\alpha^{2})\) and \(g_{2}(x):=g_{1}(x\alpha^{-s_{2}})\). Then \(G(D)=[g_{1}(D)\;g_{2}(D)]\) is the generator matrix of an MDS \((2,1,2)\) convolutional code._ _Considering \(\delta=2\), the field \(\mathbb{F}_{11}\) and \(\alpha=2\) as a primitive element of \(\mathbb{F}_{11}\), Theorem 2 gives the generator matrix \(\tilde{G}(D)\) considered in (1) of an MDS convolutional code, and repeating the coefficients of \(\tilde{G}(D)\) in reverse order we obtain the generator matrix of Theorem 1. However, this reasoning does not apply to all the codes defined by Theorem 2. In particular, Theorem 2 gives an MDS \((2,1,2)\) convolutional code over the fields \(\mathbb{F}_{9}\) and \(\mathbb{F}_{11}\) for any primitive element of the field. However, we checked that when we take_ \[\tilde{G}(D)=G_{0}+G_{1}D+G_{2}D^{2}\] _defined by Theorem 2 considering \(\delta=2\) and \(\mathbb{F}_{9}\), the generator matrix_ \[G(D)=G_{0}+G_{1}D+G_{2}D^{2}+G_{2}D^{3}+G_{1}D^{4}+G_{0}D^{5}\] _defines a \((2,1,5)\) convolutional code which is not MDS, for any chosen primitive element of the field. Moreover, the same happens when we consider the field \(\mathbb{F}_{11}\) and a primitive element of \(\mathbb{F}_{11}\) different from \(2\)._ ## Acknowledgements This work is supported by The Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT), UIDB/04106/2020 and UIDP/04106/2020. The work of the first author was also supported by FCT grant UI/BD/151186/2021.
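The free distance claimed in Theorem 1 can also be cross-checked numerically. The sketch below (an illustration, not part of the paper's argument) multiplies the Theorem 1 generator by all messages of bounded degree with \(u_{0}=1\) (the weight is invariant under scaling and shifting of \(u(D)\)) and confirms that the minimum weight found is \(12\):

```python
from itertools import product

# Brute-force cross-check of Theorem 1 over messages of degree <= DMAX.
Q = 11
G = [(8, 8), (5, 6), (1, 1), (1, 1), (5, 6), (8, 8)]  # coefficients G_0..G_5

def weight(u):
    """Hamming weight of v(D) = u(D) G(D) for u = [u_0, u_1, ...] over F_11."""
    v = [[0, 0] for _ in range(len(u) + len(G) - 1)]
    for i, ui in enumerate(u):
        for j, (g0, g1) in enumerate(G):
            v[i + j][0] = (v[i + j][0] + ui * g0) % Q
            v[i + j][1] = (v[i + j][1] + ui * g1) % Q
    return sum(c != 0 for vt in v for c in vt)

DMAX = 4  # any nonzero u(D) is a shift and scaling of one with u_0 = 1
min_wt = min(weight([1, *tail]) for tail in product(range(Q), repeat=DMAX))
print(min_wt)  # -> 12, matching the generalized Singleton bound 2 * (5 + 1)
```

Such a search only exhausts low-degree messages; the case analysis of the proof is what covers messages of arbitrary degree.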
2310.11803
Spacecraft Charging of the Morazán MRZ-SAT Satellite in Low Earth Orbit: Initial Results on the Influence of Energetic Electron Anisotropy on Differential Charging
The advent of the modular CubeSat satellite architecture has heralded a revolution in satellite missions, drastically lowering the technical and financial barriers to space. Surface charging resulting from energetic electrons poses a direct risk to satellites in space, causing electric arcing and breakdowns. This risk is exacerbated for small technology demonstration CubeSats that are less resilient than larger satellites. An upcoming CubeSat launch is the first CubeSat project originating from Honduras, the Morazán satellite (MRZ-SAT), due to launch in 2024. This will carry Earth observation payloads to detect natural disasters. This study conducts simulations using the Electro-Magnetic Spacecraft Environment Simulator code to study absolute and differential charging of the MRZ-SAT CubeSat in Low Earth Orbit (LEO). The MRZ-SAT hosts four antennas, an architecture which lends itself well to studying and understanding differential charging in LEO. The MRZ-SAT was first simulated in a typical benign ionospheric plasma environment. Here the antenna located in the ambient plasma wake displayed the maximum charging up to -0.9 V, 0.24 V biased to the main cube. An energetic electron population was then included and the wake antenna subsequently charged to greater values of -2.73 V, now 1.56 V biased to the main cube. The anisotropy of the energetic electrons was then varied, and this differential charging trend appeared exacerbated, with anisotropies of 0.5 to 0.05 inducing absolute wake antenna voltages up to -4.5 V and differential voltage biases 50% and 100% greater than when an isotropic population was considered. This study highlights the importance of electron anisotropy in LEO to surface charging and identifies this property of the energetic electron distribution functions as inducing potentially greater risks to satellites of electrical arcing and breakdown.
Raphael Bertrand-Delgado, Ravindra Desai, Fernando Zorto-Aguilera, Zeqi Zhang, Yohei Miyake
2023-10-18T08:46:04Z
http://arxiv.org/abs/2310.11803v1
# 74th International Astronautical Congress (IAC), Baku, Azerbaijan, 2-6 October 2023. ###### Abstract The advent of the modular CubeSat satellite architecture has heralded a revolution in satellite missions, drastically lowering the technical and financial barriers to space. As a result, over 600 CubeSat missions are due to launch in 2023 with various scientific and technology-focused applications. Surface charging resulting from energetic electrons poses a direct risk to satellites in space, causing electric arcing and breakdowns. This risk is exacerbated for small technology demonstration CubeSats that are less resilient than larger satellites. An upcoming CubeSat launch of significance is the first CubeSat project originating from Honduras, the Morazán satellite (MRZ-SAT), due to launch in 2024. This will carry Earth observation payloads to detect natural disasters, such as floods and landslides, which preferentially affect Central America, and aims to build the first disaster forecasting capabilities for remote Central American regions. In this study we conduct simulations using the Electro-Magnetic Spacecraft Environment Simulator code to study absolute and differential charging of the MRZ-SAT CubeSat in Low Earth Orbit (LEO). The MRZ-SAT hosts four antennas extending from four sides of the spacecraft, an architecture which lends itself well to studying and understanding differential charging in LEO. The MRZ-SAT was first simulated in a typical benign ionospheric plasma environment. Here the antenna located in the ambient plasma wake displayed the maximum charging up to \(-0.9\) V, \(0.24\) V biased to the main cube. An energetic electron population was then included and the wake antenna subsequently charged to greater values of \(-2.73\) V, now \(1.56\) V biased to the main cube. The anisotropy of the energetic electrons was then varied, and this differential charging trend appeared exacerbated, with anisotropies of \(0.5\) to \(0.05\) inducing absolute wake antenna voltages up to \(-4.5\) V and differential voltage biases \(50\) % and \(100\) % greater than when an isotropic population was considered. This study highlights the importance of electron anisotropy in LEO to surface charging and identifies this property of the energetic electron distribution functions as inducing potentially greater risks to satellites of electrical arcing and breakdown. 

Acronyms/Abbreviations: MRZ-SAT, Morazán satellite; UNAH, Universidad Nacional Autónoma de Honduras; LEO, Low Earth Orbit; MEO, Middle Earth Orbit; GEO, Geosynchronous Earth Orbit; DMSP, Defense Meteorological Satellite Program; ERS, European Remote-Sensing Satellite; EMSES, Electro-Magnetic Spacecraft Environment Simulator; PIC, Particle-In-Cell; HPC, High Performance Computer. 

## 1 Introduction In 2024, Honduras will launch the nation's first satellite, the Morazán Satellite (MRZ-SAT), to low Earth orbit (LEO) [1]. The MRZ-SAT project is led by the Universidad Nacional Autónoma de Honduras (UNAH) in collaboration with the Universidad de Costa Rica (UCR), the Universidad de San Carlos de Guatemala (USAC), and the Kyushu Institute of Technology, Japan. The Morazán Project's mission has three principal aims. The first is to demonstrate a space-based early disaster warning system, informing about potential hydro-meteorological hazards, such as floods and landslides, in remote and threatened areas of Honduras, Guatemala and Costa Rica. 
The second is to provide means of communication with the affected populations living in areas that may have lost their telecommunication infrastructure during meteorological incidents [2]. Thirdly, the MRZ-SAT has an important educational purpose. Through its design and build, and the subsequent data returned from on-board cameras, the mission aims to provide scientific learning tools for elementary, high-school, and university students to inspire the next generation of Honduran engineers and scientists [3]. Satellite surface charging is caused by energetic electrons with energies of up to several keV and presents a direct risk to satellites in space, causing electric arcing and breakdown [4]. This risk is exacerbated for small technology demonstration CubeSats that are less resilient than larger satellites and often use commercial off-the-shelf components. Compared to geostationary earth orbit (GEO), where satellites encounter the high-temperature plasma sheet on the night-side [5; 6], the plasma environment in LEO is colder, and spacecraft charging is not typically considered as great a risk as at GEO [7; 8]. Nonetheless, significant charging has been observed in polar regions; for example, a potential of -2000 V was regularly measured on the satellite DMSP F12 [9]. Another example is the complete loss of the ERS-1 satellite in March 2000 and the ASCA satellite in July of the same year, the two satellites having altitudes of 772 km and 570 km, respectively [10; 11]. More recently, the Jason-3 satellite has reported charging events of up to -2000 V for a variety of geomagnetic conditions, and notably down to 60\({}^{\circ}\) latitudes [12]. Energetic electrons at LEO are produced when enhanced solar wind conditions drive magnetic reconnection and substorms in the magnetotail. Energetic electrons subsequently precipitate into the polar atmosphere and produce the polar aurora and the associated Region 1 current system [13; 14]. Following the impact of solar storms [15], the magnetosphere can become highly compressed, bringing the night-side reconnection X-line closer to Earth [16] and expanding the auroral oval to lower latitudes [17; 18]. A consequent build-up of the partial night-side and global ring current results in the formation of further Region 2 current systems at even lower latitudes [19]. Energetic electrons in LEO derive from kinetic plasma instabilities, which scatter particles into the loss cone, and from the auroral acceleration region (AAR) [20]. A spacecraft immersed within a plasma will gain a net charge despite the condition of quasi-neutrality. The electrons are much lighter than the ions and move at higher velocities, and therefore preferentially collect on the spacecraft surfaces, causing a net negative charge within a distance of the order of the Debye length [21]. Electron emission processes, due to sunlight or particle impact, can conversely cause the surface to emit electrons and the net charge to move to positive potentials. Spacecraft charging risks are exacerbated when different parts of the spacecraft charge faster than the charge can equalise, thus causing differential charging [22]. A difference of potential can lead to a discharge by electric arcs, a hazard that can partially or completely damage the satellite. Simulating a satellite embedded inside plasmas can help to explain and mitigate adverse charging behaviour. This article describes a self-consistent three-dimensional study of the MRZ-SAT in LEO. 
The simulation method is first discussed in Section 2, along with the satellite architecture, environmental conditions and the inclusion of energetic electrons. Section 3 first analyses the electric charging of the MRZ-SAT in conditions representative of the ionospheric plasma at 400 km of altitude [23]. Energetic electrons are then implemented in the simulations in Section 4, in order to study their impact on the net electric charge of different parts of the spacecraft. Subsection 4.2 studies the effects of electron anisotropy on net and differential charging. ## 2 Method ### 2.1 EMSES simulation technique This study utilises the three-dimensional Electro-Magnetic Spacecraft Environment Simulator (EMSES) code, which simulates spacecraft-plasma interactions using the Particle-In-Cell (PIC) method [24]. The PIC approximation replaces the vast number of individual electrons and ions with charged macro-particles that sample the continuous phase space. This code has successfully been applied to study spacecraft charging at Earth [24; 25] and Saturn [26; 27; 28]. The plasma flow is injected as a drifting Maxwellian and exits through an outflow boundary condition. The remaining boundaries are periodic, providing a continuous space orthogonal to the plasma flow. Each plasma species has mass and charge normalised to the proton scale with the real ion-to-electron mass ratio. The spacecraft body can be composed of multiple structures, either perfectly conducting or electrically insulated from each other. The charge accumulated from impinging particles is redistributed over the whole surface of each conductor in order to maintain an equipotential distribution, via the capacity matrix method. The charge density is used to solve Poisson's equation for the electrostatic potential. The simulated domain is a 3-D grid, with a spacing chosen to resolve the Debye length scale. ### 2.2 Morazán satellite Figure 1 shows the 1U MRZ-SAT as a main cube of dimensions 10 \(\times\) 10 \(\times\) 10 cm\({}^{3}\) [29], and Figure 2 shows this represented within the EMSES simulation domain, which has dimensions of 128 \(\times\) 128 \(\times\) 128 grid cells with a grid width of 1 cm. Four rectangular antennas of dimensions \(0.5\ \times 0.5\ \times 20\ \mathrm{cm^{3}}\) extend outward from the edges of the cube. The four antennas are all located on the (\(\mathbf{X_{B}}\), \(\mathbf{Y_{B}}\)) plane set on the upper (z) part of the cube, two along \(\mathbf{X_{B}}\), one of which is on the lower half of the plane (between \(x=0\) and \(x=64\)), and two along \(\mathbf{Y_{B}}\). A \(64\times 64\times 64\ \mathrm{cm^{3}}\) subset of the simulation domain is used for visualisation, and the antennas are referred to as x- or y-negative and x- or y-positive in accordance with the direction they extend from the main spacecraft in this coordinate system. In this study, the four antennas and the main body are considered to be electrically insulated from one another in order to examine differential charging phenomena. While in reality charge can flow between the different structures, this assumption reproduces the underlying behaviour of potentially hazardous differential charging due to non-conducting elements involved in cubesat designs, and particularly during rapid onset events where charge accumulates faster than it can equilibrate throughout the spacecraft and leak away into the ambient plasma. 
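As a toy illustration of the capacity matrix idea above (a simplified sketch, not the EMSES implementation), the potential of a single equipotential conductor can be recovered from its collected charge and a precomputed capacitance matrix between its surface nodes:

```python
import numpy as np

# Toy capacity-matrix step: q = C @ phi, with phi = V on every node of one
# conductor; the equipotential V is fixed by the total collected charge.
C = np.array([[2.0, -0.5],
              [-0.5, 1.5]])      # assumed 2-node capacitance matrix [arb. units]
Q_total = 1.0                    # net charge collected from impinging particles

ones = np.ones(len(C))
V = Q_total / (ones @ C @ ones)  # enforce sum(C @ (V * ones)) == Q_total
q = C @ (V * ones)               # charge redistributed over the surface nodes
print(V, q, q.sum())             # q.sum() recovers Q_total
```

In a full simulation the capacitance matrix couples all conductors and grid nodes, but the redistribution step follows this same pattern.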
### 2.3 Ionospheric plasma The MRZ-SAT will be released from the International Space Station (ISS) and will therefore follow a similar orbit, at an altitude of 400 km and an orbital velocity of 7.9 km/s [29]. At this altitude, the positively charged particles of the ionosphere are mainly O\({}^{+}\) ions [23]; therefore, in this study these are the only injected positive ions. Both species, ions and electrons, are considered to have the same temperature: \(T_{e}=T_{i}\). The ionospheric plasma flows along \(\mathbf{X_{B}}\) from negative to positive, with the parameters shown in Table 1. ### 2.4 Magnetic field and energetic electrons Figure 3 presents the different reference systems used. (\(\mathbf{X_{I}}\), \(\mathbf{Y_{I}}\), \(\mathbf{Z_{I}}\)) characterises the Geocentric Reference System. (\(\mathbf{X_{0}}\), \(\mathbf{Y_{0}}\), \(\mathbf{Z_{0}}\)) is the Orbital Reference System, with origin at the cubesat's centre of mass; \(\mathbf{X_{0}}\) points in the same direction as the cubesat's linear velocity vector (ram direction), tangent to the orbit it describes, and \(\mathbf{Z_{0}}\) always points in the direction of the Earth's centre. In this study, the system used will generally be the Moving Reference System (\(\mathbf{X_{B}}\), \(\mathbf{Y_{B}}\), \(\mathbf{Z_{B}}\)), characterised by being free to translate or rotate with the cubesat. In particular, in the simulation the Moving Reference System will be taken as \(\mathbf{X_{B}}=\mathbf{X_{0}}\) and \(\mathbf{Z_{B}}=-\mathbf{Z_{0}}\). Following the ISS orbit, the cubesat's maximum latitude will be \(51.65^{\circ}\) [29]. Most energetic electron precipitation is detected at latitudes of \(55^{\circ}\) or greater [30; 19]. However, during large solar storms, ionospheric current systems and energetic electrons can reach latitudes lower than \(50^{\circ}\) [17; 18]. Indeed, the ASCA satellite, located at an altitude of 570 km, was completely lost on 15 July 2000 after a geomagnetic storm where the Kp index reached 9 [11]. For the MRZ-SAT at peak latitude, we use a magnetic field under the assumption of a dipole with moment \(7.94\times 10^{22}\) A m\({}^{2}\), with a strength of 43.14 \(\mu\)T at the MRZ-SAT, oriented at \(58.14^{\circ}\) relative to \(\mathbf{Z_{I}}\). In this study the MRZ-SAT is considered on the nightside of Earth with zero solar illumination, thus without photoelectron emission. A range of energetic electrons is considered, from 0.1 to 10 keV, for fluxes of \(10^{10}-10^{12}\) /cm\({}^{2}\)/s. 

Figure 1: CAD model of the MRZ-SAT structure. 

Figure 2: Representation of the MRZ-SAT in the EMSES domain. A subset surrounding the spacecraft is shown in the subsequent simulation results. 

Figure 3: MRZ-SAT reference coordinate system. 
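A quick order-of-magnitude sketch (assumed values from Table 1; not part of the paper) of the scales involved, checking that the grid resolves the Debye length and recovering the characteristic electron speeds used later in Section 4:

```python
import math

EPS0, KB, QE, ME = 8.854e-12, 1.381e-23, 1.602e-19, 9.109e-31

n_e, T_e = 2.5e5 * 1e6, 2000.0                      # m^-3 and K, from Table 1
debye = math.sqrt(EPS0 * KB * T_e / (n_e * QE**2))  # ~0.62 cm vs 0.5-1 cm grid
v_th = math.sqrt(KB * T_e / ME)                     # ~174 km/s >> 7.9 km/s flow

def beam_speed(E_keV: float) -> float:
    """Non-relativistic speed of an electron with kinetic energy E_keV."""
    return math.sqrt(2 * E_keV * 1e3 * QE / ME)

print(f"{debye * 100:.2f} cm  {v_th / 1e3:.0f} km/s  {beam_speed(0.5):.3g} m/s")
# beam_speed(0.5) ~ 1.33e7 m/s, the field-aligned drift used in Section 4.2.1
```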
## 3 Plasma interaction The Morazan Satellite is first simulated in a typical plasma environment without energetic electrons to understand the ambient plasma interaction. The ionospheric plasma parameters are given in Section 2.3. Figure 4 presents two slices on the \((\mathbf{X_{B}},\,\mathbf{Y_{B}},\,Z_{B}=72)\) plane and Table 2 the final potentials reached by each spacecraft structure and their difference relative to the main cube. The negative spacecraft potentials cause the spacecraft and antennas to appear as regions devoid of electrons but with enhanced ion fluxes. The enhanced ions fluxes are due to the antenna physical domain being implemented sub-grid and are therefore not visible as empty regions of plasma. A low density wake is present behind the satellite with the ion wake appearing larger than the electron wake due to their larger inertia and therefore slower refilling of the wake. Table 2 shows the potential of the different spacecraft structures. The potential varies between -0.61 and -0.9 V with the x-positive antenna having a lower potential than the rest of the conducting bodies. This can be explained by the wake visible behind the satellite displaying a lower plasma density surrounding the antenna and the latter therefore experiencing a higher ratio of ionospheric electrons to O\({}^{+}\)ions. The difference with the main cube is also shown, which indicates that the highest difference reaches 0.24 V. ## 4 Energetic electrons ### Energetic electrons #### 4.1.1 Energetic electrons To simulate energetic electron fluxes that the MRZ-SAT may experience, an energy flux of \(1.56\times 10^{9}\) keV/cm\({}^{2}\)/s/sr was simulated, based upon Jason-3 and ASCA observations [17; 30]. Due to electrons moving significantly faster than the spacecraft, the electron energy is considered primarily as a thermal energy of 1 keV. Their drift velocity is of a similar order of magnitude to the spacecraft velocity and they are therefore intended to represent an isotropic flow. Figure 5 and Table 3 show the simulation results where energetic electrons are included. Compared to the pre \begin{table} \begin{tabular}{l c} \hline \hline Electron density, \(n_{e,0}\) [cm\({}^{-3}\)] & \(2.50\times 10^{5}\) \\ Electron temperature, \(T_{e}\) [K] & 2000 \\ Ion temperature, \(T_{i}\) [K] & 2000 \\ Flow speed, \(\mathbf{v_{flow}}\) [m/s] & \(7.90\times 10^{3}\) \(\mathbf{\hat{x}}\) \\ Magnetic field [nT] & 36.99, 0, -22.97 \\ Energetic electron flux [keV/cm\({}^{2}\)/s/sr] & \(1.56\times 10^{9}\) \\ Energetic electron energy [keV] & \(1-1.09\) \\ \hline Grid size [cm] & \(0.5-1\) \\ Time step [s] & \(3.34\times 10^{-10}\) \\ Particles per cell & 41 943 040 \\ Domain size [cm\({}^{3}\)] & \(128\times 128\times 128\) \\ \hline \hline \end{tabular} \end{table} Table 1: Ionospheric plasma and system parameters. \begin{table} \begin{tabular}{l c c} \hline \hline Component & Potential [V] & Bias [V] \\ Cube & -0.66 & 0 \\ Antenna x-negative & -0.73 & 0.17 \\ Antenna x-positive & -0.90 & 0.24 \\ Antenna y-negative & -0.70 & 0.04 \\ Antenna y-positive & -0.61 & -0.05 \\ \hline \hline \end{tabular} \end{table} Table 2: Final energy potential for each component of the cubesat and their final bias relative to the main cube for the satellite for the scenarios without energetic electrons. Figure 4: Subset of the domain showing the simulated ambient plasma interaction. The electron (left) and ion (right) densities are shown in the spacecraft frame with the plasma entering from the left. 
## 4 Energetic electrons ### 4.1 Energetic electrons To simulate energetic electron fluxes that the MRZ-SAT may experience, an energy flux of \(1.56\times 10^{9}\) keV/cm\({}^{2}\)/s/sr was simulated, based upon Jason-3 and ASCA observations [17; 30]. Because the electrons move significantly faster than the spacecraft, the electron energy is considered primarily as a thermal energy of 1 keV. Their drift velocity is of a similar order of magnitude to the spacecraft velocity, and they are therefore intended to represent an isotropic flow. Figure 5 and Table 3 show the simulation results when energetic electrons are included. Compared to the previous charging in Section 3, the energetic electrons induce a greater potential, which results in further deflection of ionospheric electrons from the spacecraft. The high flux of energetic electrons along the magnetic field direction in Figure 5d produces a slight distortion in the energetic electron distribution towards the bottom left-hand corner of the simulation box. The energetic electrons cause the MRZ-SAT to charge to \(-1.19\) V. Figure 5a shows that the x-positive antenna charges the most. Figure 5d shows a slice in the (\(\mathbf{X_{B}}\), \(\mathbf{Z_{B}}\)) plane at \(Y_{B}=68\) and therefore only shows one antenna. Table 3 shows that the x-positive antenna has a final potential of \(-2.73\) V, a potential difference of 1.56 V with the main cube, while the three others have differences of \(-0.46\) to \(-0.67\) V. The energetic electrons therefore play an important role in spacecraft charging, explaining the enhanced difference between the x-positive antenna and the rest of the satellite compared with the case without energetic electrons. ### 4.2 Anisotropic energetic electrons The electron anisotropy is defined by the ratio \(T_{\perp}/T_{\parallel}\) relative to the magnetic field. Electron showers have been measured with variable anisotropies, depending on the energetic electrons' energy and the Kp index [20, 31]. Olsson and Janhunen [32] in particular study the characteristics of 'middle-energy' energetic electrons, in a range between 100 and 1000 eV. #### 4.2.1 Anisotropy of 0.5 The energetic electrons are first considered with an initial anisotropy of 0.5 [20]. Figure 6 and Table 4 present the simulation results, where the energetic electrons are implemented with the same total energy of 1 keV and the same flux as in Section 4.1. Here, however, the energy is equally divided between the drift velocity and the thermal energy. The parallel kinetic energy therefore equals 500 eV, resulting in a field-aligned drift velocity of \(1.33\times 10^{7}\) m/s. Figure 6 shows the plasma interaction and notable differences to Figure 5. Figures 6(a) and 6(b) show how the enhanced charging of the x-positive antenna deflects the ambient electrons to a greater extent, causing slower refilling of the wake and therefore a much larger electron wake. The ion wake in Figure 6(c) is also larger, highlighting how the ions are coupled to the electrons via ambipolar electric fields. The energetic electron wake oriented along the magnetic field is also understandably more pronounced, due to the greater drift velocity of this population. #### 4.2.2 Anisotropy of 0.05 The second anisotropy case examines the limiting scenario of a pure electron beam, and thus simulates an energetic population with a kinetic energy of 1 keV and a perpendicular thermal energy of just 0.1 keV. Figure 7 and Table 5 show the simulation and potential charging results. The most evident difference in the plasma interactions is the elongated, sharp energetic electron wake in Figure 7(d). The ion wake in Figure 7(c) appears similar to that in Figure 6(c), but the electron void surrounding the x-positive antenna in Figure 7(b) is larger than in the case shown in Figure 6(b). The x-positive antenna reaches a negative potential of -4.41 V, an absolute potential comparable to the previous case displayed in Section 4.2.1. The other antennas and the main cube, however, reach less negative values of -0.91 V to -1.06 V, nearly half as negative as in the case where the anisotropy was 0.5. 
This is attributed to the directional flow of the energetic electron beam reducing the cross-sectional antenna and cube area visible to the energetic electrons. This lower charging of the main cube, however, interestingly causes the differential charging between the x-positive antenna and the main cube to be 25 % greater, reaching 3.5 V. This dynamic of differential charging shows how the altered wake charging scales differently to the main body, resulting in greater differential charging. ## Conclusion In this study we have examined the absolute and differential charging of the Morazán MRZ-SAT satellite in its target orbit of 400 km altitude and highest latitude of 51.65\({}^{\circ}\). The MRZ-SAT was first simulated subjected to the ambient thermal electron and O\({}^{+}\) ion plasma conditions. This resulted in a small amount of absolute and differential charging, with the x-positive antenna charging more than the other antennas due to the shadowing of reduced wake densities. At the latitudes considered, spacecraft are not typically exposed to energetic electrons but, during geomagnetic storms, current systems can close at lower latitudes and, combined with enhanced magnetospheric and AAR wave activity, can produce energetic electrons in LEO at these latitudes [17, 18]. Field-aligned energetic electrons of 1 keV were first injected as a near-isotropic distribution and then with varying degrees of anisotropy relative to the magnetic field, firstly of 0.5 and then an order of magnitude lower, of 0.05. The injection of this additional energetic electron population along a different axis with a near-isotropic flow induced a slight distortion to the density distribution, with slightly reduced densities appearing along the magnetic field. This effect was exacerbated for anisotropic energetic electron distributions with greater field-aligned drift velocities, and a clear double wake structure was produced. The anisotropic electrons were subsequently found to produce lower overall charging of the main cube and of three of the antennas, but the wake antenna interestingly exhibited greater charging and therefore the potential bias to the main cube was increased. This study has identified that electron anisotropy can potentially induce greater differential charging of a satellite exposed to energetic electrons in LEO. This highlights a further parameter of interest when examining surface charging risks to satellites in LEO, in addition to their energies and flux. Further studies could deepen the understanding of the effects of plasma anisotropy on surface charging, and 

Figure 7: Slices showing the potential (top left, a), electron (top right, b), ion (bottom left, c) and energetic electron (bottom right, d) densities at the final state, for an anisotropy of 1/20. 

\begin{table} \begin{tabular}{l c c} \hline \hline Component & Potential [V] & Bias [V] \\ Cube & -0.91 & 0 \\ Antenna x-negative & -1.10 & 0.19 \\ Antenna x-positive & -4.41 & 3.5 \\ Antenna y-negative & -1.06 & 0.15 \\ Antenna y-positive & -0.99 & 0.08 \\ \hline \hline \end{tabular} \end{table} Table 5: Final potential of each component of the cubesat and its bias relative to the main cube, for energetic electrons with a thermal energy below 0.1 keV and a kinetic energy of 1 keV. 
directions to be considered include accounting for secondary electron emissions from the spacecraft [33] and ionospheric density depletions during geomagnetic storms, both of which are often coincident with energetic electrons [34, 35], as well as energetic electron energies beyond those considered herein. ## Acknowledgements R.B.D. acknowledges financial support from the UNAH and the University of Warwick. R.T.D. acknowledges an STFC Ernest Rutherford Fellowship ST/W004801/1. This work used the High Performance Computing Service of the University of Warwick.
2307.15072
Detecting the Presence of COVID-19 Vaccination Hesitancy from South African Twitter Data Using Machine Learning
Very few social media studies have been done on South African user-generated content during the COVID-19 pandemic, and even fewer using hand-labelling over automated methods. Vaccination is a major tool in the fight against the pandemic, but vaccine hesitancy jeopardizes any public health effort. In this study, sentiment analysis on South African tweets related to vaccine hesitancy was performed, with the aim of training AI-mediated classification models and assessing their reliability in categorizing UGC. A dataset of 30000 tweets from South Africa was extracted and hand-labelled into one of three sentiment classes: positive, negative, neutral. The machine learning models used were the LSTM, bi-LSTM, SVM, BERT-base-cased and RoBERTa-base models, whereby their hyperparameters were carefully chosen and tuned using the WandB platform. We used two different data pre-processing approaches for comparison: one was semantics-based, while the other was corpus-based. All models were found to have low F1-scores, within a range of 45$\%$-55$\%$, except for BERT and RoBERTa, which both achieved significantly better measures, with overall F1-scores of 60$\%$ and 61$\%$, respectively. Topic modelling using an LDA was performed on the misclassified tweets of the RoBERTa model to gain insight on how to further improve model accuracy.
Nicholas Perikli, Srimoy Bhattacharya, Blessing Ogbuokiri, Zahra Movahedi Nia, Benjamin Lieberman, Nidhi Tripathi, Salah-Eddine Dahbi, Finn Stevenson, Nicola Bragazzi, Jude Kong, Bruce Mellado
2023-07-12T13:28:37Z
http://arxiv.org/abs/2307.15072v1
Detecting the Presence of COVID-19 Vaccination Hesitancy from South African Twitter Data Using Machine Learning ###### Abstract Very few social media studies have been done on South African user-generated content during the COVID-19 pandemic, and even fewer using hand-labelling over automated methods. Vaccination is a major tool in the fight against the pandemic, but vaccine hesitancy jeopardizes any public health effort. In this study, sentiment analysis on South African tweets related to vaccine hesitancy was performed, with the aim of training AI-mediated classification models and assessing their reliability in categorizing UGC. A dataset of 30000 tweets from South Africa was extracted and hand-labelled into one of three sentiment classes - positive, negative, neutral. The machine learning models used were the LSTM, bi-LSTM, SVM, BERT-base-cased and RoBERTa-base models, whereby their hyperparameters were carefully chosen and tuned using the WandB platform. We used two different data pre-processing approaches for comparison - one was semantics-based, while the other was corpus-based. All models were found to have low F1-scores, within a range of 45%-55%, except for BERT and RoBERTa, which both achieved significantly better measures, with overall F1-scores of 60% and 61%, respectively. Topic modelling using an LDA was performed on the misclassified tweets of the RoBERTa model to gain insight on how to further improve model accuracy. ## Nomenclature \begin{tabular}{l l} SVM & Support Vector Machine. \\ COVID-19 & Coronavirus Disease-19. \\ NLP & Natural Language Processing. \\ BERT & Bidirectional Encoder \\ & Representations from Transformers. \\ RoBERTa & Robustly Optimized BERT \\ & Pre-training Approach. \\ UGC & User-Generated Content. \\ LSTM & Long Short-Term Memory. \\ Bi-LSTM & Bidirectional LSTM. \\ VADER & Valence Aware Dictionary and \\ & sEntiment Reasoner. \\ LDA & Latent Dirichlet Allocation. \\ AI & Artificial Intelligence. \\ NPI & Non-Pharmaceutical Intervention. \\ ABSA & Aspect-Based Sentiment Analysis. \\ NB & Naive Bayes. \\ VAI & Vaccine Acceptance Index. \\ TF-IDF & Term Frequency-Inverse Document \\ & Frequency. \\ RF & Random Forest. \\ NSP & Next Sentence Prediction. \\ MLM & Masked Language Modelling. \\ WandB & Weights and Biases. \\ \end{tabular} ## 1 Introduction The still-ongoing COVID-19 pandemic, which represents the most significant healthcare emergency in recent times, has had a shattering effect all over the globe - both physically and psychologically [1]. Many NPIs, such as wearing masks, washing hands regularly, and maintaining social distancing, can help reduce the spread of the virus and have indeed been effective in mitigating the infectious outbreak [1]. However, they are not sustainable in the long term, both in terms of acceptability and of psychological and economic impact. Also, they may not fully eradicate the disease. In this context, pharmaceutical interventions, including drugs and vaccination, can play a very crucial role in combating the infection, and immunization could potentially eradicate this virus. 
Several government agencies have worked closely with public and private organizations worldwide to provide the scientific community with the necessary resources to work toward the development of vaccines and drugs that would protect against COVID-19 infection, as well as mitigate the severity of symptoms arising from COVID-19 infection in the elderly and people with co-morbidities [2]. While drug discovery and vaccine development and roll-out are, in general, long and complicated processes, often taking an average of 10 to 15 years from development to final approval, the prompt development of the pharmacological compounds and vaccines against COVID-19 was facilitated by several years of past basic and translational research [2]. A global research effort coupled with novel technological advancements has allowed faster ways to manufacture drugs and vaccines, while extensive funding has allowed firms to run multiple trials in parallel, thereby expediting the process enormously [2]. Specifically concerning immunization, according to the WHO, by August 2022 there were 198 COVID-19 vaccine candidates in pre-clinical development and 170 in clinical development [3]; but even once a vaccine has been developed and manufactured, the challenges do not end there. The implementation of a mass immunization campaign may indeed present organizational and logistic hurdles, as well as having to face vaccine hesitancy [3]. For instance, in South Africa, the national vaccination program against COVID-19 commenced on 17th February 2021 [4]. The roll-out of the vaccine strategy in South Africa was implemented in a three-phase approach: first by vaccinating the most vulnerable population, such as front-line healthcare workers, then by catering to other essential workers, people in congregated settings, persons over 60 years old, and people over 18 years old with co-morbidities, and, finally, the population over the age of 18. The goal was to vaccinate at least 67% of the population by the end of 2021 [5]. As of May 30, 2022, around 50.03% of adults in the country had had at least one COVID-19 vaccination. Gauteng leads other provinces in terms of the number of jabs administered (over ten million), followed by KwaZulu-Natal with more than five million vaccinations. It is quite apparent that large parts of the population in the different provinces have not yet been fully or even partially vaccinated [3]. The lack of willingness of the public to get vaccinated against COVID-19 is a matter of great concern to both health scientists and workers in the field of public health. After the initiation of the vaccination roll-out process, the public's opinions and emotions have become quite diverse. Different studies have been conducted all over the world with the aim of detecting and understanding the reasons behind vaccine hesitancy [6], which represents a complex, multi-factorial phenomenon [3]. From these studies, some of the reasons identified were erroneous beliefs - such as that the vaccines were produced too quickly without proper research being undertaken, or that the vaccines cause cancer and/or infertility - uncertainty regarding the second dose's availability, increased risk of serious side-effects for people with pre-existing conditions/co-morbidities, and possible allergic reactions [7]. 
Also instrumental in the rapid rise in vaccine hesitancy were the spread and propagation of conspiracy theories and misinformation, driven by anti-science, political and religious posts on social media persuading users towards adopting an anti-vaccination attitude [7]. This rapid flow of multiple sources and types of misinformation drastically slowed down the acceptance of the COVID-19 vaccines. Several opposing opinions further divided the general population into groups and created a near-hostile temperament toward the topic of vaccination [7]. A previous study by B. Mellado et al., published in a paper entitled "Leveraging Artificial Intelligence and Big Data to Optimize COVID-19 Clinical Public Health and Vaccination Roll-Out Strategies in Africa", has shown that "Big data and artificial intelligence (AI) machine learning techniques and collaborations can be instrumental in an accurate, timely, locally nuanced analysis of multiple data sources to inform CPH decision-making, vaccination strategies and their staged roll-out" [8]. Therefore, the government and other agencies should analyze people's sentiments about vaccination campaigns to maximize and optimize their roll-out, collecting available data from different social networking sites. Examples of UGC [9] include tweets, Facebook status updates, videos, blogs, forum posts, and consumer-produced product reviews, among others [10]. UGC can be mined in order to identify trends and make predictions on a range of diverse subjects and topics, spanning from product launches and sales to political campaigns and elections, natural disasters, infectious epidemics, and pandemics. Concerning the latter topic, there are several studies where UGC has been used to understand people's opinions about the Coronavirus and its spread, government measures taken to control its spread, and the development and administering of vaccines [11]. However, to the best of our knowledge, the public's hesitation associated with getting vaccinated against COVID-19 has been investigated mainly in the Global North and, to a lesser extent, in the Global South. This is very apparent if one considers that, by 9th July 2021, the shares of people that had been partially or fully vaccinated per continent were all under 50%, with North America and Europe having 44% and 43% of their residents receiving at least one vaccination against COVID-19, followed by South America with 34% - way ahead of Asia and Oceania, with respective shares of 25% and 19% - and then Africa with a dismal share of under 5% [26]. The fact that Africa is so far behind the rest of the world in terms of vaccination rates further justifies and motivates the importance of this study [26]. Moreover, the platform most frequently used for sharing thoughts on the COVID-19 situation, from its emergence until now and especially during 2021, was Twitter - hence justifying the use of Twitter data, as opposed to data from other social media platforms, for this study [26]. 
A total of 20 related works were analyzed, with the 6 most relevant papers discussed in the upcoming literature review section. Each used NLP techniques and/or ML methods to probe the public's sentiments towards certain pandemic-related topics, such as vaccination and lockdown measures, through user comments on one or more social media platforms, extracted from one or more countries or continents, with the intention of guiding policy-makers in making decisions, given the devastating effect of the pandemic. Most studies exclusively used automated labelling methods in their sentiment analysis, while some included both manual and automated labelling in their experiments. The machine learning models that were commonly used included state-of-the-art models such as BERT, classical models such as SVM, and novel recurrent neural networks such as LSTM/Bi-LSTM. All these related works showcased the power of sentiment analysis and the potential prowess of using NLP techniques in conjunction with machine learning methods for extracting meaningful conclusions pertaining to people's feelings/opinions towards a particular topic - which can be used in future studies to create more sophisticated models that would help policymakers in making decisions during a pandemic or public health crisis. No studies exclusively used manual labelling in their research, while some used partial manual labelling and others used ABSA, with relatively good results. However, there are many limitations to these studies, involving data bias given the intrinsic characteristics of social media users - who tend to be young and from more urbanized areas - as well as model bias given the choice of keyword selection. Moreover, other limitations arise from the tremendous amount of time manual labelling takes as opposed to automated labelling, from class imbalance, and from dataset size and characteristics, as well as from conflict arising from subtle deviations in the choice of definition for vaccine hesitancy and the accompanying sentiment labels, along with the method of pre-processing and the rules used in the labelling process. With these observations in mind, this study explored vaccine hesitancy in South Africa, which is in the Southern Hemisphere, using Twitter data as a source of public opinion. More specifically, the aim was to quantify and qualify the public's willingness to be vaccinated in order to develop an AI model that would be able to detect the presence of vaccine hesitancy and track its dynamics, thereby paving the path to an AI-mediated response to a global health crisis. This would allow for a faster, more efficient implementation and deployment of disaster management systems for the detection, mitigation, and eradication of infectious pandemics. ## 2 Related Work In 2020, M.B. Mutanga and A. Abayomi used Twitter data from South Africa and identified issues relating to the pandemic using an LDA, which they showcased in a paper entitled "Tweeting on COVID-19 pandemic in South Africa: LDA-based topic modelling approach". From the LDA analysis, some of the topics being discussed were identified, pertaining to the sale and consumption of alcohol, lockdown, daily rates of infection, police brutality, 5G radiation causing COVID-19, and vaccines, as well as conspiracy theories. These topics were an illustration of the attitudes and perceptions the citizens had towards the topic of vaccines. 
The findings also revealed people's resistance to measures that affect their economic activities, and their unwillingness to take tests or vaccines as a result of fake news and conspiracy theories [21]. The study was very comprehensive but limited: as the COVID-19 pandemic continues and new sources of damage and opportunity are found, future work needs to include extracting the emotion behind the sentiments in the collected tweets, in order to investigate the evolution of the public's opinions over time, before and after certain remarkable events. Testing of additional topic extraction algorithms, including a combination of NLP techniques and machine learning methods toward an automatic classification and prediction of diverse factors relating to the COVID-19 pandemic, was not performed in this study [21]. In a 2022 paper entitled "Sentiment analysis tracking of COVID-19 vaccine through tweets" by A. Sarrirete et al., people's sentiments towards vaccination during the pandemic - from tweets scraped via the TAGS tool from Twitter users all over the world - were investigated using a hybrid approach, which combined the use of linear, probability and/or decision tree classifiers with a statistical-, semantics- and/or dictionary-based approach. In other words, the hybrid approach uses NLP techniques in conjunction with ML methods, here applied in order to classify text and extract the degree of hesitancy towards COVID-19 vaccines in general [22]. From the corpus analysis, emojis and words related to a sentiment were identified. The frequency of these keywords was recorded and each tweet was classified based on the keyword frequency using the aforementioned machine learning models. It was found that the tweets could be separated into positive and negative sentiments, with a dominance of the negatives. Although several tweets were collected, analyzed and classified based on keyword frequency, manual labelling was absent, and more testing is needed on tweets using machine learning techniques to compare the results with the NLP techniques, as well as generalizing the algorithm to different hashtags and other applications [22]. In a 2021 paper entitled "Sentiment Analysis of COVID-19 Vaccine Perception Using NLP" by M.A. Mudassir, Y. Mor, R. Munot et al., the sentiments of people residing in India with regards to the COVID-19 vaccine were analyzed. The paper used three different classification models, i.e., TextBlob, VADER, and ABSA, to perform sentiment analysis on English tweets posted by users in India, and then chose the best deep learning model after comparing their results based on F1-score and test accuracy [23]. TextBlob and VADER are commonly used automated labelling algorithms, while ABSA is an ML model that finds and attributes sentiment to aspects, features, and topics that it has categorized within the body of text - more in line with the human perspective used when manually labelling text. In this study, 2000, or 10%, of the tweets in the dataset were manually labelled and tested on the three different models. The model with the highest accuracy was chosen and the rest of the tweets were labelled using this model. 
It was found that ABSA produced the best result of the models, due to its ability to focus on the specified aspects, enhanced by the attention-based Transformer model, and it was argued that ABSA should be used more frequently in sentiment analysis tasks which have a narrow focus, rather than general-purpose models [23]. The results of this study showed that the insights gained from the ABSA model were more detailed and descriptive than those from other techniques, which fail to give more than a general overview of sentiment; however, it is notably a significantly slower method, which will need to be investigated in future studies. Thus, this study illustrates the advantages of using other methods of text classification in the training phase, such as manual labelling in conjunction with ABSA, instead of solely relying on automated labelling methods [23]. In a 2021 paper entitled "Dynamic assessment of the COVID-19 vaccine acceptance leveraging social media data" by L. Li, J. Zhou et al., over 29,000,000 vaccine-related tweets collected from 8 August to 19 April 2021 were quantified using a VAI, computed based on opinion classifications identified with the help of NLP techniques, providing a quantitative metric to show the level of vaccine acceptance across different geographic scales in the U.S. Text classification was either automated, performed using TextBlob and VADER, or manual, into one of three classes, i.e., positive, negative or unrelated [24]. A fixed sample of 20000 unique tweets - those most frequently re-tweeted among the collected tweets - was manually labelled according to specific labelling criteria, based on the CDC strategy to consolidate confidence in COVID-19 vaccines, whereby 10% of this dataset was chosen for the testing sample. A total of 9 candidate models were selected and then trained and tested on the aforementioned tweets, and the best model in terms of F1-score and accuracy was selected after an extensive grid search was performed to obtain the model-specific set of hyperparameters whose values were optimized to provide the best possible performance [24]. The TF-IDF + RF model, trained on an augmented training set, obtained the best overall performance and hence was applied to the entire dataset in subsequent steps. A classification was assigned to each tweet and used to compute a user-based vaccine acceptance measure. Different VAI measures were constructed for national-level, state-level and county-level analysis, respectively [24]. At the national level, it was shown that the VAI transitioned from negative to positive in 2020 and stayed steady after January 2021 - which was supported by national vaccination rates over that time interval and re-iterated via a comprehensive analysis of the state- and county-level data. The paper discussed information characteristics that enabled a consistent method of estimation of the VAI [24]. The findings supported the use of social media to understand opinions and to offer a fast and inexpensive way to assess vaccine acceptance - which is also relevant here. 
Therefore, future work could consider using NLP and machine learning tools trained in other languages and integrating data from surveys or models to complement the social media estimation, as well as considering the generalizability of this research framework by applying it to investigate acceptance of other types of vaccine and on a broader geographical scale, such as acceptance of the HPV vaccine and flu vaccine in different countries [24]. In a 2021 paper entitled "Applying Machine Learning to Identify Anti-Vaccination Tweets during the COVID-19 Pandemic" by Quyen G., Kien G. et al., the performance of various NLP models, i.e., BERT, NB, SVM and Bi-LSTM networks with pre-trained GloVe embeddings, in identifying anti-vaccination tweets published during the COVID-19 pandemic was evaluated [25]. From 1 January up until 23 August 2020, 150,000,000 tweets from all over the world were collected using the Twitter Stream API, which allowed public access to a one-percent sample of the daily Twitter stream. After removing all non-English tweets and re-tweets, \(\approx\)75,000,000 tweets remained and were used for training and testing [25]. A systematic random sampling method was used to select 20,854 tweets from 1,474,276 tweets for automated labelling. This sampling method made sure that tweets from across different time intervals during the pandemic were chosen. Tweets were labelled as either "anti-vaccination" or "other", as the model was aimed at stance analysis, in which a tweet is determined to be in favour of or against a target [25]. 
However, portions of the labelled tweets were assessed and, if a label did not match the sentiment that a human would have given it based on some rules, the cut-offs used for polarity identification were adjusted until more tweets had automated labels that agreed with their ascribed manual labels. This process was iterative and was repeated until optimal or near-optimal cut-offs for the three sentiments were found [26]. From the datasets, 125,906 tweets were analyzed using the lexicon-based VADER and separated into three classes: positive, negative, and neutral. It was found that neutral tweets formed the majority; the negative reactions were lowest in frequency, indicating that fear and unrest related to COVID-19 vaccination procedures were still at large [26]. LSTM and Bi-LSTM models were trained and tested on this dataset, in which the LSTM architecture showed 90.59% accuracy and the Bi-LSTM model showed 90.83% accuracy, and both models showed good prediction scores in the precision, recall, F1-scores, and confusion matrix calculation [26]. Upon further analyses of people's reactions towards vaccines, it was found that the words "first dose", "second dose", "Moderna", "Pfizer", "Bharat BioNTech", "death", "emergency", "Covishield" and "clinical trial" were very commonly used by Twitter users in Canada and India, along with alarming words like "blood clot", "feel" and "trial" [26]. Furthermore, from January 2021 to the end of February 2021, the number of tweets related to vaccines was fewer than 500; from March 2021 it rose to nearly 3000, indicating that people were very excited about the vaccines after the completion of the clinical trials, when the vaccines were to be administered in large numbers. Then, from March 2021 to the present, tweets regarding COVID-19 vaccines have fluctuated from 1000 to 2500 per month, which indicates that people's emotions about them have greatly transformed [26]. This is an example of another study that showed the power of using NLP techniques alongside machine-learning methods in probing people's vaccination attitudes, as well as their underlying characteristics, across the globe towards the COVID-19 vaccines [26]. ## 3 Experimental Procedure A total of 30000 tweets were collected using the Twitter Research License. The extraction focused on hashtags related to vaccines and vaccination over a time period spanning from 5th March 2020 - when COVID-19 was first identified in South Africa - to 24th November 2021, when the Omicron variant was first detected in South Africa. Duplicate tweets were removed, leaving 27069 unique tweets. In this study, two distinct pre-processing methods were used: corpus-based and semantics-based - each having its own unique emoji dictionary. In the corpus-based, or lexical, pre-processing method, contractions were removed and replaced by their full forms, uppercase text was lower-cased, integers were removed, hashtags were removed, hyperlinks were replaced by the word 'url', @mentions were replaced by the term 'atUser', and repetitions of emojis, as well as all punctuation marks, were removed. Thereafter, using a pre-defined emoji lexicon, relevant emojis were replaced by their physical descriptions in words, while other emojis not thought to convey any sentiment were discarded. This was followed by the replacement of common slang terms with their formal definitions, using a slang-term lexicon. The last step was, then, tokenisation using the TweetTokenizer from the NLTK library. 
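As an illustration, the following is a minimal sketch of this corpus-based pipeline, assuming a Python/NLTK implementation; the contraction, emoji and slang lexicons shown are hypothetical stand-ins for the study's full dictionaries, and the ordering of steps is slightly simplified.

```python
# A minimal, illustrative sketch of the corpus-based pre-processing pipeline.
# The lexicons below are hypothetical stand-ins for the study's dictionaries.
import re
import string
from nltk.tokenize import TweetTokenizer

CONTRACTIONS = {"can't": "cannot", "it's": "it is"}   # illustrative entries
EMOJI_LEXICON = {"\U0001F600": "grinning face"}       # illustrative entries
SLANG_LEXICON = {"2day": "today", "u": "you"}         # illustrative entries

def corpus_preprocess(tweet):
    tweet = tweet.lower()
    for short, full in CONTRACTIONS.items():          # expand contractions
        tweet = tweet.replace(short, full)
    tweet = re.sub(r"https?://\S+", "url", tweet)     # hyperlinks -> 'url'
    tweet = re.sub(r"@\w+", "atUser", tweet)          # @mentions -> 'atUser'
    tweet = tweet.replace("#", "")                    # strip hashtag marks
    for emoji, description in EMOJI_LEXICON.items():  # collapse repeats, translate
        tweet = re.sub(f"(?:{emoji})+", f" {description} ", tweet)
    tweet = tweet.translate(str.maketrans("", "", string.punctuation))
    tokens = [SLANG_LEXICON.get(t, t) for t in tweet.split()]  # replace slang
    tokens = [t for t in tokens if not t.isdigit()]   # drop integers
    return TweetTokenizer().tokenize(" ".join(tokens))

print(corpus_preprocess("Got my jab 2day!!! \U0001F600\U0001F600 https://example.com @user1"))
```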
In the semantics-based pre-processing method, the same afore-mentioned procedure was followed with a few differences, i.e., punctuation marks and integers were not removed, upper-cased text was not lower-cased, and each @mention was replaced by the word 'Name' followed by an integer denoting its position relative to other @mentions in the tweet. Furthermore, the emoji lexicon was revised in order to describe the context of the emotion inherent in the emojis, and the dictionary of slang terms was extended to include slang terms meaning vaccine or vaccinated, such as 'vaxxed' and 'vaxx'. Certain hallmark pre-processing steps were not followed, i.e., the removal of stop-words, lemmatization, and/or stemming. This was deliberately done in order to preserve the context of the tweets and, thus, the core sentiment. See Tables 4 and 5 under Section II of the Appendix for more details. Topic Modelling was then performed. The procedure was as follows: the dataset was converted to a list of tweets. Thereafter, standard pre-processing was performed, in which hashtags, urls, emojis and punctuation were removed. Bi-gram and trigram models were built, with functions defined for stopword removal and text lemmatization. Then contractions were replaced with their full forms and stopwords were removed. Lemmatization was performed, in which the nouns, verbs, adverbs and adjectives were kept. A dictionary was created using 'id2word', in which each word was given its own integer I.D. The corpus from the lemmatized data was created in the form of a list of term and term-frequency entries. An LDA was then built using the Gensim LdaModel function, whereby the optimal value for the number of topics was determined to be 5, at a coherence value of 0.3707. The top 30 most salient terms in each topic were extracted, the topics were visualized using the pyLDAvis tool and, thereafter, the topics were identified. ### Hand-Labelling of Tweets Before we can motivate why we used manual labelling over automated labelling with sentiment analysis algorithms, it is useful to consider why automated sentiment analysis is so popular and the challenges that arise in machine learning when using a classification algorithm or when building a specific type of machine learning classification model. Firstly, in both manual and automatic labelling, there are unavoidable factors that will impact the reliability of the labelled data to some extent, thus making any analytic results not directly applicable to real-world problems. The most concerning factors are mentioned below: * Subjectivity of Text * Context and Tone of the statement * Presence of Sarcasm and Irony * Presence of Negations * Use of Emojis and Special Characters * Use of idiomatic expressions * Use of Colloquialisms and Slang [12]. These are all relevant because many misclassifications by an algorithm or classification model arise directly from these factors. Moreover, even though many classification algorithms have been formulated, such as TextBlob and VADER [12], AI-based algorithms continue to struggle with - or are completely incapable of - detecting and understanding human emotion, and since tweets contain a strong emotional component, this may give rise to misinterpretation of text and incorrect labelling, when compared against analysis by humans [13]. Despite this, the benefit of using automated labelling over manual labelling is two-fold: * Removal of Text Subjectivity * Reliable and Realistic Labels [12]. 
Firstly, text subjectivity - which arises from the fact that the meaning behind a statement is understood through our own life experiences and the unconscious biases that we, as humans, have developed over the years - is no longer an influencing factor. This is obvious, since the tone and context of a piece of text are not considered by an algorithm, i.e., it is always objective in its decision making - unlike humans, who will often encounter texts that are difficult to classify. Secondly, for human beings, labelling text is a long, tiring, and time-consuming process, which is not the case for machines. For example, sentiment analysis algorithms can analyze hundreds of MBs of text within minutes - while the average human would struggle to label more than 45 tweets in an hour [12]. However, we would like to draw meaningful conclusions from our analysis, so it is important that the dataset that we used for training and testing NLP classifiers has reliable and realistic labels that are applicable in the real world [14]. Thus, hand-labelling of our dataset is justified in this regard. Furthermore, even though the precision of these classification algorithms is quite high, owing to a consistent sentiment analysis not impacted by subjectivity, the accuracy of the labels from a human perspective would be incredibly low [15]. Even though human beings would occasionally disagree on the correct label of a text in a large enough dataset, they are still much better at understanding the meaning behind the text [15]. It is possible to mitigate the effect of subjectivity when hand-labelling text. This is especially useful when the findings of the sentiment analysis are to be relevant and important to policymakers. Moreover, one can extend this mitigation by creating a fixed and unchanging bias that is used throughout the manual labelling process. This is not easy, but the more defined the subject matter of the text being analysed, the more consistent and dependable the dataset will be once labelling is complete [12]. This is imperative and aligns with the aim of this study, which is to create a machine learning model that would be able to accurately predict sentiments pertaining to vaccine hesitancy in order to guide policy-makers during a pandemic. The hand-labelling of the dataset was done by several persons in the team using a strict, clear, and consistent set of rules to minimise the frequency of disagreements on the correct label for a particular tweet. Such a workforce also serves to minimize labelling errors and maximize quality control, by checking that the labelling rules were correctly and consistently implemented amongst the labellers and by finding consensus on difficult-to-label tweets. A collection of 30000 tweets was selected to be hand-labelled. A label is ascribed to a tweet based on the opinion of the author towards a particular theme or topic - in this case, vaccination. Each tweet was hand-labelled into one of three sentiment classes, i.e., positive, negative, or neutral. The criteria for hand-labelling involved answering a simple question: "Does the author of this comment approve or disapprove of taking a vaccination shot against COVID-19, and to what extent does he/she agree or disagree?" To answer this question, a careful look was taken at the punctuation, grammar, choice of words and symbols, as well as the tone inherent in the tweets. 
To make things easier to categorize, easy-to-label tweets were labelled first, and difficult-to-label ones, in which both negative and positive sentiments could be found in the tweet, were left towards the end. We adopted IBM's definitions of vaccine hesitancy, which were based on WHO's definition of vaccine hesitancy, in which a negative sentiment was defined as a refusal to get vaccinated - referred to as overt hesitancy - a positive sentiment was defined as a decisive decision to get vaccinated, while a neutral sentiment was defined as a delay towards getting vaccinated, i.e. an indecisive temperament towards vaccination - referred to as subtle hesitancy [16]. Additionally, a statement in which the author's viewpoint is unclear or unrelated to vaccination is by default labelled as neutral. Hence, we argue that the practice of hand-labelling is superior to automated classification algorithms - which frequently mislabel text containing certain tones and contexts, especially when negations, colloquial slang, emojis, and sarcasm are present, as previously said. Refer to Table 3 under Section I of the Appendix for examples where the afore-mentioned statement holds true. In the next section, we introduce the types of machine learning algorithms/models that we used, briefly discussing the architecture of the models, as well as the data processing, training, and testing steps that were involved. ### Support Vector Machines The Support Vector Machine (SVM) algorithm is a popular supervised machine learning algorithm, which can be utilized for classification purposes as well as for solving regression problems. In our model, feature extraction and the "Bag of Words" model were implemented in the post-pre-processing steps, with vectorisation being performed by the TfidfVectorizer on the text samples, while the labels were label-encoded using the LabelEncoder function from sklearn. The maximum number of features, in this case, was chosen to be 5000. This procedure is the preferred data preparation technique for SVMs within the context of NLP [17]. ### LSTM and Bi-LSTM Both LSTMs and Bi-LSTMs are recurrent neural networks (RNNs). LSTM-based models make use of both Long-Term Memory (LTM) and Short-Term Memory (STM), simplifying calculations through the application of gates, i.e., a Forget Gate, Learning Gate, Recall Gate and an Output Gate [18]. Owing to its bi-directionality, the Bi-LSTM is, in general, considered to be more effective in the deep learning process than the LSTM [18]. The architecture of both models was chosen to be the same for this study, i.e., both the LSTM and Bi-LSTM models consisted of an Input layer followed by an Embedding layer, a Dense layer, two LSTM or Bi-LSTM layers, another Dense layer and, finally, an Output Dense layer, whereby the individual layers are separated by Dropout layers. The activation function for all the Dense layers was chosen to be 'relu', except for the Output layer, with activation function 'softmax'. The model was compiled with a loss function of 'categorical_crossentropy'. The argument for stacking LSTM or Bi-LSTM layers on top of each other is to allow for greater model complexity [18]. The labels for the targets were not label-encoded into categorical variables of varying weight-age, as was done in the case of the SVM, but were made into categorical variables that each carry equal weighting. Our choice of embedding technique was feature extraction, with the maximum number of features set to 2000. 
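As a concrete illustration of this stacked architecture, a minimal sketch is given below, assuming a Keras implementation; the layer sizes and dropout rate are placeholders rather than the tuned values, which are reported in Table 7 of the Appendix. The LSTM variant simply drops the Bidirectional wrappers.

```python
# A minimal sketch of the stacked Bi-LSTM classifier described above.
# Layer sizes and the dropout rate are placeholders, not the tuned values.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Dense, Dropout, LSTM,
                                     Bidirectional)

MAX_FEATURES = 2000   # vocabulary size used for feature extraction
EMBED_DIM = 128       # placeholder; tuned via WandB in the study
HIDDEN_DIM = 64       # placeholder; tuned via WandB in the study

model = Sequential([
    Embedding(MAX_FEATURES, EMBED_DIM),
    Dropout(0.3),
    Dense(HIDDEN_DIM, activation="relu"),
    Dropout(0.3),
    Bidirectional(LSTM(HIDDEN_DIM, return_sequences=True)),
    Dropout(0.3),
    Bidirectional(LSTM(HIDDEN_DIM)),
    Dropout(0.3),
    Dense(HIDDEN_DIM, activation="relu"),
    Dropout(0.3),
    Dense(3, activation="softmax"),  # positive / neutral / negative
])
model.compile(loss="categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])
```

Note that the targets are one-hot encoded (equal weighting per class), which is what the 'categorical_crossentropy' loss with a three-unit softmax output expects.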
### BERT and RoBERTa Over the past years, supervised models have shown consistently better results than unsupervised models, until the introduction of the pre-trained BERT text-embedding model, which enabled unprecedented precision of results in many automated word processing tasks. This model replaced the widely known "word2vec" model in prevalence, becoming the industry standard. This is the motivation for the use of BERT in the study. BERT-base-cased was chosen, since it does not lowercase the sample text, thus preserving tone and context, and takes less computational power and time to train than BERT-large-cased. Soon after the construction of BERT, the Robustly optimized BERT approach, RoBERTa, was formed. RoBERTa is a retraining of BERT with an improved training methodology, more data and more computational power, in which the Next Sentence Prediction (NSP) task is removed from BERT's pre-training and dynamic masking is introduced instead; this model, i.e. RoBERTa-base, was therefore also chosen for the study. Both the BERT-base-cased and RoBERTa-base models were chosen for fine-tuning and both were trained and evaluated on our dataset, with the results then being compared to pre-selected pre-trained models evaluated on our dataset. ### Model Hyper-parameters Used In this study, all the machine learning models underwent extensive hyperparameter tuning using Bayesian optimization. The hyperparameters chosen for tuning in the case of the SVM were the cost function C, \(\gamma\), and the kernel. The hyperparameters chosen for tuning for both the LSTM and Bi-LSTM models were the dropout rate, learning rate, weight decay, batch size, dense units, embedding dimensions, hidden dimensions, number of epochs and choice of optimizer. The hyperparameters chosen for tuning for both BERT-base-cased and RoBERTa-base were the learning rate, batch size and number of epochs - but not the weight decay, which was set to zero. Given the uniqueness and complexity of the dataset and subject matter, the slight shift away from a balanced dataset, the small size of the dataset, as well as the non-typical method of labelling that was used, the overall and individual F1-scores were chosen as the defining measures by which the models could be assessed. For the model-specific hyperparameters and their pre-selected ranges, please see Tables 6 and 7 of the Appendix under Section III. Next, we will discuss model performance. ## 4 Results and Discussion ### Machine Learning Models Applying hand-labelling of the tweets, the distribution of sentiments in our dataset was as follows: 31.7% positive; 36.1% neutral; 32.2% negative. There is no dominant sentiment, and the sentiments are distributed roughly equally within the dataset. Table 1, above, shows a summary and comparison of optimised model performance for the various models. In terms of model performance, the LSTM model achieved an overall precision of 48% and an overall accuracy of 49%, with a combined F1-score of 49%, using the semantics pre-processing; a similar result was achieved, with an overall precision of 50%, an overall accuracy of 48%, and a combined F1-score of 48%, with the lexical pre-processing approach. 
As expected, the Bi-LSTM models performed better than the LSTM models, with the Bi-LSTM model achieving an overall precision of 49%, an overall accuracy of 51%, and a combined F1-score of 50% with the semantics pre-processing approach, but a significantly better result with the lexical pre-processing approach, with a higher overall precision of 53% and an overall accuracy of 52%, yielding an F1-score of 52%. Furthermore, the SVM model achieved identical results for both pre-processing methods, with an overall precision of 54%, an overall accuracy of 54%, and a combined F1-score of 54%. It is clear that the SVM model produced results that were better than both pairs of LSTM and Bi-LSTM models, which may sound counter-intuitive at first glance, but can be explained as follows: feature embeddings, as used in the SVM model, generally perform better in the context of NLP than the word embeddings that were used in the LSTM/Bi-LSTM models. Results of pre-selected BERT and RoBERTa pre-trained models served as a comparison for the performance of our respective fine-tuned models. Since their classification measures were much lower than those of our fine-tuned models, we showed that the manual labelling of the vaccination hesitancy dataset was a novel approach, and confirmed that the task is more complex than simple sentiment analysis on texts - i.e., categorizing them into positive, negative or neutral labels based on the overall emotion inherent in the text samples rather than on a specific topic. The pre-trained RoBERTa model chosen was a RoBERTa Twitter sentiment model by the Cardiff NLP group [19]. This pre-trained RoBERTa model was trained on \(\approx\) 58M tweets and was fine-tuned for sentiment analysis on the TweetEval benchmark using positive, negative and neutral labels. The pre-trained BERT model that was chosen is a fine-tuned version of a multilingual BERT-base sentiment model by the NLP Town group [20], which is a model for sentiment analysis using positive, negative and neutral labels, trained to classify product reviews on a scale of 1 to 5 stars, in English, German, French, Spanish, and Italian. The pre-trained BERT model achieved an overall precision of 46% and an overall accuracy of 46%, yielding an F1-score of 46%, while our fine-tuned BERT-base-cased model achieved a much better result, with an overall precision of 60% and an overall accuracy of 61%, yielding an F1-score of 60%. By pairwise comparison, the RoBERTa models performed better than the BERT models, with the pre-trained RoBERTa model achieving an overall precision of 46% and an overall accuracy of 48%, yielding an F1-score of 48%, while the fine-tuned RoBERTa model achieved a much better result, with a higher overall precision of 62% and an overall accuracy of 61%, yielding an F1-score of 61%. Hence, from these results, one can conclude that, based on the overall weighted F1-score, the best models for this classification problem, in decreasing order of optimal performance, were the fine-tuned RoBERTa-base, fine-tuned BERT-base-cased, SVM, Bi-LSTM, LSTM, pre-trained RoBERTa-base and, lastly, the pre-trained BERT-base-uncased. ### Topic Modelling An LDA was performed on the set of tweets that were misclassified by the model with the highest efficiency - in this case, RoBERTa-base. The top 10 most frequent terms per LDA cluster grouping are shown, below, in Table 2. 
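For reference, before discussing the inferred topics, the following is a minimal sketch of the Gensim LDA pipeline described in Section 3, as applied to the misclassified tweets; the two token lists are illustrative stand-ins for the lemmatized tweet corpus.

```python
# A minimal sketch of the Gensim LDA pipeline described in Section 3.
# `texts` stands in for the lemmatized, misclassified tweets.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

texts = [["vaccine", "government", "shortage"],   # illustrative documents
         ["travel", "ban", "spread"]]

id2word = Dictionary(texts)                       # each word gets an integer I.D.
corpus = [id2word.doc2bow(doc) for doc in texts]  # (term, term-frequency) entries

lda = LdaModel(corpus=corpus, id2word=id2word, num_topics=5,
               random_state=42, passes=10)

# Coherence guides the choice of topic count (0.3707 at 5 topics in the study).
coherence = CoherenceModel(model=lda, texts=texts, dictionary=id2word,
                           coherence="c_v").get_coherence()
print(lda.print_topics(num_words=10), coherence)
```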
Given these keywords, the topics related to vaccine hesitancy were inferred and described, i.e., mass vaccination roll-out schemes in terms of availability and service delivery, defiance in response to international travel restrictions that target the non-vaccinated or partially-vaccinated population, safety concerns about severe side-effects from the vaccine, as well as concerns about the ineffectiveness of vaccines in preventing the spread of the virus. The 5 topics inferred from the LDA, whose clusters are visualized in Figure 1, are mentioned below: * Topic 1: Inefficient mass vaccination * Topic 2: Selective air travel restrictions * Topic 3: Severe side-effects * Topic 4: Inescapable from illness/death * Topic 5: Ineffective against COVID ### Limitation Of Study Overall, the model performance is quite low, with 60-65% being a very average score and also the range pertaining to the best performance achieved by our set of models. This could be due to a number of reasons, i.e., in the case of the LSTM/Bi-LSTM models, additional LSTM layers may be needed in order for the model to better grasp the complexity inherent in the dataset; for the SVM algorithm, it is possible that a different word embedding technique and an alternative model, other than feature extraction and 'Bag of Words', would have resulted in better performance; while in the case of BERT and RoBERTa, the -LARGE- formulations of these models could be used instead of the -BASE- formulations to improve performance, and their performances may further be enhanced by incorporating some of the pre-processing steps used in the other, non-Transformer models, by fine-tuning other BERT and RoBERTa models trained on similar pandemic-related use-cases for sentiment classification, or by using a particular BERT or RoBERTa model with an alternative tokenizer taken from a different BERT or RoBERTa model. ## 5 Conclusion In conclusion, the models used were LSTM, Bi-LSTM, SVM, BERT-base-cased and RoBERTa-base, whereby their hyperparameters were carefully chosen and tuned using the WandB platform, and they were trained on a hand-labelled dataset containing tweets from South Africa on the topic of vaccine hesitancy. Out of the machine learning models, excluding the pre-trained and fine-tuned ones, the SVM was the best model, with an overall F1-score of 54%, followed by the Bi-LSTM with an overall F1-score of 52% and, lastly, the LSTM with an overall F1-score of 49%. The best model overall was the fine-tuned RoBERTa-base model with an overall F1-score of 61%, followed closely by the fine-tuned BERT-base-cased model with an overall F1-score of 60%, where the best model was defined as the model with the highest overall F1-score. From the LDA on the misclassified tweets of the fine-tuned RoBERTa model, certain types of vaccine hesitancy were identified as topics, which would serve to improve our best model's performance in future studies, to better detect vaccine hesitancy and guide policymakers in managing the pandemic. 
Furthermore, since BERT and RoBERTa are Transformer models, meaning that they can be trained on downstream tasks, further training on additional datasets with pandemic-related use-cases other than vaccination hesitancy - such as public compliance with other safety measures or the degree of faith in government interventions, whose data may originate from other countries around the globe - could essentially pave the way towards a universal tool for early disease detection and the enforcement of public compliance during public health crises or emergencies. Figure 1: _Distribution of topics in the sample space of tweets misclassified by the RoBERTa-base algorithm._ \begin{table} \begin{tabular}{|l|p{200pt}|} \hline \multicolumn{2}{|c|}{Top 10 Most Frequent Terms} \\ \hline T1 & vaccine, government, people, service, shortage, roll-out, manage, slow, help, far \\ \hline T2 & travel, spread, forced, location, ban, global, spread, rate, cases, news \\ \hline T3 & pain, afraid, hospital, deadly, risk, serious, allergy, report, approve, poison \\ \hline T4 & fear, die, spread, infection, real, fast, unsafe, hospitals, symptoms, stuck \\ \hline T5 & variant, mutation, new, ineffective, dose, booster, second, ongoing, re-infection, partially \\ \hline All & vaccination, vaccine, people, covid, health, virus, country, work, today, need \\ \hline \end{tabular} \end{table} Table 2: **Top 10 Most Frequent Terms per LDA Cluster.** ## Appendix I Hand-Labelling of Tweets Here we highlight the pitfalls of using text classification algorithms over hand-labelling, using explicit examples. In Table 3, the four different cases of tweets one would encounter when performing sentiment analysis are provided, along with three hand-labelled examples for each case, each corresponding to one of the three sentiment classes, i.e., positive (+), negative (-), neutral (0). The four categories are: clear-cut cases, borderline cases, difficult-to-label tweets and same-text tweets. Clear-cut cases correspond to tweets whose sentiment labels are obvious and there is no debate on the validity of the classification - in other words, the tweet's polarity is heavily skewed towards a single sentiment type. Borderline cases correspond to tweets that can arguably take on one of two labels, i.e., either neutral or positive, or alternatively neutral or negative, whereby the author's point of view is debatable. Difficult-to-label tweets are tweets that contain both positive and negative sentiments, each with high polarity scores, which makes it difficult to decide on the overall text polarity. Same-text tweets are a class of tweets whereby the raw text is identical but the tweets differ in the amount of punctuation and/or emojis present, which serves to change the message behind the tweet, often through the introduction of satire. Two different classification algorithms were selected, namely VADER and TextBlob. These classification algorithms were then given each example tweet, and their predicted labels were compared to the manually-classified tweet labels. The results are presented in the table. Overall, VADER correctly predicted the labels of 50% of the tweets, in which 100% of the clear-cut case examples were classified correctly, while none, or 0%, of the borderline case tweets were classified correctly, and only one third, 33%, of the difficult-to-label or same-text tweets were correctly labelled. VADER was able to get 50% recall for each respective class. 
Comparatively, TextBlob correctly predicted the labels of a third, or \(\approx\) 33%, of all the tweets, in which two thirds, or \(\approx\) 67%, of the clear-cut case examples were classified correctly, while none, or 0%, of the borderline case tweets were classified correctly, and none, 0%, of the difficult-to-label and same-text tweets were correctly labelled. TextBlob got recalls of 25% for the positives and 75% for the neutrals, but nothing, 0%, for the negatives. This shows that both classification algorithms perform well on simple, clear-cut examples, but become much less efficient in correctly classifying tweets as the complexity of the tweets increases. Furthermore, given the recall values, it is apparent that VADER is equally good at labelling each sentiment type, while TextBlob strongly favours a neutral label. In both cases, the overall accuracies are very low in comparison to hand-labelling, and it is clear that, when given same-text tweets, the algorithms are unable to identify sarcasm or the nuanced effect of changing punctuation marks, e.g., from '!' to '?', given that VADER provided a positive label for each sentiment belonging to the same-text case, while TextBlob provided all neutral labels. Hence, the table clearly highlights the advantages of manual over automated labelling. ## Appendix II Materials and Methods Here, we show the similarities and differences between the corpus-based and semantics-based approaches. From Table 4, which provides and contrasts the emoji-to-text translation of each approach, it is clear that the two emoji lexicons are very different from each other, i.e., while the lexical definition of an emoji is a statement of the physical features of the emoji in words, without any punctuation marks, the semantics definition is a statement of the message, and of the emotional intensity attached to that message, that is communicated through the use of a particular emoji. The emotional intensity is given by the choice of punctuation marks; in this case, only exclamation marks or full stops were used. Not shown in the table are examples of emojis that both methods do not have a definition for and that are instead replaced by white-spaces. These emojis were deemed to not carry a sentiment, with an example being the soccer ball emoji. Here we provide and contrast the pre-processing rules used in the two approaches with explicit examples. The idea behind using two different pre-processing methods was to compare the performance of the models once they were fully trained, in order to gain insight into how the models 'learn'. Unfortunately, since similar results were obtained by both approaches and a full semantics analysis was not performed, no conclusions or insights could be made on this matter. From Table 5, it is clear that the pre-processing steps involved in each approach are largely the same, with the exception of how emojis are treated. \begin{table} \begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{Emoji Lexicon} \\ \hline Emoji & Lexical Definition & Semantics Definition \\ \hline & grinning face & I am happy about this! \\ \hline & face with raised eyebrow & I am serious about this. \\ \hline & face with rolling eyes & I do not take this seriously. \\ \hline & face with steam from nose & I am angry at this! \\ \hline & anxious face with sweat & I do not like this. \\ \hline & nauseated face & I am disgusted by this! \\ \hline & smiling face with sunglasses & I am proud of this! \\ \hline & pile of poo & This is nonsense! \\ \hline & broken heart & I am sad about this! \\ \hline & zzzz & I am asleep! \\ \hline & anger symbol & I am angry about this! \\ \hline \end{tabular} \end{table} Tab. 4: **A Comparison of the different Emoji Lexicons.** \begin{table} \begin{tabular}{|l|l|l|l|} \hline \multicolumn{4}{|c|}{Pre-processing Rules} \\ \hline Entity present & Sample Text & Lexical Approach & Semantics Approach \\ \hline URLs & https://host/location & url & '' \\ \hline Slang Terms & vaxxed 2day & vaxxed today & vaccinated today \\ \hline hashtags & \#word & word & word \\ \hline @mentions & @USER1 @USER2 & atUser atUser & Name1 Name2 \\ \hline Letter Cases & HeLLo & hello & HeLLo \\ \hline numbers & \(30^{th}\) covid19 2021 & \(30\) covid & \(30^{th}\) covid19 2021 \\ \hline contractions & it’s can’t & it is cannot & it is cannot \\ \hline Repetition & 💤💤💤 the t!! & zzz the t!! & I am asleep! the t!! \\ \hline Misspelling & i r ded & i r ded & i r ded \\ \hline \end{tabular} \end{table} Tab. 5: **Pre-processing Rules for the two Approaches.** In both methods, contractions are expanded into their full expressions, spelling errors are not corrected, back-to-back repetitions of emojis within a tweet are discarded, leaving behind one of them before it is translated, slang terms are replaced by their formal expressions, hashtags in front of words are removed, leaving the words behind, urls and @mentions are replaced by some more generic expression, and double spaces are contracted into single ones. The minor differences come about in the way that the urls, @mentions and slang words related to vaccination are treated by each approach. In the lexical approach, there are no corrections for slang terms pertaining to the word vaccine and its different forms according to the part of speech it adopts in a sentence. The major differences between the two approaches are in the way emojis, punctuation marks, uppercase letters and numerical characters are treated. In the semantics approach, punctuation marks and back-to-back repetitions of punctuation marks are not discarded, uppercase letters are not lower-cased and numerical characters are kept. This is not the case in the lexical approach. ## Appendix III Machine Learning Models In Table 6, below, we highlight the hyperparameters of each model; we explicitly show the chosen model-specific hyper-parameters and their associated tuning ranges. The SVM model was chosen to have three hyperparameters, i.e., the kernel, gamma, and the cost function, while the LSTM and Bi-LSTM models were chosen to have eight hyperparameters, i.e., the dropout rate, learning rate, batch size, dense units, embedding dimensions, hidden dimensions, number of epochs, weight decay and the choice of optimizer. The BERT and RoBERTa models were chosen to have three hyperparameters, namely, the number of epochs, dropout rate, and the learning rate. The possible kernels for the SVM model were chosen to be rbf or linear kernels, while the possible optimizers for the RNN models were chosen to be Adam, Adamax, RMSprop and SGD. Note that the optimizer and weight decay parameters were not chosen as hyperparameters for the transformer models, since it was found that the weight decay was the least important of the hyperparameters when it was included, and since AdamW is the standard optimizer used in all BERT/RoBERTa models. 
Also note that the tuning ranges are much larger than usual, especially in the case of the RNN models, so that an extensive hyperparameter search and optimisation could be performed using the WandB platform. The mode of optimisation was chosen to be Bayes' optimisation. In Table 7, below, we explicitly provide the optimised hyper-parameter values for each model on the lexical-based and semantics-based pre-processing methods, respectively. In the case of the SVM models, the obtained values were identical for both pre-processing methods. One can immediately see that some of the obtained optimal hyperparameter values are unusual or uncommon among these models, particularly in the case of the RNN classification models, for both pre-processing methods, respectively, owing to the uniqueness and complexity of our particular dataset for this particular use case and the wide range of tuning values for each hyperparameter. \begin{table} \begin{tabular}{|l|l|l|l|} \hline Model & SVM & LSTM/Bi-LSTM & BERT/RoBERTa \\ \hline kernel & rbf, linear & n/a & n/a \\ \hline \(\gamma\) & (\(10^{-4}\); \(10^{-3}\)) & n/a & n/a \\ \hline C & (\(10^{-4}\); \(10^{-3}\)) & n/a & n/a \\ \hline dropout & n/a & (0,0.9) & n/a \\ \hline weight decay & n/a & (0,0.9) & n/a \\ \hline learning rate & n/a & (0,0.9) & (\(10^{-6}\),\(10^{-4}\)) \\ \hline epochs & n/a & (0,1000) & (2,12) \\ \hline batch size & n/a & (1,1250) & (8,64) \\ \hline dense units & n/a & (1,1250) & n/a \\ \hline embed. dim. & n/a & (1,1250) & n/a \\ \hline hidden dim. & n/a & (1,1250) & n/a \\ \hline optimiser & n/a & Adam,Adamax, RMSprop,SGD & n/a \\ \hline \end{tabular} \end{table} Tab. 6: **Chosen Model-specific Hyper-parameters to tune.** In Table 8, below, we show the model performances of various classification models on the hand-labelled Covid-19 dataset. The results from the VADER and TextBlob algorithms served as a comparison of the degree of similarity or dissimilarity in the criteria used when classifying sentiments via automated means versus classifying sentiments using a manual approach. In this case, these algorithms showed a poor correlation with the hand labels i.e, overall F1-scores of 43% and 37%, respectively - which again highlights the superiority of hand-labelling over manual labelling. The pre-trained NLP-Town BERT model when tested on the COVID-19 dataset achieved an overall precision of 46%, an overall accuracy of 46% yielding an F1-score of 46%, while the pre-trained Cardiff-NLP RoBERTa model achieved a similar result with an overall precision of 46%, overall accuracy of 48% yielding an F1-score of 47%. The Original BERT-BASE-CASED model when tested on the COVID-19 dataset achieved an overall precision of 50%, an overall accuracy of 48% yielding an F1-score of 50%, while our COVID-19 BERT-BASE-CASED model achieved a much better result with an overall precision of 60%, overall accuracy of 61% yielding an F1-score of 60%. By comparison, The Original RoBERTa-BASE model when tested on the COVID-19 dataset achieved an overall precision of 49%, an overall accuracy of 50% yielding an F1-score of 49%, while the COVID-19 RoBERTa-BASE model achieved a much better result with an overall precision of 62%, overall accuracy of 61% yielding an F1-score of 61%. 
The superior performances of the COVID-19 models, when compared to the pre-trained NLP-Town BERT and NLPTown RoBERTa models illustrates both the complexity of the dataset and use-case when compared to sentiment analysis done on much simpler use-cases and using the same labels, as well as the cultural and linguistic differences in the way people communicate in South Africa as compared to the rest of the world. The superior performances of the COVID-19 models, when compared to the Original BERT and RoBERTa models shows that significant training has been achieved. ## Acknowledgments We give special thanks to the IBM team with whom we had enormous discussion, as well as Malipalema Khang and Abhaya Kumar Swain for technical support during the initial phase of this project. We also thank Mahnaz Alavinejad for useful discussion. We give a big thank you to Canada's International Development Research Centre (IDRC) and the Swedish Inter- national Development Cooperation Agency (SIDA) (Grant No. 109559-001) for funding this research.
2305.03069
Probing the onset of maximal entanglement inside the proton in diffractive DIS
It has been proposed that at small Bjorken $x$, or equivalently at high energy, hadrons represent maximally entangled states of quarks and gluons. This conjecture is in accord with experimental data from the electron-proton collider HERA at the smallest accessible $x$. In this Letter, we propose to study the onset of the maximal entanglement inside the proton using Diffractive Deep Inelastic Scattering. It is shown that the data collected by the H1 Collaboration at HERA allows to probe the transition to the maximal entanglement regime. By relating the entanglement entropy to the entropy of final state hadrons, we find a good agreement with the H1 data using both the exact entropy formula as well as its asymptotic expansion which indicates the presence of a nearly maximally-entangled state. Finally, future opportunities at the Electron Ion Collider are discussed.
Martin Hentschinski, Dmitri E. Kharzeev, Krzysztof Kutak, Zhoudunming Tu
2023-05-04T18:00:00Z
http://arxiv.org/abs/2305.03069v2
# Probing the onset of maximal entanglement inside the proton in diffractive DIS ###### Abstract It has been proposed that at small Bjorken \(x\), or equivalently at high energy, hadrons represent maximally entangled states of quarks and gluons. This conjecture is in accord with experimental data from the electron-proton collider HERA at the smallest accessible \(x\). In this Letter, we propose to study the onset of the maximal entanglement inside the proton using Diffractive Deep Inelastic Scattering. It is shown that the data collected by the H1 Collaboration at HERA allows to probe the transition to the maximal entanglement regime. By relating the entanglement entropy to the entropy of final state hadrons, we find a good agreement with the H1 data using both the exact entropy formula as well as its asymptotic expansion which indicates the presence of a nearly maximally-entangled state. Finally, future opportunities at the Electron Ion Collider are discussed. Entanglement entropy, DIS, diffraction ## I Introduction At the heart of the theory of strong interactions, Quantum Chromodynamics (QCD), there is the phenomenon of _color confinement_ that we still do not understand. We know perfectly well that it exists and our own existence is the proof, but its mechanism has been one of the most important unsolved problems in modern physics [1]. Recent advances in quantum information science have allowed to look at this problem from a different perspective [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14], see Ref. [15] for a recent review. In fact, confinement can be viewed as an ultimate limit of entanglement as the quarks and gluons are not just correlated, but simply cannot exist in isolation. In Quantum Mechanics, an isolated proton is a pure quantum state with zero von Neumann entropy. However, when viewed as a collection of quasi-free partons such as in the parton model [16; 17; 18], the proton possesses a non-zero entropy associated with different ways to distribute partons in the phase space. To resolve this paradox, a proposal has been made in Ref. [3] that a Deep Inelastic Scattering (DIS) process probes only a part of the wave function of the proton, thus described by a reduced density matrix where the unobserved part is traced over. There is an entanglement entropy associated with the measured reduced density matrix, which represents the entropy associated with the parton distributions. Thus, DIS process can be viewed as a sudden quench of the entangled quantum state of the proton, as a result of which a finite entropy is produced [3; 19; 20; 21]. This final state entropy can be measured from the multiplicity distribution of the produced hadrons. More specifically, in an electron-proton (\(ep\)) DIS process, the virtual photon emitted by the electron has a four-momentum \(q\) that probes only a part of the proton wave function with a transverse spatial size of \(\sim 1/Q\), where \(Q^{2}=-q^{2}\) characterizes the resolution of the probe. This measurement provides access to a subset of the total density matrix, \(\rho\), of the proton. This lack of information about the rest of the proton gives rise to the entanglement entropy, \(S_{E}=-{\rm tr}\rho_{\rm A}\ln\rho_{\rm A}\), where the reduced density matrix \(\rho_{A}={\rm tr}_{B}\rho\) is obtained by tracing over the unobserved degrees of freedom of the total density matrix \(\rho\). The entropy production in high energy scattering has been investigated also in Refs. 
[22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43]. Based on an explicit model of QCD evolution at small Bjorken \(x\), it has been conjectured [3] that the inclusive DIS process probes a proton in the maximally entangled state, i.e., in a state where a large number of partonic micro-states occur with equal probabilities, \(P_{n}(Y)=1/\langle n\rangle\). Here \(Y\) is the rapidity and \(n\) is the number of resolved constituents of the proton. This maximally entangled state corresponds to an entropy \(S=\ln n\), which has been confirmed by comparison of calculations to data both in proton-proton collisions [19] and inclusive \(ep\) DIS [44; 45; 46; 47]. Therefore, the questions of interest that arises from these findings are, how does the maximally entangled state emerge, and whether there are conditions under which the constituents of the proton are _not_ maximally entangled? It has been found that in non-diffractive DIS process at sufficiently small \(x\), one probes a maximally entangled state of the proton [45; 46]. However, it is known that \(\sim 15\%\)[48; 49] of the inclusive DIS cross section measured at HERA is from diffractive processes, where a rapidity gap in the distribution of the hadronic final states is observed (for a review see [50]). These diffractive processes are believed to probe different components of the parton wave function of the proton, in which the parton evolution is "delayed" by the presence of the rapidity gap1[52; 53; 54; 55; 56]. Footnote 1: For other work on relation of diffraction at high energy scattering and entanglement we refer the Reader to [51] In this Letter, we present the first study of the entanglement entropy associated with diffractive deep elastic scattering (DDIS) processes, based on a dipole cascade model [57]. To validate our model, we compare it to the published data from the H1 Collaboration on charged particle multiplicity distributions in DDIS at the top HERA energy. Finally, we discuss future opportunities at the upcoming Electron-Ion Collider (EIC) at Brookhaven National Laboratory. ## II Cascade model for diffraction We consider DDIS of an electron on a proton target. As for inclusive DIS events, these events are characterized by the virtuality of the photon \(q^{2}=-Q^{2}\) as well as Bjorken \(x=Q^{2}/2p\cdot q\), where \(q\) and \(p\) denote the four-momentum of the virtual photon and proton respectively, see also Fig. 1. Diffractive events are further characterized by \(x_{\mathbb{P}}\) which denotes the proton's momentum fraction carried by the Pomeron. The magnitude of the rapidity gap \(y_{0}\) is related to \(x_{\mathbb{P}}\) by \(y_{0}\simeq\ln 1/x_{\mathbb{P}}\). The variable \(\beta\) denotes the Pomeron's momentum fraction carried by the quark interacting with the virtual photon. For collinear kinematics, \(x=\beta\cdot x_{\mathbb{P}}\). With \(Y=\ln 1/x\), the width of the rapidity interval occupied by the diffractive system \(X\) formed in the collision is \(y_{X}=Y-y_{0}\simeq\ln 1/\beta\). For large invariant mass \(M_{X}\) or small values of \(\beta\) of the diffractive system \(X\), using factorization and the limit of large number of colors, the diffractive system can be described as a set of color dipoles [52; 53; 54; 55; 56]. 
Within the 1+1 dimensional model for the distribution of dipoles [57] used in [3; 58], the probability \(p_{n}^{D}(y_{X})\) to have exactly \(n\) dipoles is described by the following cascade equation: \[\frac{\partial p_{n}^{D}(y_{X})}{\partial y_{X}}=-n\Delta p_{n}^{D}(y_{X})+(n -1)\Delta p_{n-1}^{D}(y_{X}), \tag{1}\] where \(\Delta\) controls the rate at which the number of dipoles grows. In the following we consider a slight generalization of the solution to this equation used for the inclusive case, _i.e_, \[p_{n}^{D}(y_{X})=\frac{1}{C}e^{-\Delta y_{X}}\left(1-\frac{1}{C}e^{-\Delta y_ {X}}\right)^{n-1}. \tag{2}\] Introducing the additional constant \(C\geq 1\) allows to take into account the possibility that more than one dipole exists at \(y_{X}=0\). For diffractive reactions, the exchanged Pomeron serves as a source for the generation of diffractive dipoles and therefore \(p_{n\geq 1}(0)\neq 0\) is possible, see also [52; 53; 54; 55; 56]. With the above modification we have for the average number of dipoles, \[\left\langle\frac{dn(\beta)}{d\ln 1/\beta}\right\rangle=\sum_{n}np_{n}^{D}(y_{X} )=C\left(\frac{1}{\beta}\right)^{\Delta}, \tag{3}\] which can be identified with the number of partons per unit of \(\ln 1/\beta\). The latter can be related to the diffractive parton distribution functions (PDF) \(\beta x_{\mathbb{P}}f(\beta,x_{\mathbb{P}})\) in the low \(\beta\) region. ## III Diffractive DIS data Data used in this Letter was collected by the H1 Collaboration [59] during the HERA 1 period. The measurements of charged particle multiplicity distributions were performed in the rest frame of the hadronic final-state \(X\). A minimum pseudo-rapidity gap of \(\sim 4.3\) units was imposed. The data analysis was done separately for the forward and backward hemispheres. To evaluate the entanglement entropy, one should include all charged particles in the diffractive final states. Therefore, we combine the measured multiplicity distributions from forward and backward hemispheres Figure 1: Kinematics of the neutral current diffractive DIS process \(ep\to epX\). into one single distribution, which is done by convoluting two independent probability distributions. The entropy of the final hadronic state is calculated as follows: \[S_{\rm hadron}=-\sum P_{N}\log P_{N}, \tag{4}\] where \(P_{N}\) is the probability to detect \(N\) charged hadrons. Similar analyses were done in Refs. [19; 47]. Note that the H1 measurement in Ref. [59] was presented in the KNO form [60] and a simple conversion to the multiplicity distribution \(P_{N}\) has been done for each measured \(M_{X}\) or \(\beta\) bin. Comparing to the inclusive DIS measurement of hadron entropy in Ref. [47], the covered phase space of charged hadrons in the DDIS measurement was larger [59], with track selection within \(|\eta_{\rm lab}|<2.0\) and transverse momentum larger than 100 MeV/c. ## IV Numerical results and the model comparison In the following, we compare our model to the data from the H1 Collaboration [59]. We use a description based on a direct extrapolation of the 1+1 dimensional model Eqs. (1) and (2) to the relevant values of \(\beta\). To this end, we use the fact that the number of partons and the number of dipoles coincide in the low \(\beta\) region and are directly given by the corresponding leading order diffractive PDFs. 
To compare with the available data set, we average over \(Q^{2}\) and integrate over the region probed in \(x_{\mathbb{P}}\): \[\left\langle\frac{dn(\beta)}{d\ln 1/\beta}\right\rangle =\frac{1}{Q_{\rm max}^{2}-Q_{\rm min}^{2}}\int\limits_{Q_{\rm min }^{2}}^{Q_{\rm max}^{2}}dQ^{2}\int\limits_{x_{\mathbb{P},\rm min}}^{x_{ \mathbb{P},\rm max}}dx_{\mathbb{P}}\] \[\beta\left[f_{\Sigma/p}^{D}\left(\beta,x_{\mathbb{P}},Q^{2} \right)+f_{g/p}^{D}\left(\beta,x_{\mathbb{P}},Q^{2}\right)\right], \tag{5}\] where \[f_{\Sigma/p}^{D}\left(\beta,x_{\mathbb{P}},Q^{2}\right) =\sum_{f=1}^{n_{f}}\left[f_{q_{f}/p}^{D}\left(\beta,x_{\mathbb{P} },Q^{2}\right)\right.\] \[\left.+f_{\bar{q}/p}^{D}\left(\beta,x_{\mathbb{P}},Q^{2}\right) \right], \tag{6}\] and \(Q_{\rm min}^{2}=7.5\) GeV\({}^{2}\), \(Q_{\rm max}^{2}=100\) GeV\({}^{2}\), while \(x_{\mathbb{P},\rm min}=0.0003\) and \(x_{\mathbb{P},\rm max}=0.05\). The selected phase space is chosen to reproduce the phase space in which the H1 data was analyzed. To fix the free parameters of the model, \(C\) and \(\Delta\), we impose that the average number of dipoles given by Eq.(3) in the low \(\beta\) region, \(\beta\in[10^{-5},10^{-4}]\) should agree with predictions based on diffractive PDFs, for which we use the leading order results GKG18-DPDFs (Set A), provided by the authors of [61]. In particular we use for the diffractive PDF of the parton \(i\) the following parametrization \[\beta f_{i/p}^{D}(\beta,x_{\mathbb{P}},Q^{2})=F_{\mathbb{P}/p}(x_{\mathbb{P}} )\cdot\beta f_{i/\mathbb{P}}(\beta,Q^{2}), \tag{7}\] with Pomeron flux factor \[F_{\mathbb{P}/p}(x_{\mathbb{P}})=A_{P}\cdot\frac{D}{x_{\mathbb{P}}^{\lambda_{ P}}^{\rho}}, \tag{8}\] where \(A_{P}=2.39187\), \(D=0.142735\) and \(\lambda_{P}=1.185\). We then find \(\Delta=0.29233,C=4.27382\). Invoking parton-hadron duality, this number should approximately agree with the average number of hadrons measured in DDIS. As noted in [44], experiments measure only the charged hadron multiplicity and one assumes \[\left\langle\frac{dn(\beta)}{d\beta}\right\rangle_{\rm charged}\simeq\frac{2}{3} \left\langle\frac{dn(\beta)}{d\beta}\right\rangle. \tag{9}\] To describe the entropy of charged hadrons, we thus replace in our expression \(C\to C^{\prime}=2/3\cdot C=2.84921\). Using the parameters listed above, the probability distribution in Eq. (2) yields the average number of partons. This can be used to estimate the number of charged hadrons, if the ratio between charged and neutral particle yield is taken to be a constant, _i.e.,_ independent of \(\beta\). The resulting probability distribution is illustrated in Fig. 2 for \(n=1,\ldots,50\). In the low \(\beta\) region, the probabilities \(p_{n}\) become equal. In the limit \(\beta\to 0\) the probability distribution is therefore constant (different multiplicities have equal probabilities) and the entropy reaches a maximum, corresponding to a maximally entangled state. At moderate values of \(\beta\in[0.06,0.41]\), probed by the currently available data set, we observe the gradual transition to the maximally entangled regime. Away from the maximally entangled region, configurations with a few partons have a considerably higher probability than those with many partons - therefore the entropy does not reach its maximal value. Figure 2: Probabilities \(p_{n}(y_{X})\) with \(y_{X}=\ln(1/\beta)\) as extracted from leading order diffractive PDFs for \(n=1,\ldots,50\) for the charged hadron multiplicities. 
The shaded region indicates the region in \(\beta\) probed by the H1 data set. To compare with hadron entropy extracted from the H1 charged hadron multiplicity distribution, we assume (in accord with the local parton-hadron duality [62]) that the multiplicity distributions of hadrons and dipoles are the same, \(p_{N}=p_{n}\). We thus use the expression for the hadron entropy (4) with the dipole probabilities given by Eq.(2). In the maximally entangled regime, all dipole multiplicity probabilities become equal; we write down this universal value as \(p_{n}\equiv 1/Z\). The entanglement entropy then takes the form \[S(Z)=-\sum_{n}p_{n}\ln p_{n}=(1-Z)\ln\frac{Z-1}{Z}+\ln Z. \tag{10}\] We can perform the Taylor expansion of this formula at \(Z\to\infty\), when the number of partonic microstates becomes large. This yields \[S_{\text{asym.}}(Z)=\ln Z+1+\mathcal{O}(1/Z), \tag{11}\] which describes a maximally entangled state and is only applicable in the low \(\beta\) region. In the truly asymptotic region \(\beta\to 0\), the unity in (11) may be neglected. However, when the number of partonic microstates is not too large (the case of DDIS in the H1 kinematics), this constant term is still numerically important. For numerical evaluation we use \(Z=C^{\prime}\beta^{-\Delta}\). Our results are shown in Fig. 3 in comparison to the H1 DDIS data. Uncertainties have been estimated through a variation of the factorization scale of the diffractive leading order PDFs in the range \(\mu\to[Q/2,2Q]\). The plot shows that the central value of the result (10) is closer to the data than the asymptotic result (11). We see, however, that the curves approach each other at smaller values of \(\beta\) indicating that the entanglement entropy reaches its maximal value. There is an important lesson learned from this study for the future DIS experiments at the EIC. The QCD evolution of parton density in rapidity is delayed in DDIS by the rapidity gap. Therefore, to study effects of the rapidity evolution, it is essential to have a large detector coverage to impose different rapidity gaps. Currently, the detector design of the ePIC experiment at the EIC has coverage up to \(\sim\)3.5-4 in pseudorapidity in the hadron-going direction, which is significantly larger than at HERA. In addition, the forward region at the EIC will also have a large acceptance coverage, which enables further control on the rapidity gap size. Quantitative studies of the onset of maximally entangled regime in the ePIC experiment should be performed in the near future. ## V Conclusions In conclusion, we investigated the onset of maximally entangled regime inside the proton in diffractive deep inelastic scattering. Using diffractive parton distribution functions and a dipole cascade model, we described the hadron entropy measured by the H1 experiment. We find that the maximally entangled regime sets in at small values of \(\beta\), and that the approach to this regime is controlled by the magnitude of the rapidity gap. This is because the rapidity gap delays the QCD evolution in rapidity, and thus delays the onset of the maximal entanglement by reducing the Hilbert space of partonic states. By relating the entanglement entropy to the entropy of final state hadrons, we find a good agreement with the H1 data at small \(\beta\) using both the exact entropy formula as well as its asymptotic expansion which indicates the presence of a nearly maximally-entangled state. 
Our study opens new possibilities for the investigation of quantum entanglement inside the proton using diffractive deep inelastic scattering at the Electron Ion Collider. ## Acknowledgements We thank V. Guzey and H. Khanpour for providing their codes for diffractive PDFs and S. Munier for useful correspondence. M. Hentschinski acknowledges support by Consejo Nacional de Ciencia y Tecnologia grant number A1 S-43940 (CONACYT-SEP Ciencias Basicas). The work of D. Kharzeev was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, Grants No. DE-FG88ER41450 and DE-SC0012704 and by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under Contract No.DE-SC0012704. The work of K. Kutak has been partially supported by Figure 3: Exact and asymptotic entropy as a function of \(\beta\). H1 data [59] extracted from the multiplicity distributions are shown, where statistical and systematic uncertainty are added in quadrature and presented as error bars. The uncertainty bands correspond to a variation of the factorization scale of leading order diffractive PDFs in the range \(\mu\to[Q/2,2Q]\) European Union's Horizon 2020 research and innovation program under grant agreement No.824093 and the The Kosciuszko Foundation for the Academic year 22/23 for the project "Entropy of dense system of quarks and gluons". The work of Z. Tu is supported by the U.S. Department of Energy under Award DE-SC0012704. K. Kutak wants to acknowledge the BNL Nuclear Theory Department for hospitality during the period when the project was initiated.
2306.02627
On hyperbolic dimension gap for entire functions
Polynomials and entire functions whose hyperbolic dimension is strictly smaller than the Hausdorff dimension of their Julia set are known to exist but in all these examples the latter dimension is maximal, i.e. equal to two. In this paper we show that there exist hyperbolic entire functions $f$ having Hausdorff dimension of the Julia set $\HD (\J _f)<2$ and hyperbolic dimension $\HypDim(f)<\HD(\J_f)$.
Volker Mayer, Mariusz Urbański
2023-06-05T06:54:10Z
http://arxiv.org/abs/2306.02627v2
# On hyperbolic dimension gap for entire functions ###### Abstract. Polynomials and entire functions whose hyperbolic dimension is strictly smaller than the Hausdorff dimension of their Julia set are known to exist but in all these examples the latter dimension is maximal, i.e. equal to two. In this paper we show that there exist hyperbolic entire functions \(f\) having Hausdorff dimension of the Julia set \(\operatorname{HD}(\mathcal{J}_{f})<2\) and hyperbolic dimension \(\operatorname{HypDim}(f)<\operatorname{HD}(\mathcal{J}_{f})\). 2020 Mathematics Subject Classification: Primary 37F10; Secondary 30D05, 28A80 The research of Volker Mayer was supported in part by the ANR-DFG project QuaSiDy ANR-21-CE40-0016. The research of Mariusz Urbanski was supported in part by the Simons grants 581668 and 900989. Mariusz Urbanski also thanks the SRMI in Sydney for warm hospitality and support during the work on this research. hyperbolic dimension and the Hausdorff dimension of the Julia set. In general, since hyperbolic sets of \(f\) are subsets of the Julia set of \(f\), we have that \[\operatorname{HypDim}(f)\leq\operatorname{HD}(\mathcal{J}_{f}). \tag{1.1}\] Examples of entire functions with strict inequality are known ([19], [21]). Quite recently Avila-Lyubich [1, 2] showed that there exist Feigenbaum polynomials having this property. But in all known examples with strict inequality in (1.1) the Hausdorff dimension of the Julia set is maximal, i.e. equal to two and Avila-Lyubich mention that for arbitrary polynomials \(f\) with \(\operatorname{HD}(\mathcal{J}_{f})<2\) one should have equality. Here we show that this is not the case for entire functions even inside the Eremenko-Lyubich class \(\mathcal{B}\) consisting of all entire functions having a bounded set of finite singularities. In order to prove Theorem 1.1 we first need good candidates of entire functions whose Julia sets have Hausdorff dimension less than two. The first such examples where provided by Gwyneth Stallard during 1990's. The interested reader can find an overview in her survey in [16]. These examples are entire functions having one single logarithmic tract over infinity; see Section 2.1 for the definition of the singularities of entire functions, in particular of logarithmic tract. As nowadays it is well known, the geometry of such a tract or the growth of the function in the tract influences the size of the Julia set. Particularly interesting for the present work is her family of intermediate growth in [18]. The growth does depend on a parameter \(p>0\) and these functions are defined by the formula \[E(z):=\frac{1}{2i\pi}\int_{L}\frac{\exp\left(e^{(\log\xi)^{1+p})}\right)}{\xi-z }d\xi\,, \tag{1.2}\] where \(L\) is the boundary of the region \[G=\left\{x+iy\in\mathbb{C}:|y|<\frac{\pi x}{(1+p)(\log x)^{p}}\;,\;x>3\right\}, \tag{1.3}\] oriented in the clockwise direction, for \(z\in\mathbb{C}\backslash\overline{G}\) and by analytic continuation for \(z\in\overline{G}\). Appropriate details of such analytic extension are given in Section 2.2.. The reader should have in mind that this function is close to \[f(z)=\exp\left(e^{(\log z)^{1+p}}\right)\quad\text{ for }z\in G \tag{1.4}\] and is bounded elsewhere. Here \((\log z)^{1+p}\) is defined so that it gives real values for real \(z>e\). 
Consider then the family \(\left(\mathbf{E}_{l}:\mathbb{C}\to\mathbb{C}\right)_{l\in\mathbb{C}}\) defined by the formula \[\mathbf{E}_{l}(z):=E(z-l).\] Shifting in this way the function \(E\) by a large \(l>0\) makes the logarithmic tract is backward invariant and yields \(J_{\mathbf{E}_{l}}\subset G\). Consequently, only the dynamics of \(\mathbf{E}_{l}\) in \(G\), the domain on which \(\mathbf{E}_{l}\) is close to the function \(\mathbf{f}_{l}\), given by the formula \[\mathbf{f}_{l}(z):=f(z-l),\] is relevant for our purposes. The details of this and the definition of the Julia set in the present setting are given in Section 2.1. **Fact 1.2** (Stallard [20]).: _Let \(p>0\). All the functions \(\mathbf{E}_{l}\), \(l\in\mathbb{C}\), belong to the Eremenko-Lyubich class \(\mathcal{B}\) and there exists a constant \(C_{p}>0\) such that for all real \(l>C_{p}\) we have that_ \[\operatorname{HD}(J_{\mathbf{E}_{l}})=1+\frac{1}{1+p}<2\,.\] In the present note we analyze the hyperbolic dimension of these functions. In fact, we first work with the functions \(\mathbf{f}_{l}\) and then transfer the results to the globally defined entire functions \(\mathbf{E}_{l}\). The key point is to employ the thermodynamic formalism of [11] and, in particular, the Bowen's Formula from this paper that determines hyperbolic dimension. We will see that \(\lim_{l\to\infty}\operatorname{HypDim}(\mathbf{E}_{l})=1\) which clearly implies that \(\operatorname{HypDim}(\mathbf{E}_{l})<\operatorname{HD}(\mathcal{J}_{ \mathbf{E}_{l}})\) provided that \(l>C_{p}\) is large enough. _Acknowledgement:_ We would like to thank the referee for excellent refereeing, comments, and suggestions which improved the final exposition of our results. ### Notation We use standard notation such as \(\mathbb{D}(z,r)\) for the open disk in \(\mathbb{C}\) with center \(z\in\mathbb{C}\) and radius \(r>0\). When the center is the origin, we also use the simplified notation \[\mathbb{D}_{r}:=\mathbb{D}(0,r).\] The complement of its closure will be denoted by \[\mathbb{D}_{r}^{*}:=\mathbb{C}\backslash\overline{\mathbb{D}}_{r}.\] Frequently we deal with half-spaces. Let \[\mathcal{H}_{s}:=\left\{z\in\mathbb{C}:\;\Re z>s\right\}\quad,\quad s\geq 0\,.\] When \(s=0\), then we also write \(\mathcal{H}\) for \(\mathcal{H}_{0}\). Many constants, especially those in Fact 2.3, depend on the parameter \(p\) of the definitions of the functions \(E\) and \(f\). However, this will be fixed throughout the whole paper and we may ignore it. We say that \[A\leq B\] for non-negative real expressions \(A\) and \(B\) if and only if there exists a positive constant \(C\) independent of variable parameters involved in \(A\) and \(B\) such that \(A\leqslant CB\). We then say that \(A\succeq B\) if and only if \(B\preceq A\). Finally, \(A\succeq B\) if and only if \(A\preceq B\) and \(B\preceq A\). ## 2. Singularities, models and approximating entire functions ### General definitions Iversen's classification of singularities is explained in length in [9], see also [4]. An entire function \(g:\mathbb{C}\to\mathbb{C}\) can have only two types of singular values. Firstly, a point \(b\in\hat{\mathbb{C}}\) is a _critical value_ of \(g\) if and only if \(b=g(c)\) for some \(c\in\mathbb{C}\) with \(g^{\prime}(c)=0\). 
Secondly, a complex number \(b\in\hat{\mathbb{C}}\) is an _asymptotical value_ of \(g\) if and only if there exists a continuous function \(\gamma:[0,+\infty)\to\mathbb{C}\) such that \[\lim_{t\to+\infty}\gamma(t)=\infty\text{ and }\lim_{t\to+\infty}f(\gamma(t))=b.\] In this latter case for every \(r>0\) there exists an unbounded connected component \(\Omega_{r}\) of \(g^{-1}(\mathbb{D}(b,r))\) such that \[\Omega_{r^{\prime}}\subset\Omega_{r}\] whenever \(r^{\prime}<r\) and \[\bigcap_{r>0}\Omega_{r}=\emptyset.\] Such a choice of components is called an asymptotic tract over \(b\) and it is called _logarithmic tract_ in the case when the map \(g:\Omega_{r}\to\mathbb{D}(b,r)\backslash\{b\}\) is a universal covering for some \(r>0\). The set of singular values of \(f\) is proved to consist of all critical and asymptotic values of \(f\). Its intersection with \(\mathbb{C}\) will be denoted by \(S(g)\). We consider functions belonging to the Eremenko-Lyubich class \(\mathcal{B}\) that consists of all entire functions \(g\) for which \(S(g)\) is a bounded set. These functions are also called of _bounded type_. If \(g\in\mathcal{B}\), then there exists \(r>0\) such that \(S(g)\subset\mathbb{D}_{r}\). Then \(g^{-1}(\mathbb{D}_{r}^{*})\) consists of mutually disjoint unbounded Jordan domains \(\Omega_{r}\) with real analytic boundaries such that \(g:\Omega\to\mathbb{D}_{r}^{*}\) is a covering map (see [8]). Thus, an entire function \(g\) in class \(\mathcal{B}\) has only logarithmic singularities over infinity. As we already mentioned it, the connected components of \(g^{-1}(\mathbb{D}_{r}^{*})\) are called _tracts_ or, more precisely, _logarithmic tracts_. Then there exist all holomorphic branches of the logarithm of \(g\) restricted to \(\Omega_{r}\). Fix one of them and denote it by \(\tau\). So, \[g|_{\Omega_{r}}=\exp\circ\tau, \tag{2.1}\] where \[\varphi=\tau^{-1}:\mathcal{H}_{\log r}\to\Omega_{r}\] is a conformal homeomorphism. In addition, \(\varphi\) extends continuously to \(\infty\) and \(\varphi(\infty)=\infty\). Keeping this notation, if we restrict \(g\) to the tracts over infinity then it is now standard, especially since the appearance of the papers [5, 6] by Chris Bishop, to call the map \[g_{|g^{-1}(\mathbb{D}^{*}_{r})}:g^{-1}(\mathbb{D}^{*}_{r})\to\mathbb{D}^{*}_{r}\] a model function. We will see that the functions considered in our current paper have only one single tract over infinity. This is the reason why we use the following simplified definition of a model function. This is in the spirit of the definition in [14], see [5, 6] for the general one. **Definition 2.1**.: _A model is any holomorphic map_ \[g=e^{\tau}:\Omega_{r}\to\mathbb{D}^{*}_{r},\] _where_ 1. \(r\in[1,+\infty)\)_,_ 2. \(\Omega_{r}\) _is a simply connected unbounded domain in_ \(\mathbb{C}\)_, called a tract, such that_ \(\partial\Omega_{r}\) _is a connected subset of_ \(\mathbb{C}\)__ _and_ 3. \(\tau:\Omega_{r}\to\mathcal{H}_{\log r}\) _is a conformal homeomorphism fixing infinity; the latter more precisely meaning that_ \[\tau(z)\to\infty\;\;\text{as}\;\;z\to\infty.\] The tract \(\Omega_{r}\) may or may not intersect the disk \(\mathbb{D}_{r}\). The later case has important dynamical consequences. 
**Definition 2.2**.: _If \(f\) is a model or an entire function of bounded type and if there exists \(r>0\) such that_ \[S(f)\subset\mathbb{D}_{r}\quad\text{ and}\quad\overline{f^{-1}(\mathbb{D}^{*}_{r})}\subset\mathbb{D}^{*}_{r}, \tag{2.2}\] _then \(f\) is called of disjoint type._ If \(f\) is such a disjoint type model or entire function, then the _Julia set_ of \(f\) is defined to be \[\mathcal{J}_{f}:=\left\{z\in\mathbb{D}^{*}_{r}:f^{n}(z)\in\mathbb{D}^{*}_{r}\; \;\text{for all}\;\;n\geq 1\right\}.\] For disjoint type entire functions this definition coincides with the usual one, see Proposition 2.2 in [15]. ### Elementary properties of the functions \(E\) and \(f\) We now discuss some elementary properties of the functions introduced in the introduction and we examine how they behave with respect to the above definitions. To start with, we recall that these functions have been introduced and studied by Stallard and her paper [18, Section 3] lists elementary properties of \(E\) and \(f\). We recall now some necessary facts from this paper. Let's denote: \[G_{x_{0},\kappa}:=\left\{z=x+iy\in\mathbb{C}:x>x_{0}\text{ and }|y|<\kappa\frac{ \pi x}{(1+p)(\log x)^{p}}\right\}\] and abbreviate \(G_{x_{0}}=G_{x_{0},1}\) so that the set \(G\) of (1.3) is \(G_{3}=G_{3,1}\). For any integer \(n\geq 3\) let \(\sigma_{n+1}\) be the boundary of the open set \(G\backslash\overline{G}_{n+1}\). The orientation of \(\sigma_{n+1}\) and of all following boundary curves are always understood in the clockwise direction. Cauchy's Integral Formula shows that \[\frac{1}{2i\pi}\int_{\sigma_{n+1}}\frac{f(\xi)}{\xi-z}d\xi=0\quad\text{for every }z\notin\overline{G}.\] Therefore, still for \(z\notin\overline{G}\), \[E(z)=\frac{1}{2i\pi}\int_{\partial G}\frac{f(\xi)}{\xi-z}d\xi=\frac{1}{2i\pi }\int_{\partial G_{n+1}}\frac{f(\xi)}{\xi-z}d\xi\,.\] It thus follows that the right hand side integral gives the holomorphic extension of \(E\) to the domain \(\mathbb{C}\backslash\overline{G}_{n+1}\). Consider now an arbitrary point \(z\in G\backslash\overline{G}_{n+1}\). Then, Cauchy's Residue Theorem shows that \[E(z)= \frac{1}{2i\pi}\int_{\partial G_{n+1}}\frac{f(\xi)}{\xi-z}d\xi\] \[= -\frac{1}{2i\pi}\int_{\partial(G_{n}\backslash\overline{G}_{n+1}) }\frac{f(\xi)}{\xi-z}d\xi+\frac{1}{2i\pi}\int_{\partial G_{n}}\frac{f(\xi)}{ \xi-z}d\xi\] \[= f(z)+\frac{1}{2i\pi}\int_{\partial G_{n}}\frac{f(\xi)}{\xi-z}d\xi.\] Starting with this observation, one can get the following fact which is contained in Lemma 3.1 in [18] along with its proof. **Fact 2.3**.: _Let \(\widetilde{L},\widetilde{L}\) be the boundary of \(G_{D+1,\frac{5}{6}},G_{D-1,\frac{7}{6}}\) respectively. Then there exist constants \(C,D>3\) such that the following hold._ 1. _If_ \(z\notin G_{D}\) _then_ \[|E(z)|\leq C\] _and_ \[|E(z) _if_ \(z\in G_{D}\) _then_ \[|E(z)-f(z)|\leq C\ \ \text{as well as}\ \ |E^{\prime}(z)-f^{\prime}(z)|\leq C.\] 2. \[E(z)=\frac{1}{2\pi i}\int_{\widetilde{L}}\frac{f(t)}{t-z}dt\ \ \ \text{ for }z\notin G_{D}\] _and_ \[E(z)=f(z)+\frac{1}{2\pi i}\int_{\widetilde{L}}\frac{f(t)}{t-z}dt\ \ \ \text{ for }z\in G_{D}.\] 3. _If_ \(z\in G_{D,\frac{7}{6}}\backslash\text{\rm Int}(G_{D,\frac{5}{6}})\) _then_ \[|f(z)|\leq\exp\Big{(}-\frac{1}{2}e^{\frac{1}{2}(\log\Re z)^{1+p}}\Big{)}.\] Item (1) from this Fact 2.3 shows that \(f^{-1}(\mathbb{D}_{r}^{*})\subset G_{D}\) for every \(r>2C\). Elementary estimates, based on the explicit representation of \(f\), show that \(f^{-1}(\mathbb{D}_{r}^{*})\) is a simply connected unbounded domain in \(\mathbb{C}\). 
It turns out that the same is true for the approximating entire function \(E\); details can be found in Proposition 2.2 of [14]. Thus, we have the following. **Fact 2.4**.: _Let \(C\) be given by Fact 2.3. Then, there exists \(r_{0}>4C\) such that_ \[S(E)\subset\mathbb{D}_{r_{0}/2}\] _and for every \(r\geq r_{0}/2\), both sets \(E^{-1}(\mathbb{D}_{r}^{*})\) and \(f^{-1}(\mathbb{D}_{r}^{*})\) are simply connected unbounded domains in \(\mathbb{C}\) contained in \(G_{D}\). They will be respectively denoted by_ \[\Omega_{E,r}:=E^{-1}(\mathbb{D}_{r}^{*})\ \ \text{ and }\ \ \Omega_{f,r}:=f^{-1}(\mathbb{D}_{r}^{*})\,.\] From now on fix any \[r\geq r_{0}/2, \tag{2.3}\] where \(r_{0}\) comes from Fact 2.4. Then the map \(f:\Omega_{f,r}\to\mathbb{D}_{r}^{*}\) is of the form \(f(z)=e^{\tau(z)}\) with \(\tau:\Omega_{f,r}\to\mathcal{H}_{\log r}\) given by \[\tau(z):=\exp((\log z)^{1+p}).\] We have to know what the inverse conformal homeomorphism \(\varphi=\tau^{-1}:\mathcal{H}_{\log r}\to\Omega_{f,r}\) looks like. Indeed, a straightforward calculation gives \[\varphi(\xi)=\exp\Big{(}(\log\xi)^{\frac{1}{1+p}}\Big{)}\,, \tag{2.4}\] where \(\log\) is the principal branch of logarithm again, i.e. determined by the requirement that \(\log 1=0\). In conclusion, \[f_{|\Omega_{f,r}}=e^{\tau}:\Omega_{f,r}\to\mathbb{D}_{r}^{*} \tag{2.5}\] is a model as defined in Definition 2.1; fact 2.3 explains how the entire function \(E\) approximates this model. **Lemma 2.5**.: _There exists a constant \(K\geq 1\) such that_ \[\frac{1}{K}\leq\frac{|\varphi^{\prime}(\xi+iy)|}{|\varphi^{\prime}(\xi)|}\leq K\] _for every \(\xi\) with \(\Re(\xi)\geq\log r_{0}\) and every \(0\leq y\leq 2\pi\)._ Proof.: The statement follows from Koebe's Distortion Theorem since the conformal map \(\varphi=\mathcal{H}_{\log r_{0}}\to\Omega_{f,r_{0}}\) is in fact defined on the half space \(\mathcal{H}_{\log(r_{0}/2)}\). ### Disjoint Type Versions of \(E_{l}\) and \(f_{l}\) Given any \(l\in\mathbb{C}\), the functions \(\mathbf{f}_{l}=f\circ T_{l}\) and \(\mathbf{E}_{l}=E\circ T_{l}\), where \(T_{l}\) is the translation \(z\mapsto z-l\), have been defined in the introduction. We have that \(\mathbf{E}_{l}\in\mathcal{B}\) since it is known, see [20], that \(E\in\mathcal{B}\). Obviously, \[\Omega_{\mathbf{f}_{l},r}:=\mathbf{f}_{l}^{-1}(\mathbb{D}_{r}^{*})=f^{-1}( \mathbb{D}_{r}^{*})+l=\Omega_{f,r}+l, \tag{2.6}\] and also \[\Omega_{\mathbf{E}_{l},r}=\mathbf{E}_{l}^{-1}(\mathbb{D}_{r}^{*})=E^{-1}( \mathbb{D}_{r}^{*})+l. \tag{2.7}\] By Fact 2.4, for all \(r\geq r_{0}/2\) and \(l\in[0,+\infty)\), all these tracts are contained in respective sets \(G_{D}+l\). So, setting \[l_{r}:=\max\{0,r-D\}\,, \tag{2.8}\] we have that \[\Omega_{\mathbf{f}_{l},r}\;,\;\Omega_{\mathbf{E}_{l},r}\subset\mathbb{D}_{r}^ {*} \tag{2.9}\] for all \(r\geq r_{0}/2\) and all \(l\geq l_{r}\). Consequently, all the functions \(\mathbf{f}_{l},\mathbf{E}_{l}\), \(l\geq l_{r}\), are of disjoint type and for their Julia sets we have that \[\mathcal{J}_{\mathbf{f}_{l}},\mathcal{J}_{\mathbf{E}_{l}}\subset\mathbb{D}_{r} ^{*} \tag{2.10}\] for all \(r\geq r_{0}/2\) and all \(l\geq l_{r}\). Recall that for the model \(f\) we have the expression (2.5). 
The analogous expression for \(\mathbf{f}_{l}\) is \[\mathbf{f}_{l|\Omega_{\mathbf{f}_{l},r}}=e^{\tau_{l}}:\Omega_{\mathbf{f}_{l}, r}\to\mathbb{D}_{r}^{*} \tag{2.11}\] where \(\tau_{l}(z)=\tau(z-l)\) so that the inverse of \(\tau_{l}\) is \[\varphi_{l}=\varphi+l:\mathcal{H}_{\log r}\to\Omega_{\mathbf{f}_{l},r} \tag{2.12}\] where \(\varphi\) is still the conformal map defined by (2.4) ## 3. Thermodynamical formalism Our ultimate goal is to determine the hyperbolic dimension of the functions \(\mathbf{E}_{l}\) which, under certain conditions, can be done by employing the methods of thermodynamic formalism. The hyperbolic dimension is then given by the zero of the topological pressure, the fact that goes back to Bowen [7]. In the present context, namely for disjoint type models and entire functions of bounded type, such a theory has been developed in [11]. Let \(\mathcal{C}_{b}(\mathbb{D}_{r}^{*})\) be the vector space of all complex-valued bounded continuous functions defined on \(\mathbb{D}_{r}^{*}\). Endowed with the supremum norm, it becomes a Banach space. Let \(g:=\mathbf{f}_{l}\) or \(g:=\mathbf{E}_{l}\). Given \(t>0\), the transfer operator for the map \(g\) and for the parameter \(t\), acting on a function \(h\in\mathcal{C}_{b}(\mathbb{D}_{r}^{*})\), is defined by the formula \[\mathcal{L}_{g,t}h(w):=\sum_{g(z)=w}|g^{\prime}(z)|_{1}^{-t}h(z)\ \ \text{for every}\ w\in\mathbb{D}_{r}^{*}, \tag{3.1}\] where \[|g^{\prime}(z)|_{1}:=\frac{|g^{\prime}(z)|}{|g(z)|}|z|\] is the logarithmic derivative of \(g\) evaluated at the point \(z\). We are to find out for which parameters \(t>0\) the following two crucial properties hold: \[\|\mathcal{L}_{g,t}1\!\!1\|_{\infty}<+\infty\quad\text{and}\quad\lim_{w\to \infty}\mathcal{L}_{g,t}1\!\!1(w)=0. \tag{3.2}\] Indeed, since our map \(g\) is of disjoint type, once (3.2) is verified then, following [11, Section 8], we deduce that the whole thermodynamic formalism, along with all its applications obtained in [11], holds. Especially Bowen's Formula does. This formula involves topological pressure which for the disjoint type map \(g\) is given at a parameter \(t\in(0,+\infty)\) by the formula \[\mathrm{P}(g,t)=\lim_{n\to\infty}\frac{1}{n}\log\mathcal{L}_{g,t}^{n}1\!\!1(w), \tag{3.3}\] where \(w\in\mathbb{D}_{r}^{*}\) is any arbitrarily chosen point. The limit exists and is independent of \(w\) because of Theorem 8.1 in [11] which ultimately goes back to Lemma 5.8 and Corollary 5.18 in [10]. ### Estimates for the Transfer operators of the Model Functions \(\mathbf{f}_{l}\) **Proposition 3.1**.: _Let \(\mathcal{L}_{\mathbf{f}_{l},t}\) be the transfer operator of \(\mathbf{f}_{l}\), \(l\geqslant 0\), with a parameter \(t>0\). Fix \(r\geqslant r_{0}\). Let \(w_{0}\in\mathbb{D}_{r}^{*}\). Then_ \[\mathcal{L}_{\mathbf{f}_{l},t}1\!\!1(w_{0})<\infty\quad\text{if and only if}\quad t>1\,.\] _Moreover, if \(t>1\) then (3.2) holds for \(g=\mathbf{f}_{l}\)._ Proof.: Having \(w_{0}\in\mathbb{D}_{r}^{*}\) and \(t>0\), let us start exactly as in the proof of Theorem 4.1 in [11]. If \(z_{l}\in\mathbf{f}_{l}^{-1}(w_{0})\) then, using (2.11), the logarithmic derivative can be expressed as follows: \[|\mathbf{f}_{l}^{\prime}(z_{l})|_{1}=|\tau_{l}^{\prime}(z_{l})z_{l}|=\frac{| \varphi_{l}(\xi)|}{|\varphi_{l}^{\prime}(\xi)|}=|(\log\varphi_{l})^{\prime}( \xi)|^{-1}\] where \(\xi=\tau_{l}(z_{l})\) and where \(\varphi_{l}=\varphi+l\) is the map of (2.12). Notice that \(\xi=u+iv\) does not depend on \(l\), where \(u=\log|w_{0}|\). 
From this, together with Lemma 2.5, we get that \[\mathcal{L}_{\mathbf{f}_{l},t}\mathds{1}(w_{0})=\sum_{\exp(\xi)=w_{0}}|(\log \varphi_{l})^{\prime}(\xi)|^{t}=\int_{\mathbb{R}}|(\log\varphi_{l})^{\prime}( \log|w_{0}|+iv)|^{t}dv\,.\] Now, since \(\varphi_{l}=\varphi+l\) and since we have the explicit expression (2.4) for \(\varphi\), we can calculate as follows: \[|(\log\varphi_{l})^{\prime}(\xi)|=\left|\frac{\varphi(\xi)}{\varphi(\xi)+l} \right|\frac{1}{1+p}\frac{1}{|\xi||\log\xi|^{\frac{p}{1+p}}}\asymp\left|\frac{ \varphi(\xi)}{\varphi(\xi)+l}\right|\frac{1}{|\xi|(\log|\xi|)^{\frac{p}{1+p}}}.\] since \(\arg(\xi)\in(-\pi/2,\pi/2)\). Therefore, \[\mathcal{L}_{\mathbf{f}_{l},t}\mathds{1}(w_{0})\asymp\int_{\mathbb{R}}\left| \frac{\varphi(\xi)}{\varphi(\xi)+l}\right|^{t}\frac{1}{|\xi|^{t}(\log|\xi|)^{ \frac{tp}{1+p}}}dv. \tag{3.4}\] Since \(\lim_{|v|\to+\infty}\varphi(\log|w_{0}|+iv)=\infty\), we have that \[\frac{2}{3}\leq\left|\frac{\varphi(\xi)}{\varphi(\xi)+l}\right|\leq 2\] whenever \(|v|=|\Im(\xi)|\) is sufficiently large. Thus we get from (3.4) that \(\mathcal{L}_{\mathbf{f}_{l},t}\mathds{1}(w_{0})\) is finite if and only if \(t>1\). The uniform bound of \(\|\mathcal{L}_{\mathbf{f}_{l},t}\|_{\infty}<\infty\) also follows from (3.4). Indeed, let \(w=e^{\xi}\in\mathbb{D}_{r}^{*}\). Then \(z=\varphi(\xi)\in G_{D}\), whence \(x=\Re(z)>0\). Thus, \[\left|\frac{\varphi(\xi)}{\varphi(\xi)+l}\right|^{2}=\frac{x^{2}+y^{2}}{(x+l) ^{2}+y^{2}}\leq 1. \tag{3.5}\] It follows from this that \[\mathcal{L}_{\mathbf{f}_{l},t}\mathds{1}(w)\leq\int_{\mathbb{R}}\frac{1}{|\xi| ^{t}(\log|\xi|)^{\frac{tp}{1+p}}}dv=\frac{1}{2}\int_{\mathbb{R}}\frac{1}{(u^{2 }+v^{2})^{\frac{t}{2}}(\log(u^{2}+v^{2}))^{\frac{tp}{1+p}}}dv.\] Since for every for \(w\in\mathbb{D}_{r}^{*}\) we have \(u\geqslant u_{r}=\log r\) it follows that \[\sup_{w\in\mathbb{D}_{r}^{*}}\mathcal{L}_{\mathbf{f}_{l},t}\text{\rm 1 \kern-3.8pt{\rm l}}(w)\leq C:=\int_{\mathbb{R}}\frac{1}{(u_{r}^{2}+v^{2})^{\frac {t}{2}}(\log(u_{r}^{2}+v^{2}))^{\frac{tp}{1+p}}}dv<+\infty. \tag{3.6}\] Finally, if \(t>1\) then \(\delta=(t-1)/2>0\), whence \[\mathcal{L}_{\mathbf{f}_{l},t}\text{\rm 1\kern-3.8pt{\rm l}}(w)\leq\frac{1}{u^ {\delta}}\int_{\mathbb{R}}\frac{1}{|u_{r}+iv|^{1+\delta}}dv\leq\frac{1}{(\log| w|)^{\delta}}. \tag{3.7}\] This shows that \(\lim_{w\to\infty}\mathcal{L}_{\mathbf{f}_{l},t}\text{\rm 1\kern-3.8pt{\rm l}}(w)=0\). The next result gives an estimate for the topological pressure. More precisely, it shows that for a given \(t>1\) the pressure \(\operatorname{P}(\mathbf{f}_{l},t)<0\) for all sufficiently large values of \(l\). **Proposition 3.2**.: _Let \(t>1\). Fix \(r\geqslant r_{0}\). Then, for every \(\varepsilon>0\) there exists \(l_{\varepsilon,r,t}\geqslant l_{r}\) such that_ \[\mathcal{L}_{\mathbf{f}_{l},t}\text{\rm 1\kern-3.8pt{\rm l}}(w)\leqslant \varepsilon\quad\text{for every $l\geqslant l_{\varepsilon,r,t}$ and every $w\in\mathbb{D}_{r}^{*}$}\,.\] Proof.: Let \(t>1\) and \(\varepsilon>0\). We are in the same situation as in the proof of Proposition 3.1. The first benefit we take out of this proof is that the convergence \(\lim_{w\to\infty}\mathcal{L}_{\mathbf{f}_{l},t}\text{\rm 1\kern-3.8pt{\rm l}}(w)=0\) is uniform in \(l\geqslant 0\); see (3.7). 
Therefore, there exists \(r_{\varepsilon}\geqslant r\) such that \[\mathcal{L}_{\mathbf{f}_{l},t}\text{\rm 1\kern-3.8pt{\rm l}}(w)<\varepsilon \quad\text{whenever $|w|\geqslant r_{\varepsilon}$ and $l\geqslant 0$}.\] Moreover, this proof shows that the integral \[\int_{\mathbb{R}}\frac{1}{|\xi|^{t}(\log|\xi|)^{\frac{tp}{1+p}}}dv\quad \xi=u+iv\,,\] converges uniformly for \(u\geqslant u_{r}=\log r\). Therefore, there exists \(V=V_{\varepsilon,t}\) such that \[\int_{|v|\geqslant V}\frac{1}{|\xi|^{t}(\log|\xi|)^{tp/1+p}}dv\leqslant\frac{ \varepsilon}{2}\quad\text{for every $u\geqslant u_{r}$}.\] So, by invoking now (3.4) and (3.5), we conclude that it remains to estimate the integral \[\int_{|v|<V}\left|\frac{\varphi(\xi)}{\varphi(\xi)+l}\right|^{t}\frac{1}{|\xi |^{t}(\log|\xi|)^{\frac{tp}{1+p}}}dv\] from above by \(\varepsilon/2\) for all \(l\geqslant 0\) large enough and all \(w\in\mathbb{D}_{r}^{*}\backslash\mathbb{D}_{r_{\varepsilon}}^{*}\). Here we used again the notation \(\xi=u+iv\), \(u=\log|w|\). Notice that all points \(\xi\) that appear in this integral belong to the compact set \[K=\{\xi=u+iv:\log r\leqslant u\leqslant\log r_{\varepsilon}\text{ and }|v| \leqslant V\}.\] Since \(M:=\sup_{\xi\in K}\{|\varphi(\xi)|\}<+\infty\), we have that \[\left|\frac{\varphi(\xi)}{\varphi(\xi)+l}\right|\leq\frac{M}{l-M}\quad,\] for every \(l>M\) and all \(\xi\in K\). Thus, \[\int_{|v|<V}\left|\frac{\varphi(\xi)}{\varphi(\xi)+l}\right|^{t}\frac{1}{|\xi| ^{t}(\log|\xi|)^{\frac{tp}{1+p}}}dv\leq C\frac{M}{l-M}\leq\frac{\varepsilon}{2},\] where \(C\in(0,+\infty)\) is the constant coming from (3.6) and the last inequality was written assuming that \(l\) is large enough. ### Behavior of the Transfer Operators for Entire functions \(\mathbf{E}_{l}\) We now have sufficiently strong estimates for the transfer operators of the models \(f_{l}\). Since ultimately we are after the entire functions \(\mathbf{E}_{l}\), we have to carry over these estimates to the transfer operators of these functions \(\mathbf{E}_{l}\). Since the entire functions approximate the models, i.e. since we have Fact 2.3, we are in a similar situation as in [12] where also the operators of some models and approximating entire functions have been compared. Following the approach of that paper we will prove the following. **Proposition 3.3**.: _There exist constants \(\mathcal{K}\in[1,+\infty)\) and \(r_{1}\geq r_{0}\) such that for every \(t>1\), all \(l\in\mathbb{C}\), and all \(r\geq r_{1}\), we have that_ \[\frac{1}{\mathcal{K}^{t}}\leq\frac{\mathcal{L}_{\mathbf{E}_{l},t}\mathbb{1}(w )}{\mathcal{L}_{\mathbf{f},t}\mathbb{1}(w)}\leq\mathcal{K}^{t}\quad\text{for all}\quad w\in\mathbb{D}_{r}^{*}.\] In our proof of Proposition 3.3 we adapt here the approach of [12], particularly Section 7 of that paper. We will show that [12, Lemma 7.3] holds in the present setting if \(r\geq r_{0}\) is large enough. This will suffice. We first shall prove the following. **Fact 3.4**.: _For all sufficiently large \(r\geq r_{0}\), say \(r\geq r_{1}\geq r_{0}\), we have that_ \[\frac{1}{2}\leq\frac{|\mathbf{E}_{l}(z)|}{|\mathbf{f}_{l}(z)|}\leq 2\quad\text{and} \quad\frac{1}{2}\leq\frac{|\mathbf{E}_{l}^{\prime}(z)|}{|\mathbf{f}_{l}^{\prime }(z)|}\leq 2\] _for all \(l\in\mathbb{C}\) and all \(z\in\Omega_{\mathbf{f}_{l},r}\)._ Proof.: The first inequality is a direct consequence of item (1) in Fact 2.3 combined with the inequality \(r\geq r_{0}>4C\) established in Fact 2.4. In order to proof the second inequality we also start with item (1) in Fact 2.3. 
It gives \[\left|\frac{|E^{\prime}(z)|}{|f^{\prime}(z)|}-1\right|\leq\frac{C}{|f^{\prime }(z)|}\quad\text{for all }z\in\Omega_{f,r}\subset G_{D}.\] This time we have to estimate \(|f^{\prime}(z)|\) and to show that there exists some \(r\geqslant r_{0}\) such that \[\frac{C}{|f^{\prime}(z)|}\leqslant\frac{1}{2}\quad\text{for all }z\in\Omega_{f,r}. \tag{3.8}\] Remember that \(f(z)=e^{\tau(z)}=e^{\varphi^{-1}(z)}\) for every \(z\in\Omega_{f,r}\). Thus, \[f^{\prime}(z)=\frac{f(z)}{\varphi^{\prime}(\xi)}\quad\text{where }\xi=\varphi^{-1}(z) \in\mathcal{H}_{\log r}.\] Obviously \(|f(z)|>r\) but what about \(|\varphi^{\prime}(\xi)|\)? From the formula (2.4) we get \[\varphi^{\prime}(\xi)=\frac{1}{1+p}\exp((\log\xi)^{\frac{1}{1+p}})\frac{1}{( \log\xi)^{\frac{p}{1+p}}\,\xi}.\] If \(v:=\log\xi\) then \[|\varphi^{\prime}(\xi)|\leqslant\left|\frac{\exp(v^{\frac{1}{1+p}})}{v^{ \frac{p}{1+p}}e^{v}}\right|=\frac{\exp\left(\Re\big{(}v^{\frac{1}{1+p}}-v \big{)}\right)}{|v|^{\frac{p}{1+p}}}. \tag{3.9}\] Since \(\xi\in\mathcal{H}_{\log r}\), \(\Re v>\log\log r\), and \(|\Im v|<\pi/2\), so if we write \(v=se^{i\alpha}\), then \[s>\log\log r\quad\text{and}\quad|\alpha|<\frac{\pi/2}{\log\log r}.\] Thus, \[\Re\big{(}v^{\frac{1}{1+p}}-v\big{)}=-s\left(\cos\alpha-s^{-\frac{p}{1+p}} \cos\Big{(}\frac{\alpha}{1+p}\Big{)}\right)\leqslant-\frac{s}{2}\leqslant- \frac{\log\log r}{2}\] provided \(r\) is sufficiently large. In this case we get from (3.9) that \[|\varphi^{\prime}(\xi)|\leqslant\frac{1}{\sqrt{\log r}(\log\log r)^{\frac{p}{ 1+p}}}.\] This shows that (3.8) holds for all \(r\geqslant r_{0}\) sufficiently large. Thus, 3.4 holds for \(f\) and \(E\), i.e. if \(l=0\). It then holds for all \(l\in\mathbb{C}\) because of (2.6). Having established Fact 3.4, the proof of Proposition 7.4 in [12] applies word by word and shows that the required inequality in Proposition 3.3 holds. ## 4. Proof of Theorem 1.1 As it was explained in the Introduction, it suffices to show that \[\lim_{l\to\infty}\operatorname{HypDim}(\mathbf{E}_{l})=1. \tag{4.1}\] In order to do this fix \(t>1\). Fix also any \(r\geq r_{1}\), for example \(r=r_{1}\). By virtue of Proposition 3.2, we have that \[\mathcal{L}_{f_{l},t}1\!\!1(w)\leq\mathcal{K}^{-t}\] for all \(l\geq l_{\mathcal{K}^{-t},r,t}\) and all \(w\in\mathbb{D}_{r}^{*}\). So, by Proposition 3.3, \[\mathcal{L}_{E_{l},t}1\!\!1(w)\leq 1\] for all \(l\geq l_{\mathcal{K}^{-t},r,t}\) and all \(w\in\mathbb{D}_{r}^{*}\). In conjunction with (3.3), this gives that \[\operatorname{P}(E_{l},t)\leq 0\] for all \(l\geq l_{\mathcal{K}^{-t},r,t}\). So, if \(X\subset\mathcal{J}_{E_{l}}\) is an arbitrary hyperbolic set for \(E_{l}\), then \[\operatorname{P}(E_{l}|_{X},t)\leq 0.\] The supremum over all hyperbolic sets of the left hand side of this inequality is the hyperbolic pressure \(P_{hyp}(E_{l},t)\) of \(E_{l}\) evaluated at \(t\). So, we have that \[\operatorname{P}_{hyp}(E_{l},t)=\sup\{\operatorname{P}(E_{l}|_{X},t):X\text{ is a hyperbolic set for }E_{l}\}\leq 0. \tag{4.2}\] Now, we want to use the Bowen's Formula of [3]. Theorem B of this paper applies to the functions \(E_{l}\) and states that the hyperbolic dimension of the set \(E_{l}\) is equal to \[\operatorname{HypDim}(\mathbf{E}_{l})=\inf\{s>0:\operatorname{P}_{hyp}(E_{l},s)\leq 0\}.\] Combined with (4.2), we thus get that \[\operatorname{HypDim}(\mathbf{E}_{l})\leq t\] for all \(l\geq l_{\mathcal{K}^{-t},r,t}\). So, the formula (4.1) is established and the proof of Theorem 1.1 is complete.
2306.03539
Bernoulli factories and duality in Wright-Fisher and Allen-Cahn models of population genetics
Mathematical models of genetic evolution often come in pairs, connected by a so-called duality relation. The most seminal example are the Wright-Fisher diffusion and the Kingman coalescent, where the former describes the stochastic evolution of neutral allele frequencies in a large population forwards in time, and the latter describes the genetic ancestry of randomly sampled individuals from the population backwards in time. As well as providing a richer description than either model in isolation, duality often yields equations satisfied by quantities of interest. We employ the so-called Bernoulli factory - a celebrated tool in simulation-based computing - to derive duality relations for broad classes of genetics models. As concrete examples, we present Wright-Fisher diffusions with general drift functions, and Allen-Cahn equations with general, nonlinear forcing terms. The drift and forcing functions can be interpreted as the action of frequency-dependent selection. To our knowledge, this work is the first time a connection has been drawn between Bernoulli factories and duality in models of population genetics.
Jere Koskela, Krzysztof Łatuszyński, Dario Spanò
2023-06-06T09:37:17Z
http://arxiv.org/abs/2306.03539v3
# Bernoulli factories and duality in Wright-Fisher and Allen-Cahn models of population genetics ###### Abstract Mathematical models of genetic evolution often come in pairs, connected by a so-called duality relation. The most seminal example are the Wright-Fisher diffusion and the Kingman coalescent, where the former describes the stochastic evolution of neutral allele frequencies in a large population forward in time, and the latter describes the genetic ancestry of randomly sampled individuals from the population backward in time. As well as providing a richer description than either model in isolation, duality often yields equations satisfied by unknown quantities of interest. We employ the so-called Bernoulli factory--a celebrated tool in simulation-based computing--to derive duality relations for broad classes of genetics models. As concrete examples, we present Wright-Fisher diffusions with general drift functions, and Allen-Cahn equations with general, nonlinear forcing terms. The drift and forcing functions can be interpreted as the action of frequency-dependent selection. To our knowledge, this work is the first time a connection has been drawn between Bernoulli factories and duality in models of population genetics. _Keywords:_ Allen-Cahn equation, Bernoulli factory, duality, frequency-dependent selection, Wright-Fisher diffusion _2020 MSC:_ 35C99, 60J70, 60J90, 92D10 ## 1 Introduction The Bernoulli factory problem is to construct a realisation of a Bernoulli\((f(p))\) random variable (or an \(f(p)\)-coin) using an almost surely finite number of independent \(p\)-coins, where \(f:[0,1]\mapsto[0,1]\) is a known function but \(p\in[0,1]\) is unknown. The special case \(f(p)=1/2\) was formulated and solved by John von Neumann [11]. Later, Keane and O'Brien provided a necessary and sufficient condition for a given function \(f\) to have a Bernoulli factory [10]. In brief, \(f\) has a Bernoulli factory if and only if it is continuous and _polynomially bounded_: \[\min\{f(p),1-f(p)\}\geq\min\{p,1-p\}^{n} \tag{1}\] for all \(p\in[0,1]\) and some \(n\geq\mathbb{N}\), or identically equal to zero or one. However, the proof of Keane and O'Brien is only partly constructive: it relies on a recursively defined sequence whose explicit solution is intractable. Constructions of algorithms have relied of approximating \(f\) by Bernstein polynomials, which are naturally associated with \(p\)-coins [12, 1, 13], or with other series expansions of \(f\) with non-negative coefficients [14]. Many seminal models of population genetics rely on the random propagation of alleles from one generation to the next. A prototypical example is the Wright-Fisher model, in which a population of fixed size \(N\in\mathbb{N}\) evolves in discrete generations. Individuals carry one of two alleles, \(a\) or \(A\), and each individual inherits the allele of a parent which it samples independently and uniformly from the previous generation. The mechanism of sampling alleles by sampling parents ensures that the model carries information of the forward-in-time evolution of allele frequencies, as well as the backward-in-time genealogies of samples of individuals. Frequently, these two modelling perspectives satisfy a duality relation which renders both models more tractable than they would be in isolation. We direct interested readers to e.g. [15] for an introduction to Wright-Fisher and genealogical models in population genetics. 
In the absence of mutation, inheriting an allele from a uniformly sampled parent models _neutral_ evolution, where the conditional mean allele frequency in a generation equals that in the previous generation. Non-neutral models in which the mean is not constant can be obtained by sampling offspring alleles as \(f(p)\)-coins when the allele frequency in the previous generation is \(p\), and \(f\) models so-called _frequency-dependent selection_. In order to retain the aforementioned backward-in-time genealogical picture and its associated duality, it is desirable to generate the \(f(p)\)-coins in a given generation by sampling parental alleles from the previous generation. Since the allele frequency in the parental generation is \(p\), parental alleles can be thought of as \(p\)-coins, motivating a connection to Bernoulli factories. Our contribution is to use a Bernoulli factory to extend two standard models of population genetics to more general settings than has been done previously. They are (i) the non-neutral Wright-Fisher diffusion and its ancestral selection graph dual [13, 14, 15, 16], for which the function \(f\) models frequency-dependent selection as described above, and (ii) the Allen-Cahn PDE which models stationary allele frequencies in a spatial continuum, in which \(f\) appears as external forcing [14]. The remainder of the manuscript is organised as follows. In Section 2, we review the Bernoulli factory of [12] which turns out to be convenient for our purposes. In Sections 3 and 4 we introduce the Wright-Fisher diffusion with frequency-dependent selection and the Allen-Cahn equation. In each case, we also demonstrate how Bernoulli factories facilitate the construction of very large classes of these models. Section 5 concludes with a discussion on connections to earlier results, as well as some potential extensions. ## 2 The Keane-O'Brien factory Let \(f:[0,1]\mapsto[0,1]\) be continuous and polynomially bounded as in (1). Let \(\bar{X}_{n}(p)\) be the sample mean of \(n\) independent \(p\)-coins. Define sequences of functions \(f_{k}\) and integers \(\eta(f,k)\) as follows: set \(f_{1}(p):=f(p)\), and \[f_{k+1}(p):=\frac{4}{3}\Bigg{(}f_{k}(p)-\frac{1}{4}\mathbb{P}\Big{(}f_{k}(\bar{X}_{\eta(f,k)}(p))\geq\frac{1}{2}\Big{)}\Bigg{)},\] where each \(\eta(f,k)\) is finite, independent of \(p\), and large enough, so that \[f_{k}(p)-\frac{1}{4}\mathbb{P}\Big{(}f_{k}(\bar{X}_{\eta(f,k)}(p))\geq\frac{1}{2}\Big{)}\in[0,3/4].\] Such \(\eta(f,k)\) exists by [12, page 218]. Let \(L\sim\text{Geo}(1/4)\) be independent of all \(p\)-coins. Then \[\mathds{1}\Big{\{}f_{L}(\bar{X}_{\eta(f,L)}(p))\geq\frac{1}{2}\Big{\}}\sim\text{Ber}(f(p))\] is a Bernoulli factory for \(f\) [12, pages 217-219]. It is based on the series expansion \[f(p) =\sum_{k=1}^{\infty}\Big{(}\frac{3}{4}\Big{)}^{k-1}\frac{1}{4}\mathbb{P}\Big{(}f_{k}(\bar{X}_{\eta(f,k)}(p))\geq\frac{1}{2}\Big{)}\] \[=\sum_{k=1}^{\infty}\Big{(}\frac{3}{4}\Big{)}^{k-1}\frac{1}{4}\sum_{j=0}^{\eta(f,k)}\binom{\eta(f,k)}{j}p^{j}(1-p)^{\eta(f,k)-j}\mathds{1}\Big{\{}f_{k}\Big{(}\frac{j}{\eta(f,k)}\Big{)}\geq\frac{1}{2}\Big{\}}, \tag{2}\] which converges uniformly in \(p\). Pseudocode for this Keane-O'Brien factory is shown in Algorithm 1. ``` 0: Sequences \(\{f_{k}\}_{k\geq 1}\), \(\{\eta(f,k)\}_{k\geq 1}\). 1: Sample \(L\sim\text{Geo}(1/4)\). 2: Sample \(T\sim\text{Bin}(\eta(f,L),p)\) and set \(\bar{X}_{\eta(f,L)}(p):=T/\eta(f,L)\). 3:if\(f_{L}(\bar{X}_{\eta(f,L)}(p))\geq 1/2\)then 4: Return 1. 5:else 6: Return 0. 
7:endif ``` **Algorithm 1** Bernoulli factory for continuous \(f:[0,1]\mapsto[0,1]\) satisfying (1). It is worth highlighting two features of the Keane-O'Brien factory which will turn out to be essential. **Remark 1**.: The Keane-O'Brien factory is defined for all \(p\in[0,1]\) and \(f(p)\in[0,1]\). Factories based on Bernstein polynomial approximation typically have to constrain the range (and sometimes the domain) of \(f\) to (subsets of) \((0,1)\) to avoid degenerate distributions [11, 12, 13]. In population genetics applications, it is essential to allow the whole range \(p\in[0,1]\), and desirable to allow \(f(0)=0\) and \(f(1)=1\) to model fixation. **Remark 2**.: The random variable \(L\), and hence the number of \(p\)-coins \(\eta(f,L)\) needed to determine the allele of a child, is independent of the realisations of the coins. This will facilitate the construction of ancestral graphs containing all possible ancestors before the alleles carried by any of those ancestors are known. ## 3 The frequency-dependent Wright-Fisher diffusion Consider a population of \(N\) individuals \(\{\mathbf{Z}_{k}^{(N)}\}_{k\geq 0}\) evolving in discrete time, where the state of the system at time \(k\) is \(\mathbf{Z}_{k}^{(N)}:=(Z_{1}^{(N)}(k),\ldots,Z_{N}^{(N)}(k))\) with \(Z_{i}^{(N)}(k)\in\{a,A\}\). Let \[\tilde{Y}_{k}^{(N)}:=\frac{1}{N}\sum_{i=1}^{N}\mathds{1}_{\{A\}}(Z_{i}^{(N)}(k))\] be the frequency of the \(A\) allele in generation \(k\). Each individual in generation \(k+1\) samples its allele conditionally independently given \(\tilde{Y}_{k}^{(N)}\), where the probability of an \(A\) allele is \(f^{(N)}(\tilde{Y}_{k}^{(N)})\) with \[f^{(N)}(p):=\Big{(}1-\frac{\sigma}{N}\Big{)}p+\frac{\sigma}{N}f(p)\] for a fixed constant \(\sigma>0\) and continuous function \(f:[0,1]\mapsto[0,1]\) satisfying (1). Concretely, the realisation of each allele is implemented as follows. With probability \(1-\sigma/N\), a given individual samples one parent and inherits its allele. With the complementary probability, the individual samples an independent copy of \(L\), followed by \(\eta(f,L)\) parents chosen uniformly at random from the population with replacement. Conditional on \(L\) and \(\tilde{Y}_{k}^{(N)}\), and treating the sampled parental alleles as \(p\)-coins with \(p=\tilde{Y}_{k}^{(N)}\), the allele of the individual is \(A\) if the event \(\{f_{L}(\bar{X}_{\eta(f,L)}(\tilde{Y}_{k}^{(N)}))\geq 1/2\}\) occurs. By (2), the marginal probability of an \(A\) allele is \(f^{(N)}(\tilde{Y}_{k}^{(N)})\). Our aim is to verify convergence of the allele frequency process \(\{\tilde{Y}_{k}^{(N)}\}_{k\geq 0}\) in the infinite population limit. To that end, we define the continuous-time process \((Y_{t}^{(N)})_{t\geq 0}\) via \[Y_{t}^{(N)}:=\tilde{Y}_{\lfloor Nt\rfloor}^{(N)}.\] **Theorem 1**.: _Let \(f\) be polynomially bounded and Lipschitz continuous on \([0,1]\). Then, in the Skorokhod topology, \((Y_{t}^{(N)})_{t\geq 0}\to(Y_{t})_{t\geq 0}\) weakly as \(N\to\infty\), where \(Y_{t}\) solves_ \[\mathrm{d}Y_{t}=\sigma(f(Y_{t})-Y_{t})\mathrm{d}t+\sqrt{Y_{t}(1-Y_{t})}\mathrm{d}W_{t} \tag{3}\] _subject to appropriate initial conditions, where \((W_{t})_{t\geq 0}\) is a Brownian motion._ Proof.: Existence, uniqueness, and the Feller property of the putative limiting process all follow from [1, Theorem 2.8] because the drift function \(\sigma(f(y)-y)\) is Lipschitz continuous. 
The discrete-time process \(\tilde{Y}_{k}^{(N)}\) has generator \[L^{N}h(y):=\sum_{k=0}^{N}\binom{N}{k}\Big{(}\frac{\sigma}{N}\Big{)}^{k}\Big{(}1-\frac{\sigma}{N}\Big{)}^{N-k}\sum_{x=0}^{N-k}\binom{N-k}{x}y^{x}(1-y)^{N-k-x}\] \[\times\sum_{z=0}^{k}\binom{k}{z}f(y)^{z}(1-f(y))^{k-z}h\Big{(}\frac{x+z}{N}\Big{)}-h(y).\] To show the claimed convergence, take \(h\) to be a \(C^{2}\) function with bounded third derivatives, which is a convergence-determining class. Expanding \(h\) around \(y\) on the right-hand side yields \[L^{N}h(y)=\sum_{k=0}^{N}\binom{N}{k}\Big{(}\frac{\sigma}{N}\Big{)}^{k}\Big{(}1-\frac{\sigma}{N}\Big{)}^{N-k}\sum_{x=0}^{N-k}\binom{N-k}{x}y^{x}(1-y)^{N-k-x}\] \[\times\sum_{z=0}^{k}\binom{k}{z}f(y)^{z}(1-f(y))^{k-z}\Big{[}h^{\prime}(y)\Big{(}\frac{x+z}{N}-y\Big{)}+\frac{1}{2}h^{\prime\prime}(y)\Big{(}\frac{x+z}{N}-y\Big{)}^{2}\Big{]}\] up to terms which are of lower order since \(h^{\prime\prime\prime}\) is bounded. Evaluating the binomial expectations yields \[NL^{N}h(y)=\sigma(f(y)-y)h^{\prime}(y)+\frac{1}{2}y(1-y)h^{\prime\prime}(y)+o(1),\] as \(N\to\infty\). The proof of weak convergence is completed by [13, Theorem 19.28] because the limiting diffusion is Feller. **Remark 3**.: The requirement of a Lipschitz drift could be slightly relaxed by using the more cumbersome conditions for a Feller semigroup given in [16, Theorem 3.3]. They cover drifts satisfying a condition akin to \[|f(y)-f(z)|\leq-C|y-z|\log(|y-z|),\] or small variations thereof, where \(C>0\) is a constant. See equations (14) and (15), as well as Remark 2.3, of [16] for a precise class of non-Lipschitz functions which can be handled. Next we consider a sample of \(n\in\mathbb{N}\) individuals from a given generation (which we say lived at time \(0\)) in the pre-limiting Wright-Fisher model \(\{\mathbf{Z}_{k}^{(N)}\}_{k\geq 0}\). We define the _ancestral process_ \(A_{k}^{(N)}\) as the number of lineages which are ancestral to the sample \(k\) generations in the past. The number of lineages can decrease by one when two lineages find a common ancestor, or increase by \(\eta(f,L)-1\) whenever a lineage samples \(\eta(f,L)\) ancestors. In the pre-limiting particle system, any number of these events can co-occur in one generation, particularly when \(n\geq 3\). But transitions other than isolated binary mergers and single multifurcations turn out to vanish in a suitably rescaled infinite population limit. To that end, we define the continuous time Markov jump process \[A_{t}:=\lim_{N\to\infty}A_{\lfloor Nt\rfloor}^{(N)} \tag{4}\] whose existence we prove next. **Theorem 2**.: _The limit in (4) exists, and \((A_{t})_{t\geq 0}\) has generator_ \[Gh(n)=\binom{n}{2}[h(n-1)-h(n)]+\sigma n\sum_{k=1}^{\infty}\Big{(}\frac{3}{4}\Big{)}^{k-1}\frac{1}{4}[h(n+\eta(f,k)-1)-h(n)].\] Proof.: The pre-limiting, discrete-time process \(\{A_{k}^{(N)}\}_{k\geq 0}\) undergoes a myriad of transitions involving subsets of individuals finding common ancestors, as well as individuals branching into many potential ancestors, potentially in the same generation. However, only transitions with a per-generation probability \(\Theta(1/N)\) will contribute to the time-rescaled limit with a finite rate. Events with probability \(o(1/N)\) will not occur in the limit at all, while events with probability \(\omega(1/N)\) will appear at a dense set of times. However, it will turn out that the latter only result in identity transitions in \(A_{t}\), and hence do not affect the limit. 
The probability of two lineages originating from a common ancestor one generation earlier, with neither being involved in a branching event, is \[\Big{(}1-\frac{\sigma}{N}\Big{)}^{2}\frac{1}{N}=\frac{1}{N}+o(1/N), \tag{5}\] implying that two lineages will merge to a common ancestor at rate \(1\) in the limit. A triple merger, or more than one simultaneous merger, has probability at most \[\Big{(}1-\frac{\sigma}{N}\Big{)}^{3}\frac{1}{N^{2}}=o(1/N),\] and hence will not appear in the limit. A single individual branches into \(\eta(f,k)\) ancestral lineages with probability \[\frac{\sigma}{N}\Big{(}\frac{3}{4}\Big{)}^{k-1}\frac{1}{4}, \tag{6}\] while any event involving more than two lineages branching in one generation has probability at most \[\Big{(}\frac{\sigma}{N}\Big{)}^{2}=o(1/N).\] Hence, only isolated branching events appear in the limit. It is also clear from (5) and (6) that the probability of at least one merger and branching event in one generation is \(o(1/N)\). All other transitions involve no mergers or branching events, and hence do not affect the limiting ancestral process. Noting that there are \(\binom{n}{2}\) pairs of individuals to merge with probability (5) and \(n\) individuals to branch with probabilities (6) yields the claimed generator \(G\). Duality between the Wright-Fisher diffusion (3) and the ancestral process (4) is a relation \[\mathbb{E}_{y}[h(Y_{t},n)|Y_{0}=y]=\mathbb{E}_{n}[h(y,A_{t})|A_{0}=n],\] where \(y\in[0,1]\) and \(n\in\mathbb{N}\) are respective initial conditions, and \(h\) is a _duality function_, the specification of which will require some exposition. We follow [10] and define the random function \(P_{t}(y)\) as the conditional probability that all \(n\) leaves at time zero in the ancestral process carry allele \(A\), given that the \(A\) allele frequency at time \(t\) in the past is \(y\in[0,1]\), and given the realisation of the ancestral process \(A_{0:t}:=(A_{s})_{s\in[0,t]}\) started from the \(n\) lineages. For example, in the absence of branching events we have \(P_{t}(y)=y^{A_{t}}\), while when \(n=1\), a single branching event into \(\eta(f,k)\) ancestors and no mergers in the history \(A_{0:t}\) yields \[P_{t}(y):=\sum_{j=0}^{\eta(f,k)}\binom{\eta(f,k)}{j}y^{j}(1-y)^{\eta(f,k)-j}\mathds{1}\Big{\{}f_{k}\Big{(}\frac{j}{\eta(f,k)}\Big{)}\geq\frac{1}{2}\Big{\}},\] where \(f_{k}\) is as specified in Section 2. Other patterns of merger and branching events will result in a more complicated Bernstein polynomial of degree \(A_{t}\), \[P_{t}(y):=\sum_{j=0}^{A_{t}}V_{t}(j)\binom{A_{t}}{j}y^{j}(1-y)^{A_{t}-j},\] where \(V_{t}(j)\) is the random Bernstein coefficient which equals the probability that the \(n\) leaves all carry allele \(A\), given that \(j\) of \(A_{t}\) roots do. As in [10, Definition 2.12 and Proposition 2.13], the vector \((\mathbf{V}_{t})_{t\geq 0}:=(V_{t}(0),\dots,V_{t}(n))_{t\geq 0}\) is also a Markov jump process with transitions \[\mathbf{v}\mapsto\Bigg{(}\sum_{j=0}^{i\wedge\eta(f,k)}\frac{\binom{i}{j}\binom{n+\eta(f,k)-1-j}{\eta(f,k)-j}}{\binom{n+\eta(f,k)-1}{i}}[\mathds{1}\{f_{k}(j/\eta(f,k))\geq 1/2\}v_{i+1-j}\] \[+\mathds{1}\{f_{k}(j/\eta(f,k))<1/2\}v_{i-j}]\Bigg{)}_{i=0}^{n+\eta(f,k)-1} \tag{7}\] at rate \(n\sigma\mathbb{P}(L=k)\), and \[\mathbf{v}\mapsto\Big{(}\frac{i}{n-1}v_{i+1}+\frac{n-1-i}{n-1}v_{i}\Big{)}_{i=0}^{n-1} \tag{8}\] at rate \(\binom{n}{2}\). 
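Before completing the duality setup, we note that the ancestral process of Theorem 2 is easy to explore numerically. The following Python sketch (our own illustration) simulates the block-counting jump chain generated by \(G\); the argument `eta` stands in for the sequence \(\eta(f,k)\), which is intractable for general \(f\), so any positive-integer-valued placeholder function may be supplied.

```python
import random

def simulate_ancestral_process(n0, sigma, eta, t_max, rng=random):
    """Gillespie-style simulation of (A_t): binary mergers at rate
    n(n-1)/2 and, at total rate sigma*n, a branching event
    n -> n + eta(k) - 1 with k ~ Geo(1/4) on {1, 2, ...}."""
    t, n = 0.0, n0
    while True:
        merge_rate = n * (n - 1) / 2
        branch_rate = sigma * n  # sum_k n*sigma*(3/4)^(k-1)*(1/4)
        t += rng.expovariate(merge_rate + branch_rate)
        if t > t_max:
            return n
        if rng.random() < merge_rate / (merge_rate + branch_rate):
            n -= 1  # two lineages merge into a common ancestor
        else:
            k = 1  # sample k ~ Geo(1/4)
            while rng.random() >= 0.25:
                k += 1
            n += eta(k) - 1  # one lineage branches into eta(k) ancestors
```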
Also, let \[\mathbf{B}_{n}(x):=\left(\binom{n}{i}x^{i}(1-x)^{n-i}\right)_{i=0}^{n}\] be the vector of order \(n\) Bernstein polynomials, and for vectors \(\mathbf{u}\) and \(\mathbf{v}\) of common length \(n+1\), let \[\langle\mathbf{u},\mathbf{v}\rangle:=\sum_{i=0}^{n}u_{i}v_{i}\] be the usual inner product. **Theorem 3**.: _The processes \((Y_{t})_{t\geq 0}\) and \((\mathbf{V}_{t})_{t\geq 0}\) are dual, in that for all \(t\geq 0\),_ \[\mathbb{E}[\langle\mathbf{B}_{n}(Y_{t}),\mathbf{v}\rangle|Y_{0}=y]=\mathbb{E}[\langle\mathbf{B}_{\text{dim}(\mathbf{V}_{t})-1}(y),\mathbf{V}_{t}\rangle|\mathbf{V}_{0}=\mathbf{v}], \tag{9}\] _for each \(y\in[0,1]\), every \(n\in\mathbb{N}\), and each \(\mathbf{v}\in\mathbb{R}^{n+1}\)._ Proof.: The proof is a small adaptation of that of [13, Theorem 2.14] to our more general setting. Duality between the coalescing mechanism of the ancestral process (8) and the diffusion coefficient of the Wright-Fisher diffusion (3) is standard, and we omit it to focus on establishing the same relation for the branching mechanism (7) and the drift term in (3). Following [13, Section 4.3], let \(Y_{k}^{y}\sim\text{Bin}(k,y)\) and \(K_{k,i}^{n}\sim\text{Hyp}(n+k-1,k,i)\) be independent random variables, where \(\text{Hyp}(a,b,c)\) denotes the hypergeometric distribution with \(b\) draws without replacement from a population of size \(a\) containing \(c\) successes. Let \(H:(y,\mathbf{v})\mapsto\langle\mathbf{B}_{\text{dim}(\mathbf{v})-1}(y),\mathbf{v}\rangle\) be the putative duality function. Then, for \(\mathbf{v}\in\mathbb{R}^{n+1}\) we have \[\partial_{y}H(y,\mathbf{v}) =n\mathbb{E}[v_{Y_{n-1}^{y}+1}-v_{Y_{n-1}^{y}}],\] \[H(y,\mathbf{v}) =\mathbb{E}[v_{Y_{n}^{y}}]=\mathbb{E}[(1-y)v_{Y_{n-1}^{y}}+yv_{Y_{n-1}^{y}+1}],\] \[f(y) =\mathbb{E}[\mathds{1}\{f_{L}(\bar{X}_{\eta(f,L)}(y))\geq 1/2\}],\] where \(L\sim\text{Geo}(1/4)\) and the third equality is due to (2). We also have \[(\eta(f,L)\bar{X}_{\eta(f,L)}(y),Y_{n-1}^{y})\stackrel{{ d}}{{=}}\Big{(}K^{n}_{\eta(f,L),Y^{y}_{n+\eta(f,L)-1}},\,Y^{y}_{n+\eta(f,L)-1}-K^{n}_{\eta(f,L),Y^{y}_{n+\eta(f,L)-1}}\Big{)}.\] Thus the drift of the Wright-Fisher diffusion (3) satisfies \[\sigma(f(y)-y)\partial_{y}H(y,\mathbf{v})\] \[=\sigma\mathbb{E}[\mathds{1}\{f_{L}(\bar{X}_{\eta(f,L)}(y))\geq 1/2\}-y]n\mathbb{E}[v_{Y_{n-1}^{y}+1}-v_{Y_{n-1}^{y}}]\] \[=n\sigma\mathbb{E}[\mathds{1}\{f_{L}(\bar{X}_{\eta(f,L)}(y))\geq 1/2\}v_{Y_{n-1}^{y}+1}-\mathds{1}\{f_{L}(\bar{X}_{\eta(f,L)}(y))\geq 1/2\}v_{Y_{n-1}^{y}}-yv_{Y_{n-1}^{y}+1}+yv_{Y_{n-1}^{y}}+v_{Y_{n-1}^{y}}-v_{Y_{n-1}^{y}}]\] \[=n\sigma\big{(}\mathbb{E}[\mathds{1}\{f_{L}(\bar{X}_{\eta(f,L)}(y))\geq 1/2\}v_{Y_{n-1}^{y}+1}+\mathds{1}\{f_{L}(\bar{X}_{\eta(f,L)}(y))<1/2\}v_{Y_{n-1}^{y}}]-H(y,\mathbf{v})\big{)}\] \[=n\sigma\Bigg{(}\sum_{k=1}^{\infty}\Big{(}\frac{3}{4}\Big{)}^{k-1}\frac{1}{4}\sum_{j=0}^{n+\eta(f,k)-1}\mathbb{E}\left[\mathds{1}\Bigg{\{}f_{k}\Bigg{(}\frac{K_{\eta(f,k),j}^{n}}{\eta(f,k)}\Bigg{)}\geq 1/2\Bigg{\}}v_{j-K_{\eta(f,k),j}^{n}+1}\right.\] \[\qquad\qquad+\mathds{1}\Bigg{\{}f_{k}\Bigg{(}\frac{K_{\eta(f,k),j}^{n}}{\eta(f,k)}\Bigg{)}<1/2\Bigg{\}}v_{j-K_{\eta(f,k),j}^{n}}\Bigg{]}\binom{n+\eta(f,k)-1}{j}y^{j}(1-y)^{n+\eta(f,k)-1-j}\] \[\qquad\qquad-H(y,\mathbf{v})\Bigg{)},\] which is precisely (7) applied to the \(\mathbf{v}\)-argument of \(H(y,\mathbf{v})=\langle\mathbf{B}_{\text{dim}(\mathbf{v})-1}(y),\mathbf{v}\rangle\). 
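The forward-in-time side of the duality can be explored numerically as well. Below is a naive Euler–Maruyama discretisation of the SDE (3) (again our own sketch; the square-root diffusion coefficient is not Lipschitz at the boundary, so the state is simply clamped to \([0,1]\), which is adequate only for rough experimentation).

```python
import math
import random

def euler_maruyama_wf(y0, sigma, f, t_max, dt=1e-4, rng=random):
    """Crude discretisation of dY = sigma*(f(Y)-Y) dt + sqrt(Y(1-Y)) dW."""
    y, t = y0, 0.0
    while t < t_max:
        dw = rng.gauss(0.0, math.sqrt(dt))
        y += sigma * (f(y) - y) * dt + math.sqrt(max(y * (1.0 - y), 0.0)) * dw
        y = min(max(y, 0.0), 1.0)  # keep the frequency in [0, 1]
        t += dt
    return y

# Example drift: the "majority of three" function f(p) = p^2 (3 - 2p),
# which is Lipschitz, fixes 0 and 1, and satisfies (1) with n = 2.
# euler_maruyama_wf(0.5, 1.0, lambda p: p * p * (3.0 - 2.0 * p), 1.0)
```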
## 4 The Allen-Cahn model of spatial genetics The Allen-Cahn equation on a Lipschitz domain \(\Omega\subseteq\mathbb{R}^{d}\) is \[\partial_{t}u-\Delta u=\lambda(f(u)-u), \tag{10}\] for a given \(\lambda>0\) and \(f:[0,1]\mapsto[0,1]\), and subject to suitable initial and boundary conditions. In [1], the authors consider a model of the spatially structured frequency \(u(x,t)\) of an allele \(A\) governed by \[\partial_{t}u-\Delta u =\frac{1}{\varepsilon^{2}}u(1-u)(2u-1+\nu\varepsilon)\text{ for }x\in\Omega,t>0, \tag{11}\] \[\partial_{n}u =0\text{ for }x\in\partial\Omega,t>0,\] \[u(x,0) =u_{0}(x)\text{ for }x\in\Omega,\] where \(\varepsilon>0\), \(\nu>0\), \(\partial_{n}\) denotes the normal derivative at the boundary, and \(u_{0}:\Omega\to[0,1]\) (see also [10]). They construct a solution to (11) by using a particle system in which a single particle started at a location \(x\in\Omega\) undergoes a Brownian motion with speed \(2\), branches into three independent copies at rate \((1+\varepsilon\nu)/\varepsilon^{2}\), and particles are reflected from the boundary. At a given end time \(t>0\), a leaf particle at location \(z\in\Omega\) samples one of two alleles, \(a\) and \(A\), with respective probabilities \((1-u_{0}(z),u_{0}(z))\). All particles sample their alleles independently. Then, particles propagate their alleles rootwards along the realisation of the Brownian tree. The allele of an internal branch is decided by a majority vote among its three children unless exactly one child carries allele \(A\), in which case the parent branch carries \(A\) with probability \(2\nu\varepsilon/(3+3\nu\varepsilon)\), and \(a\) otherwise. The probability that the root particle \(x\in\Omega\) carries allele \(A\) under these dynamics solves (11) [1, Proposition 2.4]. The ingredients of the particle system can be read off from (11). The Laplacian \(\Delta\) is the generator of Brownian motion run at speed \(2\), the branching rate \((1+\varepsilon\nu)/\varepsilon^{2}\) is an upper bound for the right-hand side, and \[\frac{1}{\varepsilon^{2}}u(1-u)(2u-1+\nu\varepsilon)=\frac{1+\nu\varepsilon}{\varepsilon^{2}}\Bigg{(}u^{3}+3u^{2}(1-u)+\frac{2\nu\varepsilon}{3+3\nu\varepsilon}3u(1-u)^{2}-u\Bigg{)},\] which demonstrates that the nonlinearity in (11) has an interpretation as the voting system described above. It would be straightforward to adapt the proof of [1, Proposition 2.4] to any other polynomial right-hand side of the form \[\lambda\Bigg{(}\sum_{j=0}^{m}p_{j}\binom{m}{j}u^{j}(1-u)^{m-j}-u\Bigg{)},\] for some \(m\in\mathbb{N}\), \(\lambda>0\), and coefficients \(p_{j}\in[0,1]\) by setting \(\lambda\) as the branching rate into \(m\) particles, and suitably adapting the voting scheme. Our contribution is to prove an analogous result for (10), subject to the same initial and boundary conditions as (11), when \(f\) is merely continuous and polynomially bounded. Consider a branching Brownian motion reflected off the boundary \(\partial\Omega\), with branching rate \(\lambda\), started from a single particle at \(x\in\Omega\) at time \(0\). At a branching event, the number of offspring is given by \(\eta(f,L)\), where \(L\sim\mathrm{Geo}(1/4)\). At a terminal time \(t>0\), a leaf particle at \(z\in\Omega\) samples allele \(a\) (resp. \(A\)) with probability \(1-u_{0}(z)\) (resp. \(u_{0}(z)\)), and each leaf carries out this choice independently. 
Alleles are propagated rootwards along the tree: a branch with \(\eta(f,k)\) offspring, \(j\) of whom carry allele \(A\), carries allele \(A\) if \(f_{k}(j/\eta(f,k))\geq 1/2\), and carries allele \(a\) otherwise. Let \(F(x,t)\) be the allele carried by the root particle at position \(x\in\Omega\) when the branching process is run until time \(t>0\). **Theorem 4**.: _Suppose \(f\) is continuous and polynomially bounded as in (1). Viewed as a function of \(x\in\Omega\), the probability \(\mathbb{P}(F(x,t)=A)\) solves (10) subject to the initial and boundary conditions in (11)._ Proof.: The proof is an adaptation of that of [1, Proposition 2.4] to our more general setting. To verify that \(q(x,t):=\mathbb{P}(F(x,t)=A)\) solves (10) on the interior of \(\Omega\), let \(S\) denote the first branching time of the initial particle and \((W_{t})_{t\geq 0}\) be a Brownian motion. For small \(h>0\), \[q(x,t+h) =\mathbb{P}(F(x,t+h)=A|S>h)\mathbb{P}(S>h)+\mathbb{P}(F(x,t+h)=A|S\leq h)\mathbb{P}(S\leq h)\] \[=\mathbb{E}_{x}[q(W_{h},t)|S>h]\mathbb{P}(S>h)\] \[\quad+\mathbb{P}(S\leq h)\sum_{k=1}^{\infty}\Big{(}\frac{3}{4}\Big{)}^{k-1}\frac{1}{4}\sum_{j=0}^{\eta(f,k)}\binom{\eta(f,k)}{j}\mathbb{E}_{x}[q(W_{S},t)^{j}(1-q(W_{S},t))^{\eta(f,k)-j}]\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\mathds{1}\Bigg{\{}f_{k}\Big{(}\frac{j}{\eta(f,k)}\Big{)}\geq\frac{1}{2}\Bigg{\}}\] \[=\mathbb{E}_{x}[q(W_{h},t)|S>h]e^{-\lambda h}+\mathbb{E}_{x}[f(q(W_{S},t))|S\leq h](1-e^{-\lambda h}),\] where the subscript in \(\mathbb{E}_{x}\) denotes the starting point of \((W_{t})_{t\geq 0}\) and the last equality follows via (2). Since \(f\) is continuous, regularity of the heat semigroup and the tower law (viewing \(q(x,t)\) as the expectation of an indicator function) yield \[\mathbb{E}_{x}[f(q(W_{S},t))|S\leq h]=f(\mathbb{E}_{x}[q(W_{S},t)])+O(h)=f(q(x,t))+O(h).\] Hence, \[\partial_{t}\mathbb{P}(F(x,t)=A) =\lim_{h\to 0}\frac{\mathbb{E}_{x}[\mathbb{P}(F(W_{h},t)=A)|S>h]-\mathbb{P}(F(x,t)=A)}{h}e^{-\lambda h}\] \[\quad+\lim_{h\to 0}\frac{f(\mathbb{P}(F(x,t)=A))-\mathbb{P}(F(x,t)=A)}{h}(1-e^{-\lambda h})\] \[=\Delta\mathbb{P}(F(x,t)=A)+\lambda[f(\mathbb{P}(F(x,t)=A))-\mathbb{P}(F(x,t)=A)].\] The boundary condition is inherited from reflecting Brownian motion [1] since branching events occur at a finite rate and hence will not take place on the boundary. ## 5 Discussion In [10], the authors prove analogues of Theorems 1-3 for the case \[f(x)-x:=\sum_{\ell=2}^{m}\beta_{\ell}\sum_{i=0}^{\ell}\binom{\ell}{i}x^{i}(1-x)^{\ell-i}\Big{(}p_{i,\ell}-\frac{i}{\ell}\Big{)}, \tag{12}\] for a fixed constant \(m\in\mathbb{N}\), positive coefficients \(\{\beta_{\ell}\}_{\ell=2}^{m}\), and a sequence of \([0,1]\)-valued coefficients \(\{\{p_{i,\ell}\}_{i=0}^{\ell}\}_{\ell=2}^{m}\). Earlier work by Gonzalez Casanova and Spano also covered the case \[f(x):=\sum_{j=0}^{\infty}\pi_{j}x^{j},\] where \(\{\pi_{j}\}_{j=0}^{\infty}\) is a probability mass function [12]. Our results cover all Lipschitz continuous, polynomially bounded functions \(f:[0,1]\mapsto[0,1]\), which includes both of these classes as special cases. Furthermore, [10, Section 2.10] mentions that their approach should extend to the \(m=\infty\) case. Our Bernoulli factory approach demonstrates that this is true, and also that there is no further difficulty in handling our more general class of drift functions. 
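As a simple illustration of the extra generality (our own example, not drawn from the works cited above), consider the trigonometric drift \[f(p)=\tfrac{1}{2}\big(1-\cos(\pi p)\big).\] It is Lipschitz with constant \(\pi/2\), fixes \(f(0)=0\) and \(f(1)=1\), and satisfies (1) with \(n=2\): one checks that \(f(p)\geq p^{2}\) on \([0,1/2]\), and the symmetry \(1-f(p)=f(1-p)\) gives \(1-f(p)\geq(1-p)^{2}\) on \([1/2,1]\). Yet \(f\) is neither a fixed-degree polynomial as in (12) nor a power series with non-negative coefficients.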
The works of [12] and [10] are more general than our results in two ways: they tackle sequences of drift functions \(f_{N}\to f\) as \(N\to\infty\), and they incorporate jumps in the limiting forward-in-time Wright-Fisher diffusion, along with multiple mergers into the reverse-time ancestral process. Both of these generalisations could be incorporated into our model at the cost of increased technicality. We have chosen to omit them to focus on the class of drift functions, which is our main interest. The link our work establishes between individual-based models and diffusive scaling limits provides rigorous justification for a range of diffusive approximations which have been obtained for non-neutral finite-population models [13, 14]. In addition, we provide an associated genealogical description and a formal duality relation between the two processes. Genealogical processes of Wright-Fisher diffusions with frequency-dependent selection are also considered in [11], though their approach is to condition on the allele frequency trajectory and hence avoid the need for branching events in the ancestral process. Their setting is formulated for generic drift functions of the form \(\beta(x)x(1-x)\), though in practice they focus on Bernstein polynomials of degree no more than two. Approaches based on moment duality, with duality function \(h(x,n)=x^{n}\), have been used to obtain series expansions of Wright-Fisher diffusion transition functions [1], and [10] use Bernstein duality in (9) to study fixation when the drift is of the form (12) using the Bernstein coefficient process \((\mathbf{V}_{t})_{t\geq 0}\). It is unclear whether similar results can be usefully obtained in our setting in practice because the sequences \(\{f_{k}\}_{k\geq 1}\) and \(\{\eta(f,k)\}_{k\geq 1}\) are difficult to compute. Hence, so are the Bernstein coefficients. The construction of the solution to the Allen-Cahn equation (10) from a branching Brownian motion is also an example of duality between these two processes. This result could also be generalised by replacing the Laplacian with a more general second order differential operator, provided that it has suitable regularity and generates a diffusion with tractable reflecting behaviour at boundaries. The key advantage of the Keane-O'Brien factory is that it covers the whole class of continuous, polynomially bounded functions \(f\). Other factories could be used to obtain different ancestral processes, Wright-Fisher diffusions, and constructions of the solution to the Allen-Cahn equation. However, it is essential that the number of coins needed by the factory is independent of the realisations of those coins, or at least that there is an almost surely finite upper bound on the number of coins regardless of realisations. Otherwise, the trick of keeping track of all resulting branches to fill in alleles (or votes) later cannot work, because the number of branches cannot be determined until earlier alleles have been resolved. As far as we are aware, the only other somewhat general Bernoulli factory with this independence property is that of [14], which is identical to the branching mechanism used in [11] and applies to exactly the same functions. Seminal Bernoulli factories, such as that of Nacu and Peres [13] based on Bernstein polynomial approximation of \(f\), inherently link the decision to keep flipping coins to the outcomes of earlier coins. 
Finally, there are multivariate analogues of Bernoulli factories, in which independent, \(m\)-sided dice with probability mass function \((p_{1},\ldots,p_{m})\) are used to construct a \(v\)-sided die with mass function \(f(p_{1},\ldots,p_{m})\). To date, attention has focused on domains which exclude boundaries, and where the coordinates of \(f\) are rational functions [13, 12], or where \(v=1\) [PLS]. We believe a suitable extension of such _dice enterprises_ to cases including boundaries could be used to obtain analogues of the convergence and duality results presented here in the case with more than two non-neutral alleles. ## Funding and data sharing statements JK was supported by EPSRC research grant EP/V049208/1. Data sharing is not applicable to this article as no new data were generated or analysed.
2305.02377
Privacy in Population Protocols with Probabilistic Scheduling
The population protocol model introduced by Angluin et al. in 2006 offers a theoretical framework for designing and analyzing distributed algorithms among limited-resource mobile agents. While the original population protocol model considers the concept of anonymity, the issue of privacy is not investigated thoroughly. However, there is a need for time- and space-efficient privacy-preserving techniques in the population protocol model if these algorithms are to be implemented in settings handling sensitive data, such as sensor networks, IoT devices, and drones. In this work, we introduce several formal definitions of privacy, ranging from assuring only plausible deniability of the population input vector to having a full information-theoretic guarantee that knowledge beyond an agent's input and output bears no influence on the probability of a particular input vector. We then apply these definitions to both existing and novel protocols. We show that the Remainder-computing protocol given by Delporte-Gallet et al. in 2007 (which is proven to satisfy output independent privacy under adversarial scheduling) is not information-theoretically private under probabilistic scheduling. In contrast, we provide a new algorithm and demonstrate that it correctly and information-theoretically privately computes Remainder under probabilistic scheduling.
Talley Amir, James Aspnes
2023-05-03T18:32:06Z
http://arxiv.org/abs/2305.02377v1
# Privacy in Population Protocols with Probabilistic Scheduling ###### Abstract The population protocol model [2] offers a theoretical framework for designing and analyzing distributed algorithms among limited-resource mobile agents. While the original population protocol model considers the concept of anonymity, the issue of privacy is not investigated thoroughly. However, there is a need for time- and space-efficient privacy-preserving techniques in the population protocol model if these algorithms are to be implemented in settings handling sensitive data, such as sensor networks, IoT devices, and drones. In this work, we introduce several formal definitions of privacy, ranging from assuring only plausible deniability of the population input vector to having a full information-theoretic guarantee that knowledge beyond an agent's input and output bears no influence on the probability of a particular input vector. We then apply these definitions to both existing and novel protocols. We show that the Remainder-computing protocol from [9] (which is proven to satisfy output independent privacy under adversarial scheduling) is not information-theoretically private under probabilistic scheduling. In contrast, we provide a new algorithm and demonstrate that it correctly and information-theoretically privately computes Remainder under probabilistic scheduling. Keywords: Mobile ad-hoc networks, Population protocols, Information-theoretic privacy. ## 1 Introduction Various issues arise when applying the theoretical population protocol model to real-world systems, one of the most critical of which is that of preserving privacy. The motivation for furthering the study of privacy within population protocols is to better adapt these algorithms to the real-world systems that they aim to model, such as sensor networks, systems of IoT devices, and swarms of drones, all of which handle sensitive data. Previous research in private population protocols only considers adversarial scheduling, which makes generous assumptions about our obliviousness to the scheduler's interaction choices and offers only very weak criteria for satisfying the definition of "privacy." In this work, we further refine these definitions considering a realistic range of threat models and security concerns under arbitrary schedules. ### 1.1 Related Work Research in private computation within ad hoc networks is distributed (pun intended) over multiple academic fields. We limit our review of the literature to works that most closely relate to the theoretical model we study in this paper. #### 1.1.1 Population Protocols Privacy was introduced to the population protocol model in [9], where the authors define a notion of privacy called _output independent privacy_ and provide protocols satisfying this definition for computing the semilinear predicates. Output independent privacy basically states that for any input vector and execution yielding a particular sequence of observations at an agent, there exists a different input vector and execution yielding the same sequence of observations at that agent. The practicality of this definition relies on adversarial scheduling, which allows the schedule of interactions to delay pairs of agents from interacting for an unbounded number of steps. Due to adversarial scheduling, the _existence_ of an execution is sufficient to achieve plausible deniability: agents have no means of estimating the time elapsed or of approximating the number of interactions in which another agent has participated. 
Therefore, the observed state of an agent cannot be used to infer the agent's input as it may have deviated from its original state over the course of many interactions. However, if instead the scheduler is probabilistic, then there arises the issue of data leakage from inferring the population's interaction patterns. #### 1.1.2 Sensor Networks Population protocols are designed to model sensor networks, but there is a large body of literature on sensor networks that is not connected to the population protocol model. The capacity of agents in the domain of sensor networks is much larger than is assumed in population protocols; in particular, many of the privacy-preserving algorithms in this area involve encryption, which requires state space linear in the population size. In recent years, viral exposure notification via Bluetooth has become a popular area of study [6; 8], and one that demands verifiable privacy guarantees due to widespread laws governing protected health data. However, the solutions in [6; 8] require centralization and high storage overhead. The closely related problem of anonymous source detection is studied in [4; 5]; however, these works require superconstant state space and only address this one task. Other research in wireless sensor networks investigates private data aggregation, which most closely resembles the goal of our research [7; 13; 16]. As before, these works require high computation and local memory as they implement their solutions using homomorphic encryption. Where alternative methods are used to avoid relying on encryption, a specialized network topology is needed for success [15] or only specific functions are computable [16]. While far from comprehensive, this sample of related works suggests that much of the research on privacy in wireless sensor networks is either limited by network topology or relies on computationally intensive encryption. For this reason, our goal is to develop privacy-preserving solutions for data aggregation in population protocols, bearing in mind the resource restrictions of the model. ### 1.2 Contribution In this work, we study the privacy of population protocols in the random scheduling model. We demonstrate how existing privacy definitions fail under certain modelling assumptions, give new precise definitions of privacy in these settings, and offer a novel protocol in the uniform random scheduling population protocol model satisfying the new privacy definitions. In this work, we restrict our focus to computing the Remainder predicate. ## 2 Preliminaries A **population protocol** \(\mathcal{P}\) is a tuple \((Q,\delta,\Sigma,\mathcal{I},O,\mathcal{O})\) consisting of **state set** \(Q\), **transition function** \(\delta\), **input set** \(\Sigma\), **input function** \(\mathcal{I}\), **output set** \(O\), and **output function** \(\mathcal{O}\) [2]. Protocols are run by a population, which consists of a set of \(n\) agents \(\{A_{j}\}_{j=1}^{n}\) each with some input \(i_{j}\in\Sigma\). At the start of the protocol, each agent converts its input to a state in \(Q\) via \(\mathcal{I}:\Sigma\to Q\). In the early population protocol literature, \(\mathcal{I}\) is only ever considered to be a deterministic function; however, in this work, we extend the model to allow for \(\mathcal{I}\) to be randomized. The transition function \(\delta:Q^{2}\to Q^{2}\) designates how the agents update their states upon interacting with each other in pairs. 
As a shorthand for saying \(\delta(q_{1},q_{2})=(q_{1}^{\prime},q_{2}^{\prime})\), we write \(q_{1},q_{2}\to q_{1}^{\prime},q_{2}^{\prime}\) where \(\delta\) is implied. The protocol aims to compute some function (whose output is in the output set \(O\)) on the initial inputs of the agents in the population. An agent's output value is a function of the agent's state, determined by \(\mathcal{O}:Q\to O\). The collection of agents' inputs is denoted as a vector \(I\in\Sigma^{n}\), where each index of \(I\) reflects the input of a particular agent in the population. Adopting terminology from [9], we refer to \(I\) as an **input vector**. When the size of the state space is \(O(1)\), the protocol cannot distinguish between two agents in the same state nor with the same input; therefore, we may want to refer to the multiset of input values in the input vector \(I\), denoted \(\text{multiset}(I)\). After converting these inputs to elements of \(Q\), the global state of the population is called a **configuration** and is represented as a vector \(C\in Q^{n}\), where the \(i\)-th entry of the vector denotes the state of the \(i\)-th agent. Abusing notation, we say that \(\mathcal{I}(I)=\langle\mathcal{I}(i_{j})\rangle_{j=1}^{n}\) is the configuration resulting from applying the input function \(\mathcal{I}\) to each of the agent inputs in \(I=\langle i_{j}\rangle_{j=1}^{n}\). Agents update their states via interactions with one another which are performed at discrete intervals, called **steps**. At each step, an ordered pair of agents \((A_{i},A_{j})\) is selected from the population by the **scheduler**. To distinguish between the two agents in the ordered pair, we call the first agent the **Initiator** and the second the **Responder**. When an interaction takes place, the two selected agents update their states according to the transition function \(\delta\) which may change the counts of states in the population, thereby updating the configuration. Let \(\mathcal{C}\) be the configuration space, or the set of all possible configurations for a population of \(n\) agents with state space \(Q\). We say that a configuration \(D\in\mathcal{C}\) is **reachable** from \(C\in\mathcal{C}\) via \(\delta\) if there exists some series of ordered agent pairs such that starting from \(C\), if the configuration is updated according to \(\delta\) on those ordered pairs, then the resulting configuration is \(D\)[2]. If \(D\) is reachable from \(C\), then we write \(C\to D\). The infinite sequence of configurations resulting from the scheduler's infinite choice of interaction pairs is called an **execution**. An execution of a protocol is said to **converge** at a step \(\tau\) when, for every step \(t>\tau\), the output of each agent's state at \(t\) is the same as it is at \(\tau\) (i.e. the output of every agent converges to some value and never changes thereafter). A stronger notion of termination is for a protocol to **stabilize**, meaning that after reaching some configuration \(C^{*}\), the only configurations reachable from \(C^{*}\) result in the same outputs at every agent as in \(C^{*}\). Abusing notation, we say \(\mathcal{O}(C)=\lambda\) (or, the output of the _configuration_ is \(\lambda\)) if \(\mathcal{O}(q_{j})=\lambda\) for every \(q_{j}\in C\). 
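To make the interaction model concrete, the following Python sketch (our own illustration, not a protocol from the literature) simulates an execution under a uniform random scheduler, which is formalized below; the toy transition rule shown preserves the sum of all values modulo \(k\) at every step.

```python
import random

def run_protocol(delta, config, steps, rng=random):
    """Simulate a population protocol: at each step the scheduler draws
    an ordered pair (initiator, responder) of distinct agents uniformly
    at random and applies the transition function delta."""
    config = list(config)
    for _ in range(steps):
        i, j = rng.sample(range(len(config)), 2)
        config[i], config[j] = delta(config[i], config[j])
    return config

# Toy transition: the initiator absorbs the responder's value modulo k.
k = 5
delta_sum = lambda q1, q2: ((q1 + q2) % k, 0)
final = run_protocol(delta_sum, [1, 2, 3, 4], steps=1000)
assert sum(final) % k == (1 + 2 + 3 + 4) % k  # invariant preserved
```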
The goal of the population is to compute some function \(\Phi\) on the input vector \(I\), which means that the population eventually stabilizes towards a set of configurations \(\mathcal{D}\subseteq\mathcal{C}\) for which \(\mathcal{O}(D)=\Phi(I)\) for all \(D\in\mathcal{D}\). The results of our work are comparable with those of [9], which demonstrate that the semilinear predicates, which can be expressed using Threshold and Remainder, can be computed with output independent privacy under adversarial scheduling. Our work focuses on Remainder, defined for population protocols as follows: Definition 1: Given positive integers \(k\) and \(n\), non-negative integer \(r<k\), and input vector \(I\in\mathbb{Z}_{k}^{n}\), let \(\texttt{Remainder}(I)=\texttt{True}\) iff \(\sum_{j=1}^{n}i_{j}\equiv r\pmod{k}\). The scheduler determines the pair of agents that interact at each step. The scheduler's choice of agent pairs may either be adversarial or probabilistic. An **adversarial scheduler** chooses pairs of agents to interact at each step as it desires, subject to a fairness condition. The condition used most commonly is called **strong global fairness**, and it states that if some configuration \(C\) occurs infinitely often, and \(C\to C^{\prime}\), then \(C^{\prime}\) must occur infinitely often as well [2]. This means that if some configuration _can_ occur, it eventually _must_ occur, even if the adversarial scheduler wishes to delay its occurrence indefinitely. In works adopting adversarial scheduling, it can be claimed that a protocol eventually stabilizes to the correct answer, but not how quickly. A random or **probabilistic scheduler** instead selects pairs of agents to interact with one another according to some fixed probability distribution (usually uniform) over the ordered pairs of agents. Although population protocols consider interactions to occur in sequence, the systems they model typically consist of agents participating in interactions in parallel. As such, a natural estimation of **parallel time** is to divide the total number of interactions by \(n\), as this roughly estimates the expected number of interactions initiated by a particular agent in the population. Note that in population protocols with _non-uniform_ random scheduling, this notion of time is no longer necessarily suitable. Our work crucially relies on distinguishing between an externally visible component of the agent state and a concealable secret state. Adopting notation from [1], we let \(S\) be the **internal state** space and \(M\) the set of **messages** which can be sent between the agents. Since each agent has both an internal and external state component, the total state space is then the Cartesian product of these sets \(Q=S\times M\). This means that \(\delta\) is instead a function computed locally at each agent according to its own state and the message received from its interacting partner, \(\delta:S\times M\times M\times\{\mathsf{Initiator},\mathsf{Responder}\}\to S\times M\). This new mapping enforces the restriction that an agent can only use its received message to update its own state, and it does not observe the update to its interacting partner's state. 
For convenience, we use the original shorthand notation \(\langle s_{0},m_{0}\rangle,\langle s_{1},m_{1}\rangle\rightarrow\langle s_{0}^{\prime},m_{0}^{\prime}\rangle,\langle s_{1}^{\prime},m_{1}^{\prime}\rangle\) to reflect the agents' state changes, where it is understood that the state update of \(A_{b}\) is computed independently of \(s_{1-b}\). ## 3 Adversarial Model In order to evaluate the extent to which private information can be learned by an observer in a population protocol, we must define the nature of the observer and its capabilities. In this work, we consider the agent inputs to be private information. We will consider the observer to take the form of an agent interacting in the protocol, meaning that it can observe the population only as other agents do, i.e., by participating in interactions as they are slated by the scheduler. However, we do not preclude the possibility that the observer may have greater computational capabilities than ordinary honest agents. We assume that the observer is **semi-honest**, meaning that it must adhere to the protocol rules exactly, but may try to infer additional knowledge from the system [12]. As such, the observer can only gather knowledge by interacting with other agents as prescribed by the transition function \(\delta\). Since an observer presents as an agent in the population, we can imagine that multiple adversaries may infiltrate the system. However, we require that each observer be **non-colluding**, meaning that it cannot communicate with other adversarial nodes in the network besides participating in the protocol interactions honestly. Otherwise, we could imagine an observer disguising itself as multiple agents in the population, making up any fraction of the system. Although not studied within this work, it is of interest to find bounds on the fraction of agents that can be simulated by the observer in any network and still successfully hide honest agents' inputs. Notice that the restriction that the observer is both semi-honest and non-colluding is equivalent to assuming that there is only a single adversarial agent in the population, because from the point of view of the observer, all other agents appear to be honest. Finally, we adopt the distinction between externally visible messages and internally hidden states from [1], allowing agents to conceal a portion of their states toward the end goal of achieving privacy. The distinction between messages and the internal state will be crucial to studying privacy in the population model as without it, there is no mechanism for hiding information from an observer. ## 4 Definitions of Input Privacy In this section, we examine definitions of privacy in population protocols under adversarial and probabilistic scheduling given our specified adversarial model. ### 4.1 Output Independent Privacy The privacy-preserving population protocol from [9] operates under the adversarial scheduling model and uses constant state space. Therefore, [9] demonstrates privacy in the context of computing semilinear predicates only. 
The authors offer a formal definition of input privacy under these circumstances called **output independent privacy**, defined as follows: "A population protocol has this property if and only if there is a constant \(n_{0}\) such that for any agent \(p\) and any inputs \(I_{1}\) and \(I_{2}\) of size at least \(n_{0}\) in which \(p\) has the same input, and any execution \(E_{1}\) on input \(I_{1}\), and any \(T\), there exists an execution \(E_{2}\) on input \(I_{2}\), such that the histories of \(p\)'s interactions up to \(T\) are identical in \(E_{1}\) and \(E_{2}\)." Essentially, this definition states that a semi-honest process \(p\) cannot tell whether the input vector is \(I_{1}\) or \(I_{2}\) given its sequence of observations because either input could have yielded the same observations under an adversarial scheduler. Output independent privacy is a successful measure in [9] because the scheduling in that work is assumed to be adversarial; therefore, no inference can be made about the interaction pattern. The authors leverage this to achieve privacy that is best framed as "plausible deniability": an agent may directly observe another agent's input, but the unpredictability of the adversarial scheduler prevents the observer from claiming with certainty that the observed value is indeed the input. This argument breaks down when the scheduler is probabilistic because now an agent can infer a probability distribution on the interaction pattern, and thus also infer a probability distribution on the input value of the agent's interacting partner. In light of this insight, we now introduce novel definitions for the purpose of assessing privacy in population protocols with probabilistic scheduling. ### 4.2 Definitions of Privacy Under Probabilistic Schedules Consider an agent \(A\) with initial state \(q_{0}^{A}=(s_{0}^{A},m_{0}^{A})\). Given its sequence of observed messages and the role (Initiator or Responder) played by \(A\) in each interaction, \(A\) can deterministically compute each of its subsequent state updates. Let's call these messages (observed by \(A\)) \(o_{1}^{A},o_{2}^{A},o_{3}^{A},...\), and denote by \(q_{\varepsilon}^{A}=\delta(\rho_{\varepsilon}^{A},s_{\varepsilon-1}^{A},m_{\varepsilon-1}^{A},o_{\varepsilon}^{A})=(s_{\varepsilon}^{A},m_{\varepsilon}^{A})\) the updated state of \(A\), originally in state \(q_{\varepsilon-1}^{A}=(s_{\varepsilon-1}^{A},m_{\varepsilon-1}^{A})\), upon interacting as \(\rho_{\varepsilon}^{A}\in\{\textsf{Initiator},\textsf{Responder}\}\) with another agent with message \(o_{\varepsilon}^{A}\) in its \(\varepsilon\)-th interaction. Adopting notation from [12], we denote the **view** of an agent \(A\) participating in protocol \(\mathcal{P}\) in an execution \(E\) by \(\textsf{view}_{A}^{\mathcal{P}}(E)=\langle i_{A};q_{0}^{A};(\rho_{1}^{A},o_{1}^{A}),(\rho_{2}^{A},o_{2}^{A}),...\rangle\). This view consists of \(A\)'s input, the initial state of \(A\), and a list of \(A\)'s interactions over the course of the execution, from which every subsequent state of \(A\) can be computed.1 Footnote 1: For randomized \(\delta\), we assume \(A\) has a fixed tape of random bits that it uses to update its state, so \(A\) can still reconstruct its entire view from the specified information. Let \(\textsf{view}_{A}^{\mathcal{P}}(C)\) be a random variable representing the view of agent \(A\) drawn uniformly from all realizable executions starting from configuration \(C\) resulting from the possible randomness used by the scheduler. 
Similarly, let \(\textbf{view}^{\mathcal{P}}_{\boldsymbol{A}}(\boldsymbol{I})\) be a random variable representing the view of agent \(A\) drawn from all possible executions starting from any configuration \(C\) in the range of \(\mathcal{I}(I)\) according to the probability distribution given by the randomness of \(\mathcal{I}\). In general, we use the convention that random variables appear in mathematical boldface. Privacy, like many other security-related key terms, has a wide range of technical interpretations. As such, we now offer several distinct formal definitions of privacy in the population model. #### Plausible Deniability Perhaps the weakest form of privacy we can possibly define is that of _plausible deniability_, meaning that an adversary always doubts its guess of an agent's input value (even if it has unbounded resources). This is not a novel concept [9, 14], but in the context of input vector privacy for probabilistic population protocols, we define this notion as follows: Let \(\mathcal{M}_{\lambda}=\{\text{multiset}(I):\Phi(I)=\lambda\}\) be the set of all distinct multisets of inputs whose corresponding input vector evaluates to \(\lambda\),2 and let \(\mathcal{M}_{\lambda}^{\kappa}=\{\text{multiset}(I):\text{multiset}(I)\in\mathcal{M}_{\lambda}\wedge\kappa\in\text{multiset}(I)\}\) be the set of all distinct multisets of inputs outputting \(\lambda\) which contain at least one input equal to \(\kappa\). Footnote 2: Recall that agents in the same state are indistinguishable by the protocol; therefore, \(\Phi\) must map any input vectors with the same multiset of inputs to the same output. Definition 2: Let \(\mathcal{P}\) be a population protocol on \(n\) agents with input set \(\Sigma\) and let \(\mathcal{D}\) be any probability distribution on input vectors in \(\Sigma^{n}\). Then \(\mathcal{P}\) is **weakly private** if for every distribution \(\mathcal{D}\) on \(\Sigma^{n}\), every non-colluding semi-honest unbounded agent \(\mathcal{A}\) in a population of size \(n\) executing \(\mathcal{P}\), and for any view \(V=\langle i;q;\{(\rho^{\mathcal{A}}_{\varepsilon},o^{\mathcal{A}}_{\varepsilon})\}\rangle\) with output \(\lambda\) (as determined from the view \(V\)) and with \(|\mathcal{M}_{\lambda}^{i}|>1\), there exist input vectors \(I_{1}\) and \(I_{2}\) in \(\mathcal{S}_{\lambda}\), the set of input vectors on which \(\Phi\) evaluates to \(\lambda\), such that 1. both \(\text{multiset}(I_{1})\) and \(\text{multiset}(I_{2})\) are elements of \(\mathcal{M}_{\lambda}^{i}\), 2. \(\text{multiset}(I_{1})\neq\text{multiset}(I_{2})\), and 3. \(\Pr(\textbf{view}^{\mathcal{P}}_{\boldsymbol{\mathcal{A}}}(\boldsymbol{I_{1}})=V)=\Pr(\textbf{view}^{\mathcal{P}}_{\boldsymbol{\mathcal{A}}}(\boldsymbol{I_{2}})=V)\), where the probabilities in the final condition are taken over \(\mathcal{D}\), the randomness of \(\mathcal{I}\), and the uniform randomness of the scheduler. In plain English, Definition 2 says that any agent participating in the protocol cannot simply guess the "most likely" input vector because for each such vector, under certain circumstances, there exists a distinct input vector yielding the same views for that agent with the same probabilities. This definition differs from output independent privacy [9] in that it considers adversarial strategies for guessing the input vector which rely on distributional data collected from interactions with other agents. 
The condition \(|\mathcal{M}_{\lambda}^{i}|>1\) restricts the requirement to multisets of inputs for which plausible deniability is even possible. For example, if the output of the computation for the Or predicate is \(0\), then there is only one possible multiset of inputs that could have yielded this outcome, so there is no denying what the input vector must have been (namely, the all-zero vector). #### Information-Theoretic Input Privacy A stronger notion of privacy is one that claims that an observer cannot narrow down the possibility of input vectors at all based on its observations. This prompts our next definition. Let \(\mathcal{P}\) be a population protocol with input set \(\Sigma\) and let \(\mathcal{D}\) be a probability distribution on input vectors in \(\Sigma^{n}\). Let \(\mathbf{I}\sim\mathcal{D}\) be a random variable representing the selected input vector. Additionally, let \(\mathbf{i_{\mathcal{A}}}\) and \(\mathbf{\lambda_{\mathcal{A}}}\) be random variables representing the input and output at agent \(\mathcal{A}\), and let \(\textbf{view}^{\mathcal{P}}_{\mathbf{\mathcal{A}}}(\mathbf{i,\lambda})\) be a random variable representing the view of agent \(\mathcal{A}\) participating in an honest execution of \(\mathcal{P}\) that is consistent with a fixed input \(i\) at \(\mathcal{A}\) and observed output \(\lambda\). Definition 3: Protocol \(\mathcal{P}\) satisfies **information-theoretic input privacy** if for every non-colluding semi-honest unbounded agent \(\mathcal{A}\) and every input \(i\in\Sigma\), output \(\lambda\in O\), view \(V\), input vector \(I\in\mathcal{S}_{\lambda}\), and distribution \(\mathcal{D}\) on \(\Sigma^{n}\), \[\Pr(\mathbf{I}=I\mid\textbf{view}^{\mathcal{P}}_{\mathbf{\mathcal{A}}}(\mathbf{i,\lambda})=V)=\Pr(\mathbf{I}=I\mid\mathbf{i_{\mathcal{A}}}=i,\mathbf{\lambda_{\mathcal{A}}}=\lambda),\] where \(V\) is consistent with input \(i\) and output \(\lambda\). The above definition essentially states that conditioned on knowing one's own input and the output of the computation, the rest of the agent's view in the protocol's computation gives no advantage in guessing the input vector. In the Appendix we offer an alternative definition of privacy, called **input indistinguishability**, that is independent of our main results. Intuitively, it is straightforward to see that information-theoretic privacy is the strongest of the definitions discussed in this section (see Appendix for proof): Theorem 4.1: _If \(\mathcal{P}\) is information-theoretically private, then \(\mathcal{P}\) also satisfies output independent privacy, weak privacy, and input indistinguishability._ ## 5 Private Remainder with Adversarial Scheduling As a means for comparison, we analyze the Remainder protocol from [9], shown in Algorithm 1. The protocol does not distinguish between internal state space and message space, so the entirety of each agent's state is seen by its interacting partner. The agent states are tuples \((v,f)\), where \(v\) is the value of the agent and \(f\) is a flag bit denoting whether or not the agent has decided its output yet. The protocol accumulates the total sum (modulo \(k\)) of all agents' inputs by transferring values one unit at a time rather than in full in a single interaction. As shown in (M1), the protocol subtracts 1 from one of the inputs and adds it to the other input, maintaining the invariant that the sum of all the values in the population is the same at each step. 
Because all computations are done modulo \(k\), (M1) can be repeated indefinitely. Transitions (M2) and (M3) handle the flag bit, ensuring that (M1) occurs an unbounded but finite number of times. The crux of the proof that Algorithm 1 satisfies output independent privacy focuses on transition (M1). When an adversarial process \(p\) interacts with an honest agent \(A\) in state \((v,f)\), \(p\) cannot know how close \(v\) is to \(A\)'s original input because, for \(n\geq 3\), we can construct multiple executions wherein \(A\) has value \(v\) upon interacting with \(p\). For example, we can construct an execution where some agent \(B\) transfers as many units to \(A\) via (M1) as needed to get \(A\)'s value to be \(v\), and as long as \(p\) and \(B\) do not interact with each other before \(p\) interacts with \(A\), \(p\)'s view is the same in this execution. However, output independent privacy does not successfully carry over to the random scheduling model: the mere existence of an execution "fooling" the process \(p\) no longer suffices, as some such executions occur with only very low probability. For instance, the probability that agents \(A\) and \(B\) interact \(v^{\prime}\) times in a row, during which time \(p\) does not interact with \(B\) at all, becomes small for large values of \(v^{\prime}\). This means that it is less probable that an agent's value will deviate from its original input value early on in the execution. A formal proof of the algorithm's lack of privacy is given in the Appendix. ## 6 Private Remainder with Probabilistic Scheduling In this section, we introduce a novel algorithm for information-theoretically privately computing Remainder in the population protocol model with probabilistic scheduling. Our algorithm is inspired by the classic example of secure multiparty computation of Remainder in a ring network. We refer to this algorithm as RingRemainder, and it works as follows: There are \(n\) agents \(A_{1},...,A_{n}\) arranged in a circle. Agent \(A_{1}\) performs the leader's role, which is to add a uniformly random element \(r\in\mathbb{Z}_{k}\) to its input and pass the sum (modulo \(k\)) to agent \(A_{2}\). For each remaining agent \(A_{i}\), upon receiving a value from \(A_{i-1}\), \(A_{i}\) adds its own input to that value and passes the resulting sum to \(A_{i+1\pmod{n}}\). When \(A_{1}\) receives a value from \(A_{n}\), it subtracts \(r\) and broadcasts the result to everyone. Suppose the agents have inputs \(i_{1},...,i_{n}\). Then \(A_{1}\) sends \(m_{1}=i_{1}+r\) to \(A_{2}\), \(A_{2}\) sends \(m_{2}=i_{1}+r+i_{2}\) to \(A_{3}\), and so on, until \(A_{n}\) sends \(m_{n}=r+\sum_{j=1}^{n}i_{j}\) to \(A_{1}\). Thus, the value broadcast to all agents, \(m_{n}-r\), is exactly equal to \(\sum_{j=1}^{n}i_{j}\), the sum of the agents' inputs modulo \(k\). Assuming honest participants and secure pairwise communication, this protocol achieves information-theoretic input privacy (see Appendix for proof); a short code sketch of this scheme appears below. We now adapt this scheme to compute Remainder in the population model with information-theoretic privacy. #### Algorithm Overview Our protocol simulates the transfer of information exactly as in RingRemainder. We assume that the protocol has an initial leader with a special token that circulates the population. Each time an agent receives the token and some accompanying value, it adds its input to that value and passes the sum, along with the token, to another agent. 
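As promised above, here is a minimal centralised sketch of RingRemainder (our own illustration of the scheme that the population-model construction emulates):

```python
import random

def ring_remainder(inputs, k):
    """Leader (agent 1) masks its input with uniform randomness r; the
    masked partial sum travels once around the ring; the leader then
    removes r and broadcasts the total modulo k."""
    r = random.randrange(k)      # leader's uniform one-time mask
    m = (inputs[0] + r) % k      # leader passes i_1 + r
    for x in inputs[1:]:
        m = (m + x) % k          # each agent adds its own input
    return (m - r) % k           # leader unmasks and broadcasts

assert ring_remainder([3, 1, 4, 1, 5], 7) == (3 + 1 + 4 + 1 + 5) % 7
```

Every intermediate message carries the uniform mask \(r\), so to any single agent other than the leader it is uniformly distributed on \(\mathbb{Z}_{k}\); this masking is the source of the information-theoretic guarantee.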
This means the current owner of the token holds the aggregate sum of the inputs of the agents who previously held the token. When an agent passes the token to another agent, it labels itself as "visited" so as to ensure that its input is included in the sum exactly once. Once the token has visited all of the agents, it is returned to the leader (along with the total sum of all of the agents' inputs). In order to achieve this functionality, there are two crucial obstacles we must overcome: First, we need a mechanism for securely transferring a message between two agents such that no other agent learns the message except the sender and the intended recipient. This task is nontrivial because population protocols do not allow agents to verify a condition before transmitting a message in an interaction; it is assumed that the message exchange and state update occur instantaneously. To do this, we provide a secure peer-to-peer transfer subroutine in Section 6.1. Second, we need a way to determine whether or not every agent in the population has been visited by the token. When this happens, we want the final token owner to pass the token back to the leader so that the leader can remove the randomness it initially added to the aggregate that has been passed among the agents. We must prevent passing the aggregate back to the leader before all inputs have been incorporated into it, as this would cause some agents to be excluded from the computation. In order to achieve this, we use the probing protocol from [3], which we describe in further detail in Section 6.2. Leveraging these two subroutines, we design our main algorithm for computing Remainder with information-theoretic privacy in Section 6.3. ### Secure Peer-to-Peer Transfer In order for our algorithm to guarantee input privacy, the communication of the intermediate sums between any two agents must remain secure. Here we introduce a novel secure peer-to-peer transfer protocol, defined as follows: Definition 4: Let \(M\) be a message space, \(\mathcal{D}\) be some distribution on \(M\), and \(I\) be any fixed input vector in \(\Sigma^{n}\). A **secure peer-to-peer transfer routine** is a protocol \(\mathcal{P}\) that transfers data \(m\stackrel{{\mathcal{D}}}{{\leftarrow}}M\) from one agent \(\mathsf{Sender}\) to another agent \(\mathsf{Receiver}\) such that there exist PPT algorithms \(W_{1},W_{2}\) where \[\Pr\left(W_{1}(\mathbf{view}_{\mathsf{Sender}}^{\mathcal{P}}(\boldsymbol{I}))=m \right)=\Pr\left(W_{2}(\mathbf{view}_{\mathsf{Receiver}}^{\mathcal{P}}( \boldsymbol{I}))=m\right)=1\] and for all \(i:A_{i}\not\in\{\mathsf{Sender},\mathsf{Receiver}\}\) and every PPT algorithm \(W^{\prime}\), \[\Pr\left(W^{\prime}(\mathbf{view}_{\boldsymbol{A_{i}}}^{\mathcal{P}}( \boldsymbol{I}))=m\right)=\Pr(m\stackrel{{\mathcal{D}}}{{ \leftarrow}}M).\] In other words, a secure peer-to-peer transfer routine allows a \(\mathsf{Sender}\) to transfer a message \(m\) to a \(\mathsf{Receiver}\) such that only \(\mathsf{Sender}\) and \(\mathsf{Receiver}\) are privy to \(m\), and all other agents cannot guess \(m\) with any advantage over knowing only the _a priori_ distribution on the message space. Our Algorithm 2 satisfies this definition: Each agent's state \(\langle\mu,(r,L)\rangle\) consists of a hidden secret \(\mu\), and a public randomness value \(r\) and label \(L\).
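The cryptographic core of the transfer is one-time-pad masking over \(\mathbb{Z}_{k}\). A minimal sketch of that primitive follows (the helper names are ours; Algorithm 2's transitions, described next, control when each value is exchanged):

```python
import random

def make_pad(k):
    """Fresh uniform pad r in Z_k; the Sender hands this to the
    Receiver in one interaction (S2)."""
    return random.randrange(k)

def mask(message, r, k):
    """Ciphertext m + r mod k, sent in a later interaction (S3).
    Seen on its own, it is uniform in Z_k and reveals nothing about m."""
    return (message + r) % k

def unmask(ciphertext, r, k):
    """Receiver recovers m using the pad it was given earlier."""
    return (ciphertext - r) % k

k = 11
r = make_pad(k)
assert unmask(mask(9, r, k), r, k) == 9
```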
The goal of the protocol is to pass a secret message from one agent (marked as \(\mathsf{Sender}\) with label \(\mathfrak{S}\), of which there may only be one in the population) to another agent meeting some specified criteria, labeled by \(\mathfrak{u}\), of which there may be any number (including zero). Until the \(\mathsf{Sender}\) meets an agent with label \(\mathfrak{u}\), it refreshes its randomness at each interaction to ensure that the randomness it transmits to the \(\mathsf{Receiver}\) is uniformly random (S1). When the \(\mathsf{Sender}\) finally meets some agent with \(\mathfrak{u}\), it marks that agent as the \(\mathsf{Receiver}\) and transmits its fresh randomness value \(r\); it also updates its own token to \(\mathfrak{S}^{\prime}\) to remember that it has met and labeled a \(\mathsf{Receiver}\) (S2). Then, the \(\mathsf{Sender}\) waits to meet the \(\mathsf{Receiver}\) again, at which point it sends the message masked with the randomness from the previous interaction and marks itself with the label \(\overline{\mathfrak{u}}\) to signify the end of the transmission (S3). By the end of the protocol, exactly one agent is selected as the \(\mathsf{Receiver}\) and stores \(\mu\) internally. The protocol has state space \((\mathbb{Z}_{k}\cup\{\bot\})^{2}\times\{\mathfrak{S},\mathfrak{S}^{\prime}, \mathfrak{R},\mathfrak{u},\overline{\mathfrak{u}}\}\), which for constant \(k\) is of size \(O(1)\). We prove that Algorithm 2 is a secure peer-to-peer transfer routine in the Appendix: Theorem 6.1: _Algorithm 2 is a secure peer-to-peer transfer routine._ ### Probing Protocol In order to adapt RingRemainder to the population protocol model, we need a way to detect when every agent has been included in the aggregation so the final sum can be passed back to the leader. To do this, we use a probe. A **probing protocol**, or **probe**, is a population protocol that detects the existence of an agent in the population satisfying a given predicate [3]. In essence, the probe (initiated by the leader) sends out a 1-signal through a population of agents in state 0. If the 1-signal reaches an agent satisfying the predicate, that agent initiates a 2-signal which spreads back to the leader by epidemic. Higher-numbered epidemics overwrite lower ones, so if some agent in the population satisfies \(\pi\) then the leader eventually sees the 2-signal. The probe, used in conjunction with the phase clock from the same work [3], allows the leader to detect the presence of an agent satisfying \(\pi\) in \(O(n\log n)\) interactions using \(O(1)\) states with probability \(1-n^{-c}\) for any fixed constant \(c>0\) (see Appendix). We define the "output" of the protocol (computed only at the leader) to be 0 for states 0 and 1, and 1 for state 2 (i.e., the leader's probe outputs 1 if and only if some agent in the population satisfies \(\pi\)). At the start of each round of the phase clock, agents reset their value to 0 and the leader initiates a new probe. Both the probe and the phase clock states are components of the message space, and the transitions for these subroutines are independent of the transitions for the main protocol, so we consider the two "protocols" to be taking place in parallel. ### Remainder with Information-Theoretic Privacy We provide here a novel algorithm which computes Remainder and achieves _information-theoretic input privacy_ in the population protocol model with high probability, assuming a uniform random scheduler.
First, each agent applies the input function \(\mathcal{I}\) to their input as follows: \[\mathcal{I}(i_{j},\ell)=\begin{cases}\langle i_{j}+r^{0},(r^{j},\mathfrak{S},1,Z=Z_{0})\rangle&\ell=1\\ \langle i_{j},(r^{j},\mathfrak{u},0,Z=Z_{0})\rangle&\ell=0\end{cases}\] where \(r^{j}\) is drawn uniformly at random from \(\mathbb{Z}_{k}\) for \(j\in\{0,1,...,n\}\), and \(Z\) (initialized to \(Z_{0}\)) is a probe subroutine (including its associated phase clock). The input function assumes an initial leader, specified by \(\ell=1\). The components of the state \(\langle\mu,(r,L,\ell,Z)\rangle\) are \(\mu\) (the hidden internal component of the state called the **secret**), \(r\) (the **mask**), \(L\) (the agent's **label**), \(\ell\) (the **leader bit**), and \(Z\) (the **probe**). The transitions describing the protocol can be found in Algorithm 3. The general structure of the transitions from the secure peer-to-peer transfer protocol in Algorithm 2 is used to send the intermediate sums in (R1), (R2), and (R3). However, instead of just storing the message received, the Receiver computes the sum of the message and its own input and stores the result internally. Each subsequent Sender searches the population for an agent whose input has not yet been incorporated into the sum (signified by the \(\mathfrak{u}\) state). When no one in the population has \(\mathfrak{u}\) anymore, the probe detects this and outputs 1 at the leader from this point onward. When the probe begins to output 1, with high probability every agent's label is set to \(\overline{\mathfrak{u}}\), alerting the leader to set its label to \(\mathfrak{u}\). This makes the leader the only agent able to be the next Receiver. When the leader receives the final value stored at the Sender, the leader can place the answer into a separate portion of the external state (not shown in Algorithm 3) so that all other agents can copy it, which takes \(O(n^{2}\log n)\) additional steps with high probability. The leader must also have an additional component in its _hidden_ state which stores the randomness used in its initial message transfer (also not shown in Algorithm 3). The correctness of Algorithm 3 is stated below and proven in the Appendix: Theorem 6.2: _For any fixed \(c>0\), Algorithm 3 computes_ Remainder _in a population of size \(n\) in \(\Theta(n^{3}\log n)\) steps with probability at least \(1-n^{-c}\)._ Finally, we summarize the privacy guarantee of Algorithm 3 in our final theorem and defer the proof to the Appendix: Theorem 6.3: _When Algorithm 3 correctly computes the_ Remainder _predicate, it satisfies information-theoretic input privacy._ If the protocol fails due to a phase clock error in the probing subroutine, we do not know how much information is leaked, though we suspect it to be limited. We consider this outside the scope of this work and only make claims about privacy when the protocol succeeds. Note that it is impossible to achieve information-theoretic privacy with probability \(1\) in asynchronous distributed systems because there is always the possibility of premature termination due to indefinite exclusion of agents from the protocol. ## 7 Conclusion In this work, we offer various new security definitions in population protocols, such as multiple definitions of privacy which accommodate a range of threat models and scheduling assumptions, and a formal definition of secure peer-to-peer communication.
We also develop algorithms for secure pairwise communication in the model and for information-theoretically private computation of the Remainder predicate. In order to show that information-theoretic privacy (with high probability) can be achieved for all semilinear predicates, as in [9], similar algorithms for computing Threshold and Or are also needed. We leave these problems open for future work.
2301.10284
Determination of diffractive PDFs from global QCD analysis of inclusive diffractive DIS and dijet cross-section measurements at HERA
We present an updated set of {\tt SKMHS} diffractive parton distribution functions (PDFs). In addition to the diffractive deep-inelastic scattering (diffractive DIS) data sets, the recent diffractive dijet cross sections measurements by the H1 experiment from the HERA collider are added to the data sample. The new set of diffractive PDFs, entitled {\tt SKMHS23} and {\tt SKMHS23-dijet}, are presented at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) accuracy in perturbative QCD. Since the gluons directly contribute to jet production through the boson-gluon fusion process, the data on diffractive dijet production in inclusive DIS help to constrain the gluon density, allowing for the determination of both the quark and gluon densities with better accuracy. The NLO and NNLO theory predictions calculated using both {\tt SKMHS23} and {\tt SKMHS23-dijet} are compared to the analyzed data showing excellent agreements. The effect arising from the inclusion of diffractive dijet data and higher order QCD corrections on the extracted diffractive PDFs and data/theory agreements are clearly examined and discussed.
Maral Salajegheh, Hamzeh Khanpour, Ulf-G. Meißner, Hadi Hashamipour, Maryam Soleymaninia
2023-01-24T20:05:20Z
http://arxiv.org/abs/2301.10284v1
Determination of diffractive PDFs from global QCD analysis of inclusive diffractive DIS and dijet cross-section measurements at HERA ###### Abstract We present an updated set of SKMHS diffractive parton distribution functions (PDFs). In addition to the diffractive deep-inelastic scattering (diffractive DIS) data sets, the recent diffractive dijet cross-section measurements by the H1 experiment from the HERA collider are added to the data sample. The new sets of diffractive PDFs, entitled SKMHS23 and SKMHS23-dijet, are presented at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) accuracy in perturbative QCD. Since the gluons directly contribute to jet production through the boson-gluon fusion process, the data on diffractive dijet production in inclusive DIS help to constrain the gluon density, allowing for the determination of both the quark and gluon densities with better accuracy. The NLO and NNLO theory predictions calculated using both SKMHS23 and SKMHS23-dijet are compared to the analyzed data, showing excellent agreement. The effect arising from the inclusion of diffractive dijet data and higher order QCD corrections on the extracted diffractive PDFs and data/theory agreement is examined and discussed. ###### Contents * I Introduction * II Theoretical Framework * II.1 QCD prediction for diffractive dijet production in (\(ep\)) scattering * II.2 Factorization theorem in diffractive dijet production * III Details of the SKMHS23 QCD analysis * III.1 Experimental data sets * III.2 SKMHS23 diffractive PDFs parametrization * III.3 Minimization and diffractive PDF uncertainty method * IV SKMHS23 fit results * V Discussion and Conclusion ## I Introduction In the deep inelastic scattering (DIS) process, diffractive reactions of the type \(ep\to eXY\), where \(X\) indicates a high-mass hadronic final state, represent about 8-10% of the events at HERA. Such processes provide rich experimental input to test quantum chromodynamics (QCD) in the diffractive regime [1; 2; 3; 4; 5]. According to the QCD factorization theorem [6; 7], the calculation of diffractive cross sections at sufficiently high Q\({}^{2}\) factorizes into two distinct parts: a set of process-independent diffractive parton distribution functions (PDFs) and a process-dependent hard scattering coefficient function. The diffractive PDFs need to be determined from a QCD fit to the measured inclusive diffractive cross sections by applying the standard DGLAP evolution equations [8; 9; 10; 11], while the hard scattering coefficient functions are calculable in perturbative QCD. The QCD factorization is proven to hold both for the inclusive and the dijet diffractive processes [6; 7]. However, in the case of low photon virtuality, some non-perturbative quantities such as higher twist (HT) effects need to be taken into account. From a phenomenological point of view, the diffractive PDFs are determined by assuming an additional factorization that depends on the structure of the colorless exchanged object. This assumption is known as proton vertex factorization [3]. In a diffractive DIS process, Pomeron and Reggeon flux-factors in the proton are introduced, and universal parton densities are assumed for the diffractively exchanged objects. Several measurements of diffraction in DIS support the validity of the proton vertex factorization assumption [3].
Diffractive PDFs are universal quantities for all diffractive DIS reactions, with the hardness of the DIS process being ensured by the virtuality of the exchanged photon, Q\({}^{2}\)[3]. Nearly all recent progress in the extraction of diffractive PDFs stems from the widely used H1 and ZEUS diffractive DIS cross section measurements. Over the past few years, several groups have reported sets of diffractive PDFs with uncertainties, such as H1-2006-DPDF [4], ZEUS-2010-DPDF [5], GKG18 [12], HK19 [13], MMKG19 [14], and the most recent analysis by SKMHS22 [15]. Among these diffractive PDF determinations, HK19 and SKMHS22 are performed up to NNLO accuracy in perturbative QCD, while the others are limited to NLO. GKG18 and SKMHS22 are performed in the framework of xFitter [16] in order to achieve a more reliable estimate of the diffractive PDF uncertainties. In addition, GKG18, HK19 and SKMHS22 also analyzed the most recent H1/ZEUS combined diffractive DIS cross section measurements. Up to now, predictions for diffractive DIS, and in particular diffractive dijet production, have been performed at NLO and NNLO in QCD. ZEUS-2010-DPDF also analyzed the diffractive dijet production data at NLO [5], and most recently predictions for dijet production were provided at NNLO in Ref. [17]. In this paper, we present SKMHS23-dijet, a new determination of diffractive PDFs using the previously analyzed inclusive diffractive DIS measurements by the H1 and ZEUS Collaborations, including for the first time the dijet production cross-section measurements in diffractive \(ep\) scattering data collected in the years 2005-2007 with the H1 detector at HERA. SKMHS23-dijet is extracted from a QCD analysis at NLO and NNLO accuracy in perturbative QCD. In order to analyze the dijet production data, the well-established Alpos framework [18; 19; 20], an object-oriented data-to-theory comparison and fitting tool, supplemented with APFEL [21], NNLOJET and fastNLO [22; 23], is used. The statistical analysis of the theory predictions for both diffractive DIS and dijet production is also performed using this program. The diffractive dijet production data included in SKMHS23-dijet help to constrain the gluon density, allowing for an accurate determination of both the quark and gluon densities. In order to examine the effect of the dijet data on the extracted densities, we also present the SKMHS23 analysis in which the dijet data are excluded from the data sample. Finally, the NLO and NNLO theory predictions are compared to the analyzed data. The effect arising from the inclusion of diffractive dijet data and higher order QCD corrections on the extracted diffractive PDFs and data/theory agreement is also examined and discussed. The rest of the paper is organized as follows. The theoretical framework considered in SKMHS23 is introduced in Sec. II. This section also discusses the QCD prediction for diffractive dijet production in an electron-proton (\(ep\)) scattering process, and the corresponding factorization theorem. The details of the SKMHS23 analysis are presented in Sec. III, which includes the experimental input, the SKMHS23 parameterizations, the heavy quark contributions to the diffractive DIS process, and finally the fitting framework and minimization strategy. The SKMHS23 fit results and main findings of this work are scrutinized and discussed in Sec. IV. Finally, Sec. V summarizes the findings and outlines possible future developments.
## II Theoretical framework In this section, we describe in detail the standard theoretical framework for diffractive DIS processes, in which perturbative QCD is applied to events with a large rapidity gap (LRG) in the rapidity distribution of the outgoing hadrons. We also discuss the calculation of diffractive dijet cross sections in inclusive DIS processes and the relevant factorization theorem, and we provide the details of the factorization of the proton diffractive PDFs. ### QCD prediction for diffractive dijet production in (\(ep\)) scattering Diffraction, \(\gamma^{*}+p\to X+p\), in a single diffractive process such as inclusive diffractive DIS, \(e(k)+p(P)\to e(k^{\prime})+p(P^{\prime})+X\), is observed when the virtual photon \(\gamma^{*}\) dissociates into the hadronic system \(X\) whereas the proton remains intact. The diffractive reaction in DIS is described by the DIS kinematic invariants, which are given by \[Q^{2} =-q^{2}=-(k-k^{\prime})^{2},\] \[x =\frac{-q^{2}}{2P\cdot q},\] \[y =\frac{P\cdot q}{P\cdot k}, \tag{1}\] where \(Q^{2}\) is the virtuality of the photon, \(x\) is the longitudinal fraction of the proton momentum carried by the struck quark (the Bjorken scaling variable), and \(y\) indicates the inelasticity. These quantities are related via \(Q^{2}=xys\), where the electron-proton center-of-mass energy squared is denoted by \(s\). In addition, new quantities for the diffractive kinematics are defined in relation to the scattered proton. One of them is the longitudinal momentum fraction of the exchanged Pomeron: \[x_{\not{P}}=\frac{q\cdot(P-P^{\prime})}{q\cdot P}\,. \tag{2}\] The second variable is the squared four-momentum transfer at the proton vertex: \[t=(P^{\prime}-P)^{2}\,. \tag{3}\] Finally, the last one is the fractional momentum of the diffractive exchange carried by the parton inside the Pomeron: \[\beta=\frac{x}{x_{I\!\!P}}=\frac{Q^{2}}{2q\cdot(P-P^{\prime})}\,. \tag{4}\] The cross section of diffractive dijet (\(jj\)) production, \(e+p\to e+p+jj+X^{\prime}\), is an important observable which can constrain the behavior of the diffractive PDFs. The inclusion of these data in the analysis is one of the main objectives of this study. The Feynman diagram describing the diffractive dijet production in an electron-proton collision at HERA is shown in Fig. 1. For diffractive dijet production, an additional variable needs to be introduced. According to the Feynman diagram presented in Fig. 1, in the hard subprocess, \(v\) is the four-momentum of the gluon emitted from the Pomeron. The longitudinal momentum fraction of the gluon is given by the new invariant \(z_{I\!\!P}\): \[z_{I\!\!P}=\frac{q\cdot v}{q\cdot(P-P^{\prime})}. \tag{5}\] It should be noted here that for the dijet production process the variable \(x\) is not the momentum fraction of the parton entering the hard subprocess. This fraction is denoted by \(\tilde{x}\); it is the momentum fraction of the interacting parton with respect to the proton. Further, \(x_{I\!\!P}\) is the momentum fraction of the Pomeron with respect to the proton. The momentum fraction of the parton with respect to the Pomeron is denoted by \(z_{I\!\!P}\). It can be shown that for dijet production one can write: \[z_{I\!\!P}=\frac{\tilde{x}}{x_{I\!\!P}}. \tag{6}\] At leading order (LO), the center-of-mass energy of the hard subprocess is equal to the invariant mass of the dijet system \(M_{12}\), \[M_{12}^{2}=(q+v)^{2}. 
\tag{7}\] In the next section, the factorization theorem will be presented and discussed. ### Factorization theorem in diffractive dijet production The factorization theorem of QCD can be employed for diffractive processes, so that the cross section of dijet production is given by the convolution of the diffractive PDFs of the proton \(f_{i/p}^{D}\) with the partonic cross sections \(d\hat{\sigma}\)[6; 7], \[d\sigma(e+p\to e+p+jj+X^{\prime})=\] \[\sum_{i}\int dt\int dx_{I\!\!P}\int dz_{I\!\!P}\] \[\times f_{i/p}^{D}(z_{I\!\!P},\mu_{F}^{2},x_{I\!\!P},t)\otimes d \hat{\sigma}_{ei\to jj}(\hat{s},\mu_{R}^{2},\mu_{F}^{2}). \tag{8}\] Here, the hadronic system \(X^{\prime}\) is what remains of the hadronic system \(X\) after removing the two jets. In addition, the integrals are performed over the accepted phase space and the sum runs over all the partons contributing to the cross section. The first argument of the diffractive PDFs \(f_{i/p}^{D}\) is the momentum fraction of the parton with respect to the Pomeron. \(\mu_{F}\) and \(\mu_{R}\) represent the factorization and renormalization scales, respectively. The invariant energy squared in the subprocess is defined as \[\hat{s}\sim x_{I\!\!P}z_{I\!\!P}ys-Q^{2}. \tag{9}\] In the DIS region, in which \(Q^{2}\gg\Lambda^{2}\), the only relevant contribution to the dijet production cross section is the direct process defined in Eq. (8). According to the proton vertex factorization theorem, each contribution to the diffractive PDFs factorizes into the product of two distinct terms. The first term depends on \(x_{I\!\!P}\) and \(t\), while the second term depends only on \(z_{I\!\!P}\) and \(\mu_{F}\). Hence, the diffractive PDFs \(f_{i/p}^{D}(z_{I\!\!P},\mu_{F}^{2};x_{I\!\!P},t)\) are given by \[f_{i/p}^{D}(z_{I\!\!P},\mu_{F}^{2};x_{I\!\!P},t)= f_{I\!\!P/p}(x_{I\!\!P},t)f_{i/I\!\!P}(z_{I\!\!P},\mu_{F}^{2})\] \[+ n_{I\!\!R}f_{I\!\!R/p}(x_{I\!\!P},t)f_{i/I\!\!R}(z_{I\!\!P},\mu_{F}^{2})\,, \tag{10}\] where the Pomeron and Reggeon flux-factors are denoted by \(f_{I\!\!P/p}(x_{I\!\!P},t)\) and \(f_{I\!\!R/p}(x_{I\!\!P},t)\), respectively. The flux-factors describe the emission of the Pomeron and Reggeon from the proton target. Figure 1: The Feynman diagram describing the diffractive dijet production in an electron-proton collision at HERA. The Reggeon contributes significantly at low \(z_{I\!\!P}\) and large \(x_{I\!\!P}\). The global normalization of the Reggeon contribution is \(n_{I\!\!R}\), which is taken as a free parameter in the fit. The Pomeron and Reggeon partonic distribution functions are indicated by \(f_{i/I\!\!P}(z_{I\!\!P},\mu_{F}^{2})\) and \(f_{i/I\!\!R}(z_{I\!\!P},\mu_{F}^{2})\), respectively. The parametrization and determination of these distribution functions will be discussed in detail in section III.2. Many properties of diffractive PDFs are similar to those of the non-diffractive PDFs. Despite the fact that the presence of the leading proton in the final state leads to an additional constraint for the calculation of diffractive PDFs, they still obey the standard DGLAP evolution equations like the ordinary PDFs [8; 9; 11]. In the analysis of diffractive PDFs the cross section for diffractive processes has a \(t\)-dependence, which one usually integrates out. Consequently, the \(t\)-dependence of diffractive PDFs is restricted to \(|t|<1.0\) GeV\({}^{2}\) here. As mentioned before, our aim in this paper is to include the dijet production cross section in the diffractive DIS analysis up to NNLO.
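To make the proton vertex factorization of Eq. (10) concrete, the following schematic Python sketch composes a generic Regge flux factor (of the form introduced below in Eqs. (13)-(14)) with placeholder parton densities; all parameter values here are illustrative assumptions, not the fitted SKMHS23 results:

```python
import math

def regge_flux(x_pom, t, A, B, alpha0, alpha_prime):
    """Flux factor A * exp(B t) / x^(2 alpha(t) - 1) with a linear
    trajectory alpha(t) = alpha(0) + alpha' t (cf. Eqs. (13)-(14))."""
    alpha_t = alpha0 + alpha_prime * t
    return A * math.exp(B * t) / x_pom ** (2.0 * alpha_t - 1.0)

def diffractive_pdf(z, mu2, x_pom, t, f_pomeron, f_reggeon, n_reggeon):
    """Proton-vertex factorization, Eq. (10):
    f^D = f_P/p * f_i/P + n_R * f_R/p * f_i/R.

    f_pomeron, f_reggeon are callables (z, mu2) -> parton density in the
    Pomeron/Reggeon; flux parameters below are placeholders only.
    """
    flux_p = regge_flux(x_pom, t, A=1.0, B=5.5, alpha0=1.10, alpha_prime=0.06)
    flux_r = regge_flux(x_pom, t, A=1.0, B=1.6, alpha0=0.50, alpha_prime=0.30)
    return flux_p * f_pomeron(z, mu2) + n_reggeon * flux_r * f_reggeon(z, mu2)

toy = lambda z, mu2: z * (1.0 - z)  # placeholder parton density
print(diffractive_pdf(0.4, 10.0, 0.01, -0.2, toy, toy, n_reggeon=7e-4))
```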
The partonic cross sections of diffractive dijet production at NNLO accuracy are the same as those for dijet production in DIS. Recently, the latter calculations have been used to describe the inclusive dijet cross section in DIS [24; 25]. According to Eq. (8), to calculate the diffractive dijet cross section one needs to convolute the partonic cross section \(d\hat{\sigma}_{ei\to jj}\) with the diffractive PDFs \(f_{i/p}^{D}(z_{I\!\!P},\mu_{F}^{2},x_{I\!\!P},t)\). In our work, the hard (partonic) cross section is calculated using the NLOJet++ package [26]; however, to account for the additional dependence of the cross section on \(x_{I\!\!P}\) and \(t\), some adjustments are required, as specified in Ref. [17]. This calculation can be very time-consuming using conventional methods such as Monte Carlo integration, especially if high precision is requested. Moreover, in a QCD analysis this convolution must be evaluated repeatedly for different values of the diffractive PDF parameters. To overcome this difficulty, an interface to the fastNLO package is implemented in the Alpos framework [18; 19; 20]. By using the methodology of fastNLO, the calculation of matrix elements is done only once and the convolution integral turns into a summation over a grid of integration variables [22]. Additional information and details of the fastNLO formalism can be found in e.g. [27]. ## III Details of the SKMHS23 QCD analysis In this section, we present the details of the SKMHS23 QCD analysis, including the experimental data sets analyzed in this work, the diffractive PDF parametrization, the minimization procedure, and the diffractive PDF uncertainty method. ### Experimental data sets This section deals with the experimental data sets used in the SKMHS23 global QCD analysis, focusing on the diffractive dijet cross-section measurements at HERA. The details of the inclusive diffractive DIS experimental input are discussed in our previous studies [15; 12], and we present only a short review here. The determination of diffractive PDFs relies mainly on the inclusive diffractive DIS cross-section measurements by the H1 and ZEUS collaborations. The inclusive diffractive DIS and dijet data sets, which are listed in Tables 1 and 2, include the following: * The H1 measurements of the inclusive diffractive DIS cross section, H1-LRG-11, at \(\sqrt{s}=225,252,319\) GeV, which cover the phase space of \(4.0\) GeV\({}^{2}<\) Q\({}^{2}<44.0\) GeV\({}^{2}\) and \(5.0\times 10^{-4}<x_{I\!\!P}<3.0\times 10^{-3}\)[28]. * The inclusive measurement of diffractive DIS by the H1 Collaboration, called H1-LRG-12 [29]. This measurement covers the phase space \(3.0\) GeV\({}^{2}<\) Q\({}^{2}<1600\) GeV\({}^{2}\) of the photon virtuality, and the squared four-momentum transfer of \(|t|<1.0\) GeV\({}^{2}\). * The most recent published data on the diffractive DIS cross-section come from the H1 and ZEUS combined measurement, which is useful to determine precise diffractive PDFs with reliable uncertainties. The kinematic range of these measurements is \(2.5\) GeV\({}^{2}<\) Q\({}^{2}<200\) GeV\({}^{2}\) for the photon virtuality, \(3.5\times 10^{-4}<x_{I\!\!P}<9.0\times 10^{-2}\) for the proton fractional momentum loss, \(1.8\times 10^{-3}<\beta<0.816\) in the scaled fractional momentum variable, and finally \(0.09\) GeV\({}^{2}<|t|<0.55\) GeV\({}^{2}\) in the squared four-momentum transfer at the proton vertex [30]. As discussed in detail in Refs.
[12; 13], the H1 and ZEUS combined data are subject to two different corrections, which are the proton dissociation background and the global normalization factor for the extrapolation from \(0.09\) GeV\({}^{2}<|t|<0.55\) GeV\({}^{2}\) to \(|t|<1.0\) GeV\({}^{2}\). * The single-differential dijet cross-section measurements in diffractive DIS published by the H1 Collaboration at HERA, which correspond to an integrated luminosity of \(290\) pb\({}^{-1}\)[31]. The phase space of these measurements is spanned by the photon virtuality of \(4.0\) GeV\({}^{2}<\) Q\({}^{2}<100\) GeV\({}^{2}\), and by the fractional proton longitudinal momentum loss \(x_{I\!\!P}<3.0\times 10^{-2}\). As will be discussed in detail below, the diffractive DIS dijet data are used in the SKMHS23 QCD analysis for the first time. The effect arising from the inclusion of these data on the extracted diffractive PDFs and data/theory agreement will also be discussed. Finally, in order to avoid contributions from higher twist (HT) and some other nonperturbative effects, one needs to apply kinematical cuts to all diffractive DIS data sets mentioned above. To this end, we follow the formalism presented in Refs. [5; 12; 15] and consider some cuts on the data samples. We require \(M_{X}\geqslant 2\) GeV, and the data with \(\beta\geqslant 0.81\) are excluded. A \(\chi^{2}\) scan was performed in Ref. [12] to find an optimum value for the \(Q^{2}\) cut. In this work, only the region with \(Q^{2}\geqslant Q_{\rm min}^{2}=9\ {\rm GeV}^{2}\) is included in the fit, which yields the best data/theory description. ### SKMHS23 diffractive PDFs parametrization Like the standard PDFs, the diffractive PDFs are non-perturbative quantities and should be determined by a global QCD analysis. As mentioned before, the diffractive PDFs are the sum of Pomeron and secondary Reggeon contributions, neglecting possible interference terms. We consider a parametrization form for the diffractive PDFs with unknown parameters at a starting scale \(\mu_{0}^{2}=1.69\ {\rm GeV}^{2}\), which is below the charm mass squared (\(m_{c}^{2}\)) threshold. Due to the lack of experimental data for diffractive processes, a somewhat less flexible parametrization form for the diffractive PDFs is employed in our work. For the same reason the Pomeron PDFs at the initial scale \(f_{i/\mathcal{P}}(z_{\mathcal{P}},\mu_{0}^{2})\) should be the same for all light partons \(i=u=d=s=\bar{u}=\bar{d}=\bar{s}\), while the gluon distribution is considered separately. The contribution of the Reggeon PDFs becomes important at large values of \(x_{\not{\!P}}\) and is taken to be equal to the pion PDF. For the leading Pomeron pole at the starting scale \(Q_{0}^{2}\), we parametrize the input gluon and quark-singlet diffractive PDFs as follows: \[zf_{g}(z,Q_{0}^{2})= \alpha_{g}z^{\beta_{g}}(1-z)^{\gamma_{g}}(1+\eta_{g}\sqrt{z}), \tag{11}\] \[zf_{q}(z,Q_{0}^{2})= \alpha_{q}z^{\beta_{q}}(1-z)^{\gamma_{q}}(1+\eta_{q}\sqrt{z}). \tag{12}\] The longitudinal momentum fraction \(z\) equals \(\beta\) at the lowest order of the hard process (\(z=\beta\)); including higher orders, one has \(0<\beta<z\). To ensure that the diffractive PDFs vanish at \(z=1\), the above equations are multiplied by a factor \(e^{\frac{-0.01}{1-z}}\), which is required for the DGLAP equations to be solvable [4; 5].
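A minimal numerical transcription of the input forms of Eqs. (11)-(12), including the suppression factor, is given below; the parameter values in the example call are placeholders, not fit results:

```python
import math

def z_fg(z, alpha_g, beta_g, gamma_g, eta_g=0.0):
    """Input gluon density z f_g(z, Q0^2) of Eq. (11), damped by
    exp(-0.01 / (1 - z)) so that it vanishes at z = 1."""
    return (alpha_g * z**beta_g * (1.0 - z)**gamma_g
            * (1.0 + eta_g * math.sqrt(z)) * math.exp(-0.01 / (1.0 - z)))

def z_fq(z, alpha_q, beta_q, gamma_q, eta_q=0.0):
    """Input light-quark singlet density z f_q(z, Q0^2) of Eq. (12),
    with the same suppression factor."""
    return (alpha_q * z**beta_q * (1.0 - z)**gamma_q
            * (1.0 + eta_q * math.sqrt(z)) * math.exp(-0.01 / (1.0 - z)))

# Illustrative evaluation with placeholder parameters:
print(z_fg(0.5, alpha_g=0.4, beta_g=0.2, gamma_g=0.3))
```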
The \(x_{\not{\!P}}\)-dependence of the diffractive PDFs is then determined by Pomeron and Reggeon flux factors, which are parametrized as \[f_{\not{\!P}/p,\not{\!R}/p}(x_{\not{\!P}},t)=A_{\not{\!P},\not{\!R}}\frac{ e^{B_{\not{\!P},\not{\!R}}t}}{x_{\not{\!P}}^{2\alpha_{\not{\!P},\not{\!R}}(t)-1}}. \tag{13}\] We assume linear trajectories of the form \[\alpha_{\not{\!P},\not{\!R}}(t)=\alpha_{\not{\!P},\not{\!R}}(0)+\alpha^{ \prime}_{\not{\!P},\not{\!R}}t. \tag{14}\] Further, \(A_{\not{\!P},\not{\!R}}\) are the normalizations of the Pomeron and Reggeon terms, respectively, and are treated in the same way as in Ref. [4]. After assessing the fits using Eqs. (11) and (12), we found that the parameters \(\eta_{q}\) and \(\eta_{g}\) cannot be well constrained by the diffractive data; therefore, we set them to zero. In Eqs. (13) and (14), we set the Reggeon flux parameters to the same values as in [4; 32]. For the Pomeron flux parameters \(\alpha^{\prime}_{\not{\!P}}\) and \(B_{\not{\!P}}\) we use the latest values from Ref. [33] and leave \(\alpha_{\not{\!P}}(0)\) free; it is determined from the QCD fit. Therefore, in total we have 8 free parameters: \(\alpha_{q}\), \(\beta_{q}\), \(\gamma_{q}\), \(\alpha_{g}\), \(\beta_{g}\), \(\gamma_{g}\), \(n_{\not{\!R}}\) and \(\alpha_{\not{\!P}}(0)\). These will be determined from the fit to the experimental data. \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline Experiment & Observable & DIS range & dijet range & Diffractive range & \# of points & Reference \\ \hline H1-LRG (HERA II) & \(d^{2}\sigma/dp_{T}^{jet1}dQ^{2}\) & \(0.1<y<0.7\) & \(p_{T}^{jet1}>5.5\) GeV & \(x_{\not{\!P}}<0.03\) & **15** & [31] \\ & & \(4<Q^{2}<100\ {\rm GeV}^{2}\) & \(p_{T}^{jet2}>4.0\) GeV & \(|t|<1\ {\rm GeV}^{2}\) & & \\ & & & \(-1<\eta^{jet}<2\) & \(M_{Y}<1.6\) GeV & & \\ \hline \hline Total data & & & & & **15** \\ \hline \hline \end{tabular} \end{table} Table 2: Dijet data set used in the SKMHS23 global QCD analysis. \begin{table} \begin{tabular}{l|c|c|c|c c c} \hline \hline Experiment & Observable & \([\beta^{\rm min},\beta^{\rm max}]\) & \([x_{\not{\!P}}^{\rm min},x_{\not{\!P}}^{\rm max}]\) & \(Q^{2}\,[{\rm GeV}^{2}]\) & \# of points & Reference \\ \hline \hline H1-LRG-11 \(\sqrt{s}=225\) GeV & \(\sigma_{r}^{D(3)}\) & [0.033–0.88] & [\(5\times 10^{-4}\) – 3\(\times 10^{-3}\)] & 4–44 & **22** & [28] \\ H1-LRG-11 \(\sqrt{s}=252\) GeV & \(\sigma_{r}^{D(3)}\) & [0.033–0.88] & [\(5\times 10^{-4}\) – 3\(\times 10^{-3}\)] & 4–44 & **21** & [28] \\ H1-LRG-11 \(\sqrt{s}=319\) GeV & \(\sigma_{r}^{D(3)}\) & [0.089–0.88] & [\(5\times 10^{-4}\) – 3\(\times 10^{-3}\)] & 11.5–44 & **14** & [28] \\ H1-LRG-12 & \(\sigma_{r}^{D(3)}\) & [0.0017–0.80] & [\(3\times 10^{-4}\) – 3\(\times 10^{-2}\)] & 3.5–1600 & **277** & [29] \\ H1/ZEUS combined & \(\sigma_{r}^{D(3)}\) & [0.0018–0.816] & [\(3\times 10^{-4}\) – 9\(\times 10^{-2}\)] & 2.5–200 & **192** & [30] \\ \hline \hline Total data & & & & & **526** & \\ \hline \hline \end{tabular} \end{table} Table 1: List of all diffractive DIS data points with their properties used in the SKMHS23 global QCD analysis. For each data set we provide the kinematical coverage of \(\beta\), \(x_{\not{\!P}}\), and \(Q^{2}\). The number of data points is displayed as well. The details of the kinematical cuts applied on these data sets are explained in the text. For the initial inputs, we adopt the world average
value for \(\alpha_{s}^{n_{f}=5}(M_{Z}^{2})=0.1185\)[34], and the charm and bottom masses are set to \(m_{c}=1.40\) GeV and \(m_{b}=4.50\) GeV for both NLO and NNLO accuracy. The heavy flavors are generated through the evolution equations at \(\mathrm{Q}^{2}>m_{c,b}^{2}\). For the contribution of the heavy flavors in diffractive DIS, we employ the FONLL scheme implemented in the APFEL package [21]. FONLL is a general-mass variable flavor number scheme (GM-VFNS), and the abbreviation stands for "Fixed-Order plus Next-to-Leading Log orders". This approach was first introduced in Ref. [35] to investigate heavy quark hadro-production, then extended to DIS [36] and also to Higgs boson production [37]. The method combines a fixed order calculation, which corresponds to the massive \(\mathcal{O}(\alpha_{s}^{3})\) cross section, with an NLL resummed computation of the cross section in the massless limit. For more details, we refer the reader to Ref. [35] and references therein. In the SKMHS23 QCD analysis, we choose the FONLL-A scheme at NLO accuracy, while for the case of NNLO, the FONLL-C is considered. More details of these schemes can be found in Ref. [38]. ### Minimization and diffractive PDF uncertainty method As already discussed, the SKMHS23 QCD analysis of the diffractive PDFs is presented at NLO and NNLO in perturbative QCD with as much data as possible. As a phenomenological study of diffractive PDFs, the SKMHS23 analysis should answer three questions adequately: 1) how to adjust the free fit parameters of the model, 2) how to predict observables precisely, and 3) how precise our distributions and observable predictions are. As mentioned, a QCD analysis should have a sound approach to finding the best fit parameters and to evaluating their uncertainties. In order to find the best values of the free parameters, a \(\chi^{2}\) function, defined as follows, is minimized [39]: \[\chi^{2}=\vec{p}^{\mathrm{T}}\mathbf{C}^{-1}\vec{p}+\sum_{k}^{N_{\mathrm{sys} }}\varepsilon_{k}^{2}, \tag{15}\] where \(\mathbf{C}\) denotes the covariance matrix of the relative uncertainties, and the \(i\)th element of \(\vec{p}\) is defined as the logarithm of the ratio of a measured observable to its theoretical prediction, \[p_{i}=\log\left[\frac{\mathcal{E}_{i}}{\mathcal{T}_{i}}\right]-\sum_{k}^{N_{ \mathrm{sys}}}E_{i,k}, \tag{16}\] which means that the experimental data are distributed according to the log-normal distribution, and where \(E_{i,k}\) is defined as \[E_{i,k}=\sqrt{f_{k}^{C}}\left(\frac{\delta_{\mathcal{E}_{i}}^{k,+}-\delta_{ \mathcal{E}_{i}}^{k,-}}{2}\varepsilon_{k}+\frac{\delta_{\mathcal{E}_{i}}^{k,+ }+\delta_{\mathcal{E}_{i}}^{k,-}}{2}\varepsilon_{k}^{2}\right). \tag{17}\] The parameter \(f_{k}^{C}\) denotes the fraction of the systematic errors from the source \(k\) which is treated as a correlated uncertainty, and the parameters \(\delta_{\mathcal{E}_{i}}^{k,-}\) and \(\delta_{\mathcal{E}_{i}}^{k,+}\) are the relative uncertainties of the \(\mathcal{E}_{i}\) measurement. The nuisance parameters \(\varepsilon_{k}\) are treated as free parameters and are determined by the \(\chi^{2}\) minimization. For calculating the observables and the evolution of the diffractive PDFs, we have used the Alpos package [19; 20], which also provides an interface to the CERN MINUIT package [40] that is responsible for the \(\chi^{2}\) minimization.
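To make the structure of Eqs. (15)-(17) concrete, a schematic NumPy transcription is given below; the array names and shapes are our own conventions, and this is a sketch of the formulas rather than the Alpos implementation:

```python
import numpy as np

def chi2(data, theory, cov, deltas_plus, deltas_minus, f_corr, eps):
    """Schematic chi^2 of Eqs. (15)-(17).

    data, theory       : measured and predicted observables, shape (N,)
    cov                : covariance of relative uncertainties, (N, N)
    deltas_plus/minus  : relative systematic shifts per source, (N, S)
    f_corr             : correlated fraction of each source, (S,)
    eps                : nuisance parameters, one per source, (S,)
    """
    sym = 0.5 * (deltas_plus - deltas_minus)               # symmetric part
    asym = 0.5 * (deltas_plus + deltas_minus)              # asymmetric part
    shifts = np.sqrt(f_corr) * (sym * eps + asym * eps**2)  # E_{i,k}, Eq. (17)
    p = np.log(data / theory) - shifts.sum(axis=1)          # p_i, Eq. (16)
    return p @ np.linalg.solve(cov, p) + np.sum(eps**2)     # Eq. (15)
```

In a fit, the minimizer varies the model parameters and the nuisance parameters `eps` simultaneously, exactly as described for MINUIT above.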
For the uncertainty of the diffractive PDF distributions and theoretical predictions, we have used the well-established optimized Hessian method as described in [41] and implemented in the Alpos package. In the next section we will discuss in detail the \(\chi^{2}\) values extracted from the SKMHS23 QCD fits, and the resulting diffractive PDFs in terms of their predictions and uncertainties. ## IV SKMHS23 fit results This section focuses on the main results of the SKMHS23 QCD analysis and on the new features and improvements that are introduced in this work. We first present the SKMHS23 diffractive PDFs and the fitted parameters. Then, we focus on the improvements arising from the inclusion of the higher order QCD corrections. We also stress and discuss the effect and impact of the diffractive dijet production data on the extracted diffractive PDFs. Finally, we present and discuss the quality of the SKMHS23 QCD fit in terms of both individual and total data sets. Data-theory comparisons will be presented as well. In Tab. 3 we present the SKMHS23 best fit parameters and their errors extracted from the QCD analysis at NLO and NNLO accuracy using the inclusive diffractive DIS data. The best fit parameters extracted from the global QCD analysis at NLO and NNLO accuracy using both inclusive diffractive DIS and diffractive dijet data sets, entitled SKMHS23-dijet, are presented in Tab. 4. In total we have 10 parameters that need to be extracted from the QCD fit, which include four each for the gluon and total singlet densities and two for the flux factors (\(n_{I\!\!R}\) and \(\alpha_{I\!\!P}(0)\)). For both the gluon and singlet PDFs, the parameters \(\eta_{g}\) and \(\eta_{q}\) are set to zero during the QCD fits since the analyzed data sets do not constrain these parameters well enough. As one can see from Tab. 3 and Tab. 4, all the shape parameters are well determined, except for \(\gamma_{g}\), which comes with large errors. This again reflects the lack of data to constrain all shape parameters. We prefer to keep \(\gamma_{g}\) free in the fit to give the gluon density enough flexibility. The extracted value for \(n_{R}\) is rather small, as expected [12]. The consistency of the parameters extracted from the different QCD fits presented in Tables 3 and 4 is acceptable; however, \(\gamma_{g}\) is most affected by the higher order QCD corrections and by the dijet data. We now discuss the SKMHS23 and SKMHS23-dijet diffractive PDFs and their uncertainties, focusing on the perturbative convergence upon inclusion of the higher-order QCD corrections and the effect arising from the addition of the diffractive dijet production data to the data sample. In Fig. 2, we present the NLO and NNLO SKMHS23 gluon distributions at the input scale \(Q_{0}^{2}=1.69\) GeV\({}^{2}\). The results at the higher scales of 10, 20, 60, 100 and 200 GeV\({}^{2}\) are shown as well, together with the uncertainties determined using the Hessian method. We show both the absolute distributions and ratios to the NLO results. The NLO and NNLO SKMHS23 singlet distributions with their uncertainties are shown in Fig. 3. Considering the results presented in Figs. 2 and 3, a few remarks are in order. A remarkable feature of the distributions shown in these plots is their perturbative convergence. As one can see, a difference is visible between the NLO and NNLO results for both the gluon and the singlet densities at medium to large values of \(\beta\). 
For the gluon density, the NLO results are larger than the NNLO ones at high values of \(\beta\) and smaller in the small-\(\beta\) region. As can be seen, a significant reduction of the uncertainty bands is achieved after including the higher order QCD corrections, showing the effect of the NNLO accuracy on the diffractive PDF determination. The differences between the NLO and NNLO diffractive PDFs are rather small at higher values of Q\({}^{2}\). In Figs. 4 and 5, we show the NLO and NNLO SKMHS23-dijet gluon and singlet distributions with their uncertainties determined using the Hessian method at the input scale \(Q_{0}^{2}=1.69\) GeV\({}^{2}\). The results at the higher scales of 10, 20, 60, 100 and 200 GeV\({}^{2}\) are also shown. We show both the absolute distributions and ratios to the NLO results. \begin{table} \begin{tabular}{c|c|c} \hline \hline Parameters & SKMHS23-dijet (NLO) & SKMHS23-dijet (NNLO) \\ \hline \hline \(\alpha_{g}\) & \(0.323\pm 0.069\) & \(0.477\pm 0.094\) \\ \(\beta_{g}\) & \(0.169\pm 0.094\) & \(0.278\pm 0.083\) \\ \(\gamma_{g}\) & \(-0.099\pm 0.163\) & \(0.303\pm 0.189\) \\ \(\eta_{g}\) & \(0.0^{*}\) & \(0.0^{*}\) \\ \(\alpha_{q}\) & \(0.747\pm 0.055\) & \(0.986\pm 0.085\) \\ \(\beta_{q}\) & \(1.560\pm 0.068\) & \(1.719\pm 0.085\) \\ \(\gamma_{q}\) & \(0.442\pm 0.036\) & \(0.560\pm 0.043\) \\ \(\eta_{q}\) & \(0.0^{*}\) & \(0.0^{*}\) \\ \(\alpha_{\mathbf{P}}(0)\) & \(1.100\pm 0.0029\) & \(1.101\pm 0.0038\) \\ \(n_{\mathbf{R}}\) & \(0.00075\pm 0.000004\) & \(0.00073\pm 0.000004\) \\ \hline \(\alpha_{s}(M_{Z}^{2})\) & \(0.1185^{*}\) & \(0.1185^{*}\) \\ \(m_{c}\) [GeV] & \(1.40^{*}\) & \(1.40^{*}\) \\ \(m_{b}\) [GeV] & \(4.5^{*}\) & \(4.5^{*}\) \\ \hline \hline \end{tabular} \end{table} Table 4: The SKMHS23-dijet best fit parameters and their errors extracted from the global QCD analysis at NLO and NNLO accuracy using both inclusive diffractive DIS and diffractive dijet data sets. Values marked with (*) are fixed in the QCD fit since the analyzed data sets do not constrain these parameters well enough. The input values for \(\alpha_{s}\), \(m_{c}\) and \(m_{b}\) are also given. \begin{table} \begin{tabular}{c|c|c} \hline \hline Parameters & SKMHS23 (NLO) & SKMHS23 (NNLO) \\ \hline \hline \(\alpha_{g}\) & \(0.355\pm 0.084\) & \(0.497\pm 0.108\) \\ \(\beta_{g}\) & \(0.201\pm 0.101\) & \(0.291\pm 0.087\) \\ \(\gamma_{g}\) & \(0.018\pm 0.206\) & \(0.353\pm 0.233\) \\ \(\eta_{g}\) & \(0.0^{*}\) & \(0.0^{*}\) \\ \(\alpha_{q}\) & \(0.728\pm 0.055\) & \(0.979\pm 0.091\) \\ \(\beta_{q}\) & \(1.525\pm 0.071\) & \(1.705\pm 0.096\) \\ \(\gamma_{q}\) & \(0.437\pm 0.036\) & \(0.558\pm 0.044\) \\ \(\eta_{q}\) & \(0.0^{*}\) & \(0.0^{*}\) \\ \(\alpha_{\mathbf{P}}(0)\) & \(1.099\pm 0.0039\) & \(1.01\pm 0.0040\) \\ \(n_{\mathbf{R}}\) & \(0.00055\pm 0.000004\) & \(0.00055\pm 0.000004\) \\ \hline \(\alpha_{s}(M_{Z}^{2})\) & \(0.1185^{*}\) & \(0.1185^{*}\) \\ \(m_{c}\) [GeV] & \(1.40^{*}\) & \(1.40^{*}\) \\ \(m_{b}\) [GeV] & \(4.5^{*}\) & \(4.5^{*}\) \\ \hline \hline \end{tabular} \end{table} Table 3: The SKMHS23 best fit parameters and their errors extracted from the QCD analysis at NLO and NNLO accuracy using the inclusive diffractive DIS data. Values marked with (*) are fixed in the QCD fit since the analyzed data sets do not constrain these parameters well enough. The input values for \(\alpha_{s}\), \(m_{c}\) and \(m_{b}\) are also given. The same findings as in the case of the SKMHS23 also hold for the SKMHS23-dijet. 
A significant reduction of the uncertainty bands can be seen at NNLO accuracy, mostly at large values of \(\beta\). In order to further scrutinize the results presented in this work and to examine the effect arising from the inclusion of the inclusive DIS dijet production data on the extracted diffractive PDFs, we present a comparison of the NLO and NNLO results for SKMHS23 and SKMHS23-dijet in Fig. 6 at \(Q_{0}^{2}=1.69\) GeV\({}^{2}\) for the gluon and singlet distributions. The upper panel of each plot displays the absolute distributions, while the lower panel displays the SKMHS23/SKMHS23-dijet ratios. As one can see, the inclusion of the dijet data mostly affects the shape of the gluon distribution at large values of \(\beta\). It also affects the uncertainty bands of the extracted diffractive PDFs: it causes a reduction of the error bands for the gluon density at large values of \(\beta\), and at small values of \(\beta\) for the total singlet density. In Tables 5 and 6 we present the values of the \(\chi^{2}\) per data point for both the individual and the total inclusive diffractive data sets included in our analysis. The values for the SKMHS23 QCD fit are presented in Tab. 5, and the values for our SKMHS23-dijet global QCD fit, which includes the inclusive diffractive dijet production, are presented in Tab. 6. The values are shown at NLO and NNLO for all the QCD analyses. Concerning the fit quality of the total data set, the most noticeable feature of the SKMHS23 and SKMHS23-dijet analyses is the slight improvement upon the inclusion of the higher-order corrections. A similar improvement can also be achieved after including the inclusive diffractive dijet production data in the QCD fit. As one can see, the inclusion of the dijet data improves the total \(\chi^{2}\)/dof from 1.11 to 1.09 for our NLO analysis, and from 1.10 to 1.07 for the NNLO case. The improvement of the total \(\chi^{2}\) is particularly pronounced when the dijet data are added in the NNLO fit. These findings demonstrate that both the inclusion of the NNLO corrections and the addition of the dijet data improve the description of the data. These findings are also consistent with the perturbative convergence and the uncertainty estimation discussed above after considering the NNLO accuracy. Figure 2: The SKMHS23 gluon distribution at the input scale \(Q_{0}^{2}=1.69\) GeV\({}^{2}\), and at the higher scales of 10, 20, 60, 100 and 200 GeV\({}^{2}\). The uncertainties determined using the Hessian method are also shown. We show both the absolute distributions and ratios to the NLO results. Concerning the fit quality of the individual experiments, the general trend of the \(\chi^{2}\) per data point is the same as that of the total one for all QCD analyses, with two main exceptions. The \(\chi^{2}\) per data point for H1-LRG-12, while remaining good, increases slightly as higher-order QCD corrections are included in the SKMHS23 fit. For the case of H1-LRG-11 \(\sqrt{s}=252\) GeV, this value remains unchanged after inclusion of the NNLO correction in our SKMHS23 fit. For both the SKMHS23 and SKMHS23-dijet analyses, the \(\chi^{2}\) per data point for the H1/ZEUS combined [30] data set is still large for both the NLO and NNLO analyses. This behavior is discussed in detail in Ref. [12]. To decrease the \(\chi^{2}\) for this specific data set, one needs to impose a minimum cut on the Q\({}^{2}\) value at around 16 GeV\({}^{2}\). 
In this work, we prefer to consider Q\({}^{2}\geq\) Q\({}^{2}_{\rm min}\) with Q\({}^{2}_{\rm min}=9\) GeV\({}^{2}\). We are now in a position to compare our diffractive PDFs to the most recent determinations available in the literature, namely GKG18 [12] and our previous work SKMHS22-tw2-tw4-RC [15]. GKG18 presented the first QCD analysis of diffractive PDFs in the framework of xFitter [16], and analyzed for the first time the H1/ZEUS combined data sets [30]. In our most recent work, SKMHS22, we presented a new set of diffractive PDFs and their uncertainties at NLO and NNLO accuracy in perturbative QCD within the xFitter framework. The diffractive PDFs were extracted considering the standard twist-2 contribution, the twist-4 correction, and the contribution of subleading Reggeon exchange. Since the GKG18 analysis was performed only at NLO accuracy, we limit the comparison to this order. Such a comparison is shown in Fig. 7 at Q\({}^{2}\) = 6 GeV\({}^{2}\) as a function of \(\beta\), for both the gluon and total singlet distributions. Concerning the shapes of the diffractive PDFs and their error bands, a number of interesting differences and similarities between these three sets can be seen from the comparisons in Fig. 7. For the case of the gluon density, overall good agreement between the three sets can be seen. Figure 3: Same as Fig. 2, but for the SKMHS23 singlet distribution with its uncertainties. However, the new analysis mostly affects the gluon density at large values of the momentum fraction \(\beta\). The differences in shape among the three diffractive PDF sets are more marked in the case of the total singlet. The SKMHS23-dijet analysis is in fairly good agreement with the GKG18 analysis at medium to large values of \(\beta\). Both GKG18 and SKMHS22-tw2-tw4-RC are more suppressed at small values of \(\beta\) with respect to SKMHS23-dijet. Concerning the diffractive PDF uncertainties, we observe that for both the gluon and total singlet distributions the three sets are in good agreement in the region covered by the high-\(\beta\) data, roughly \(\beta>0.4\). Conversely, at small \(\beta\), the differences are more significant. Typically, the uncertainties of SKMHS23-dijet are smaller than those of both GKG18 and SKMHS22-tw2-tw4-RC. Furthermore, we now present a comparison of the data sets used in our analysis to the corresponding NNLO theoretical predictions obtained using the NNLO SKMHS23-dijet fit. \begin{table} \begin{tabular}{l c c c} \hline \hline & & SKMHS23 (NLO) & SKMHS23 (NNLO) \\ \hline Experiment & Process & \(\chi^{2}/N_{\rm pts}\) & \(\chi^{2}/N_{\rm pts}\) \\ \hline H1-LRG-11 \(\sqrt{s}=225\) GeV[28] & inclusive DDIS & 10/13 & 9/13 \\ H1-LRG-11 \(\sqrt{s}=252\) GeV[28] & inclusive DDIS & 19/12 & 19/12 \\ H1-LRG-12 [29] & inclusive DDIS & 134/165 & 136/165 \\ H1/ZEUS combined [30] & inclusive DDIS & 141/96 & 140/96 \\ \hline \hline \(\chi^{2}/{\bf dof}\) & & \(308/278=1.11\) & \(306/278=1.10\) \\ \hline \end{tabular} \end{table} Table 5: The values of \(\chi^{2}/N_{\rm pts}\) for both the individual and the total data sets included in the SKMHS23 QCD fit. Figure 4: The SKMHS23-dijet gluon distribution at the input scale \(Q_{0}^{2}=1.69\) GeV\({}^{2}\), and at the higher scales of 10, 20, 60, 100 and 200 GeV\({}^{2}\). The uncertainties determined using the Hessian method are also shown. We show both the absolute distributions and ratios to the NLO results. In Fig. 
8 such a comparison is displayed for the NNLO theory predictions calculated using the SKMHS23-dijet global QCD fit with the inclusive diffractive DIS data sets. The comparisons are presented as a function of Q\({}^{2}\) and for four different selected bins of \(x_{\mathbf{P}}\) = 0.001, 0.003, 0.01 and 0.03, and several values of \(\beta\). The shaded area indicates the experimental uncertainty. As can be seen, in general an overall very good agreement between the data and the NNLO theoretical predictions is achieved for all diffractive experiments, which is consistent with the \(\chi^{2}\) values per data point reported in Tab. 6. Remarkably, the SKMHS23-dijet NNLO theoretical predictions and the inclusive diffractive data are in good agreement over the whole kinematical region. In Fig. 9, we compare the NNLO theory prediction for the inclusive cross section calculated using the SKMHS23-dijet global QCD fit with the H1-LRG-11 \(\sqrt{s}=225\) GeV and H1-LRG-11 \(\sqrt{s}=252\) GeV inclusive diffractive DIS data sets. \begin{table} \begin{tabular}{l c c c} \hline \hline & & SKMHS23-dijet (NLO) & SKMHS23-dijet (NNLO) \\ \hline Experiment & Process & \(\chi^{2}/N_{\rm pts}\) & \(\chi^{2}/N_{\rm pts}\) \\ \hline H1-LRG-11 \(\sqrt{s}=225\) GeV[28] & inclusive DDIS & 11/13 & 10/13 \\ H1-LRG-11 \(\sqrt{s}=252\) GeV[28] & inclusive DDIS & 19/12 & 18/12 \\ H1-LRG-12 [29] & inclusive DDIS & 135/165 & 135/165 \\ H1/ZEUS combined[30] & inclusive DDIS & 141/96 & 139/96 \\ H1-LRG (HERA II)[31] & inclusive dijet production & 12/15 & 10/15 \\ \hline \hline \(\chi^{2}/\textbf{dof}\) & & \(320/293=1.09\) & \(314/293=1.07\) \\ \hline \end{tabular} \end{table} Table 6: The values of \(\chi^{2}/N_{\rm pts}\) for both the individual and the total data sets included in the SKMHS23-dijet global QCD fit. Figure 5: Same as Fig. 4, but for the SKMHS23-dijet singlet distribution with its uncertainties. The NNLO theory predictions are calculated and shown as a function of \(\beta\) and for some selected values of \(x_{I\!\!P}\) and \(Q\). We show both the absolute distributions (upper panel) and the data/theory ratios (lower panel). As one can see, the theoretical predictions and the data are in good agreement for H1-LRG-11 \(\sqrt{s}=225\) GeV. A small disagreement with H1-LRG-11 \(\sqrt{s}=252\) GeV is found, which reflects the origin of the large \(\chi^{2}\) reported in Tab. 6 for these data. Finally, in Fig. 10, we present detailed comparisons of the SKMHS23-dijet NNLO theory predictions with the H1/ZEUS combined data. The comparisons are shown as a function of \(\beta\) and for some selected values of \(Q\) and \(x_{I\!\!P}\). The data/theory ratios are also presented in the lower panel. Again, an overall good agreement between the data and the SKMHS23-dijet theoretical predictions is achieved over the whole kinematical region. Now we are in a position to turn our attention to a detailed comparison with the newly added inclusive diffractive dijet production data published by the H1 collaboration at HERA [31]. In Fig. 11, we compare the NLO and NNLO theory predictions for the diffractive dijet production cross section calculated using the SKMHS23-dijet diffractive PDFs with the diffractive dijet production data. Both the absolute distributions (upper panel) and the data/theory ratios (lower panel) are shown. The comparisons are presented as a function of the transverse
The comparisons are presented as a function of the transverse Figure 6: Comparison of the NLO and NNLO results for the SKMHS23 and SKMHS23-dijet at the \(Q_{0}^{2}=1.69\) GeV\({}^{2}\) for the gluon and singlet distributions. The lower panels display the ratio to the SKMHS23-dijet. momentum \(p_{T}\), and for different values of \(\mathrm{Q}^{2}\) from 4 to 100 GeV\({}^{2}\). In general, a very good agreement between the data and the theoretical predictions is achieved for all values of \(\mathrm{Q}^{2}\). As one can see, the NNLO predictions are very compatible with the data, consistent with the \(\chi^{2}\) values per data points reported in Tab. 6. For the case of the NLO fit, the \(\chi^{2}/\mathrm{dof}=0.80\) is achieved, while for the NNLO fit, we obtained \(\chi^{2}/\mathrm{dof}=0.66\). The improvements upon inclusion of the NNLO accuracy is also reflected in the the data/theory comparison in Fig. 11 and the smaller error bands in Fig. 6. ## V Discussion and Conclusion In this work, we have presented SKMHS23 and SKMHS23-dijet, the first determination of diffractive PDFs up to the next-to-next-to-leading order accuracy in perturbative QCD taking into account the inclusive DIS and di-jet DIS data. The data sets analyzed in this work include the combined H1 HERA-I and HERA-II LRG inclusive diffractive DIS data, H1 Low energy HERA-II LRG data, and more importantly the H1 HERA-II dijet LRG data. We have discussed the quality of SKMHS23 and SKMHS23-dijet QCD fits and shown that the inclusion of QCD corrections up to the NNLO accuracy improves the description of the data. We have then examined the diffractive PDFs resulting from our QCD fits. We also highlighted their perturbative stability and observed a reduction of the diffractive PDFs uncertainties at NNLO with respect to the NLO case. Very good descriptions between the NLO and NNLO predictions based on SKMHS23 and SKMHS23-dijet and the data points are observed over a wide range of \(x_{P}\) and \(\beta\). The extracted diffractive PDFs are also compared with the results available in the literature, where largely good agreement is found. In our SKMHS23 and SKMHS23-dijet analysis we have introduced some methodological improvements, and the theoretical framework applied in this work also features a number of further improvements. As we discussed, a well-established fitting methodology is used to provide a faithful representation of the diffractive experimental uncertainties, and to minimize any bias related to the parametrization of the diffractive PDFs and to the minimization of the fitting procedure. The theoretical calculations have been done at NLO and NNLO accuracy for both inclusive and jet production using the APFEL, NNLOJET and fastNLO schemes. To consider the contribution from heavy quarks, we employed the FONLL-A and FONLL-C GM-VFNS approaches which provide a proper theory input for such contributions at NLO and NNLO accuracy, respectively. The H1 HERA-II dijet LRG data are also added to the data sample, to constrain the gluon component which is weekly constrained from the inclusive diffractive DIS data. Hence, we expect that the determination of the gluon distribution is more reliable in our SKMHS23-dijet Figure 7: Comparison between SKMHS23-dijet, GKG18[12] and SKMHS23[15] at \(\mathrm{Q}^{2}\) = 6 GeV\({}^{2}\) as a function of \(\beta\), for gluon (left) and total singlet distributions (right). QCD fit, since the dijet from HERA-II are considered, which are directly sensitive to the gluon density. 
The SKMHS23 and SKMHS23-dijet analyses presented in this work represents the first step of a broader program. A number of updates and improvements are foreseen, and the SKMHS23 and SKMHS23-dijet analyses presented in this article can be extended in several different directions. The most important one is to repeat the analysis described here and present a new combined QCD analysis of both recent data sets measured by the H1 and ZEUS collaborations at HERA, and the expected observables from the future colliders considering the large hadron-electron collider (LHeC) [42] on the top of the list, to examine the effect of such data on the extracted diffractive PDFs. The SKMHS23 and SKMHS23-dijet NLO and NNLO diffractive PDFs sets presented in this work are available in the standard LHAPDF format [43] from the authors upon request. ###### Acknowledgements. Hamzeh Khanpour, Hadi Hashamipour and Maryam Soleymaninia thank the School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM) for financial support of this project. Hamzeh Khanpour also is thankful to the Physics Department Figure 8: The NNLO theory prediction obtained using the SKMHS23–dijet global QCD fit in comparison with the inclusive diffractive DIS data sets as a function of \(\mathrm{Q}^{2}\) and for two different selected bins of \(x_{\mathcal{P}}\) = 0.001, 0.003, 0.01 and 0.03. The shaded area indicates to the experimental uncertainty. of University of Udine, and the University of Science and Technology of Mazandaran for the financial support provided for this research. Maryam Soleymaninia is thankful to the Iran Science Elites Federation for the financial support. This work was also supported in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the funds provided to the Sino-German Collaborative Research Center TRR110 "Symmetries and the Emergence of Structure in QCD" (DFG Project-ID 196253076 - TRR 110). The work of UGM was supported in part by the Chinese Academy of Sciences (CAS) President's International Fellowship Initiative (PIFI) (Grant No. 2018DM0034) and by VolkswagenStiftung (Grant No. 93562). Figure 9: Comparison of the NNLO theory prediction for the inclusive diffractive cross section obtained using the SKMHS23-dijet with the H1-LRG-11 \(\sqrt{s}=225\) GeV and H1-LRG-11 \(\sqrt{s}=252\) GeV inclusive diffractive DIS data sets. Both the absolute distributions (upper panel) and the data/theory ratios (lower panel) are shown. Figure 10: Same as Fig. 9 but this time in comparison with the H1/ZEUS combined data. Figure 11: Comparison of the NLO and NNLO theory prediction for diffractvie dijet production cross section calculated using the SKMMS23-dijet diffractvie PDFs with the diffractive dijet production data published by H1 collaboration at HERA [30]. Both the absolute distributions (upper panel) and the data/theory ratios (lower panel) are shown as well.
2306.04256
Enumeration of splitting subsets of endofunctions on finite sets
Let $d$ and $n$ be positive integers such that $d|n$. Let $[n]=\{1,2,\ldots,n\}$ and $T$ be an endofunction on $[n]$. A subset $W$ of $[n]$ of cardinality $n/d$ is said to be $d$-splitting if $W \cup TW \cup \cdots \cup T^{d-1}W =[n]$. Let $\sigma(d;T)$ denote the number of $d$-splitting subsets. If $\sigma(2;T)>0$, then we show that $\sigma(2;T)=g_T(-1)$, where $g_T(t)$ is the generating function for the number of $T$-invariant subsets of $[n]$. It is interesting to note that substituting a root of unity into a polynomial with integer coefficients has an enumerative meaning. More generally, let $g_T(t_1,\ldots,t_d)$ be the generating function for the number of $d$-flags of $T$-invariant subsets. We prove for certain endofunctions $T$, if $\sigma(d;T)>0$, then $\sigma(d;T)=g_T(\zeta,\zeta^2,\ldots,\zeta^d)$, where $\zeta$ is a primitive $d^{th}$ root of unity.
Divya Aggarwal
2023-06-07T08:54:43Z
http://arxiv.org/abs/2306.04256v1
# Enumeration of splitting subsets of endofunctions on finite sets ###### Abstract. Let \(d\) and \(n\) be positive integers such that \(d|n\). Let \([n]=\{1,2,\ldots,n\}\) and \(T\) be an endofunction on \([n]\). A subset \(W\) of \([n]\) of cardinality \(n/d\) is said to be \(d\)-splitting if \(W\cup TW\cup\cdots\cup T^{d-1}W=[n]\). Let \(\sigma(d;T)\) denote the number of \(d\)-splitting subsets. If \(\sigma(2;T)>0\), then we show that \(\sigma(2;T)=g_{T}(-1)\), where \(g_{T}(t)\) is the generating function for the number of \(T\)-invariant subsets of \([n]\). It is interesting to note that substituting a root of unity into a polynomial with integer coefficients has an enumerative meaning. More generally, let \(g_{T}(t_{1},\ldots,t_{d})\) be the generating function for the number of \(d\)-flags of \(T\)-invariant subsets. We prove for certain endofunctions \(T\), if \(\sigma(d;T)>0\), then \(\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})\), where \(\zeta\) is a primitive \(d^{th}\) root of unity. Key words and phrases:splitting subset, endofunction, cycle, tree, cyclic sieving phenomenon, invariant subset, generating function, roots of unity, enumeration, \(q\)-analogue 2 where \(T^{i}\) denotes the \(i\)-fold composition of \(T\). We denote by \(\sigma(d;T)\) the number of \(d\)-splitting subsets for the endofunction \(T\). **Problem 1.2**.: For a given endofunction \(T\), what is \(\sigma(d;T)\)? The \(q\)-analogue of this is a well-studied open problem, but it seems that Problem 1.2 has not been studied particularly in the literature yet. The \(q\)-analogue of \([n]\) is the \(n\)-dimensional vector space over the finite field \(\mathbb{F}_{q}\), while the \(q\)-analogue of an endofunction on \([n]\) is a linear operator on the vector space of dimension \(n\) over \(\mathbb{F}_{q}\)[16, p. 89]. The problem translates over the finite fields as follows: Let \(V\) be a \(dm\)-dimensional vector space over the finite field \(\mathbb{F}_{q}\) and let \(T\) be a linear operator on \(V\). An \(m\)-dimensional subspace \(W\) of \(V\) is said to be \(T\)-splitting if \[W+TW+\cdots+T^{d-1}W=V.\] Let \(\sigma_{q}(d;T)\) denote the number of \(m\)-dimensional \(T\)-splitting subspaces. Then for an arbitrary assignment of the operator \(T\), determination of \(\sigma_{q}(d;T)\) is an open problem [6]. An explicit formula for \(\sigma_{q}(d;T)\) is known when \(T\) has an irreducible characteristic polynomial [3, 5], is regular nilpotent [2], is regular diagonalizable [9, 10], or when the invariant factors of \(T\) satisfy certain degree conditions [1]. The case \(d=2\) is of particular interest. A complete solution for \(\sigma_{q}(2;T)\) for an arbitrary operator \(T\) is recently given by Prasad and Ram [11]. Determining \(\sigma(2;T)\) is essentially counting the number of subsets \(W\) of \([2m]\), of cardinality \(m\), such that \(W\) is mapped to its complement under \(T\). We answer this problem as follows. A subset \(U\) of \([n]\) is said to be \(T\)-invariant if \(TU\subseteq U\). Let \(g_{T}(t)\) denote the generating function for the number of \(T\)-invariant subsets, i.e. \[g_{T}(t)=\sum_{i=0}^{n}a_{i}t^{i},\] where \(a_{i}\) is the number of \(T\)-invariant subsets of cardinality \(i\). We prove that if \(\sigma(2;T)>0\), then \[\sigma(2;T)=g_{T}(-1). \tag{1}\] Note that \(-1\) is the primitive second root of unity, and substituting the second root of unity into a polynomial with integer coefficients counts the enumerative measure \(\sigma(2;T)\). 
Cyclic Sieving Phenomenon (CSP) is a similar phenomenon studied by Riener, Stanton and White [13]. Cyclic sieving is a phenomenon by which evaluating a generating function for a finite set at the roots of unity counts symmetry classes of objects acted on by a cyclic group. It generalizes Stembridge's \(q=-1\) phenomenon [17, 18, 19]. Let \(C\) be a cyclic group generated by an element \(c\) of order \(n\). Suppose \(C\) acts on a set \(X\). Let \(X(q)\) be a polynomial with integer coefficients. Then the triple \((X,X(q),C)\) is said to exhibit the cyclic sieving phenomenon if, for all integers \(d\), the value \(X(e^{2\pi id/n})\) is the number of elements fixed by \(c^{d}\). We refer to the survey article of Sagan [14] for more on this topic. For a general \(d\), we show in section 3 that when the endofunction \(T\) is a cycle or a chain, then \(\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})\), where \(\zeta\) is a primitive \(d^{th}\) root of unity and \(g_{T}(t_{1},\ldots,t_{d})\) is the generating function for the number of \(d\)-flags of \(T\)-invariant subsets. A \(d\)-flag of \(T\)-invariant subsets is an increasing sequence of subsets \(\emptyset=U_{0}\subseteq U_{1}\subseteq\cdots\subseteq U_{d-1}\subseteq U_{ d}=[n]\), where each \(U_{i}\) is \(T\)-invariant (i.e. \(TU_{i}\subseteq U_{i}\)). We write the generating function for the number of \(d\)-flags of \(T\)-invariant subsets as follows: \[g_{T}(t)=g_{T}(t_{1},\ldots,t_{d})=\sum_{J}a_{J}t^{J},\] where the summation index \(J\) runs over all d-tuples of non-negative integers and \(t^{J}\) means \(t_{1}{}^{j_{1}t}t^{j_{2}}\ldots t_{d}{}^{j_{d}}\). Here \(a_{J}\) is the number of flags of \(T\)-invariant subsets \(\emptyset=U_{0}\subseteq U_{1}\subseteq\cdots\subseteq U_{d-1}\subseteq U_{ d}=[n]\) such that \[|U_{i}|=j_{1}+\cdots+j_{i}\ \forall\ 1\leq i\leq d.\] We prove that if \(T\) is a tree (see section 2 for the definition) and \(\sigma(d;T)>0\), then \(\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})\). In Section 4, we extend our result to endofunctions that satisfy certain structure criteria. More precisely, let \(T\) be an endofunction such that: _I._\(T\) has a central cycle consisting of \(ds\) nodes (\(s\geq 0\)) and \(k\) trees attached to the nodes of the cycle such that each attached tree has a \(d\)-splitting subset. _II._\(T\) has a central cycle consisting of \(ds+1\) nodes and \(T_{1},\ldots,T_{k}\) are \(k\) trees attached to the nodes of the cycle such that \(T_{1}\), together with its root node on the cycle, has a \(d\)-splitting subset and each of \(T_{2},\ldots,T_{k}\) has a \(d\)-splitting subset. If \(T\) is either of Type \(I\) or of Type _II_, then \[\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d}).\] ## 2. The case \(d=2\) We begin by defining the structures chains, cycles, and trees. A chain on \([N]\) is defined as \(C:a_{1}\to a_{2}\to\cdots\to a_{N-1}\to a_{N}\), where each \(a_{i}\in[N]\) and \(a_{i}\neq a_{j}\) when \(i\neq j\). A cycle on \([N]\) is a permutation \(P\) consisting of precisely one cycle when \(P\) is written as a product of disjoint cycles. An example of a cycle is shown in Fig. 11. A tree is an acyclic-connected simple graph. A directed tree is a directed acyclic graph whose underlying graph is a tree. For our purposes, we will only be considering directed trees, and with a slight abuse of notation, we will call directed trees as trees. 
**Definition 2.1**.: A rooted tree \(T\) is a tree with a distinguished node, called the root node, \(R\), such that all the edges point towards the root (see Fig. 1). The trees \(T_{1},\ldots,T_{k}\) in Fig. 1 are called subtrees of \(T\). We adopt the following convention. Let \(C\) be a chain with \(N\) nodes. If \(C\) does not feed into a cycle, we assume that the last node of \(C\) goes to itself, thereby making \(C\) an endofunction on \([N]\). Figure 3 is an example of a chain on [4]. Likewise, let \(T\) be a rooted tree with the root node \(R\). If \(R\) does not feed into a cycle, we assume that \(R\) is mapped to itself so that \(T\) becomes an endofunction. Let \(T\) be an endofunction on \([N]\). Let us recall the \(T\)-invariant subsets of \([N]\). **Definition 2.2**.: A subset \(U\) of \([N]\) is said to be \(T\)-invariant if \(TU\subseteq U\), where \(TU\) denotes the image of \(U\) under \(T\). We denote by \(g_{T}(t)\), the generating function for the number of \(T\)-invariant subsets, i.e. \[g_{T}(t)=\sum_{i=0}^{N}a_{i}t^{i},\] where \(a_{i}\) is the number of \(T\)-invariant subsets of cardinality \(i\). Note that \(g_{T}(t)\) is a polynomial with integer coefficients. The following theorem is the main result of this section. **Theorem 2.3**.: Let \(T\) be an endofunction on \([2m]\) for which \(\sigma(2;T)>0\). Then \[\sigma(2;T)=g_{T}(-1).\] We use the following result from Bergeron, Labelle, and Leroux [4, p. 41] to prove Theorem 2.3. Every endofunction is a permutation of disjoint rooted trees. Figure 2 shows that an endofunction \(T\) can naturally be identified with a permutation of disjoint rooted trees, where each rooted tree is shown in a different colour. The nodes on the cycle serve as the roots of the attached trees. The following results depict the generating functions for the number of \(T\)-invariant subsets when \(T\) is a cycle or a chain. **Proposition 2.4**.: Let \(T\) be a cycle on \([N]\). Then \[g_{T}(t)=1+t^{N}.\] Proof.: Since \(T\) is a cycle, there are only two \(T\)-invariant subsets: the empty set and the whole set \([N]\). **Proposition 2.5**.: Let \(T\) be a chain on \([N]\). Then \[g_{T}(t)=1+t+t^{2}+\cdots+t^{N}.\] Figure 1. A rooted tree T. Proof.: For each \(i\) (\(0\leq i\leq N\)), there exists precisely one \(T\)-invariant subset of cardinality \(i\), consisting of the last \(i\) nodes of the chain. By the structure of \(T\), it is easy to see that no other subset of \([N]\) is \(T\)-invariant. **Example 2.6**.: Let \(T\) be given by the following chain (see Figure 3). \[T:1\to 2\to 3\to 4\to 4.\] Then there exists a unique \(T\)-invariant subset of cardinality \(i\) (\(0\leq i\leq 4\)), namely the last \(i\) nodes of the chain. So, \(g_{T}(t)=1+t+t^{2}+t^{3}+t^{4}\). Proposition 2.5 may be used to recursively obtain the generating function for a tree. **Lemma 2.7**.: Let \(T\) be a rooted tree on \([N]\) with root node \(R\) and \(k\) subtrees \(T_{1},\ldots,T_{k}\). Then \[g_{T}(t)=1+t\prod_{i=1}^{k}g_{T_{i}}(t),\] where \(g_{T_{i}}(t)\) denotes the generating function for the number of \(T_{i}\) -invariant subsets. Figure 3. A chain on [4]. Figure 2. An endofunction as a permutation of disjoint rooted trees. Proof.: As the empty set is always \(T\)-invariant, we get \(t^{0}\) in the generating function. Since all the nodes of the tree eventually feed into the root node \(R\), therefore \(R\) must belong to every non-empty \(T\)-invariant subset. 
So, we get \(t\) times the product of the generating functions for each subtree, as \(T_{i}\) is independent of \(T_{j}\), for \(i\neq j\). Combining the above results, we obtain the following generating function for the number of \(T\)-invariant subsets for a general endofunction \(T\). **Lemma 2.8**.: Let \(T\) be an endofunction on \([N]\) with \(s\) connected components. Let the central cycle of \(i^{th}\) component of \(T\) has \(r_{i}\) nodes, and \(T_{i,1},\ldots,T_{i,k_{i}}\) be \(k_{i}\) trees attached to the cycle of \(i^{th}\) component. Then \[g_{T}(t)=\prod_{i=1}^{s}\left(1+t^{r_{i}}\prod_{j=1}^{k_{i}}g_{T_{i,j}}(t) \right),\] where \(g_{T_{i,j}}(t)\) denotes the generating function for the number of \(T_{i,j}\)-invariant subsets. Proof.: Since each connected component is independent of the other, the result immediately follows by Lemma 2.7, as all the nodes of the cycle must belong to every non-empty \(T\)-invariant subset. We adopt the following notation for trees. **Definition 2.9**.: A node \(\tau\) of a tree \(T\) is said to be a _branching node_ if there exist nodes \(\tau_{1}\) and \(\tau_{2}\) such that both \(\tau_{1}\) and \(\tau_{2}\) feed into \(\tau\) under the action of \(T\). Figure 4 represents the branching node \(\tau\). **Definition 2.10**.: Let \(T\) be a tree. To obtain the _chains of \(T\)_, apply the following method. Begin with the leaves of \(T\) and go up to the first branching node (where the branching node is excluded). Cut off all the chains so obtained to get a forest. Repeat the above procedure with each tree in the forest. In a finite number of steps, we have only the structures of the kind \(\tau_{1}\to\tau_{2}\to\cdots\to\tau_{k}\). Each such structure is called a chain of the tree \(T\). Figure 5 depicts the various chains of a tree, where each chain has been highlighted in a different colour. To prove Theorem 2.3, we require the following lemmas. **Lemma 2.11**.: Let \(T\) be a rooted tree with \(k\) subtrees \(T_{1},\ldots,T_{k}\). A \(d\)-splitting subset for \(T\) exists if and only if there exists a unique subtree \(T_{i}\) such that \(\widetilde{T_{i}}(=T_{i}\) with the root node \(R\) (Fig. 6)) has a \(d\)-splitting subset, and for \(i\neq j\), \(T_{j}\) has a \(d\)-splitting subset. **Lemma 2.12**.: For every tree \(T\) on \([N]\) which has a \(2\)-splitting subset, the following holds \[g_{T}(-1)=1.\] Proof.: We prove by induction on the number of chains of the tree \(T\). If there is one chain, then \[g_{T}(t)=1+t+\cdots+t^{N}.\] Since \(T\) has a \(2\)-splitting subset, \(N\) is even. Therefore \(g_{T}(-1)=1\). Suppose the result holds for every tree having less than \(k\) chains. Let \(T\) be a tree for which a \(2\)-splitting subset exists and has \(k\) chains. Then \(T=\widetilde{T}\gets C\), where \(C\) is a chain starting from a leaf and ending to a branching node (branching node excluded), having an even number of nodes, and \(\widetilde{T}\) is a tree. Such a choice of \(C\) is always possible since \(T\) has a \(2\)-splitting subset and whenever a branching occurs in \(T\), then one child is odd and others are even (see Lemma 2.11). Existence of a \(2\)-splitting subset for \(T\) implies that \(\widetilde{T}\) also has a \(2\)-splitting subset, and \(g_{\widetilde{T}}(-1)=1\) by induction hypothesis. Since \(g_{C}(-1)=1\), the result follows. The following corollary will be useful to prove the main result. 
**Corollary 2.13**.: Let \(T\) be a tree such that \(T\) doesn't have a \(2\)-splitting subset and let \(\widetilde{T}\) be a rooted tree obtained by joining \(T\) to a root node \(R\) such that \(\widetilde{T}\) has a splitting subset, then \(f_{T}(-1)=0\). Proof.: By Lemma 2.12, \(f_{\widetilde{T}}(-1)=1\) and \(f_{\widetilde{T}}(t)=1+tf_{T}(t)\). Therefore \(f_{T}(-1)=0\). The next proposition shows that Theorem 2.3 holds for trees. **Proposition 2.14**.: If a \(2\)-splitting subset exists for a tree, then it is unique. Proof.: Let \(T\) be a tree for which a \(2\)-splitting subset exists. Leaves of the tree must belong to the splitting subset since they are not fed by any other node. Cut off the leaves along with the nodes to which they are mapped, to obtain a forest. Now each tree in the forest has a \(2\)-splitting subset. Repeat the above process with each tree of the forest until we are left with chains having two nodes, each of which has exactly one \(2\)-splitting subset, namely the upper node of the chain. This shows that the \(2\)-splitting subset for \(T\) is unique. We are finally ready to prove Theorem 2.3. Proof of Theorem 2.3.: Let \(T\) be an endofunction such that \(\sigma(2;T)>0\). It is enough to prove the result when \(T\) is connected. Let \(T\) has a central cycle with \(k\) trees \(T_{1},\ldots,T_{k}\) attached to the nodes of the cycle. The following two cases arise. _Case 1: Each tree \(T_{1},\ldots,T_{k}\) has a \(2\)-splitting subset._ Since \(\sigma(2;T)>0\) and each attached tree has a \(2\)-splitting subset, it follows by Proposition 2.14 that the number of nodes on the cycle is even. In this case, \(\sigma(2;T)=2\) since we have two sets of choices to select the nodes for \(2\)-splitting subsets (see, for example, Figures 7 and 8). By Lemma 2.8, \[g_{T}(-1)=1+(-1)^{r}\prod_{i=1}^{k}g_{T_{i}}(-1),\] where \(r\) is the number of nodes on the cycle. Since \(g_{T_{i}}(-1)=1\) for each \(i\), the result follows. _Case 2: Some of the trees among \(T_{1},\ldots,T_{k}\) have a \(2\)-splitting subset (only when) combined with the node of the cycle to which they are attached._ Since \(\sigma(2;T)>0\), the trees which do not have a \(2\)-splitting subset must have a \(2\)-splitting subset together with the node of the cycle to which they are attached. Moreover, at each cycle node, there could be at most one such tree (Lemma 2.11). If the cycle has an even (odd) number of nodes, then the number of such trees is even (odd). Also, such trees occur at even gaps (i.e. there exists an even number of nodes in the cycle between the nodes to which such trees are attached). This gives a unique choice for the \(2\)-splitting subset, thereby \(\sigma(2;T)=1\). And \[g_{T}(-1)=1+(-1)^{r}\prod_{i=1}^{k}f_{T_{i}}(-1)=1,\] where the last equality follows by Corollary 2.13 since \(f_{T_{j}}(-1)=0\) for some \(1\leq j\leq k\). **Remark 2.15**.: We remark that if \(\sigma(2;T)=0\) for an endofunction \(T\), then \(g_{T}(-1)\) may be zero or non-zero. We provide examples for both cases. _Case 1: \(\sigma(2;T)=0\) and \(g_{T}(-1)=0\)._ Let \(T:[4]\to[4]\) be defined as \(1\to 3\), \(2\to 3\), \(3\to 4\), and \(4\to 4\) (see Fig. 9). Here \(g_{T}(t)=1+t+t^{2}(1+t)^{2}\). Clearly \(\sigma(2,2;T)=0\) by Lemma 2.11 and \(g_{T}(-1)=0\) as well. _Case 2: \(\sigma(2;T)=0\) and \(g_{T}(-1)\neq 0\)._ Consider the endofunction \(T:[4]\to[4]\) given by \(1\to 4\), \(2\to 4\), \(3\to 4\), and \(4\to 4\) (see Fig. 10). Again by Lemma 2.11, \(\sigma(2,2;T)=0\). 
But \(g_{T}(t)=1+t(1+t)^{3}\), so \(g_{T}(-1)=1\neq 0\). ## 3. Cycles and trees In this section, we consider the general case of \(d\)-splitting subsets. The result for \(d=2\) case does not hold in general when \(d>2\). We shall prove that if \(T\) is either a cycle or a chain, then \(\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})\), where \(g_{T}(t_{1},\ldots,t_{d})\) is the number of \(d\)-flags of \(T\)-invariant subsets (see Definition 3.2). We will further prove that if \(T\) is a tree such that \(\sigma(d;T)>0\), then \(\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})\). Recall from the introduction the definition of flags of \(T\)-invariant subsets. **Definition 3.1**.: Let \(T\) be an endofunction on \([N]\). An increasing sequence of subsets of \([N]\), \[\emptyset=U_{0}\subseteq U_{1}\subseteq\cdots\subseteq U_{d-1}\subseteq U_{ d}=[N]\] is said to be a \(d\)-flag [16, p. 100] of \(T\)-invariant subsets if \(TU_{i}\subseteq U_{i}\) for all \(1\leq i\leq d\). We will denote by a _flag_, a \(d\)_-flag of \(T\)-invariant subsets_, when the length of the flag is clear from the context. Next, we define the generating function for the number of flags. **Definition 3.2**.: Let \(T\) be an endofunction on \([N]\). The generating function for the number of \(d\)-flags of \(T\)-invariant subsets is defined as: \[g_{T}(t)=g_{T}(t_{1},\ldots,t_{d})=\sum_{J}a_{J}t^{J},\] where the summation runs over all \(d\)-tuples of non-negative integers \(J=(j_{1},\ldots,j_{d})\) and \(t^{J}\) denotes \(t_{1}^{j_{1}}\cdots t_{d}^{j_{d}}\). Here \(a_{J}\) is the number of flags \(\emptyset=U_{0}\subseteq U_{1}\subseteq\cdots\subseteq U_{d-1}\subseteq U_{ d}=[N]\) such that \(|U_{i}|=j_{1}+\cdots+j_{i}\ \forall\ 1\leq i\leq d\). **Example 3.3**.: Let \(T:[4]\to[4]\) be the cycle \(1\to 2\to 3\to 4\to 1\) as shown in Figure 11. Let \(\emptyset=U_{0}\subseteq U_{1}\subseteq\cdots\subseteq U_{d}=[4]\) be a flag of \(T\)-invariant subsets. If \(|U_{i}|\geq 1\) for some \(i\), then it is evident that \(|U_{i}|=4\). Therefore for each \(1\leq i\leq d\), we have one flag such that \(|U_{i}|=4\), \(|U_{j}|=0\) for \(j<i\). Hence \(g_{T}(t)=g_{T}(t_{1},\ldots,t_{d})=t_{1}^{4}+\cdots+t_{d}^{4}\). The following result shows that this holds in general for cycles. **Proposition 3.4**.: Let \(T\) be a cycle with \(N\) nodes. Then the generating function for the number of \(d\)-flags of \(T\)-invariant subsets is \[g_{T}(t)=g_{T}(t_{1},\ldots,t_{d})=t_{1}^{N}+\cdots+t_{d}^{N}.\] Proof.: Let \(\emptyset=U_{0}\subseteq U_{1}\subseteq\cdots\subseteq U_{d-1}\subseteq U_{d}=[N]\) be a flag of \(T\)-invariant subsets. If \(|U_{i}|\geq 1\) for some \(i\), then since \(TU_{i}\subseteq U_{i}\), it follows that \(|U_{i}|=N\). Therefore for each \(1\leq i\leq d\), we have one flag such that \(|U_{i}|=N\), \(|U_{j}|=0\) for \(j<i\). Recall the complete symmetric polynomials [7, p. 383]. For fixed \(k\geq 1\), the polynomial \[h_{k}(x_{1},\ldots,x_{N})=\sum_{1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{k}\leq N }x_{i_{1}}x_{i_{2}}\cdots x_{i_{k}}\] is called the complete homogeneous symmetric polynomial in \(N\) variables. The generating function for \(h_{k}(x_{1},\ldots,x_{N})\)[8, p. 63] is \[\sum_{k=0}^{\infty}h_{k}(x_{1},\ldots,x_{N})t^{k}=\prod_{i=1}^{N}\frac{1}{1-x _{i}t}. \tag{2}\] The next proposition illustrates the generating function for the number of \(d\)-flags of \(T\)-invariant subsets when \(T\) is a chain. **Proposition 3.5**.: Let \(T\) be a chain of length \(k\). 
Then \[g_{T}(t)=g_{T}(t_{1},\ldots,t_{d})=h_{k}(t_{1},\ldots,t_{d}),\] where \(h_{k}(t_{1},\ldots,t_{d})\) is the complete homogeneous symmetric polynomial of degree \(k\) in the variables \(t_{1},\ldots,t_{d}\). Proof.: Let \(\emptyset=U_{0}\subseteq U_{1}\subseteq\cdots\subseteq U_{d-1}\subseteq U_{ d}=[k]\) be a flag of \(T\)-invariant subsets with \[|U_{i}|=j_{1}+\cdots+j_{i}\ \forall\ i.\] Since \(k=|U_{d}|=j_{1}+\cdots+j_{d}\), it follows that each term in the generating function \(g_{T}(t)\) must be of degree \(k\). As \(T\) is a chain, for any \(1\leq s\leq k\), there is a unique \(T\)-invariant subset of cardinality \(s\), namely, the last \(s\) nodes of the chain. Therefore for any \(d\)-tuple of non-negative integers \((j_{1},\ldots,j_{d})\), there is a unique flag of \(T\)-invariant subsets \(\emptyset=U_{0}\subseteq U_{1}\subseteq\cdots\subseteq U_{d-1}\subseteq U_{ d}=[k]\) such that \(|U_{i}|=j_{1}+\cdots+j_{i}\), and hence the coefficient of \(t_{1}^{j_{1}}\cdots t_{d}^{j_{d}}\) is \(1\). The following corollary is an immediate consequence of the above result, which we will use repeatedly. Figure 11. A cycle with \(4\) nodes. **Corollary 3.6**.: Let \(T\) be a chain on \([n]\). Then \[g_{T}(\zeta,\zeta^{2},\dots,\zeta^{d})=1.\] Proof.: By Proposition 3.5, \(g_{T}(\zeta,\zeta^{2},\dots,\zeta^{d})=h_{n}(\zeta,\zeta^{2},\dots,\zeta^{d})\). By (2), \[\sum_{k=0}^{\infty}h_{k}(\zeta,\zeta^{2},\dots,\zeta^{d})t^{k}= \prod_{i=1}^{d}\frac{1}{1-\zeta^{i}t}\] \[= \frac{1}{1-t^{d}}=\sum_{k=0}^{\infty}t^{dk}.\] Therefore \(h_{n}(\zeta,\zeta^{2},\dots,\zeta^{d})=1\) as \(n=dm\). The following proposition describes the generating function for the number of \(d\)-flags of \(T\)-invariant subsets for a rooted tree \(T\). **Proposition 3.7**.: Let \(T\) be a rooted tree (Fig. 1) with \(N\) nodes and \(k\) subtrees \(T_{1},\dots,T_{k}\). Then \[g_{T}(t_{1},\dots,t_{d})=\sum_{i=1}^{d}t_{i}\prod_{j=1}^{k}g_{T_{j}}(t_{i},t_{i +1},\dots,t_{d}), \tag{3}\] where \(g_{T_{j}}(t_{i},t_{i+1},\dots,t_{d})\) is the generating function for the number of \((d-i+1)\)-flags for the tree \(T_{j}\). Proof.: Let \(\emptyset=U_{0}\subseteq U_{1}\subseteq\dots\subseteq U_{d-1}\subseteq U_{ d}=[N]\) be a flag for the tree \(T\). The generating function \(g_{T}(t_{1},\dots,t_{d})\) is defined recursively. If the root node lies in \(U_{1}\), we have \(t_{1}\) times the product of the generating functions for each subtree \(T_{j}(1\leq j\leq k)\). If the root node doesn't lie in \(U_{1}\), then \(U_{1}\) is empty since each \(U_{i}\) is \(T\)-invariant and all other nodes of the tree eventually fall into the root node. This way, we get the other terms appearing in (3) depending on the least \(i\), such that the root node lies in \(U_{i}\). The next two results are two of the three main results of this section. **Theorem 3.8**.: Let \(T\) be a cycle with \(n\) nodes. Then \[\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\dots,\zeta^{d}).\] Proof.: Let \(W\) be a \(d\)-splitting subset for the cycle \(T\). Fix a node \(\tau\) on the cycle. If \(\tau\) is in \(W\), then the next \((d-1)\) nodes to \(\tau\) lie in \(TW,\dots,T^{d-1}W\), respectively. The \(d^{th}\) node to \(\tau\), say \(\tau_{d}\), may again belong to \(W\). Then the next \((d-1)\) nodes to \(\tau_{d}\) lie in \(TW,\dots,T^{d-1}W\) respectively. Continuing this, we obtain a splitting subset of cardinality \(m\) since \(n=dm\). For the next splitting subset, say \(W_{1}\), we may begin with the node next to \(\tau\), say \(\tau_{1}\). 
Arguing as above, the \((d-1)\) nodes next to \(\tau_{1}\) can not lie in \(W_{1}\) as they belong to \(TW_{1},\dots,T^{d-1}W_{1}\), respectively. Therefore, the second possible node in \(W_{1}\) is \(\tau_{d+1}\). Completing the cycle, we obtain another \(d\)-splitting subset, \(W_{1}\), which is different from \(W\) as \(\tau_{1}\in W_{1}\) but \(\tau_{1}\notin W\). Continuing the same argument, we obtain \((d-1)\) distinct \(d\)-splitting subsets \(W_{1},\ldots,W_{d-1}\) as \(\tau_{i}\in W_{i}\) but \(\tau_{i}\notin W_{j}\) for \(i\neq j\). However, as \(\tau_{d}\) belongs to \(W\), the \(d^{th}\) splitting subset, \(W_{d}\), coincides with \(W\) since \(T\) is a cycle. Moreover, the cyclic structure of \(T\) ensures that these are the only possible splitting subsets. Hence \(\sigma(d;T)=d\). Since \(g_{T}(t_{1},t_{2},\ldots,t_{d})=t_{1}^{n}+t_{2}^{n}+\cdots+t_{d}^{n}\) by Proposition 3.4, we have that \[g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})= \zeta^{n}+\zeta^{2n}+\cdots+\zeta^{dn}=d,\] as \(d\) divides \(n\). **Theorem 3.9**.: Let \(T\) be a chain on \([n]\). Then \[\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d}).\] Proof.: Let \(W\) be a \(d\)-splitting subset. Beginning with the top node, say \(\tau_{1}\) of the chain, we attempt to construct a \(d\)-splitting subset. If \(\tau_{1}\in W\), then the next \((d-1)\) nodes in the chain, say \(\tau_{2},\ldots,\tau_{d}\) belong to \(TW,\ldots,T^{d-1}W\) respectively. The next possible node in \(W\) is \(\tau_{d+1}\). Again, \(\tau_{d+2},\ldots,\tau_{2d}\) belong to \(TW,\ldots,T^{d-1}W\) respectively. Since \(n=dm\), the nodes \(\tau_{1},\tau_{d+1},\ldots,\tau_{(m-1)d+1}\) constitute a \(d\)-splitting subset for \(T\). Note that the above-defined \(d\)-splitting subset \(W\) is unique because the top node \(\tau_{1}\) does not get fed by any other node, and so it must belong to \(W\). This leaves a unique choice of nodes for \(W\). Hence \(\sigma(d;T)=1\). The result now follows by Corollary 3.6. The following proposition extends Lemma 2.12. **Proposition 3.10**.: For every tree \(T\) which has a \(d\)-splitting subset, we have \[g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})=1,\] where \(g_{T}(t_{1},\ldots,t_{d})\) is the generating function for the number of \(d\)-flags of \(T\)-invariant subsets. Proof.: We will prove by induction on the number of chains of the tree \(T\). If \(T\) has one chain and a \(d\)-splitting subset exists for \(T\), then \(T\) has \(dk\) nodes for some positive integer \(k\) and \(g_{T}(t_{1},\ldots,t_{d})\) is a complete homogeneous symmetric polynomial in \(t_{1},\ldots,t_{d}\) of degree \(dk\). By Corollary 3.6, \(g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})=1\). Suppose the result holds for all trees which consist of less than \(k\) chains and have a \(d\)-splitting subset. Let \(T\) be a tree which has \(k\) chains such that a \(d\)-splitting subset exists for \(T\). Then consider a chain \(C\) beginning from one of the leaves of the tree and ending before the branching node such that \(C\) has \(dl\) number of nodes for some positive integer \(l\), i.e. \(T=\widetilde{T}\gets C\), where \(\widetilde{T}\) is a tree. Note that such a choice of \(C\) is possible because whenever there is a branching that leads to the leaves, there is exactly one branch which has a \(d\)-splitting subset when joined with the branching node and other branches have \(d\)-splitting subsets (see Lemma 2.11). Then \(\widetilde{T}\) is a tree for which a \(d\)-splitting subset exists and has less than chains. 
By the induction hypothesis, \[g_{\widetilde{T}}(\zeta,\zeta^{2},\dots,\zeta^{d})=1. \tag{4}\] Also \[g_{C}(\zeta,\zeta^{2},\dots,\zeta^{d})=1. \tag{5}\] Therefore (4) and (5) give us \(g_{T}(\zeta,\zeta^{2},\dots,\zeta^{d})=1\). We immediately obtain the following corollary using the above result. We will use this corollary to prove Theorem 4.5. **Corollary 3.11**.: Let \(T\) be a tree with \(k\) nodes such that \(T\) does not have a \(d\)-splitting subset. Let \(\widetilde{T}\) be a rooted tree obtained by joining \(T\) to a root node \(R\) (i.e., \(\widetilde{T}=T\to R\)) such that \(\widetilde{T}\) has a \(d\)-splitting subset, then \[\zeta\ g_{T}(\zeta,\zeta^{2},\dots,\zeta^{d})+\zeta^{2}g_{T}(\zeta^{2},\dots, \zeta^{d})+\dots+\zeta^{d-1}g_{T}(\zeta^{d-1},\zeta^{d})=0.\] Proof.: The generating function for the tree \(\widetilde{T}\) is \[g_{\widetilde{T}}(t_{1},\dots,t_{d})=t_{1}\,g_{T}(t_{1},\dots,t_{d})+t_{2}\,g_ {T}(t_{2},\dots,t_{d})+\dots+t_{d-1}\,g_{T}(t_{d-1},t_{d})+t_{d}\,g_{T}(t_{d}).\] Since \(\widetilde{T}\) is a tree which has a \(d\)-splitting subset, by Proposition 3.10, \(g_{\widetilde{T}}(\zeta,\dots,\zeta^{d})=1\). Also, \(g_{T}(t_{d})\) is the generating function for the number of \(1\)-flags on \([k]\), so \(g_{T}(t_{d})=t_{d}^{k}\). Therefore \(\zeta^{d}g_{T}(\zeta^{d})=(\zeta^{d})^{k+1}=1\), and the proposition follows. The next result together with Proposition 3.10 shows that if \(T\) is a tree such that \(\sigma(d;T)>0\), then \(\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\dots,\zeta^{d})\). **Proposition 3.12**.: If a \(d\)-splitting subset exists for a tree, then it is unique. Proof.: Let \(T\) be a tree such that \(\sigma(d;T)>0\). Let \(W\) be a \(d\)-splitting subset. Since the leaves of the tree are not fed by any other node, they must belong to \(W\). Then, the subsequent \((d-1)\) nodes after each leaf can not be in \(W\) as they lie in \(TW,T^{2}W,\dots,T^{d-1}W\), respectively. Cut off the leaves along with the \((d-1)\) nodes following them. This gives us a forest. In the forest, repeat the above procedure with the trees which are not chains with \(d\) nodes. After a finite number of steps, we will be left with only chains with \(d\) nodes, and \(W\) consists of all the top nodes (i.e. the leaves in the final forest). This proves that there is a unique choice of \(W\) if \(\sigma(d;T)>0\). ## 4. Endofunctions on finite sets In this section, we extend the results of the previous section to endofunctions, which have some specific structures. We begin with describing the generating function for the number of \(d\)-flags of \(T\)-invariant subsets for a general endofunction **Lemma 4.1**.: Let \(T\) be an endofunction on \([N]\) with \(s\) connected components such that there are \(r_{i}\) nodes in the cycle of \(i^{th}\) component. Let \(T_{i,1},T_{i,2},\ldots,T_{i,k_{i}}\) be the trees attached to the cycle of \(i^{th}\) component. Then \[g_{T}(t_{1},\ldots,t_{d})=\prod_{i=1}^{s}\left(\sum_{l=1}^{d}t_{l}^{r_{i}}\prod _{j=1}^{k_{i}}g_{T_{i,j}}(t_{l},t_{l+1},\ldots,t_{d})\right),\] where \(g_{T_{i,j}}(t_{l},t_{l+1},\ldots,t_{d})\) is the generating function for the number of \((d-l+1)\)-flags for the tree \(T_{i,j}\). Proof.: Since distinct connected components are independent of each other, we get the product of the generating functions for each connected component. Therefore, it is enough to consider an endofunction with one component. 
Let \(T\) be an endofunction with a cycle having \(r\) nodes, and let \(T_{1},\ldots,T_{k}\) be \(k\) trees attached to the \(r\) nodes of the cycle. Let \(\emptyset=U_{0}\subseteq U_{1}\subseteq\cdots\subseteq U_{d-1}\subseteq U_{ d}=[N]\) be a flag of \(T\)-invariant subsets. If \(U_{1}\) contains a node from the cycle, then since \(U_{1}\) is \(T\)-invariant, \(U_{1}\) must have all the cycle nodes. Therefore we get \(t_{1}^{r}\) times the product of the generating functions of the trees attached to the cycle as the nodes of the cycle may serve as the root nodes of the trees. In this case, we get \(t_{1}^{r}\prod_{j=1}^{k}g_{T_{j}}(t_{1},\ldots,t_{d})\). If \(U_{1}\) doesn't contain a node of the cycle, then since \(U_{1}\) is \(T\)-invariant and all the trees feed into the nodes of the cycle, \(U_{1}\) must be empty. If \(U_{2}\) contains a node of the cycle, then arguing as above, we obtain \(t_{2}^{r}\prod_{j=1}^{k}g_{T_{j}}(t_{2},\ldots,t_{d})\). We continue this procedure and obtain all the terms in the sum \(\sum_{l=1}^{d}t_{l}^{r}\prod_{j=1}^{k}g_{T_{j}}(t_{l},\ldots,t_{d})\) depending on the first \(U_{l}\), which contains a node from the cycle. The following lemma will be useful to prove Theorem 4.4. **Lemma 4.2**.: Let \(n=dk\) for some positive integers \(d\) and \(k\). Then for all \(1<l\leq d\), we have \[h_{n}(\zeta^{l},\zeta^{l+1},\ldots,\zeta^{d})=1,\] where \(h_{n}\) is the complete homogeneous symmetric polynomial of degree \(n\) in \((d-l+1)\) variables. Proof.: Note that \[h_{n}(\zeta^{l},\zeta^{l+1},\ldots,\zeta^{d})=\zeta^{ln}\ h_{n}(1,\zeta, \ldots,\zeta^{d-l}).\] Let \(d-l=m\). Then \(m<d\). It is enough to prove that \(h_{n}(1,\zeta,\ldots,\zeta^{m})=1\) for \(m<d\), since \(\zeta^{ln}=1\). Consider the principal specialization of the homogeneous symmetric polynomial. By [15, Prop. 7.8.3], \[h_{n}(1,q,\ldots,q^{m})=\genfrac{[}{]}{0.0pt}{}{m+n}{n}_{q},\] where \(\big{[}\ \big{]}_{q}\) denotes the \(q\)-binomial coefficient. To prove the result, it suffices to show that \(\genfrac{[}{]}{0.0pt}{}{m+n}{n}_{q}\) at \(q=\zeta\) is \(1\). By [12, Prop. 4.2 (iii)], \[\genfrac{[}{]}{0.0pt}{}{m+n}{n}_{q=\zeta}=\binom{n/d+\lfloor m/d\rfloor}{ \lfloor m/d\rfloor}=\binom{n/d}{0}=1.\] With the aid of the above lemma, we obtain the following result, which extends Proposition 3.10. **Proposition 4.3**.: Let \(T\) be a tree for which a \(d\)-splitting subset exists and \(l\) be a positive integer such that \(1<l\leq d\). Let \(g_{T}(t_{l},t_{l+1},\ldots,t_{d})\) be the generating function for the number of \((d-l+1)\)-flags of \(T\)-invariant subsets. Then \[g_{T}(\zeta^{l},\zeta^{l+1},\ldots,\zeta^{d})=1.\] Proof.: We prove by induction on the number of chains of \(T\). If \(T\) has one chain and a \(d\)-splitting subset exists for \(T\), it has \(dk\) number of nodes for some positive integer \(k\). In this case, \(g_{T}(t_{l},t_{l+1},\ldots,t_{d})\) is a complete homogeneous symmetric polynomial of degree \(dk\) in the variables \(t_{l},t_{l+1},\ldots,t_{d}\). By Lemma 4.2, \(g_{T}(\zeta^{l},\zeta^{l+1},\ldots,\zeta^{d})=1\). Suppose the result holds for all trees with less than \(k\) chains and a \(d\)-splitting subset. Let \(T\) be a tree for which a \(d\)-splitting subset exists such that \(T\) has \(k\) chains. 
Then we can decompose \(T\) as \(T=\widetilde{T}\gets C\), where \(C\) is a chain beginning from a leaf of the tree and running upto its branching node (branching node excluded) such that \(C\) has \(ds\) number of nodes for some positive integer \(s\), and \(\widetilde{T}\) is a tree. Note that such a choice is possible because whenever a branching occurs in \(T\), exactly one branch will have a \(d\)-splitting subset together with the branching node, and other branches will have \(d\)-splitting subsets (Lemma 2.11). \(\widetilde{T}\) is a tree with a \(d\)-splitting subset and less than \(k\) chains. By the induction hypothesis, \(g_{\widetilde{T}}(\zeta^{l},\zeta^{l+1},\ldots,\zeta^{d})=1\). Since \(g_{C}(\zeta^{l},\zeta^{l+1},\ldots,\zeta^{d})=1\), the result follows. The following two theorems are the main results of this section. **Theorem 4.4**.: Let \(T\) be an endofunction on \([n]\) with a cycle having \(ds\) nodes and \(k\) trees attached to the nodes of the cycle such that each tree has a \(d\)-splitting subset. Then \[\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d}).\] Proof.: Since each tree has a \(d\)-splitting subset and the central cycle has \(ds\) nodes, \(\sigma(d;T)=d\) (see Theorem 3.8). We shall prove that \(g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})=d\). Let \(T_{1},\ldots,T_{k}\) be \(k\) trees attached to the nodes of the central cycle. By Lemma 4.1, \[g_{T}(t_{1},\ldots,t_{d})=\sum_{l=1}^{d}t_{l}^{ds}\prod_{j=1}^{k}g_{T_{j}}(t_ {l},t_{l+1},\ldots,t_{d}).\] Therefore \[g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})=\sum_{l=1}^{d}1=d,\] where the last equation follows by Proposition 4.3 and Proposition 3.10. **Theorem 4.5**.: Let \(T\) be an endofunction on \([n]\) with a cycle having \((ds+1)\) nodes and let \(T_{1},\ldots,T_{k}\) be \(k\) trees attached to the nodes of the cycle such that each of the \((k-1)\) trees \(T_{1},\ldots,T_{k-1}\) has a \(d\)-splitting subset, and \(T_{k}\) together with its root node on the cycle, has a \(d\)-splitting subset. Then \[\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d}).\] Proof.: Since \(T_{k}\) has a \(d\)-splitting subset when joined with a cycle node, this leaves a unique choice of nodes from the cycle to constitute the \(d\)-splitting subset. Therefore \(\sigma(d;T)=1\). We will show that \(g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})=1\). By Lemma 4.1, we have \[g_{T}(t_{1},\ldots,t_{d})=\sum_{j=1}^{d}t_{j}^{ds+1}\prod_{i=1}^{k}g_{T_{i}}( t_{j},t_{j+1},\ldots,t_{d}).\] By Propositions 3.10 and 4.3, \(g_{T_{i}}(\zeta^{j},\zeta^{j+1},\ldots,\zeta^{d})=1\)\(\forall\)\(1\leq j\leq d\) and \(1\leq i\leq k-1\). Therefore, \[g_{T}(\zeta,\ldots,\zeta^{d})=\zeta\ g_{T_{k}}(\zeta,\ldots, \zeta^{d})+\zeta^{2}\ g_{T_{k}}(\zeta^{2},\ldots,\zeta^{d})\] \[\qquad\qquad+\cdots+\zeta^{d-1}\ g_{T_{k}}(\zeta^{d-1},\ldots, \zeta^{d})+\zeta^{d}\ g_{T_{k}}(\zeta^{d}).\] By Corollary 3.11, \(g_{T}(\zeta,\ldots,\zeta^{d})=\zeta^{d}\ g_{T_{k}}(\zeta^{d})=(\zeta^{d})^{l+ 1}=1\), where \(l\) is the number of nodes in the tree \(T_{k}\). **Remark 4.6**.: We will call the type of endofunctions described in Theorem 4.4_endofunctions of Type \(1\)_, while those described in Theorem 4.5 as _endofunctions of Type \(2\)_. The next corollary generalizes Theorems 4.4 and 4.5. It shows that the result holds for a more general class of endofunctions. 
**Corollary 4.7**.: Let \(T\) be an endofunction on \([n]\) consisting of \(k\) connected components \(T_{1},\ldots,T_{k}\) of size \(dm_{1},\ldots,dm_{k}\) respectively, such that \(m_{1}+\cdots+m_{k}=n/d\) and each \(T_{i}\) is either of Type \(1\) or of Type \(2\). Then \[\sigma(d;T)=g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d}).\] Proof.: Since each \(T_{i}\) is independent of \(T_{j}\) for \(i\neq j\), we have \[g_{T}(t_{1},\ldots,t_{d})=\prod_{i=1}^{k}g_{T_{i}}(t_{1},\ldots,t_{d}).\] Also, by Theorem 4.4 and Theorem 4.5, \[\sigma(d;T_{i})=g_{T_{i}}(\zeta,\zeta^{2},\ldots,\zeta^{d})\ \forall\ 1\leq i\leq k.\] As \(m_{1}+\cdots+m_{k}=n/d\), we have that \[\sigma(d;T)=\prod_{i=1}^{k}\sigma(d;T_{i}),\] and the result follows. We remark that if the endofunction \(T\) is neither of type 1 nor of type 2, then \(\sigma(d;T)\) may not be equal to \(g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})\). The following is an example of an endofunction \(T\) with \(3k+2\) nodes in the cycle and some trees feeding into the cycle, but \(\sigma(d;T)\neq g_{T}(\zeta,\zeta^{2},\ldots,\zeta^{d})\). **Example 4.8**.: Consider the endofunction \(T:[6]\to[6]\) defined as \(1\to 2\), \(2\to 3\), \(3\to 4\), \(4\to 5\), \(5\to 6\), and \(6\to 2\) as shown in Fig. 12. Here the central cycle has \(3k+2\) nodes (\(k=1\)), and a tree is attached to a node of the cycle. Let \(d=3\), then \(W=\{1,4\}\) forms a 3-splitting subset. We may easily verify that \(W\) is the only 3-splitting subset. But \(g_{T}(t_{1},t_{2},t_{3})={t_{1}}^{5}(t_{1}+t_{2}+t_{3})+{t_{2}}^{5}(t_{2}+t_{3 })+{t_{3}}^{5}(t_{3})\) and \(g_{T}(\omega,\omega^{2},\omega^{3})=2+\omega\neq 1=\sigma(3;T)\), where \(\omega\) is the primitive third root of unity. ## 5. Acknowledgements The author sincerely thanks Amritanshu Prasad for several helpful discussions. She extends thanks to Samrith Ram for his guidance. Research support from CSIR, India, is gratefully acknowledged. This work was done when the author was a visiting scholar at The Institute of Mathematical Sciences (IMSc), Chennai, India.
2310.19287
Enhancing Scalability and Reliability in Semi-Decentralized Federated Learning With Blockchain: Trust Penalization and Asynchronous Functionality
The paper presents an innovative approach to address the challenges of scalability and reliability in Distributed Federated Learning by leveraging the integration of blockchain technology. The paper focuses on enhancing the trustworthiness of participating nodes through a trust penalization mechanism while also enabling asynchronous functionality for efficient and robust model updates. By combining Semi-Decentralized Federated Learning with Blockchain (SDFL-B), the proposed system aims to create a fair, secure and transparent environment for collaborative machine learning without compromising data privacy. The research presents a comprehensive system architecture, methodologies, experimental results, and discussions that demonstrate the advantages of this novel approach in fostering scalable and reliable SDFL-B systems.
Ajay Kumar Shrestha, Faijan Ahamad Khan, Mohammed Afaan Shaikh, Amir Jaberzadeh, Jason Geng
2023-10-30T06:05:50Z
http://arxiv.org/abs/2310.19287v1
Enhancing Scalability and Reliability in Semi-Decentralized Federated Learning With Blockchain: Trust Penalization and Asynchronous Functionality ###### Abstract The paper presents an innovative approach to address the challenges of scalability and reliability in Distributed Federated Learning by leveraging the integration of blockchain technology. The paper focuses on enhancing the trustworthiness of participating nodes through a trust penalization mechanism while also enabling asynchronous functionality for efficient and robust model updates. By combining Semi-Decentralized Federated Learning with Blockchain (SDFL-B), the proposed system aims to create a fair, secure and transparent environment for collaborative machine learning without compromising data privacy. The research presents a comprehensive system architecture, methodologies, experimental results, and discussions that demonstrate the advantages of this novel approach in fostering scalable and reliable SDFL-B systems. blockchain, smart contracts, machine learning, distributed federated learning, trust, incentives ## I Introduction In recent years, decentralized approaches have gained significant attention in the field of machine learning. The traditional centralized paradigm faces challenges in handling extensive datasets, addressing data privacy concerns, and relying on a single central authority [1]. Decentralized machine learning, particularly Semi-Decentralized Federated Learning (SDFL), offers a promising solution to these issues by distributing model training across multiple devices, allowing data to remain on-device and preserving privacy while enabling collaborative learning [2, 3]. The SDFL is a strategic amalgamation of decentralization and controlled coordination allowing local model training and parameter exchange while designating an aggregator for model aggregation [3]. The Federated Learning algorithm preserves user privacy by avoiding raw data collection, but recent studies highlight potential vulnerabilities in parameter-based attacks, underscoring the need for further advancements in federated learning frameworks [4]. The combination of Semi-Decentralized Federated Learning with Blockchain (SDFL-B) holds great potential for overcoming existing challenges in Distributed Federated Learning (DFL) [5]. Leveraging blockchain's attributes of immutability and transparency, this integration establishes a secure and tamper-resistant platform for cultivating trust and encouraging honest participation in the decentralized learning process [6]. By incorporating blockchain technology into SDFL, we seek to address the issues of reliability and trust, promoting fairness and accountability among participating nodes. This paper sets out to explore and propose an innovative approach addressing two critical aspects of SDFL: scalability and reliability. We investigate the challenges associated with scalability in SDFL and present a robust architecture that can efficiently manage an increasing number of participating nodes. The proposed system leverages distributed computation and communication techniques to ensure seamless scaling for large-scale SDFL deployments. To enhance the trustworthiness of nodes participating in the SDFL process, we introduce a trust penalization mechanism. This mechanism identifies and penalizes untrustworthy nodes based on their contributions, discouraging dishonest behavior and promoting a reliable and cooperative learning environment. 
To further enhance the robustness and real-time performance of SDFL, we incorporate asynchronous functionality. This allows nodes to contribute model updates at their own pace, making the system resilient against node failures and network delays. Existing literature has yet to cover a standard federation approach for various machine learning (ML) frameworks, especially in the context of DFL [3]. Therefore, there is a need to create and implement a robust and adaptive codebase for producing generic ML scenarios. Our goal is to develop an adaptable and expandable solution applicable to different frameworks and application scenarios, advancing decentralized machine-learning techniques. This will lead to scalable and reliable SDFL systems capable of accommodating diverse applications across various domains. The rest of the paper is organized as follows. Section II provides a succinct analysis of existing architectures and identifies their shortcomings, laying the groundwork for the need for a novel solution. Our proposed model for the solution architecture is detailed in Section III, outlining the integration of SDFL with blockchain to address the identified challenges. This section also explains the trust penalization mechanism, illustrating how it fosters reliability and accountability among participating nodes, as well as the implementation of asynchronous functionality, showcasing its benefits in achieving real-time performance and robustness. The experimental setup is presented in Section IV, while Section V offers results and analysis, validating the effectiveness of our approach in promoting the scalability and trustworthiness of SDFL-B. Section VI discusses the findings, and lastly, Section VII concludes the paper by summarizing key contributions and outlining potential future research directions in this dynamic field. ## II Background and Related Works Distributed Federated Learning (DFL), initially introduced in 2018, aimed to decentralize the aggregation of model parameters among neighboring participants [7]. In DFL, the core operation involves the rapid transmission of locally computed updates from each node, such as model parameters or gradients, and accompanying metadata, like activation functions in neural networks, to other federation nodes. In comparison to Centralized Federated Learning (CFL), DFL effectively tackles issues associated with single points of failure, trust dependencies, and bottlenecks that can occur at the server node [8]. DFL brings improvements in fault tolerance by enabling nodes to maintain an updated awareness of available or inactive communicating nodes [9], thus reducing vulnerability to single-point attacks. Additionally, DFL mitigates network bottleneck challenges by evenly distributing communication and workloads across nodes, thereby minimizing the risk of congestion or performance delays across the network [10]. However, alongside these advantages, DFL introduces novel challenges, including increased communication overhead, the optimization of training processes, and the assurance of trustworthy AI. Depending on how model aggregation is distributed within the network, specific DFL configurations may experience elevated communication overhead [11]. In such scenarios, careful planning, fine-tuning of communication protocols, client selection strategies, and trust mechanisms become crucial for mitigating these limitations. 
DFL encompasses various dimensions, including network topology defining node associations [12], communication mechanisms coordinating model parameter exchange [1], and security and privacy, covering potential cyberattacks and measures to safeguard data privacy and model robustness [13]. Within this framework, three key perspectives to be explored: nodes, communications, and models. The first perspective involves assessing the diversity and dynamism of nodes within DFL. The second focuses on the effectiveness of inter-node communications for data exchange. The third centers on evaluating the performance of machine learning and deep learning models in collaborative task-solving. In a prior study [14], the authors employed a semi-decentralized federated learning algorithm where clients collaborated by relaying neighboring updates to a central parameter server (PS), intending to mitigate the impact of intermittent connectivity issues and improve convergence rates. Clients computed local consensus from neighbors' updates and sent a weighted average of their own and neighbors' updates to the PS. The algorithm was optimized for these averaging weights to achieve unbiased global updates, enhancing convergence rates and reducing variance. Another paper [15] introduced the concept of Federated Learning Empowered Overlapped Clustering for Decentralized Aggregation (FL-EOCD). This approach leveraged device-to-device (D2D) communications and overlapped clustering to achieve decentralized aggregation, eliminating the need for a central aggregator. The paper also presented an energy-efficient framework for FL-EOCD within a partially connected D2D network, with a specific focus on addressing energy consumption and convergence rate. In [16], a semi-decentralized learning method merging device-to-server and device-to-device (D2D) communication for model training was introduced. This approach involved local training and D2D-based consensus among device clusters, addressing issues related to diverse resources and device proximity. It demonstrated improved model accuracy, training time, and energy efficiency compared to existing methods. However, limitations included the need to address scalability challenges, potential overhead from D2D communications, and the robustness of the approach in dynamic or adversarial scenarios. In a different study [17], an asynchronous federated learning aggregation protocol utilizing a permissioned blockchain was introduced. This protocol integrated the learned model into the blockchain and performed two-order aggregation calculations, effectively reducing synchronization problems. However, addressing challenges related to scalability, diverse network conditions, and various data types, as well as optimizing system performance, remains vital for future research in real-world edge computing scenarios. ## III System Model ### _System Architecture_ In this paper, we introduce the blockchain-coupled cluster-based semi-decentralized federated learning architecture, a novel approach that capitalizes on geographical proximity to optimize the efficiency and communication dynamics of the federated learning process. As shown in Fig. 1, clusters of participating workers are formed based on their geographical locations. These clusters create localized groups that foster efficient resource allocation and minimize communication overhead. Within each cluster, a randomly designated worker is assigned the role of cluster head. 
The cluster head takes charge of coordinating the federated learning process within their respective cluster. Central to this role is the aggregation of model weights contributed by individual workers during their local training processes. After collecting these updated model weights, the cluster head initiates the aggregation process. The aggregated model is then disseminated not only among the cluster's workers but also stored externally on the InterPlanetary File System (IPFS) platform. This ensures accessibility to the aggregated model for all cluster workers, even in the presence of intermittent network disruptions. Importantly, communication extends beyond individual clusters. Workers from one cluster can request the model's hash from another cluster's head if they find value in the aggregated model generated by the latter. Upon obtaining the hash, the requesting cluster's head incorporates the model into its own aggregation process, enhancing the collaborative essence of the semi-decentralized federated learning. To maintain dynamism and diversity, the worker head selection process is cyclic. The current cluster head periodically reshuffles and designates a new worker head from the participant pool. This rotation ensures that the model aggregation process remains dynamic and equitable, preventing any single worker from exerting consistent influence over the model. ### _System Workflow_ In this framework, multiple workers establish peer-to-peer connections bypassing the need for an intermediary or central server. The federated learning process begins with the application server initializing a server socket, a crucial communication endpoint in the system. Worker nodes, as they join the federated learning process, establish direct TCP socket connections with the server, enabling the direct sharing of crucial information like smart contract addresses. This address enables worker nodes to interact with the blockchain part of our framework, ensuring transparent and secure collaborative learning. Concurrently, workers provide metadata like their location upon enrollment. The server then leverages geographic data for efficient cluster formation, grouping physically proximate nodes to enhance communication efficiency. This approach capitalizes on data similarities from proximity, potentially improving model accuracy. Within these clusters, direct communication is enabled through JSON files that include cluster head information, allowing seamless communication between workers. Once clusters are formed, the server shares cluster head details; worker nodes train ML models on their own data and update their models and directly transmit weights to their cluster head in a peer-to-peer manner via sockets. The cluster head performs local model weight aggregation, combining received updates from peer worker nodes to generate an updated global model. Additionally, the cluster head connects with other clusters, sharing model information and further enhancing the collaborative process. This interconnected system ensures efficient communication, cluster-based collaboration, and dynamic model aggregation for improved federated learning outcomes. ### _Leader Selection and Contract Integration_ The process begins with the Requester deploying a smart contract onto the blockchain, requiring some tokens for deployment. Workers participate by joining the task and paying a certain amount of blockchain tokens. Subsequently, each worker retrieves their model from IPFS. 
This smart contract holds all the information about the workers, which is accessible to the Requester. To initiate federated learning, as discussed earlier, the Requester makes a cluster group based on worker location, and within the cluster one worker randomly gets selected to lead the aggregation process based on the information from the smart contract. All workers train their models with their own data and submit their scores to the smart contract. All workers, except the chosen leader, send their model weights to the leader for aggregation. The worker's head sends the updated weights to all workers and in IPFS. The leader then shares the updated model's hash with all other workers. Notably, the smart contract handles penalization and reward distribution based on the collected information. This process continues for subsequent rounds as well, ensuring collaborative learning and model improvement. ### _Role of Blockchain in SDFL Process_ Blockchain technology plays a pivotal role in enhancing various aspects of the SDFL process. A smart contract committed to the blockchain efficiently coordinates the FL task, facilitating the distribution of rewards and penalization of dishonest actors. The decentralized and distributed tamper-resistant nature of blockchain ensures the reliability of the SDFL process, mitigating the risks associated with a single central authority. Its transparent and immutable ledger provides an auditable record of model updates and transactions during the SDFL process, enabling traceability and fostering accountability among participants. The cryptographic mechanisms in blockchain securely verify participants' identities, preventing unauthorized access and creating a secure environment for collaborative learning [2]. In SDFL, data privacy is upheld as raw data is not shared centrally. Instead, the model updates trained by participants at the end of each round are securely stored on the InterPlanetary File System (IPFS), safeguarding against data leakage and unauthorized access to sensitive information. By leveraging the integration of blockchain in SDFL, participants can confidently contribute to the collaborative learning process while retaining control over their private data. The combination of tamper-resistant blockchain technology and IPFS storage fortifies the security and privacy of DFL, paving the way for a robust and transparent decentralized learning ecosystem. ### _Trust Penalization and Async Functionality_ The integration of trust penalization and asynchronous functionality augments the reliability and trustworthiness of SDFL-B. Fig. 1: Network Architecture — Cluster-Based Semi-Decentralized Federated Learning The trust penalization mechanism addresses the presence of bad or dishonest nodes by assessing their contributions and behaviors during the learning process. Nodes are evaluated based on model updates, protocol adherence, and contribution quality, and dishonest behavior is penalized. This mechanism fosters a trustworthy and accountable learning environment, promoting fairness and encouraging honest participation. We utilize the following algorithm for trust penalization: ``` 1. Requester (R) initializes the smart contract by depositing funds (D) to cover the task rewards. R \(\rightarrow\) SmartContractInitiation(D) 2. Each worker w \(\in\) W who wishes to participate in the task deposits a fixed amount F of money: \(\forall\)w \(\in\) W: w \(\rightarrow\) Deposit(F) This deposit ensures commitment and creates a level playing field among workers. 3. 
Workers' performance is evaluated based on the evaluation score S(w): \(\forall\)w \(\in\) W: S(w)=EvaluatePerformance(w) 4. Bad workers with evaluation scores below the threshold T are identified: \(\text{BadWorkers}=\{\text{w}\in\text{W}\mid\text{S(w)}<\text{T}\}\) Bad workers are penalized for their suboptimal performance. Penalties are imposed on the bad workers: \(\forall\)w \(\in\) BadWorkers: Pen(w) = F:P/100, where P represents the penalty percentage. 5. Penalty amounts are deducted from workers' deposits, reducing their remaining deposit (D(w)) as follows: \(\forall\)w \(\in\) BadWorkers: D(w) = F - Pen(w) 6. The remaining deposit amount is refunded to the workers: \(\forall\)w \(\in\) W: Refund(w) = D(w) The refund process maintains equitable treatment for all participants. 7. Collected penalties are transferred back to the requester: R \(\rightarrow\) TransferPenalties (\(\sum_{\text{w}\in\text{BadWorkers}}\)Pen(w)) This step ensures that penalized funds are appropriately utilized. 8. Top k workers based on specified rules are selected for reward distribution: \(\text{TopKWorkers}=\text{SelectTopK}\) (W, k) Rewards are distributed among the top workers: \(\forall\)w \(\in\) TopKWorkers: Reward(w)=R\({}_{\text{total}}\)/k ``` **Algorithm 1** Trust Penalization Algorithm This algorithm ensures a competitive and incentivized environment for workers to deliver high-quality contributions while penalizing underperforming workers. When participants know that their contributions are being evaluated and that there are consequences for dishonest behavior, they are more likely to contribute accurate and meaningful updates to the shared model through honest behavior. This promotes the convergence of the model towards a consensus that reflects the true underlying patterns in the data. The presence of penalties acts as a deterrent against malicious actions, thereby reducing the likelihood of deliberate attempts to disrupt the learning process. This, in turn, increases the reliability of model updates and the system as a whole. Participants can have more confidence in the accuracy and integrity of the shared model, leading to improved decision-making and outcomes in applications that rely on the FL process. On the other hand, asynchronous functionality empowers nodes to contribute model updates independently, without synchronization with any other entity. This approach tackles issues related to network delays, varying computational capabilities of nodes, and potential node failures. With asynchronous updates, real-time performance is achieved, ensuring system resilience and efficiency in SDFL-B. #### Iii-B1 Asynchronous Updates and Their Advantages in Distributed Collaborative Learning Primarily, it facilitates real-time performance enhancement, empowering individual nodes to independently update the shared model according to their pace. This dynamic approach expedites convergence, reduces training times, and frees nodes from the restrictions of synchronous communication cycles. Consequently, real-time updates enable rapid model improvements, making the system responsive to evolving data patterns. Additionally, this asynchronous feature fortifies the system's resilience against node failures, a crucial aspect in decentralized contexts like SDFL. The system's independence from the participation of every node in each communication round ensures its steadfastness amidst intermittent connectivity or node disconnections. 
This adaptability empowers the model to sustain progress even when certain nodes experience delays or temporary unavailability. Moreover, _the_ diverse computational and communication capacities inherent in decentralized networks are effectively managed through asynchronous updates, enabling nodes to contribute as they're prepared. This strategy optimizes resource allocation, prevents any single node from becoming a bottleneck, and ensures smooth progress throughout collaborative learning. However, implementing asynchronous functionality brings about trade-offs and challenges, including: Consistency and ConvergenceAsynchronous updates introduce inconsistencies in node models, challenging convergence. Local model updates, weighted averaging, and advanced aggregation methods are needed to ensure accurate and meaningful convergence. Communication EfficiencyAsynchronous updates reduce bottlenecks but create communication overhead due to frequent updates. Efficient communication protocols and strategies are vital to minimize latency, particularly in large networks. Addressing StragglersAsynchronous systems lead to slow-updating nodes or "stragglers" that hinder training. Techniques like redundancy, adaptive learning rates, and scheduling are essential to counteract straggler effects. Balancing Blockchain OverheadIntegrating blockchain in SDFL adds overhead from transaction verification and consensus mechanisms. Achieving an equilibrium between blockchain benefits and computational costs is pivotal. Privacy and Security ImplicationsAsynchronous updates raise privacy and security concerns due to varying data exposure levels. Privacy-preserving techniques are applied during aggregation and updates to maintain data confidentiality and integrity [2]. By combining trust penalization and asynchronous functionality, the SDFL system fosters secure, efficient, and transparent collaborative learning, incentivizing the involvement of reliable nodes and enhancing the overall performance and reliability of the system. ## IV Experimental setup We utilized an x86_64 architecture with 16 CPU cores and 32 threads. We employed the Intel(R) Xeon(R) CPU E5-2673 v4 model, which operated within a frequency range of 1200.0000 MHz to 2300.0000 MHz. We used a dual socket, featuring 8 cores per socket, as the foundation for our experimental investigations We assessed the performance and generalization capabilities of the Cluster-Based SDFL-B model using the well-known MNIST dataset as in our previous paper [2]. This dataset served as the foundation for assessing the model's performance and generalization capabilities. The experiment employed a Recursive architecture, inspired by the established framework for Decentralized Federated Learning with Blockchain. This architecture, denoted as 'Net', encompasses several crucial components that contribute to the model's functionality. These components include convolutional layers (conv1 and conv2), a dropout layer (conv2_drop), and fully connected layers (fc1 and fc2). This recursive structure facilitates decentralized learning and emphasizes collaborative knowledge exchange among participating nodes. The utilized hyperparameters were carefully selected to fine-tune the model's performance. The 'SGD' (Stochastic Gradient Descent) optimizer was chosen with a learning rate of 0.01, a momentum value of 0.5, a dampening set to 0, and no weight decay or Nesterov acceleration. 
These hyperparameters were tailored to strike a balance between efficient convergence and effective regularization during training. They played a pivotal role in shaping the training process and optimization strategy of the model. Network Architecture ScriptModule( original_name=Net (conv1): RecursiveScriptModule(original_name=Conv2d) (conv2): RecursiveScriptModule(original_name=Conv2d) (conv2_drop): RecursiveScriptModule(original_name=Dropout2d) (fc1): RecursiveScriptModule(original_name=Linear) (fc2): RecursiveScriptModule(original_name=Linear) ) Model Parameter {'state_dict': {'state': {}, 'param_groups': {'lr': 0.01, 'momentum': 0.5, 'dampenings': 0, 'weight_decay': 0, 'nestervo': False, 'params': {0, 1, 2, 3, 4, 5, 6, 7}}, 'name': 'SGD'} The model parameters, encapsulated within the state dictionary, were initialized based on the specified hyperparameters. The model's architecture and parameters were organized into parameter groups, each containing a subset of learnable parameters. The utilization of these parameters was orchestrated by the SGD optimizer, which ensured an iterative optimization process that progressively refined the model's internal representations. ## V Results and Analysis In our evaluations, we first analyzed scenarios involving 3 workers, both with and without the integration of blockchain technology. The results, as depicted in Fig. 2, revealed a remarkable consistency in accuracy, regardless of blockchain utilization. However, it is noteworthy that employing blockchain confers significant advantages, including enhanced trust, assurance of transparency, and a means for imposing penalties. On the flip side, when considering time efficiency, communication without blockchain emerged as the more time-effective option in the long run compared to its blockchain-enabled counterpart. This finding underscores the importance of carefully weighing the benefits and trade-offs associated with blockchain integration in decentralized systems. To assess the scalability of our semi-decentralized federated learning framework, we conducted an in-depth analysis by calculating the average accuracy across different worker participation scenarios: 8 workers, 16 workers and 20 workers, for each epoch. As shown in Fig. 3, the outcomes reveal consistent accuracy trends across different worker counts. This observation suggests that our semi-decentralized federated learning framework exhibits promising scalability, as demonstrated by the consistent trends observed across epochs. To ensure the reliability of the individual worker outputs, we calculated the standard deviations of accuracy for the comparison between configurations involving 8, 16 and 20 workers. Our findings, as shown in Fig. 4 indicate that the Fig. 3: Scalability Assessment (Accuracy vs Number of Epochs) Fig. 2: Analysis of Accuracy and Time with and without Blockchain semi-decentralized federated learning framework exhibits similar consistency and reliability with 8, 16 and 20 workers. This is supported by the similar standard deviation in accuracy metrics across epochs, indicating a stable and dependable training procedure. The similar performance variability emphasizes the system's robustness and improved reliability with a greater number of workers. The model convergence analysis involved examining the accuracy and loss curves of each worker individually as shown in Fig. 5 and Fig. 6. 
It was observed that, although there were slight variations in convergence rates, all workers demonstrated a clear trend of improving accuracy and diminishing loss as training progressed. This underscores the efficacy of the distributed approach in achieving a model convergence process, demonstrating its capacity to guide diverse workers and datasets toward optimal learning outcomes. ## VI Discussion ### _Impact of Trust Penalization on Node Behavior_ Trust penalization can encourage workers to follow the protocol and behave honestly. Workers are incentivized to provide accurate model updates to avoid penalties that could diminish their earnings or reputation within the network. Next, the incentive for compliance is heightened, motivating workers to actively engage and fulfill their roles within the semi-decentralized federated learning process. This engenders a more engaged and cooperative group of workers. Furthermore, the phenomenon of "free-riding" may decrease as workers understand that untrustworthy actions might lead to penalties. This balanced distribution of work and participation can result in a more equitable environment. ### _Impact of Trust Penalization on Model Performance_ Trust penalization can lead to higher model performance as workers are encouraged to provide accurate and high-quality updates. Inaccurate or malicious contributions are discouraged, leading to a more accurate and reliable model over time. Furthermore, penalization can help filter out noise introduced by unreliable or intentionally malicious workers. This can prevent the model from being negatively influenced by incorrect updates, improving the convergence speed and overall quality of the model. This robustness reinforces the collaborative learning process against potential attacks or attempts to undermine it. The mechanism also ensures fairness among workers by holding everyone accountable for their contributions. This can lead to a more balanced distribution of rewards and prevent a single malicious worker from disproportionately affecting the model. ### _Asynchronous Functionality in SDFL-B Across Varied Scenarios_ Asynchronous updates mean that workers can train their models independently and update them whenever they are ready. This reduces the need for strict synchronization among workers, which can lead to more efficient resource utilization and faster convergence. In scenarios with many workers, synchronous updates might lead to bottlenecks and delays due to the need for coordinated updates. Asynchronous updates can mitigate this issue. Furthermore, asynchronous updates can enhance fault tolerance. If one or more workers experience connectivity issues or temporary failures, it won't disrupt the entire training process since other workers can continue to make progress. ### _Generic Codebase Paradigm_ Our implementation is highly adaptable and modular. Increasing the number of clusters and workers in clusters only requires a simple change in parameters when the requester is initializing the network topology. Likewise, modifying the ML model and hyperparameters is also handled by the requester during the initialization phase. Our codebase seamlessly supports several popular ML Python libraries, including Pytorch, TensorFlow and ScikitLearn. This modular implementation allows different application scenarios to easily leverage our code with minimal configurations. 
This standardization and modularity significantly decrease redundant implementations of SDFL-B, fostering enhanced adaptability within varied contexts. ### _Limitations and Potential Areas For Future Research_ In asynchronous settings, there's a challenge related to selecting leaders. The complexity emerges because leaders Fig. 4: Reliability Assessment (Accuracy Standard Deviations vs Number of Epochs) Fig. 5: Model Convergence - Accuracy Patterns Fig. 6: Model Convergence - Loss Patterns chosen at random might be bad workers and affect the performance of the model by pushing the bad weights to the IPFS which can impact the entire network. Further research in this direction is needed to optimize this aspect of the system. ## VII Conclusion In conclusion, this research introduces an innovative approach that addresses the challenges of scalability and reliability in Decentralized Federated Learning through the integration of blockchain technology. By combining semi-decentralized federated learning with blockchain, the proposed system establishes a fair, secure, and transparent environment for collaborative machine learning while preserving data privacy. The incorporation of a trust penalization mechanism enhances the trustworthiness of participating nodes, fostering reliability and accountability, while asynchronous functionality ensures efficient and robust model updates. The results from experimental evaluations demonstrate the efficacy of the approach in promoting scalability and trustworthiness in SDFL-B systems. The discussed impact of trust penalization on node behavior and model performance underscores the positive influence of this mechanism. Asynchronous functionality's adaptability across various scenarios highlights its efficiency and fault tolerance benefits. Additionally, conducting user studies and experimental evaluations to validate the trust penalization algorithm's correctness and efficacy would be an essential avenue for future work. The insights gained from this study, coupled with further research, could focus on refining leader selection in asynchronous settings and establishing effective communication protocols for updating leaders' models before aggregation. This research contributes to the advancement and standardization of decentralized machine learning methodologies, enabling the development of scalable and reliable SDFL-B systems that cater to diverse applications across domains.
2304.11744
SketchXAI: A First Look at Explainability for Human Sketches
This paper, for the very first time, introduces human sketches to the landscape of XAI (Explainable Artificial Intelligence). We argue that sketch as a ``human-centred'' data form, represents a natural interface to study explainability. We focus on cultivating sketch-specific explainability designs. This starts by identifying strokes as a unique building block that offers a degree of flexibility in object construction and manipulation impossible in photos. Following this, we design a simple explainability-friendly sketch encoder that accommodates the intrinsic properties of strokes: shape, location, and order. We then move on to define the first ever XAI task for sketch, that of stroke location inversion SLI. Just as we have heat maps for photos, and correlation matrices for text, SLI offers an explainability angle to sketch in terms of asking a network how well it can recover stroke locations of an unseen sketch. We offer qualitative results for readers to interpret as snapshots of the SLI process in the paper, and as GIFs on the project page. A minor but interesting note is that thanks to its sketch-specific design, our sketch encoder also yields the best sketch recognition accuracy to date while having the smallest number of parameters. The code is available at \url{https://sketchxai.github.io}.
Zhiyu Qu, Yulia Gryaditskaya, Ke Li, Kaiyue Pang, Tao Xiang, Yi-Zhe Song
2023-04-23T20:28:38Z
http://arxiv.org/abs/2304.11744v1
# SketchXAI: A First Look at Explainability for Human Sketches ###### Abstract This paper, for the very first time, introduces human sketches to the landscape of XAI (Explainable Artificial Intelligence). We argue that sketch as a "human-centred" data form, represents a natural interface to study explainability. We focus on cultivating sketch-specific explainability designs. This starts by identifying strokes as a unique building block that offers a degree of flexibility in object construction and manipulation impossible in photos. Following this, we design a simple explainability-friendly sketch encoder that accommodates the intrinsic properties of strokes: shape, location, and order. We then move on to define the first ever XAI task for sketch, that of stroke location inversion (SLI). Just as we have heat maps for photos, and correlation matrices for text, SLI offers an explainability angle to sketch in terms of asking a network how well it can recover stroke locations of an unseen sketch. We offer qualitative results for readers to interpret as snapshots of the SLI process in the paper, and as GIFs on the project page. A minor but interesting note is that thanks to its sketch-specific design, our sketch encoder also yields the best sketch recognition accuracy to date while having the smallest number of parameters. The code is available at [https://sketchxai.github.io](https://sketchxai.github.io). ## 1 Introduction It is very encouraging to witness a recent shift in the vision and language communities towards Explainable AI (XAI) [5, 6, 42, 77, 79, 93]. In a world where "bag of visual words" becomes "bag of tricks", it is critically important that we understand why and how AI is making the decisions, especially as they overtake humans on a series of tasks [22, 28, 55, 71]. XAI research to date has focused on two modalities: photo [15, 40, 51, 92] and text [17, 41, 44, 70, 80]. Great strides have been made in the XAI for the photo domain, with the trend of going from heat/saliency maps [11, 68, 72, 74, 90] to the rules/semantics-oriented approaches [30, 31, 69]. The text side is captivating due to the flexibility of sentence construction. Early works in text models explainability also started with visualisations [1, 72, 90], moving onto linguistic phenomena [8, 41, 84], and most recently to attention [21, 65, 75]. In this paper, we make a first attempt at XAI for human freehand sketches. The "why" we hope is obvious - sketches are produced by _humans_ in the first place(!), from thousands of years ago in caves, and nowadays on phones and tablets. They are uniquely expressive, not only depicting an object/scene but also conveying stories - see a "Hunter and Arrows" here for a story dating back 25,000 years in France1. They, therefore, form an ideal basis for explainability which is also _human-facing_. Footnote 1: [https://www.worldhistory.org/Lascaux_Cave/](https://www.worldhistory.org/Lascaux_Cave/) The sketch domain is uniquely different from both of the well-studied photo and text domains. Sketch differs from photo in that it can be freely manipulated, while photos are rigid and hard to manipulate. This is largely thanks to the stroke-oriented nature of sketches - jittering strokes might give the "same" sketch back, jittering pixels gives you a "peculiar"-looking image. Sketches have the same level of flexibility in semantic construction as text: strokes are the building block for a sketch as words are for text. 
With these unique traits of sketch, the hope of this paper is to shed some light on what XAI might look for sketch data, and what it can offer as a result to the larger XAI community. This, however, is only the very first stab, the greater hope is to stir up the community and motivate follow-up works in this new direction of "human-centred" data for XAI. With that in mind, we focus our exploration on what makes sketches unique - yes, _strokes_. They allow for flexible object construction and make sketches free to manipulate. We then ask how strokes collectively form objects. For that, we identify three inherent properties associated with strokes: shape, location, and order. These three variables define a particular sketch: _shape_ defines how each stroke looks like, _location_ defines where they reside, and _order_ encodes the temporal drawing sequence. Our first contribution is a sketch encoder, that factors in all the mentioned essential properties of strokes. We hope that this encoder will build into its DNA how strokes (and in turn sketches) are represented, and therefore be more accommodating when it comes to different explainability tasks (now and in the future) - and for this, we name it SketchXAINet ("X" for E\(\underline{X}\)plainability). We are acute to the fact that explainability takes simple forms [50], so we refrained from designing a complicated network. In fact, we did not go any further than introducing a branch to encode each of the three stroke properties (shape, location, and order), and simply feed these into a standard transformer architecture with a cross-entropy loss. Interestingly, however, just with this simple architecture, we already observe state-of-the-art sketch recognition performance improving on all prior arts. With an explainability-compatible sketch encoder in place, we now want to examine if we can actually make anything explainable. First and foremost, of course, sketch explainability can be performed in the form of a heat map [11, 68, 74] - just treat sketches as a raster image and we are done. This, however, would be entirely against our very hope of spelling out sketch-specific explainability - the "explainability" one can obtain there is _at best_ at the level of photo heatmaps (see Fig. 1). Instead, we utilise our sketch encoder and put forward the first XAI task for sketch - that of stroke location inversion (SLI) (see Figs. 1 and 3). We study two types of tasks: recovery and transfer. Intuitively, during the recovery, we ask our optimisation procedure to jitter the stroke locations to _recover_ sketch so that it belongs to the same class as the original sketch. During the transfer task, we ask our optimisation procedure to jitter the stroke locations to obtain a sketch that belongs to a new class that we pass as input to the optimiser. The idea is then that how well the network has learned is positively correlated with how well it does at this inversion task, and that explainability lies in visualising this process. So, in addition to heat maps for photos, and correlation matrices for text, for sketch, we now have visualisations, that theoretically be manifested of infinite variety, and in the form of a video/GIF to capture the SLI process. 
We finish by playing with variants of the proposed SLI: (i) sketch recovery, to offer insights on category-level understanding of a learned encoder, _i.e_., reconstructing a sketch to the same category, and (ii) sketch transfer, to shed light on cross-category understanding, _i.e_., using strokes of one category to reconstruct another. Our contributions are as follows: (i) we argue for sketches to be introduced to the field of XAI, (ii) we identify strokes as the basic building block and build a sketch encoder, named as SketchXAINet, that encapsulates all unique sketch properties, (iii) we introduce stroke location inversion (SLI) as a first XAI task for sketch, (iv) we offer qualitative results of the inversion process and deliver best sketch recognition performance as a by-product. ## 2 Related work **Raster and vector sketch encoders.** Sketch contains high-level human understanding and abstraction of visual signals and is a distinctive modality to photos. Many of the previous works [33, 38, 43, 56, 57, 60, 67, 68, 67], however, treat sketches with no difference to photos - they operate on raster format and feed them into contemporary CNNs for visual learning. Facilitated by the availability of sketch datasets with stroke-level information [18, 20], there is an ongoing trend of works that turn to model sketch as a temporal sequence of vector coordinates, hoping to open up new research insights for downstream sketch tasks [12, 35, 36, 39, 47, 54, 73, 87, 88]. Along with this representation change on sketch data is also the backbone upgrade, from CNN to Transformer [39, 62], the choice of which we also embrace in constructing our proposed sketch encoder. Scarcely few existing works have anchored their focus on the explainability of sketch models, with [54][3] being moderately relevant to our best knowledge. At a high level, both works, just like ours, explore the impact of strokes on forming a sketch object. But instead of studying sketch abstraction,, how strokes can be deleted or simplified without altering the holistic semantic meaning, we leverage the free-hand stroke itself as a building block to understand sketch model explainability. **Ante-hoc and post-hoc explainability methods.** Several recent surveys and books discuss explainability methods in detail [51, 4, 5, 25]. Explainability methods are often split into two groups: _ante-hoc_[10, 34] and _post-hoc_[64, 90, 93, 26, 69, 90] methods. Ante-hoc methods are inherently and intrinsically interpretable, while post-hoc methods require designing separate techniques to provide probes into model explainability. The former, also known as the white/glass box approach, is preferable under the context of explainability, but limited by a few specific choices of instantiations,, decision trees [82], generalised additive models [2]. The latter being less transparent has no restrictions on model learning and therefore often achieves better test-time task performance. Achieving the optimal trade-off of such is then the core to both schools of explainable AI [5, 25]. Our proposed sketch explainability method SLI is post-hoc, but facilitated by a tailor-designed, less black-box (ante-hoc alike) sketch encoder (that allows reasoning over a stroke-based decision into shape, location, and order). Notably, our final sketch model achieves state-of-the-art recognition performance. 
**Counterfactual explanation and adversarial attack.** Our post-hoc explainability strategy SLI of "relocating first, recovery later" is also reminiscent of a specific AI explainability genre - counterfactual explanation (CE) [81, 29, 46]. CE aims to identify what are the minimal input changes for a model to make a different visual decision. The model is then explained towards credibility if these changes are key in defining the underlying visual concept. In this sense, SLI identifies the stroke location changes that matter (, the and the front handle for a bicycle in Fig. 1) through multiple randomly initialised stroke inversion tasks (because important strokes gets highlighted across trials). Closely related to CE is another field known of adversarial attack [7, 19, 52, 76], which aims at the generation of adversarial examples (AE) having _imperceptible differences_ to human vision but results in completely different AI predictions. Conceptual connections between CE and AE have been extensively discussed in the literature [9, 81, 59], where [59] suggests that AE is part of a broader class of examples represented by CE. Our proposed SLI also generates sketch samples that dictates a prediction change. We however model the generation process via the spatial reconfiguration of strokes, which is intrinsically distinctive to AE - the movement of strokes is less likely to be imperceptible changes to human viewers compared with those by local pixel jittering. ## 3 Methodology In this section, we first introduce our classification model which is designed around strokes as sketch building blocks. We then introduce our method for model explainability. As a pre-processing step, we simplify all sketches by the RDP algorithm [14]. For each stroke \(s_{i}\) consisting of \(k\) points, \(\{s_{i,1},s_{i,2},...,s_{i,k-1},s_{i,k}\}\), we identify three inherent properties in \(s_{i}\) and learn respective descriptor for each: location \(l_{i}\), shape \(sh_{i}\) and stroke order \(o_{i}\). We use the coordinate value \((x_{i},y_{i})\) of \(s_{i,1}\) to represent the location of each stroke \(l_{i}\). In order to disentangle shape information \(sh_{i}\) from its actual location, we use relative instead of absolute coordinates and move the starting point of all strokes to the canvas centre. As per convention, each \(sh_{i}\) point also contains a two-dimensional binary pen state [20] - (1, 0): stroke is being drawn, (0, 1): the end of the stroke, (0, 0): padding points to account for the fact that all strokes have a different number of points. **Sketch-specific encoder.** Our proposed sketch encoder \(f_{w}\), which we name SketchXAlNet ("X" for E\(\underline{X}\)plainability), first learns to encode \(l_{i}\), \(sh_{i}\) and \(o_{i}\) with different learnable components before fusing them together into a Transformer for final decision. This tailored model design is then ready to undertake the novel explainability task defined later. A full high-level schematic is shown in Fig. 2. We use a bidirectional LSTM [24] to extract shape information of each stroke \(sh_{i}\), and one linear layer for location \(l_{i}\) embedding learning. We pre-define the maximum number of strokes allowed and assign a learnable embedding for each order (time) embedding \(o_{i}\). Finally, we sum them all and add one extra [CLS] token before feeding into a transformer encoder [13]. We adopt [CLS] for classification task, optimised under the conventional multi-class cross-entropy loss. 
**Sketch explainability - SLI.** We introduce a new task for explaining sketch model, that of _Stroke Location Inversion_, SLI. Initiating from replacing each sketch stroke at a random location, SLI explains a sketch classifier through the following hypothesis: to trust a classifier has indeed mastered one concept, a classifier should be able to relocate a group of random strokes towards readable visual imagery that actually corresponds to the underlying concept. SLI thus corresponds to an iterative optimisation problem, aiming to reconfigure strokes locations for increasing recognition confidence. Denoting a sketch composing of \(N\) strokes with class label \(y\) in bold \(\mathbf{s}\), this process is formulated as: \[\arg\min_{l_{1},\cdots,l_{N}}\mathcal{L}\left(f_{w}\left(\text{Relocate}( \mathbf{s})\right),y\right), \tag{1}\] where \(\text{Relocate}(\cdot)\) refers to placing the strokes of a given sketch to random locations on a blank canvas. **In connection to counterfactual & latent optimisation.** At first glimpse, SLI draws considerable similarity to counterfactual explanation - finding input variations that lead to complete change of prediction outcomes. We adapt this definition under our context with a slight modification to its original formulation [81]: \[\arg\min_{l_{1},\cdots,l_{N}}\mathcal{L}\left(f_{w}\left(\mathbf{s}^{\prime} \right),y^{\prime}\right)+d\left(\mathbf{s},\mathbf{s}^{\prime}\right), \tag{2}\] where \(y^{\prime}\) denotes another label different from \(y\), \(d(\cdot)\) is some distance measure and can be a simple sum of location difference here. The advantage of SLI becomes evident under such comparison. Unlike the counterfactual approach restricted by a local input search space, SLI enjoys a much bigger flexibility with each time explaining a different facet of fact through random replacements of \(\mathbf{s}\). SLI is also connected to latent optimisation, a technique extensively explored in GAN literature [85]. If we dissect \(f_{w}\) into \(f_{l}\) (location-relevant component) \(\circ\)\(f_{w\backslash l}\) (location-irrelevant component) and draw an analogy to the latent vector \(z\) and generator \(G(\cdot)\) in GAN language respectively, this becomes a standard GAN inversion problem. The difference is instead of traversing along the non-interpretable \(z\) space, \(f_{w}\) is interpretable in nature with each update dictating the direction and pace of the next sketch stroke movement. **Formal Definition.** We now define two types of SLI tasks, where stroke relocation is leveraged as a gateway to explaining a sketch classifier. _Recovery:_ During the recovery task, we randomise the locations of all strokes and only keep their shapes. We specify the target label \(y\) as the original sketch label and use Eq. (1) to optimise (\(l_{1},\cdots,l_{N}\)). We visualise the entire optimisation process to understand the inner workings of the classifier. _Transfer:_ For the transfer task, we keep stroke shapes and locations intact, while specifying the target label \(y\) as a different category to that of the input sketch. We use this setup to build cross-category understandings. ## 4 Experiments ### Experimental Settings We adopt the QuickDraw dataset [20] to train \(f_{w}\), which contains 345 object categories with 75K sketches each. Following convention the 75K sketches are divided into training, validation and testing sets with size of 70K, 2.5K and 2.5K, respectively. For the analysis of generated explanations by SLI, we randomly select 30 categories. 
We compare our model with a variety of sketch recognition models: CNN-based [89, 23], hybrid-based [87, 37, 86] and Transformer variants [13, 45, 62]. We use the same learning rate of \(0.00001\), Adam optimiser [32], and 20 epochs for all methods. All experiments of this stage are run on 5 NVIDIA 3090 GPUs with a batch size of 100 per GPU. For better SLI training stability, we use gradient clip [58], CosineAnnealingLR Figure 2: **SketchXAINet architecture.** We build a sketch classifier upon stroke vectors rather than raster pixels. All strokes are decomposed into three parts – order, shape and location. We use a bidirectional LSTM, a linear model and a learnable time embedding matrix to encode such decomposed stroke representation respectively. The dashed line refers to the gradient flow of the location parameters when we generate explanations by SLI with a trained classifier. scheduler [48] and SGD optimiser without momentum to limit the distance a stroke can move. Figure 3: **SLI explains SketchXAINet in Recovery and Transfer tasks.** Here we show the visualisations of the 100 optimisation steps of SLI (Eq. 1). Origin refers to a free-hand sketch sampled from the QuickDraw dataset, where in recovery we randomise its constituent strokes to form different explainable inputs, and in transfer, we keep it intact but leverage it to explain a classifier of the different target category. The number in the top-left corner (the bottom-left corner when present) indicates model confidence in the current sketch to belong to the original label (to the new counterfactual label). We use bounding boxes with gradient colours (from light grey to black) to highlight the progressive nature of SLI. with their primitives and feed them into SketchXAINet for classification. Comparing with the results reported in the past work [3] which manually define a fixed set of heuristics-based shape primitives (line, arc, square, circle, triangle, U-shape, L-shape), our learning-based method is flexible in how a stroke is to be abstracted and how to trade-off recognition at the whole sketch level therein. We demonstrate the comparison in the bottom row of Fig. 4. Apart from the 9-class setting from [3] that specifically choose certain classes with visual semantics biased to their analysis (_e.g_., round-shaped silhouette), [3] mostly fails under more open setting, with recognition accuracy plummeting from 91.8% to 62.4% in 30-class setting and complete reconstruction failure for less regularised sketch samples (_e.g_., shoe, star). Finally, with learned stroke primitives, we can now try to conduct shape, rather than stroke inversion explainability task by modifying Eq. 1 to optimise \(sh_{1},sh_{2},...,sh_{n}\) instead. After each gradient descent, we replace the updated shape embeddings with their closest primitives and use them as initialisation for the next step. Examples in Fig \(5\) show that shape inversion hardly delivers any explainable outcome and implicitly justifies our location inversion choice. ## 5 Discussion Explaining dataset bias with SLIIn our transfer explainability setting, we showed that by relocating the strokes and in some cases removing the strokes from the canvas (mov Figure 4: **Analysis on shape embedding.** Top: t-SNE visualisation on 100 stroke primitives across 30 sketch categories. Strokes with similar semantics are grouped together regardless of the original categories sourced from. 
Bottom: we compare our learned stroke primitives with [3], where 7 stroke primitives are heuristically pre-defined and their efficacy to reconstruct a sketch (_i.e_., replace any stroke with a primitive) is evaluated on a carefully curated 9-class setting. The table shows the method largely fails when extending the evaluation to a more open-world setting of 30 classes. Ours can not only deal with less regularised sketches from seen classes (_e.g_., star), but also generalises well to unseen cases. Figure 5: **Shape, not location, Inversion.** With automatically generated stroke primitives, we can now proceed inversion tasks on stroke shapes, just like how we do for locations – updates on high-dimensional shape embedding can be now visualised to changes of shape primitives if that update becomes significant enough. We however fail to identify explainable factors in such inversion. ing them out of the canvas bounding box) we can transfer a sketch from category A to category B. Here, we conduct an additional experiment. We sample \(1000\) sketches for each of the \(30\) training categories and apply a transfer task for each pair of sketches. In the top part of Fig. 6, we visualise as a heat map 3 the average recognition confidence values to belong to the target category of sketches transferred from one category to another (The first row indicates that when the target category is [bread], the average confidence of the samples of 30 categories is the highest, and so forth). We find that for almost all sketch categories the average confidence is high for a transfer to a sketch of [bread]. Then, we naturally ask the question of how this behaviour can be explained. We start by looking at the example of the sketches from the [bread] category. In Fig. 6 bottom, we show sketch samples from the QuickDraw dataset for bread sketches4, we can see that many look like something else, e.g. a [shirt]. Our SLI task allowed us to find a category for which sketches are ambiguous with respect to an assigned category. The next category with high average confidence of the transfer task, [baseball_bat], also contains many ambiguous sketches, for example, resembling a [knife]. We also show the [eye] sketches, which we find to be the category hardest to transfer to. We can see that all sketches do look like eyes. Therefore, we can see how our SLI task can help to identify categories for which humans struggle to produce easily recognisable sketches. Such dataset bias needs to be taken into account when training deep models. To conclude, this pilot study provides further insights into how SLI contributes towards explainability. Footnote 3: The full heat map can be found in the supplementary. **Limitation.** SLI is based on gradient descent and therefore inherits its limitations: SLI can be susceptible to local optima by oscillating around stroke location and not progressing further. We exemplify this in Fig. 7 where we use three circles to explain the sun concept. The expectation is then that two circles will be driven away off the canvas and one circle left. In practice, however, one circle is driven away and two circles are trapped in a tug-of-war. Solutions to alleviate this issue can be inspired by the optimisation literature, _e.g._, look ahead optimiser [91] is designed to break the optimisation deadlock by maintaining two sets of fast and slow weights. ## 6 Conclusion Sketches form a great data modality for explainability research because of their inherent "human-centred" nature. 
We started our journey by first identifying strokes as the basis for explanation. We then introduced SketchXAINet to encode the three innate properties of sketch strokes: shape, location, and order. Leveraging this encoder, we propose the first sketch-specific explainability task, that of stroke location inversion (SLI). Compared to your typical static explanations (_e.g._, saliency map), SLI is a dynamic process that explains the credibility of a sketch model by examining its ability to relocate randomly reshuffled strokes to reconstruct a sketch given a category. We attest to the efficacy of SLI with extensive analysis and contribute a new SoTA sketch recognition model as a by-product. Last but not least, we repeat that this is only the very first stab, yet at what we believe to be a very important and interesting area for XAI. Figure 6: **SLI exposes dataset bias.** Top: we apply SLI on transfer tasks between every two categories out of a total of 30 and observe all sketch samples regardless of the origin can be successfully transferred to [bread] (left). To confirm, we exclude [bread] and replace it with a new category [bus] and this time all sketches transfer to [baseball_bat]. Bottom: we showcase some samples of three QuickDraw categories, [bread], [baseball_bat], [eye], which yields an explanation to the said phenomenon. More details in text. Figure 7: **Limitation.** SLI relies on gradient descent and thus inherits its weakness. Here we demonstrate with a simple sun transfer task how optimisation is trapped in local optima.
2304.00891
Online Algorithms for Hierarchical Inference in Deep Learning applications at the Edge
We consider a resource-constrained Edge Device (ED), such as an IoT sensor or a microcontroller unit, embedded with a small-size ML model (S-ML) for a generic classification application and an Edge Server (ES) that hosts a large-size ML model (L-ML). Since the inference accuracy of S-ML is lower than that of the L-ML, offloading all the data samples to the ES results in high inference accuracy, but it defeats the purpose of embedding S-ML on the ED and deprives the benefits of reduced latency, bandwidth savings, and energy efficiency of doing local inference. In order to get the best out of both worlds, i.e., the benefits of doing inference on the ED and the benefits of doing inference on ES, we explore the idea of Hierarchical Inference (HI), wherein S-ML inference is only accepted when it is correct, otherwise the data sample is offloaded for L-ML inference. However, the ideal implementation of HI is infeasible as the correctness of the S-ML inference is not known to the ED. We propose an online meta-learning framework that the ED can use to predict the correctness of the S-ML inference. In particular, we propose to use the maximum softmax value output by S-ML for a data sample and decide whether to offload it or not. The resulting online learning problem turns out to be a Prediction with Expert Advice (PEA) problem with continuous expert space. We propose two different algorithms and prove sublinear regret bounds for them without any assumption on the smoothness of the loss function. We evaluate and benchmark the performance of the proposed algorithms for image classification application using four datasets, namely, Imagenette and Imagewoof, MNIST, and CIFAR-10.
Vishnu Narayanan Moothedath, Jaya Prakash Champati, James Gross
2023-04-03T11:26:56Z
http://arxiv.org/abs/2304.00891v2
# Online Algorithms for Hierarchical Inference in Deep Learning applications at the Edge ###### Abstract. We consider a resource-constrained Edge Device (ED) embedded with a small-size ML model (S-ML) for a generic classification application, and an Edge Server (ES) that hosts a large-size ML model (L-ML). Since the inference accuracy of S-ML is lower than that of the L-ML, offloading all the data samples to the ES results in high inference accuracy, but it defeats the purpose of embedding S-ML on the ED and deprives the benefits of reduced latency, bandwidth savings, and energy efficiency of doing local inference. To get the best out of both worlds, i.e., the benefits of doing inference on the ED and the benefits of doing inference on ES, we explore the idea of Hierarchical Inference (HI), wherein S-ML inference is only accepted when it is correct, otherwise the data sample is offloaded for L-ML inference. However, the ideal implementation of HI is infeasible as the correctness of the S-ML inference is not known to the ED. We thus propose an online meta-learning framework to predict the correctness of the S-ML inference. The resulting online learning problem turns out to be a Prediction with Expert Advice (PEA) problem with continuous expert space. We consider the full feedback scenario, where the ED receives feedback on the correctness of the S-ML once it accepts the inference, and the no-local feedback scenario, where the ED does not receive the ground truth for the classification, and propose the HIL-F and HIL-N algorithms and prove a regret bound that is sublinear with the number of data samples. We evaluate and benchmark the performance of the proposed algorithms for image classification applications using four datasets, namely, Imagenette and Imagewoof (Mikolov et al., 2017), MNIST (Krizhevsky et al., 2014), and CIFAR-10 (Krizhevsky et al., 2014). + Footnote †: journal: Information Systems ## 1. Introduction Emerging applications in smart homes, smart cities, intelligent manufacturing, autonomous internet of vehicles, etc., are increasingly using Deep Learning (DL) inference. Collecting data from the Edge Devices (EDs) and performing remote inference in the cloud results in bandwidth, energy, and latency costs as well as reliability (due to wireless transmissions) and privacy concerns. Therefore, performing local inference using embedded DL models, which we refer to as S-ML (Small-ML) models, on EDs has received significant research interest in the recent past (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). These S-ML models range from DL models that are optimized for moderately powerful EDs such as mobile phones to tinyML DL models that even fit on microcontroller units. However, S-ML inference accuracy reduces with the model size and can be potentially much smaller than the inference accuracy of large-size state-of-the-art DL models, which we refer to as L-ML (Large-ML) models, that can be deployed on Edge Servers (ESs). For example, for an image classification application, an S-ML can be a quantized _MobileNet_(Krizhevsky et al., 2014) with a width multiplier of 0.25, that has a memory size of 0.5 MB and an inference accuracy of 39.5% for classifying ImageNet dataset (Krizhevsky et al., 2014), whereas CoCa (Krizhevsky et al., 2014), an L-ML, has an accuracy of 91% and a memory size in the order of GBs. 
One may choose to achieve the accuracy of L-ML model while utilizing the computational capabilities of EDs using the well-known DNN partitioning techniques, e.g., see (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). Note that such partitioning techniques require processing time and energy profiling of the layers on EDs as well as on ESs to decide the optimal partition points. Early Exit is yet another technique that introduces side branches in between the layers of DL models to trade-off accuracy with latency [34]. In this work, we explore the novel idea of _Hierarchical Inference_ (HI) that complements the above techniques for performing DL inference at the edge. Consider that an ED is embedded with an S-ML and an L-ML1 is deployed on an ES (to which the ED enlists to get help for doing inference). In HI, we propose that an ED first observes the S-ML inference on each data sample and offloads it to L-ML only if S-ML inference is incorrect. Footnote 1: Both S-ML and L-ML are trained ML models deployed for providing inference and HI does not modify these models. Clearly, the ambition of HI is to maximize the use of S-ML in order to reap the benefits of reduced latency, bandwidth savings, and energy efficiency while not losing inference accuracy by offloading strategically to L-ML, thus achieving the best benefits out of the two worlds: EDs and ESs. However, the central challenge is that the incorrect inferences are inherently unknown at the ED and thus a decision under uncertainty needs to be taken. In this work, we focus on the pervasive _classification applications_ and address the above sequential decision problem by proposing a novel HI meta-learning framework, shown in Fig. 2, that facilitates the ED to decide if a current S-ML inference for a given sample should be accepted or the sample to be offloaded. In our framework, for each sample, the HI learning algorithm observes \(p\), the maximum probability value in the probability distribution over classes output by the S-ML. It then decides to offload, receiving a fixed cost \(0\leq\beta<1\), or not to offload, receiving a cost \(0\) if the inference is correct, and a cost \(1\), otherwise. We will show later that this cost structure facilitates HI by maximizing the offloading of samples with incorrect inference and not offloading the samples with correct inference. To simplify the analysis, we assume that S-ML accepts the inference of L-ML as the ground truth implying that the top-1 accuracy of L-ML is \(100\%\). The justification for this assumption is that the ED cannot know the ground truth when L-ML provides incorrect inference and thus by accepting the L-ML inference the ED tries to achieve the top-1 accuracy of L-ML. Intuitively, if the maximum probability \(p\) is high, then accepting S-ML inference will likely result in cost \(0\) and thus, it is beneficial to do so. However, if \(p\) is low, the cost will likely be equal to \(1\), and thus offloading with cost \(\beta\) is beneficial. This can be seen from Fig. 2, where we present the number of misclassified and correctly classified images of the dataset _Imagenette_[18] by the classifier _MobileNet_[17]. Observe that, for \(p\geq 0.45\) (approximately) there are more images correctly classified. Thus offloading for images with \(p<0.45\) might look like a reasonable policy, where the images that statistically tend to be correctly classified are processed locally and those that are not are offloaded. 
In this work, we design learning algorithms that learn the best threshold \(\theta\in[0,1)\) with performance guarantees after assigning quantifiable cost functions. Using these algorithms in each step, we decide to offload if \(p<\theta\) and not offload, otherwise. The above problem falls in the domain of Prediction with Expert Advice (PEA) (Bordes and Rafter, 2005). However, we have a continuous expert space (or action space) for \(\theta\) and therefore, as explained later in Section 4, the standard Exponentially Weighted average Forecaster (EWF) does not have a regret bound for our problem. Another challenge is that, in the case of accepting S-ML inference, the local cost is not observable as the ED will not know if the inference is correct or not; we call this the _no-local feedback_ scenario. To tackle this challenge, we first design an algorithm for the important scenario where local feedback is available - for example, a human user providing this feedback. We refer to this as the _full feedback_ scenario. We then extend the algorithm to the no-local feedback scenario. A novel aspect of our algorithms is that they use the structural properties of the HI learning problem at hand to find a set of non-uniform intervals obtained through dynamic, non-uniform discretization, and use these intervals as experts, thereby transforming the problem from a continuous to a discrete domain without introducing any error due to this discretization. To the best of our knowledge, our work is the first attempt to extend the concept of continuous experts to the no-local feedback scenario and find regret bounds for the same. We summarize our main contributions below. * We propose a novel meta-learning framework for HI that decides whether an arriving data sample should be offloaded or not based on the S-ML output. For the full feedback scenario, we prove that \(O\left(\sqrt{n\log n}\right)\) is the lower bound for the regret that can be achieved by any randomized algorithm for a general loss function, where \(n\) is the number of data samples. * We propose the HI Learning with Full feedback (HIL-F) algorithm that uses exponential weighting and dynamic non-uniform discretization. We prove that HIL-F has a \(\sqrt{n\ln(1/\lambda_{\min})/2}\) regret bound, where \(\lambda_{\min}\) is the minimum difference between any two **distinct** \(p\) values among the \(n\) samples. * We propose the HI Learning with no-local feedback (HIL-N) algorithm, which, on top of HIL-F, uses an unbiased estimate of the loss function. We prove a regret bound \(O\left(n^{2/3}\ln^{1/3}(1/\lambda_{\min})\right)\). We discuss the ways to approximate \(\lambda_{\min}\) and find the optimal values of the parameters used. * We show that the computational complexity of our algorithms in round \(t\) is \(O\left(\min(t,\frac{1}{\lambda_{\min}})\right)\). * We evaluate HIL-F and HIL-N on four datasets and compare them against four baseline policies - the optimal fixed-\(\theta\) policy, one that offloads all samples, one that does not offload any, and a hypothetical genie algorithm that knows the ground truth. This paper is organized as follows: In Section 2 we go through the related research and explain the novelty of our contributions. In Section 3, we describe the system model, followed by some background information and preliminary results in Section 4. Sections 5 and 6 detail HIL-F and HIL-N and derive their regret bounds, and Section 7 discusses their computational complexity. Finally, we show the numerical results in Section 8 and conclude in Section 9. ## 2. Related Work
**Inference Offloading:** Since the initial proposal of edge computing in (Srivastava et al., 2015), significant attention has been given to the computational offloading problem, wherein the ED needs to decide which jobs to offload and how to offload them to an ES. The majority of works in this field studied offloading generic computation jobs, e.g., see (Srivastava et al., 2015; Krizhevsky et al., 2012; Krizhevsky et al., 2012). In contrast, due to the growing interest in edge intelligence systems, recent works studied offloading data samples for ML inference from both theoretical [12; 26; 27] and practical [35; 36] perspectives. In [27], offloading between a mobile device and a cloud is considered. The authors account for the time-varying communication times by using model selection at the cloud and by allowing the duplication of processing the job at the mobile device. In [12], the authors considered a scalable-size ML model on the ED and studied the offloading decision to maximize the total inference accuracy subject to a time constraint. All the above works focus on dividing the load of the inference and do not consider HI or online learning. Our work is in part motivated by [26], where the authors assumed that the energy consumption for local inference is less than the transmission energy of a sample and studied the offloading decision based on a confidence metric computed from the probability distribution over the classes. However, in contrast to our work, the authors do not consider the meta-learning framework and compute a threshold for the confidence metric based on the energy constraint at the ED. **On-Device Inference:** Several research works have focused on designing S-ML models to be embedded on EDs that range from mobile phones to microcontroller units. While optimization techniques such as parameter pruning and sharing [16], weight quantization [28], and low-rank factorization [11] were used to design the S-ML models, techniques such as EarlyExit were used to reduce the latency of inference. For example, [38] studied the use of DNNs with early exits [34] on the edge device, while [33] studied the best DNN selection on the edge device for a given data sample to improve inference accuracy and reduce latency. These works do not consider inference offloading and in turn HI. **DNN Partitioning:** Noting that mobile devices such as smartphones are embedded with increasingly powerful processors and that the data that needs to be transmitted between intermediate layers of a DNN is much smaller than the input data in several applications, the authors in [21] studied partitioning a DNN between a mobile device and the cloud to reduce the mobile energy consumption and latency. Following this idea, significant research work has been done that includes DNN partitioning for more general DNN structures under different network settings [19; 25] and using heterogeneous EDs [20], among others. In contrast to DNN partitioning, under HI, the ED and ES may import S-ML and L-ML algorithms from the pool of trained ML algorithms available in open-source libraries such as Keras, TFLite, and PyTorch. Furthermore, HI does not even require that S-ML and L-ML be DL models; they can even be signal processing algorithms. On the one hand, there is significant research by the tinyML community on building small-size DNNs that can be embedded on microcontrollers and on designing efficient embedded hardware accelerators [29].
On the other hand, abundant state-of-the-art DNNs are available at edge servers that provide high inference accuracy. Our work is timely as HI will equip ML at the edge to reap the benefits of the above two research thrusts. To the best of our knowledge, we are the first to propose an online meta-learning framework for HI. **Online Learning:** The problem of minimizing the regret, when the decision is chosen from a finite expert space, falls under the well-known Prediction with Expert Advice (PEA) or Multi-Armed Bandit (MAB) problems [3; 6]. We will explain more about these problems in Section 4. We will see that we cannot directly use these formulations due to the uncountable nature of the expert space in our problem, which we elaborate on in Section 3. We will also explain why some of the existing literature on continuous extensions of PEA and MAB is not suited to, or is suboptimal for, our specific problem. ## 3. System Model and Problem Statement We consider the system shown in Fig. 1, with an ED enlisting the service of an ES for data classification applications. For the EDs, we focus on resource-constrained devices such as IoT sensors or microcontroller units. The ED is embedded with an S-ML which provides lower _inference accuracy_, i.e., the top-1 accuracy, whereas the ES runs an L-ML with higher accuracy. For example, for an image classification application, an S-ML can be a quantized MobileNet (Krizhevsky et al., 2017) with a width multiplier of 0.25, which has a memory size of 0.5 MB and an inference accuracy of 39.5% for classifying the ImageNet dataset (Krizhevsky et al., 2017), whereas CoCa (Zhu et al., 2018), an L-ML, has an accuracy of 91% and a memory size in the order of GBs. Note that the only assumption we make on the algorithms is that the L-ML is significantly more accurate and costlier than the S-ML. Thus, we do not specify what exactly the S-ML or the L-ML algorithms need to be; they can be any classification algorithms, including regression algorithms, SVMs, random forests, and DNNs. Given an arbitrary sequence of \(n\) data samples that arrive over time at the ED, the question we study is: For each sample, should the ED offload it for inference by the L-ML or accept the inference of the S-ML? We approach this as an online sequential decision problem. We assume that each sample first goes through local inference and the decision is made according to the inference results and parameters. Note that this is an essential assumption to facilitate HI; otherwise, the ED cannot infer anything about the sample. Also, as argued earlier in Section 1, we assume that all the offloaded images will be correctly classified by the L-ML. This assumption is not necessary for the proposed algorithms, but since the ED cannot possibly know if the inference provided by the ES is correct or wrong, we use the assumption to simplify the formulation. Let \(t\) denote the index of a data sample (e.g., an image), or simply sample, that arrives \(t\)-th in the sequence. Let \(p_{t}\) denote the maximum probability in the probability distribution over the classes output by S-ML for the sample \(t\). 2 Note that the class corresponding to \(p_{t}\) is declared as the true class for computing the top-1 accuracy. Intuitively speaking, \(p_{t}\) is the confidence level of S-ML for classifying sample \(t\) and it is a natural candidate to use for HI.
Let the binary random variable \(Y_{t}\) denote the cost representing the ground truth, equal to 0 if the class corresponding to \(p_{t}\) is the correct class and equal to 1, otherwise. Clearly, given an S-ML model, \(Y_{t}\) depends on \(p_{t}\) and the sample. Let \(\beta\in[0,1)\) denote the cost incurred for offloading the image for inference at the ES. This cost, for example, may include the costs for the transmission energy and the idle energy spent by the transceiver until the reception of the inference. Note that, if \(\beta\geq 1\), then accepting the inference of S-ML, which incurs a cost of at most 1, for all samples will minimize the total cost. Footnote 2: Note that, in a classification application, a classifier typically outputs a probability distribution over the classes. Our framework allows other metrics, besides \(p_{t}\), that are computed based on the probability distribution over classes. If the ED offloads sample \(t\), it incurs cost \(\beta\), and if it accepts the S-ML inference, it incurs a cost \(Y_{t}\). In the latter case, the ED may not know \(Y_{t}\), in general, and we refer to this as the _no-local feedback_ scenario. If \(Y_{t}\) is revealed once the ED accepts the S-ML inference, we refer to this as the _full feedback_ scenario. In either scenario, if the ED offloads and receives the inference from the ES, it can use that inference to infer \(Y_{t}\). As explained in Section 1, in round \(t\), we use the following decision rule \(\mathfrak{D}_{t}\) based on the choice of threshold \(\theta_{t}\in[0,1]\): \[\mathfrak{D}_{t}=\begin{cases}\text{Do not offload}&\text{ if }p_{t}\geq\theta_{t},\\ \text{Offload}&\text{ if }p_{t}<\theta_{t}.\end{cases} \tag{1}\] Therefore, given \(p_{t}\), choosing threshold \(\theta_{t}\) results in a cost/loss \(l(\theta_{t},Y_{t})\) at step \(t\), given by \[l(\theta_{t},Y_{t})=\begin{cases}Y_{t}&p_{t}\geq\theta_{t},\\ \beta&p_{t}<\theta_{t}.\end{cases} \tag{2}\] Note that, we omit the variable \(p_{t}\) from the loss function \(l(\theta_{t},Y_{t})\) for notational simplicity.
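For concreteness, the decision rule (1) and the loss (2) translate into a few lines of code. The following is an illustrative Python sketch; the function names `decide` and `loss` are ours and not part of any algorithm in the paper:

```python
def decide(p_t: float, theta_t: float) -> bool:
    """Decision rule (1): return True to offload (p_t < theta_t),
    False to accept the S-ML inference."""
    return p_t < theta_t

def loss(theta_t: float, p_t: float, y_t: float, beta: float) -> float:
    """Loss (2): beta if the sample is offloaded, else the ground-truth
    cost Y_t (0 if the S-ML inference was correct, 1 otherwise)."""
    return beta if p_t < theta_t else y_t
```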
We focus on designing online algorithms that learn the best threshold, which balances the conflicting objectives of reducing the number of images offloaded and increasing the inference accuracy, thereby improving the responsiveness and energy efficiency of the system. We use boldface notation to denote vectors. Let \(\mathbf{Y}_{t}=\{Y_{\tau}\}\), \(\mathbf{\theta}_{t}=\{\theta_{\tau}\}\), and \(\mathbf{p}_{t}=\{p_{\tau}\}\), \(\tau=1,2,\dots,t\leq n\). Further, let \(Y\coloneqq\mathbf{Y}_{n}\), \(\mathbf{\theta}\coloneqq\mathbf{\theta}_{n}\), and \(\mathbf{p}\coloneqq\mathbf{p}_{n}\) for convenience. Finally, we define \(\lambda_{\min}\) as the minimum difference between any two distinct probability values in the sequence \(\mathbf{p}_{n}\). Define the cumulative cost \(L(\mathbf{\theta},Y)\) as \(L(\mathbf{\theta},Y)=\sum_{t=1}^{n}l(\theta_{t},Y_{t})\). Also let \(\mathbf{\theta}^{*}=\{\theta^{*},\theta^{*},\dots\}\), a vector of size \(n\) with all values \(\theta^{*}\), denote an optimal fixed-\(\theta\) policy and \(L(\mathbf{\theta}^{*},Y)\) denote the corresponding cost. Then, \[L(\mathbf{\theta}^{*},Y)=\sum_{t=1}^{n}l(\theta^{*},Y_{t}),\] where \(\theta^{*}\) need not necessarily be unique and is given by \[\theta^{*}=\operatorname*{arg\,min}_{\theta\in[0,1]}\sum_{t=1}^{n}l(\theta,Y_{t}).\] Given a sequence \(Y\), we now define the regret under an arbitrary algorithm \(\pi\) as \[R_{n}=\mathbb{E}_{\pi}\left[L(\mathbf{\theta},Y)\right]-L(\mathbf{\theta}^{*},Y), \tag{3}\] where the expectation \(\mathbb{E}_{\pi}[\cdot]\) is with respect to the distribution induced by \(\pi\). In this work, we are interested in finding algorithms for both the full feedback and no-local feedback scenarios that have a sublinear upper bound (i.e., a bound that, when divided by \(n\), goes to \(0\) as \(n\) goes to \(\infty\)) on \(\mathbb{E}_{Y}[R_{n}]\) - the expected regret over the distribution of all possible sequences \(Y\). We call this bound an expected regret bound and note that if we can find a regret bound that is applicable for any given sequence \(Y\), the same bound is also applicable for the expected regret (or even the maximum regret) over all possible sequences of \(Y\). For this reason, and for the sake of simplicity, we will only carry out the analysis for a given \(Y\) in the upcoming analysis sections. However, later in the numerical section, we will show the results with the expected average regret \(\mathbb{E}_{Y}[\frac{1}{n}R_{n}]=\frac{1}{n}\mathbb{E}_{Y}[R_{n}]\) and the expected average cost \(\frac{1}{n}\mathbb{E}_{Y,\pi}[L(\mathbf{\theta},Y)]\). We take the average over the number of samples \(n\) to remove the dependency on the size of different datasets and normalize the maximum to \(1\), for easy comparison.
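Since the loss of a fixed threshold depends only on which samples satisfy \(p_{t}<\theta\), the optimal fixed-\(\theta\) policy can be computed exactly by testing one candidate threshold per distinct \(p\) value. The sketch below is our own illustration of this brute-force search (the same idea as the \(\mathbf{\theta}^{*}\) baseline used later in Section 8), not an algorithm from the paper:

```python
import numpy as np

def optimal_fixed_theta(p, y, beta):
    """Brute-force theta*: it suffices to test theta = 0, theta = 1, and
    each distinct p value, since the offload set {t : p_t < theta} only
    changes at those points."""
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    candidates = np.concatenate(([0.0, 1.0], np.unique(p)))
    best_theta, best_cost = 0.0, np.inf
    for theta in candidates:
        offload = p < theta
        cost = beta * offload.sum() + y[~offload].sum()
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return best_theta, best_cost
```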
Before going to the next section, we summarize the abbreviations and notations used in this paper in TABLE 1 below. \begin{table} \begin{tabular}{|l l|l l|l l|} \hline ED & edge device & HI & hierarchical inference & HIL-F & HIL algorithm: full feedback \\ ES & edge server & HIL & hierarchical inference learning & HIL-N & HIL algorithm: no-local feedback \\ S-ML & small-size ML & PEA & prediction with expert advice & EWF & exponentially weighted forecaster \\ L-ML & large-size ML & \(p_{[i]}\) & \(i^{\text{th}}\) smallest distinct value in the set of available probabilities \(\{p_{1},p_{2},\dots\}\) \\ \hline \end{tabular} \end{table} Table 1: Table of abbreviations. ## 4. Background and preliminary analysis **Learning Problems:** The HI learning problem falls into the category of PEA (Han et al., 2017) problems. In the standard PEA problem, \(N\) experts (or actions) are available for a predictor - known formally as a _forecaster_. When the forecaster chooses an expert, it receives a cost/reward corresponding to that expert. If the cost is only revealed for the chosen expert, then this setting is the MAB. In contrast to the standard PEA, we have an uncountable expert space where the expert \(\theta_{t}\) belongs to the continuous space \([0,1]\). Continuous action space is well studied in MAB settings, e.g., see (Bahdan et al., 2017; Chen et al., 2017; Chen et al., 2018), where the main technique used is to discretize the action space and bound the regret by assuming that the unknown loss function has smoothness properties such as being uniformly locally Lipschitz. However, the problem at hand does not assume any smoothness properties for the loss function. As discussed briefly in Section 1, one well-known forecaster for the standard PEA is the exponentially weighted average forecaster (EWF). For each expert, EWF assigns a weight that is based on the cost incurred for choosing this expert. For each prediction, EWF selects an expert with a probability computed based on these weights. It is known that for \(n\) predictions, EWF achieves a regret \(\sqrt{n\ln N/2}\). However, the continuous nature of the expert space renders EWF not directly applicable to the problem at hand, and we need an extension of EWF. Such an extension was considered in (Chen et al., 2017), and a regret bound for convex losses was obtained for continuous experts, conditioned on a hyperparameter \(\gamma>0\). Later, a particular \(\gamma\) was proposed to get the optimum regret bound of \(1+\sqrt{n\ln n/2}\). We, on the other hand, do not require any hyperparameter and, more importantly, do not assume any convexity for the loss function. In addition, (Chen et al., 2017) does not describe how to compute the integral required for computing the weights. Furthermore, the solution in (Chen et al., 2017) is only applicable to the full feedback setting of HIL-F, but not to HIL-N, for which ours is the first work to the best of our knowledge. One may discretize \([0,1]\) with a uniform interval length \(\Delta\) and use the standard EWF, where a straightforward substitution of the number of experts \(N=1/\Delta\) results in a regret bound \(\sqrt{n\ln(1/\Delta)/2}\). However, to not sacrifice accuracy due to this discretization, one has to take \(\Delta\) small enough such that no two probability realizations \(p_{t}\) fall within the same interval. This is to make sure that the cumulative loss function is constant within each interval, which will become clearer after Lemma 4.1. Thus, if \(\lambda_{\text{min}}\) is the minimum separation between any two distinct probabilities \(p_{t},1\leq t\leq n\), the best attainable regret bound of a standard EWF using uniform discretization is \(\sqrt{n\ln(1/\lambda_{\text{min}})/2}\) with \(N=1/\lambda_{\text{min}}\). We will soon see that these regret bounds are similar to what we get using our proposed algorithms, but the added complexity of a large number of experts from the first round onwards makes this approach suboptimal. In this paper, we start with the continuous experts and then use the structure of the problem to formulate it in a discrete domain. We propose a non-uniform discretization that retains the accuracy of a continuous expert while reducing the complexity to the theoretical minimum with at most \(n+1\) experts after the \(n^{\text{th}}\) round. Note that, due to the non-uniform discretization, the proposed HIL does not involve \(\Delta\), but instead involves \(\lambda_{\text{min}}\), where \(1/\lambda_{\text{min}}\) acts similarly to \(N\) in the regret bound. In Section 5, we provide simple methods to approximate \(\lambda_{\text{min}}\). **Preliminary Analysis:** In order to choose a good threshold \(\theta_{t}\) in round \(t\), we take a hint from the discrete PEA (Han et al., 2017), where a weight for an expert is computed using the exponential of the scaled cumulative losses incurred for potentially choosing that expert. We extend this idea and define the continuous weight function \(w_{t}(\theta)\) as follows: \[w_{t+1}(\theta)=e^{-\eta\sum_{r=1}^{t}l(\theta,Y_{r})}=e^{-\eta\sum_{r=1}^{t-1}l(\theta,Y_{r})}e^{-\eta l(\theta,Y_{t})}=w_{t}(\theta)e^{-\eta l(\theta,Y_{t})}. \tag{4}\] \[W_{t+1}=\int_{0}^{1}w_{t+1}(\theta)\,\mathrm{d}\theta. \tag{5}\] Here, \(\eta>0\) is the learning rate. At each round \(t\), the normalized weights give the probability distribution for choosing the next threshold \(\theta_{t+1}\), and thus they can be used to learn the system.
However, it comes with two challenges - (i) finding a (set of) thresholds that follow this distribution, and (ii) computing the integral. Although these challenges can be solved using direct numerical methods, they incur a large computational cost. For instance, the inverse transform method can generate a random sample of the threshold with this distribution. Instead, we use the facts from (1) and (2) that our final decision (to offload or not) depends solely on the relative position of \(\theta_{t}\) and \(p_{t}\), but not directly on \(\theta_{t}\). Thus, using the distribution given by the normalized weights, we define \(q_{t}\) as the probability of _not_ offloading, i.e., the probability that \(\theta_{t}\) is less than \(p_{t}\), where \[q_{t}=\frac{\int_{0}^{p_{t}}w_{t}(x)\,\mathrm{d}x}{W_{t}}. \tag{6}\] Thus, the decision \(\mathfrak{D}_{t}\) from (1) boils down to _do not offload_ and _offload_ with probabilities \(q_{t}\) and \((1-q_{t})\), respectively.
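Because the weight function turns out to be piecewise constant on a set of intervals (formalized in Lemma 4.1 below), the integral in (6) reduces to a weighted sum of interval lengths. A minimal sketch, assuming the boundaries are kept sorted from \(0\) to \(1\) and that \(0<p_{t}\leq 1\):

```python
import numpy as np

def not_offload_prob(boundaries, weights, p_t):
    """q_t from (6) for a weight function that is constant on each interval
    B_i = (boundaries[i-1], boundaries[i]] with value weights[i-1]."""
    boundaries = np.asarray(boundaries, dtype=float)
    weights = np.asarray(weights, dtype=float)
    lengths = np.diff(boundaries)
    total = np.dot(weights, lengths)                        # W_t
    j = np.searchsorted(boundaries, p_t, side="left") - 1   # interval holding p_t
    partial = np.dot(weights[:j], lengths[:j]) + weights[j] * (p_t - boundaries[j])
    return partial / total

# with uniform weights, q_t should equal p_t:
print(not_offload_prob([0.0, 0.3, 1.0], [1.0, 1.0], 0.6))   # -> 0.6
```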
With the first challenge mitigated, we look for efficient methods to compute the integral in (6). Note that the cumulative loss function \(L(\mathbf{\theta}_{t},\mathbf{Y}_{t})=\sum_{r=1}^{t}l(\theta_{r},Y_{r})\) can take potentially \(3^{t}\) different values (because of the \(0\), \(1\), or \(\beta\) cost in each step), without any necessary pattern, and hence direct analytical integration is not possible. To address this issue, we leverage the result of the following lemma (Lemma 4.1) and convert the integral into a summation by discretizing the domain \([0,1]\) of the integral into a finite set of non-uniform intervals. The non-uniform discretization suggested by this lemma is incremental, and a new interval is (potentially) added in each round. Let us look at the structure of the weight function after \(n\) rounds. Let \(p_{[0]}=0\) and \(p_{[N]}=1\), where \(N\) is the number of intervals formed in \([0,1]\) by the sequence of probabilities \(\mathbf{p}_{n}\). Here, we have \(N\leq n+1\) because repeated probabilities do not result in the addition of a new interval. We denote these intervals by \(B_{i}=(p_{[i-1]},p_{[i]}],1\leq i\leq N\), where \(p_{[i]}\) denotes the \(i\)-th smallest distinct probability in \(\mathbf{p}_{n}\). Let \(m_{i},1\leq i\leq N\), be the number of times \(p_{[i]}\) is repeated in \(\mathbf{p}_{n}\). For instance, \(N=n+1\) and \(p_{[i]}=p_{i}\) iff \(m_{i}=1\ \forall i\). Finally, let \(Y_{[i]},i=1,2,\ldots,n\), be the \(i\)-th element in the set of local inference costs ordered according to the increasing values of the corresponding probability \(p_{i}\). Note that \(i\) in \(Y_{[i]}\) goes up to \(n\) while \(i\) in \(p_{[i]}\) goes only up to \(N\), because any two local inference costs \(Y_{j}\) and \(Y_{k}\) associated with repeated probability values \(p_{j}=p_{k}\) are two different but i.i.d. random variables. **Lemma 4.1**: _The function \(L(\mathbf{\theta},Y)\) is a piece-wise constant function with a constant value in each interval \(B_{i}\). Furthermore, if there are no repetitions in the sequence \(\mathbf{p}_{n}\), then_ \[L(\mathbf{\theta}^{*},Y)=\min_{1\leq i\leq n+1}\left\{(i-1)\beta+\sum_{k=i}^{n}Y_{[k]}\right\}.\] Proof.: By definition, \(p_{t}\) falls on the boundary of \(B_{i}\), \(\forall t\), for some \(i\). Hence, \(B_{i}\) is a subset of either \((0,p_{t}]\) or \((p_{t},1]\). \[\Rightarrow l(\theta,Y_{t})=\begin{cases}Y_{t},&\forall\,\theta\in B_{i}\subseteq(0,p_{t}],\\ \beta,&\forall\,\theta\in B_{i}\subseteq(p_{t},1].\end{cases} \tag{7}\] Thus, \(\forall\,i\leq N,\ l(\theta,Y_{t})\coloneqq l(B_{i},Y_{t}),\ \forall\theta\in B_{i}\). That is, the cost for all \(\theta\) within an interval \(B_{i}\) takes a constant value \(l(B_{i},Y_{t})\), and this value depends on whether \(p_{[i]}\) (the upper boundary of \(B_{i}\)) is greater than \(p_{t}\) or not. To prove the second part, note that \(L(\mathbf{\theta},Y)=\sum_{t=1}^{n}l(B_{i},Y_{t})\) for \(\theta\in B_{i}\). \[\Rightarrow L(\mathbf{\theta}^{*},Y)=\min_{\theta\in[0,1]}L(\mathbf{\theta},Y)=\min_{1\leq i\leq N}\sum_{t=1}^{n}l(B_{i},Y_{t}),\] \[\sum_{t=1}^{n}l(B_{i},Y_{t})=\sum_{t=1}^{n}\left[\beta\,\mathbb{1}\,(p_{t}<p_{[i]})+Y_{t}\,\mathbb{1}\,(p_{t}\geq p_{[i]})\right]=\beta\sum_{j=1}^{i-1}m_{j}+\sum_{k=1+\sum_{j=1}^{i-1}m_{j}}^{n}Y_{[k]}. \tag{8}\] When there are no repetitions in \(\mathbf{p}_{n}\), we have \(m_{j}=1\) for all \(j\) and \(N=n+1\), so (8) reduces to \((i-1)\beta+\sum_{k=i}^{n}Y_{[k]}\), which completes the proof.
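The closed form in Lemma 4.1 is easy to sanity-check numerically. The following sketch, using hypothetical random data, compares it against a direct search over one candidate threshold per interval:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 7, 0.4
p = np.sort(rng.random(n))        # distinct p values; sorting makes Y_[k] = y[k-1]
y = rng.integers(0, 2, size=n).astype(float)

# Lemma 4.1 (no repetitions): L(theta*, Y) = min_i {(i-1)*beta + sum_{k=i}^n Y_[k]}
lemma_val = min((i - 1) * beta + y[i - 1:].sum() for i in range(1, n + 2))

# direct search: one candidate threshold per interval B_i
candidates = np.concatenate(([0.0, 1.0], p))
direct = min(beta * (p < th).sum() + y[p >= th].sum() for th in candidates)
print(np.isclose(lemma_val, direct))   # expected: True
```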
## 5. Full Feedback In this section, we consider the full-feedback scenario, where the algorithm receives the ground truth \(Y_{t}\) for all the samples, including those that are not offloaded by accepting the S-ML inference. For this scenario, we present the HIL-F algorithm in Algorithm 1. Some algorithmic rules for the parameter updates are given later in Section 7. As explained in the previous section, given \(p_{t}\), we compute \(q_{t}\), the probability of not offloading. Once the decision is made using \(q_{t}\), the costs are received and the weights are updated using (4) and (5). For simplicity, we denote the expected cost received by HIL-F in round \(t\) by \(\bar{l}(\theta_{t},Y_{t})\), given by \[\bar{l}(\theta_{t},Y_{t})=\mathbb{E}_{Q_{t}}[l(\theta_{t},Y_{t})]=Y_{t}q_{t}+\beta(1-q_{t}),\] where the expectation is with respect to the probability distribution dictated by \(q_{t}\). Also, let \(\bar{L}(\mathbf{\theta},Y)=\sum_{t=1}^{n}\bar{l}(\theta_{t},Y_{t})\) denote the total expected cost after \(n\) rounds. In the theorem below, we provide a regret bound for HIL-F. **Theorem 5.1**: _For \(\eta>0\), HIL-F achieves the following regret bound:_ \[R_{n}=\bar{L}(\mathbf{\theta},Y)-L(\mathbf{\theta}^{*},Y)\leq\frac{1}{\eta}\ln\frac{1}{\lambda_{\min}}+\frac{n\eta}{8}.\] Proof.: Recall from Lemma 4.1 that \(p_{[i]}\), \(B_{i}=(p_{[i-1]},p_{[i]}]\), and \(l(B_{i},Y_{t})\) are the \(i\)-th smallest probability, the intervals formed by them, and the constant loss function within that interval at round \(t\), respectively. Also, \(\lambda_{i}=p_{[i]}-p_{[i-1]}\) and \(N\leq n+1\) correspond to the length of interval \(i\) and the total number of intervals, respectively. Finally, \(\lambda_{\min}=\min_{1\leq i\leq N}\lambda_{i}\). Substituting \(t=0\) in (5), we have \(W_{1}=1\). Thus, taking the logarithm of \(\frac{W_{n+1}}{W_{1}}\) gives \[\ln\frac{W_{n+1}}{W_{1}}=\ln\int_{0}^{1}e^{-\eta\sum_{t=1}^{n}l(x,Y_{t})}\,\mathrm{d}x=\ln\sum_{i=1}^{N}\lambda_{i}e^{-\eta\sum_{t=1}^{n}l(B_{i},Y_{t})}\geq\ln\max_{1\leq i\leq N}\left(\lambda_{\min}e^{-\eta\sum_{t=1}^{n}l(B_{i},Y_{t})}\right)=-\eta\min_{1\leq i\leq N}\sum_{t=1}^{n}l(B_{i},Y_{t})-\ln\frac{1}{\lambda_{\min}}=-\eta\min_{\theta\in[0,1]}\sum_{t=1}^{n}l(\theta,Y_{t})-\ln\frac{1}{\lambda_{\min}}. \tag{9}\] Now, we bound the ratio \(\frac{W_{t+1}}{W_{t}}\): \[\ln\left(\frac{W_{t+1}}{W_{t}}\right)=\ln\left(\frac{\int_{0}^{1}w_{t+1}(x)\,\mathrm{d}x}{W_{t}}\right)=\ln\left(\int_{0}^{1}\frac{w_{t}(x)}{W_{t}}e^{-\eta l(x,Y_{t})}\,\mathrm{d}x\right).\] By using Hoeffding's lemma in the above equation, we get \[\ln\left(\frac{W_{t+1}}{W_{t}}\right)\leq-\eta\int_{0}^{1}\frac{w_{t}(x)}{W_{t}}l(x,Y_{t})\,\mathrm{d}x+\frac{\eta^{2}}{8}=-\eta\int_{0}^{p_{t}}\frac{w_{t}(x)}{W_{t}}l(x,Y_{t})\,\mathrm{d}x-\eta\int_{p_{t}}^{1}\frac{w_{t}(x)}{W_{t}}l(x,Y_{t})\,\mathrm{d}x+\frac{\eta^{2}}{8}=-\eta\left(Y_{t}\int_{0}^{p_{t}}\frac{w_{t}(x)}{W_{t}}\,\mathrm{d}x+\beta\int_{p_{t}}^{1}\frac{w_{t}(x)}{W_{t}}\,\mathrm{d}x\right)+\frac{\eta^{2}}{8}.\] In the above step, we used (2). Now using (6) to replace the integrals, we get \[\ln\left(\frac{W_{t+1}}{W_{t}}\right)\leq-\eta\left(Y_{t}q_{t}+\beta(1-q_{t})\right)+\frac{\eta^{2}}{8}=-\eta\bar{l}(\theta_{t},Y_{t})+\frac{\eta^{2}}{8}. \tag{10}\] Extending this expression telescopically, we get \[\ln\left(\frac{W_{n+1}}{W_{1}}\right)=\ln\left(\prod_{t=1}^{n}\frac{W_{t+1}}{W_{t}}\right)=\sum_{t=1}^{n}\ln\frac{W_{t+1}}{W_{t}}\leq\sum_{t=1}^{n}\left[-\eta\bar{l}(\theta_{t},Y_{t})+\frac{\eta^{2}}{8}\right]=-\eta\sum_{t=1}^{n}\bar{l}(\theta_{t},Y_{t})+\frac{n\eta^{2}}{8}. \tag{11}\] Using (9) and (11), we obtain \[-\eta\min_{\theta\in[0,1]}\sum_{t=1}^{n}l(\theta,Y_{t})-\ln\frac{1}{\lambda_{\min}}\leq-\eta\sum_{t=1}^{n}\bar{l}(\theta_{t},Y_{t})+\frac{n\eta^{2}}{8}\;\Rightarrow\;\bar{L}(\mathbf{\theta},Y)\leq L(\mathbf{\theta}^{*},Y)+\frac{1}{\eta}\ln\frac{1}{\lambda_{\min}}+\frac{n\eta}{8}\;\Rightarrow\;R_{n}\leq\frac{1}{\eta}\ln\frac{1}{\lambda_{\min}}+\frac{n\eta}{8}.\] In the last two steps above, we rearranged the terms and divided by \(\eta\). Here, \(\eta\) is the learning rate of the algorithm. To find \(\eta^{*}\), the \(\eta\) that minimizes the above regret bound, we differentiate the bound with respect to \(\eta\) and equate it to zero, obtaining \[\eta^{*}=\sqrt{\frac{8\ln(1/\lambda_{\min})}{n}}. \tag{12}\] What remains is to find an approximation for \(\lambda_{\min}\), which is possible through various methods. For instance, one can use the precision of the probability outputs, i.e., if the probability outputs are truncated to 6 decimal places, then we know that \(\lambda_{\min}\geq 10^{-6}\). Further, some datasets and/or S-ML models come with a specific \(\lambda_{\min}\). For example, the probability output by MobileNet on the Imagenette dataset is 8-bit, and hence the probabilities are integer multiples of \(1/256\). Even in cases where all these methods fail, we see that a decent approximation for \(\lambda_{\min}\) is \(\hat{\lambda}_{\min}=1/(n+1)\).
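A small helper capturing (12) together with the fallback approximation \(\hat{\lambda}_{\min}=1/(n+1)\) might look as follows (a sketch; `eta_star` is our own name):

```python
import math

def eta_star(n, lambda_min=None):
    """Bound-optimizing learning rate from (12); falls back to the
    approximation lambda_min ~= 1/(n + 1) when lambda_min is unknown."""
    if lambda_min is None:
        lambda_min = 1.0 / (n + 1)
    return math.sqrt(8.0 * math.log(1.0 / lambda_min) / n)

print(eta_star(3925, 1 / 256))   # e.g., Imagenette with 8-bit probabilities
```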
## 6. No-Local Feedback Under no-local feedback, the cost is unknown once the inference of the S-ML is accepted. For this scenario, we use the randomization idea used for the label efficient prediction problem (Bertson, 2006), which is a variant of the PEA where the costs in each round are not revealed unless they are inquired for, and at most \(m\) inquiries can be made. For this variant, EWF is modified as follows: in each round, a Bernoulli random variable \(Z\) is generated with probability \(\epsilon\). If \(Z=1\), then feedback is requested and the costs are revealed. However, for our problem, the algorithm for the label-efficient prediction problem is not applicable due to the continuous expert space. Further, we do not have the notion of inquiring about the costs at the ED. Instead, when \(Z=1\), the sample has to be offloaded to the ES with cost \(\beta\) irrespective of the original decision made using \(q_{t}\). These samples provide the ED with the inference, from which the ED computes the cost \(Y_{t}\). To address the above aspects, we follow the design principles of HIL-F, use non-uniform discretization of the continuous domain, and propose the HI learning algorithm for no-local feedback (HIL-N), which is presented in Algorithm 2. Even though HIL-N and HIL-F have a similar structure, the design of HIL-N is significantly more involved and has the following key differences from HIL-F. Firstly, in line 5 of Algorithm 2, a Bernoulli random variable \(Z_{t}\) is generated with probability \(\epsilon\). If \(Z_{t}=1\), then the sample is offloaded even if \(Q_{t}=1\), and thus \(Y_{t}\) is realized in this case. This step is used to control the frequency of the additional offloads carried out to learn the ground truth \(Y_{t}\). Secondly, instead of the loss function, the weights are updated using a _pseudo loss function_ \(\tilde{l}(\theta_{t},Y_{t})\) defined as follows: \[\tilde{l}(\theta_{t},Y_{t})=\begin{cases}0&p_{t}\geq\theta_{t},Z_{t}=0;\quad\text{[Do Not Offload]}\\ \frac{Y_{t}}{\epsilon}&p_{t}\geq\theta_{t},Z_{t}=1;\quad\text{[Offload]}\\ \beta&p_{t}<\theta_{t}.\quad\text{[Offload]}\end{cases} \tag{13}\] We also update the equations (4), (5), and (6) as follows: \[w_{t+1}(\theta)=w_{t}(\theta)e^{-\eta\tilde{l}(\theta,Y_{t})}, \tag{14}\] \[W_{t+1}=\int_{0}^{1}w_{t+1}(\theta)\,\mathrm{d}\theta,\text{ and} \tag{15}\] \[q_{t}=\frac{\int_{0}^{p_{t}}w_{t}(x)\,\mathrm{d}x}{W_{t}}. \tag{16}\] We emphasize that the pseudo loss function \(\tilde{l}(\theta_{t},Y_{t})\) is used only as part of the HIL-N algorithm, and is not the actual cost incurred by the ED. The actual cost remains unchanged, and it depends only on the offloading decision and the correctness of the inference if not offloaded. However, this actual incurred cost, or the corresponding loss function \(l(\theta_{t},Y_{t})\), is unknown in the no-local feedback scenario whenever the sample is not offloaded and the local inference is accepted. This is precisely the reason to introduce the pseudo loss function \(\tilde{l}(\theta_{t},Y_{t})\), which is known in each round \(t\) and can be used in the HIL-N algorithm to update the weights. Recall from Section 5 that in HIL-F, the cost incurred and the cost used to update the weights are the same, and the incurred cost is \(\beta\) if and only if \(p_{t}<\theta_{t}\). However, in HIL-N, we use the pseudo cost to update the weights, and thus the actual cost incurred can be equal to \(\beta\) even if \(p_{t}\geq\theta_{t}\).
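In code, the pseudo loss (13) is a three-way branch; note that \(Y_{t}\) is needed only in the offloaded branches, which is exactly when it is available (a sketch; the function name `pseudo_loss` is ours):

```python
def pseudo_loss(p_t, theta_t, y_t, z_t, beta, eps):
    """Pseudo loss (13) used by HIL-N for the weight updates. y_t is used
    only when the sample is offloaded, where the ES inference reveals it."""
    if p_t < theta_t:
        return beta                        # offloaded by the threshold rule
    return y_t / eps if z_t == 1 else 0.0  # exploration offload / accept S-ML
```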
However, we designed the pseudo loss function such that \[\mathbb{E}_{Z}\left[\tilde{l}(\theta_{t},Y_{t})\right]=l(\theta_{t},Y_{t}). \tag{17}\] Therefore, the pseudo loss function is an unbiased estimate of the actual loss function, a fact that will facilitate our analysis. Further, with the addition of the random variable \(Q\), the regret for HIL-N can be rewritten as \[R_{n}=\mathbb{E}_{QZ}[L(\mathbf{\theta},Y)]-L(\mathbf{\theta}^{*},Y), \tag{18}\] where \(\mathbb{E}_{QZ}[\cdot]\) is the expectation with respect to the random variables \(\{Q_{1},Q_{2},\ldots,Q_{n}\}\) and the Bernoulli random variables \(Z_{t}\). **Theorem 6.1**: _For \(\eta,\epsilon>0\), HIL-N achieves the regret bound_ \[R_{n}\leq n\beta\epsilon+\frac{n\eta}{2\epsilon}+\frac{1}{\eta}\ln(1/\lambda_{min}).\] Proof.: **Step 1:** Since the costs incurred and the loss function used for updating the weights are different under HIL-N, we first find a bound for the difference between the expected total cost received and the expected total cost obtained using \(\tilde{l}(\theta_{t},Y_{t})\). From Algorithm 2, we infer that sample \(t\) is offloaded if \(Q_{t}=0\), or if \(Q_{t}=1\) and \(Z_{t}=1\), and it is not offloaded only when \(Q_{t}=1\) and \(Z_{t}=0\). Therefore, we have \[\mathbb{E}_{Q_{t}Z}\left[l(\theta_{t},Y_{t})\right]=\beta[1-q_{t}+q_{t}\epsilon]+q_{t}(1-\epsilon)Y_{t}. \tag{19}\] From (13), we have \[\tilde{l}(\theta_{t},Y_{t})=\frac{Y_{t}}{\epsilon}\,\mathbb{1}\,(\theta_{t}\leq p_{t})\,\mathbb{1}\,(Z_{t}=1)+\beta\,\mathbb{1}\,(\theta_{t}>p_{t})\;\Rightarrow\;\mathbb{E}_{Q_{t}Z}\left[\tilde{l}(\theta_{t},Y_{t})\right]=Y_{t}q_{t}+\beta(1-q_{t}). \tag{20}\] From (19) and (20), we obtain \[\mathbb{E}_{Q_{t}Z}\left[l(\theta_{t},Y_{t})\right]-\mathbb{E}_{Q_{t}Z}\left[\tilde{l}(\theta_{t},Y_{t})\right]=\beta\epsilon q_{t}-Y_{t}\epsilon q_{t}\;\Rightarrow\;\mathbb{E}_{QZ}\left[L(\mathbf{\theta},Y)\right]-\sum_{t=1}^{n}\mathbb{E}_{QZ}\left[\tilde{l}(\theta_{t},Y_{t})\right]=\beta\epsilon\sum_{t=1}^{n}q_{t}-\epsilon\sum_{t=1}^{n}Y_{t}q_{t}\leq n\beta\epsilon-\epsilon\sum_{t=1}^{n}Y_{t}q_{t}\;\Rightarrow\;-\sum_{t=1}^{n}\mathbb{E}_{QZ}\left[\tilde{l}(\theta_{t},Y_{t})\right]\leq-\mathbb{E}_{QZ}\left[L(\mathbf{\theta},Y)\right]+n\beta\epsilon. \tag{21}\] In the last steps above, we have used \(q_{t}\leq 1\) and \(Y_{t}q_{t}\geq 0\) for all \(t\). **Step 2:** Using the same analysis as that used to derive (9), we obtain \[\ln\left(\frac{W_{n+1}}{W_{1}}\right)\geq-\eta\min_{\theta\in[0,1]}\sum_{t=1}^{n}\tilde{l}(\theta,Y_{t})-\ln\frac{1}{\lambda_{\min}}.\] Note that here we have \(\tilde{l}(\theta,Y_{t})\) instead of \(l(\theta,Y_{t})\). Now, using the fact that the expected value of the minimum is upper bounded by the minimum of the expected values, we get \[\mathbb{E}_{Z}\left[\ln\left(\frac{W_{n+1}}{W_{1}}\right)\right]\geq-\eta\min_{\theta\in[0,1]}\sum_{t=1}^{n}\mathbb{E}_{Z}\left[\tilde{l}(\theta,Y_{t})\right]-\ln\frac{1}{\lambda_{\min}}\;\Rightarrow\;\mathbb{E}_{Z}\left[\ln\left(\frac{W_{n+1}}{W_{1}}\right)\right]\geq-\eta L(\mathbf{\theta}^{*},Y)-\ln(1/\lambda_{\min}), \tag{22}\] where the last step uses the unbiasedness property (17).
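Step 2 relies on the unbiasedness property (17), which is straightforward to verify by simulation for the non-offloaded branch \(p_{t}\geq\theta_{t}\), where the \(Z\)-randomization matters. A quick Monte Carlo check with arbitrary example values for \(\epsilon\) and \(Y_{t}\):

```python
import numpy as np

# E_Z[(Y_t / eps) * 1(Z_t = 1)] should equal l(theta_t, Y_t) = Y_t
rng = np.random.default_rng(1)
eps, y_t = 0.3, 1.0
z = rng.random(1_000_000) < eps            # Z_t ~ Bernoulli(eps)
print(np.where(z, y_t / eps, 0.0).mean())  # ~= 1.0 = y_t
```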
**Step 3:** In the following, we find a bound for \(\ln\left(\frac{W_{t+1}}{W_{t}}\right)\): \[\ln\left(\frac{W_{t+1}}{W_{t}}\right)=\ln\left(\frac{\int_{0}^{1}w_{t+1}(x)\,\mathrm{d}x}{W_{t}}\right)=\ln\left(\int_{0}^{1}\frac{w_{t}(x)}{W_{t}}e^{-\eta\tilde{l}(x,Y_{t})}\,\mathrm{d}x\right)\quad\text{(using (14))}\leq\ln\left(\int_{0}^{1}\frac{w_{t}(x)}{W_{t}}\left(1-\eta\tilde{l}(x,Y_{t})+\frac{\eta^{2}}{2}\tilde{l}(x,Y_{t})^{2}\right)\mathrm{d}x\right).\] In the above step, we used the fact that \(e^{-x}\leq 1-x+x^{2}/2\) for \(x\geq 0\). Rearranging the terms, we get \[\ln\left(\frac{W_{t+1}}{W_{t}}\right)\leq\ln\left(1+\int_{0}^{1}\frac{w_{t}(x)}{W_{t}}\left(-\eta\tilde{l}(x,Y_{t})+\frac{\eta^{2}}{2}\tilde{l}(x,Y_{t})^{2}\right)\,\mathrm{d}x\right)\leq\int_{0}^{1}\frac{w_{t}(x)}{W_{t}}\left(-\eta\tilde{l}(x,Y_{t})+\frac{\eta^{2}}{2}\tilde{l}(x,Y_{t})^{2}\right)\,\mathrm{d}x.\] The last step follows from the fact that \(\ln(1+x)\leq x\), \(\forall x>-1\). \[\Rightarrow\ln\left(\frac{W_{t+1}}{W_{t}}\right)\leq\int_{0}^{1}\frac{w_{t}(x)}{W_{t}}\left(-\eta\tilde{l}(x,Y_{t})+\frac{\eta^{2}}{2\epsilon}\tilde{l}(x,Y_{t})\right)\,\mathrm{d}x. \tag{23}\] In the last step, we have used the fact that \(\tilde{l}(x,Y_{t})\in[0,1/\epsilon]\). Note that the integral above can be rearranged as follows: \[\int_{0}^{1}\frac{w_{t}(x)}{W_{t}}\tilde{l}(x,Y_{t})\,\mathrm{d}x=\int_{0}^{p_{t}}\frac{w_{t}(x)}{W_{t}}\tilde{l}(x,Y_{t})\,\mathrm{d}x+\int_{p_{t}}^{1}\frac{w_{t}(x)}{W_{t}}\tilde{l}(x,Y_{t})\,\mathrm{d}x=\frac{Y_{t}}{\epsilon}\,\mathbb{1}\,(Z_{t}=1)\,q_{t}+\beta(1-q_{t}).\] Therefore, we have \[\mathbb{E}_{Z}\left[\int_{0}^{1}\frac{w_{t}(x)}{W_{t}}\tilde{l}(x,Y_{t})\,\mathrm{d}x\right]=Y_{t}q_{t}+\beta(1-q_{t})=\mathbb{E}_{Q_{t}Z}\left[\tilde{l}(\theta_{t},Y_{t})\right], \tag{24}\] where we have used (20). Taking the expectation with respect to \(Z\) on both sides in (23) and then substituting (24), \[\mathbb{E}_{Z}\left[\ln\left(\frac{W_{t+1}}{W_{t}}\right)\right]\leq-\eta\,\mathbb{E}_{Q_{t}Z}\left[\tilde{l}(\theta_{t},Y_{t})\right]+\frac{\eta^{2}}{2\epsilon}\,\mathbb{E}_{Q_{t}Z}\left[\tilde{l}(\theta_{t},Y_{t})\right]\leq-\eta\,\mathbb{E}_{QZ}\left[\tilde{l}(\theta_{t},Y_{t})\right]+\frac{\eta^{2}}{2\epsilon}. \tag{25}\] Above, we used the fact that \(\mathbb{E}_{QZ}\left[\tilde{l}(\theta_{t},Y_{t})\right]\leq 1\). Taking the summation of (25) over \(t\), we obtain \[\mathbb{E}_{Z}\left[\ln\prod_{t=1}^{n}\left(\frac{W_{t+1}}{W_{t}}\right)\right]\leq-\eta\sum_{t=1}^{n}\mathbb{E}_{QZ}\left[\tilde{l}(\theta_{t},Y_{t})\right]+\frac{n\eta^{2}}{2\epsilon}\;\Rightarrow\;\mathbb{E}_{Z}\left[\ln\left(\frac{W_{n+1}}{W_{1}}\right)\right]\leq-\eta\left(\mathbb{E}_{QZ}\left[L(\mathbf{\theta},Y)\right]-n\beta\epsilon\right)+\frac{n\eta^{2}}{2\epsilon}. \tag{26}\] In the last step above, we have used (21). Combining (26) and (22) and rearranging the terms, we obtain \[\mathbb{E}_{QZ}\left[L(\mathbf{\theta},Y)\right]-L(\mathbf{\theta}^{*},Y)\leq n\beta\epsilon+\frac{n\eta}{2\epsilon}+\frac{1}{\eta}\ln(1/\lambda_{\text{min}}),\] which is the regret \(R_{n}\) for HIL-N given by (18). The bound in Theorem 6.1 neatly captures the effect of \(\epsilon\) on the regret. Note that the term \(n\beta\epsilon\) is a direct consequence of offloading sample \(t\) when \(Z_{t}=1\). We denote the bound by \[g(\epsilon,\eta)=n\beta\epsilon+\frac{n\eta}{2\epsilon}+\frac{1}{\eta}\ln(1/\lambda_{\text{min}}). \tag{27}\] We now minimize this bound and find the parameters that provide a bound that is sublinear in \(n\).
**Lemma 6.2**: _The function \(g(\epsilon,\eta)\) defined in (27) has a global minimum at \((\epsilon^{*},\eta^{*})\), where \(\eta^{*}=\left(\frac{2\ln^{2}(1/\lambda_{\text{min}})}{\beta n^{2}}\right)^{1/3}\) and \(\epsilon^{*}=\sqrt{\frac{\eta^{*}}{2\beta}}\). At this minimum, we have_ \[g(\epsilon^{*},\eta^{*})=3n^{2/3}\left(\frac{\beta\ln(1/\lambda_{\text{min}})}{2}\right)^{1/3}.\] Proof.: We can easily see the strict convexity of \(g(\epsilon,\eta)\) in each dimension \(\epsilon\) and \(\eta\) independently, which tells us that any stationary point of the function will be either a saddle point or a minimum, but not a maximum. We equate the first-order partial derivatives to zero to get a set of points given by the equations \[\frac{\partial g}{\partial\epsilon}=0\Rightarrow\epsilon=\sqrt{\frac{\eta}{2\beta}}, \tag{28}\] \[\frac{\partial g}{\partial\eta}=0\Rightarrow\eta=\sqrt{\frac{2\epsilon\ln(1/\lambda_{\min})}{n}}. \tag{29}\] However, it still remains to check if this point is unique and if it is indeed a minimum, not a saddle point. Uniqueness follows by noting that these two expressions correspond to two non-decreasing, invertible curves in the \(\epsilon\)-\(\eta\) plane, and thus they have a unique intersection. We find this intersection, denoted by \((\epsilon^{*},\eta^{*})\), by substituting (28) in (29). We obtain \[\eta^{*}=\sqrt{\frac{2\epsilon^{*}\ln(1/\lambda_{\min})}{n}}=\sqrt{\frac{2\sqrt{\eta^{*}/2\beta}\ln(1/\lambda_{\min})}{n}}.\] We get \(\eta^{*}\) and \(\epsilon^{*}\) by simplifying the above equation and then substituting it back in (28). Finally, to prove that \((\epsilon^{*},\eta^{*})\) is indeed a minimum, we verified that the determinant of the Hessian at \((\epsilon^{*},\eta^{*})\) is positive; the steps are not presented due to space constraints. Since \((\epsilon^{*},\eta^{*})\) is a unique minimum, it must be the global minimum. The proof is complete by substituting \((\epsilon^{*},\eta^{*})\) in (27). Now, with the above Lemma in hand, we provide a sublinear regret bound for HIL-N in the following corollary. **Corollary 6.3**: _With \(\eta=\left(\frac{2\ln^{2}(1/\lambda_{\min})}{\beta n^{2}}\right)^{1/3}\) and \(\epsilon=\min\{1,\sqrt{\frac{\eta}{2\beta}}\}\), HIL-N achieves a regret bound sublinear in \(n\):_ \[R_{n}\leq 3n^{2/3}\left(\frac{\beta\ln(1/\lambda_{min})}{2}\right)^{1/3}.\] Proof.: Note that, if \(\sqrt{\frac{\eta}{2\beta}}\leq 1\), then \(\epsilon=\sqrt{\frac{\eta}{2\beta}}\) and the result directly follows from Lemma 6.2. If \(\sqrt{\frac{\eta}{2\beta}}>1\), then we have \(\epsilon=1\). Substituting the value of \(\eta\) into \(\sqrt{\frac{\eta}{2\beta}}>1\), we obtain \[\beta<\sqrt{\frac{\sqrt{2}\ln(1/\lambda_{\min})}{n}}. \tag{30}\] Since \(\epsilon=1\), we will have \(Z_{t}=1\) for all \(t\), i.e., HIL-N will always offload. Therefore, in this case, the total cost incurred by HIL-N is equal to \(n\beta\). Now, using (30), we obtain \[n\beta<n\sqrt{\frac{\sqrt{2}\ln(1/\lambda_{\min})}{n}}=\sqrt{\sqrt{2}\,n\ln(1/\lambda_{\min})}.\] Thus, when (30) holds and we have \(\epsilon=1\), the total cost itself is \(O(n^{\frac{1}{2}})\), and therefore the regret cannot be greater than \(O(n^{\frac{1}{2}})\). The result follows by noting that \(O(n^{\frac{2}{3}})\) is the larger bound.
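The bound-optimizing parameters of Lemma 6.2 and Corollary 6.3 are simple closed forms; a sketch (with our own function name `hiln_params`):

```python
import math

def hiln_params(n, beta, lambda_min):
    """Bound-optimizing (eta, eps) from Lemma 6.2 / Corollary 6.3,
    together with the resulting regret bound g(eps*, eta*)."""
    log_term = math.log(1.0 / lambda_min)
    eta = (2.0 * log_term ** 2 / (beta * n ** 2)) ** (1.0 / 3.0)
    eps = min(1.0, math.sqrt(eta / (2.0 * beta)))
    bound = 3.0 * n ** (2.0 / 3.0) * (beta * log_term / 2.0) ** (1.0 / 3.0)
    return eta, eps, bound
```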
_Remarks:_ It is worth noting the following: 1. The proof steps in Theorem 5.1 closely follow the analysis of the standard EWF for PEA, with added complexity to account for the continuous experts and the non-uniform discretization. The analysis for HIL-N is novel. In particular, the design of the unbiased estimator, steps 1 and 3 in the proof of Theorem 6.1, and the proof of Lemma 6.2 contain new analysis. 2. The computational complexity of HIL-N is of the same order as that of HIL-F due to the similar interval generation steps. 3. We can remove the dependency of \(\eta\) on \(\lambda_{\min}\) and \(n\) by using a sequence of dynamic learning rates: \(\eta_{t}=\frac{1}{\sqrt{t+1}}\). Sublinear regret bounds can be obtained for such a modification, but we omit the analysis due to space constraints. ## 7. Algorithm implementation and computational complexity Recall from Lemma 4.1 that the cumulative loss is a piece-wise constant function. We use this fact to compute the continuous-domain integral in (6) efficiently by splitting the function into multiple rectangular areas with non-uniform bases and then summing them up; we thereby make no discretization error but compute the exact value of the integral. In each round \(t\), we increase the number of intervals by at most \(1\), as we split the interval containing \(p_{t}\) at \(p_{t}\). After receiving \(p_{t}\), we thus have \(N\leq t+1\) intervals with boundaries given by \(p_{[0]}=0\), \(p_{[i]},1\leq i\leq t\), and \(p_{[N]}=1\). The weight \(w_{i,t},i\leq t+1\), of interval \(i\) in round \(t\) is then updated based on 1) the weights in round \(t-1\), and 2) the position of the interval with respect to \(p_{t}\). Note that in line 12 of HIL-F and HIL-N, we state that the interval containing \(p_{t}\) should be split, and in line 14 we state that the weights should be computed, but without giving more details. Below, we present the algorithmic rules that can be used to compute the probability \(q_{t}\), the interval boundaries \(\{p_{[i]}\}\), and the weights \(\{w_{i,t}\}\), which need to be computed in order. Let \(j\) be the index of the interval strictly below \(p_{t}\) and \(dup\) be a boolean variable denoting a duplicate \(p_{t}\). \[(i)\quad j\leftarrow\max\{i:p_{[i]}<p_{t}\}.\] \[(ii)\quad dup\leftarrow FALSE,\ \text{if}\ p_{\tau}\neq p_{t}\ \forall\tau<t,\ TRUE\ \text{otherwise}.\] \[(iii)\quad q_{t}\leftarrow\frac{\sum_{i=1}^{j}w_{i,t-1}(p_{[i]}-p_{[i-1]})+w_{j+1,t-1}(p_{t}-p_{[j]})}{\sum_{i=1}^{N}w_{i,t-1}(p_{[i]}-p_{[i-1]})}.\] \[(iv)\quad N\leftarrow\begin{cases}N&(dup=TRUE),\\ N+1&(dup=FALSE).\end{cases}\] \[(v)\quad p_{[i]}\leftarrow\begin{cases}p_{[i]}&i\leq j\ \text{or}\ (dup=TRUE),\\ p_{t}&i=j+1\ \text{and}\ (dup=FALSE),\\ p_{[i-1]}&j+1<i\leq N\ \text{and}\ (dup=FALSE).\end{cases}\] \[(vi)\quad w_{i,t}\leftarrow\begin{cases}w_{i,t-1}e^{-\eta\beta}&p_{[i]}>p_{t},\ (dup=TRUE),\\ w_{i-1,t-1}e^{-\eta\beta}&p_{[i]}>p_{t},\ (dup=FALSE),\\ w_{i,t-1}e^{-\eta Y_{t}}&p_{[i]}\leq p_{t},\ \text{HIL-F},\\ w_{i,t-1}e^{-\eta Y_{t}/\epsilon}&p_{[i]}\leq p_{t},\ Z_{t}=1,\ \text{HIL-N},\\ w_{i,t-1}&p_{[i]}\leq p_{t},\ Z_{t}=0,\ \text{HIL-N}.\end{cases}\]
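The rules above translate directly into an interval-based implementation. The sketch below is our illustrative rendering of HIL-F (full feedback); `ground_truth` is a hypothetical callable that reveals \(Y_{t}\) in each round, and duplicate detection uses exact float equality for simplicity:

```python
import math
import random

class HILF:
    """Sketch of HIL-F using the interval bookkeeping of rules (i)-(vi):
    one weight per interval B_i = (bounds[i-1], bounds[i]]."""

    def __init__(self, eta, beta):
        self.eta, self.beta = eta, beta
        self.bounds = [0.0, 1.0]   # p_[0], ..., p_[N]
        self.w = [1.0]             # w_1(theta) = 1 on the single interval

    def step(self, p_t, ground_truth):
        # rules (ii), (iv), (v): split the interval containing p_t unless duplicate
        if p_t not in self.bounds:
            j = next(i for i, b in enumerate(self.bounds) if b > p_t)
            self.bounds.insert(j, p_t)
            self.w.insert(j - 1, self.w[j - 1])  # both halves inherit the weight
        # rule (iii): q_t = (integral of w over (0, p_t]) / W_t
        lengths = [b - a for a, b in zip(self.bounds, self.bounds[1:])]
        k = self.bounds.index(p_t)               # p_t is now a boundary
        total = sum(wi * li for wi, li in zip(self.w, lengths))
        q_t = sum(wi * li for wi, li in zip(self.w[:k], lengths[:k])) / total
        offload = random.random() >= q_t         # do not offload with prob. q_t
        y_t = ground_truth()                     # full feedback: Y_t is revealed
        # rule (vi): exponential update; cost beta above p_t, Y_t at or below it
        for i in range(len(self.w)):
            cost = self.beta if self.bounds[i + 1] > p_t else y_t
            self.w[i] *= math.exp(-self.eta * cost)
        return offload, (self.beta if offload else y_t)
```

Under the same bookkeeping, HIL-N would differ only in the offload decision (forced when \(Z_{t}=1\)) and in using the pseudo loss (13) in the weight update.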
In every round of computation, we need a certain constant number of additions, multiplications, and comparisons per interval, irrespective of the number of samples already processed. Thus, the computational complexity in each round is of the order of the number of intervals present in that round. Now consider a set of \(n\) input images. In our proposed algorithms, the number of intervals in round \(t\) is upper bounded by \(t+1\). Thus, the worst-case computational complexity of HIL-F in round \(t\) is \(O(t)\). Further, since \(\lambda_{\min}\) is the minimum difference between any two probabilities, the maximum number of intervals is clearly upper bounded by \(1/\lambda_{\min}\), which reduces the complexity to \(O\left(\min\{t,1/\lambda_{\min}\}\right)\). **Proposition 1**: _The computational complexity of HIL-F and HIL-N in round \(t\) is \(O\left(\min\{t,1/\lambda_{\min}\}\right)\)._ Note that there can be many intervals with lengths larger than \(\lambda_{\min}\), and thus the number of intervals can typically be less than \(1/\lambda_{\min}\), which reduces the complexity in practice. As discussed earlier, one might approximate \(\lambda_{\min}\) by \(1/n\) in some datasets, which gives us a complexity of \(O\left(\min\{t,n\}\right)\) in terms of the number of images. Also note that the above complexities are per round \(t\); to get the total complexity of the algorithm, one has to sum over all \(t\). Finally, we note that there can be datasets where \(\lambda_{\min}<1/n\), and for such cases the complexity from Proposition 1 will be \(O(t)\). For instance, this is the case for the MNIST dataset, but not for the Imagenette dataset, which has \(\lambda_{\min}=\frac{1}{256}\). In this regard, we propose a practical modification to the algorithms by limiting the interval size to a minimum of \(\Delta_{\min}>\lambda_{\min}\), where \(\Delta_{\min}\) is a parameter chosen based on the complexity and cost tradeoffs. One then considers any different probabilities that lie within \(\Delta_{\min}\) of each other as duplicates while generating new intervals in line 12 of HIL-F and HIL-N, which further reduces the complexity to \(O\left(\min\{t,1/\Delta_{\min}\}\right)\). We observed, by choosing different values of \(\Delta_{\min}\) (including \(\frac{1}{n}\)), that over a range of values there is a notable reduction in algorithm runtime, with negligible difference in the expected average costs.
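One simple way to realize the \(\Delta_{\min}\) modification is to snap each \(p_{t}\) to a grid of spacing \(\Delta_{\min}\) before the interval split, so that nearby probabilities become duplicates (a sketch under that assumption; `coarsen` is our own name):

```python
def coarsen(p_t, delta_min):
    """Snap p_t to a grid of spacing delta_min so that probabilities within
    delta_min of each other collide, keeping the number of intervals below
    roughly 1/delta_min + 1."""
    return round(p_t / delta_min) * delta_min
```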
## 8. Numerical Results In this section, we evaluate the performance of the proposed algorithms HIL-F and HIL-N by comparing them against each other as well as against further benchmarks. Our evaluation scenario consists of three different classifiers and four different datasets. Firstly, we use the 8-bit quantized MobileNet (Han et al., 2017; Wang et al., 2018), with width parameter 0.25, to classify the Imagenette and Imagewoof datasets (Han et al., 2017). We use 0.25 for the width parameter as it reduces the number of filters in each layer drastically, and the resulting MobileNet has a size of 0.5 MB, suitable to fit on an IoT sensor. Imagenette and Imagewoof are derived from Imagenet (Han et al., 2017) and each contains a mixture of 10 different image classes with approximately 400 images per class. Out of the two, Imagewoof is a tougher dataset for classification as it contains 10 different breeds of dogs. Next, we use the test set of the MNIST dataset (Miyi et al., 2018), which contains 10000 images of handwritten digits from 0 through 9. For this dataset, we train a linear classifier (without a regularizer) as the S-ML model. We convert the labels into vectors of size 10: for label \(l\), i.e., digit \(l\), we use an all-zero vector except at the \(l\)-th location, where the value is 1. After training the classifier, we scale the output to obtain a probability distribution over the 10 labels. The top-1 accuracy we obtain is 86%. Finally, for CIFAR-10 (Krizhevsky et al., 2012; Krizhevsky et al., 2012), we use a readily available trained CNN (Han et al., 2017) with accuracy 84% as the S-ML model. Note that for all the simulations we invoke the assumption that the L-ML models have accuracy 1. As explained in Section 3, we choose the expected average regret \(\frac{1}{n}\mathbb{E}_{Y}[R_{n}]\) and the expected average cost \(\frac{1}{n}\mathbb{E}_{Y,\pi}[L(\mathbf{\theta},Y)]\) as the metrics to compare the performance. Recall that these metrics are upper bounded by 1, which is the maximum cost in a single round. For simplicity, we refer to them as average regret and average cost, respectively. For the simulations, we take 100 randomizations of the input sequence \(Y\), and for each of these randomizations we repeat the simulations 100 times. The randomization is for the statistical convergence across the sequences of \(Y\) (_i.e._, \(\mathbb{E}_{Y}[.]\)), and the repetitions are for the convergence over the randomized decisions based on \(q_{t}\) made in line 4 of the algorithms (i.e., \(\mathbb{E}_{\pi}[.]\)). We also checked with higher numbers of randomizations and repetitions and verified that \(100\times 100\) iterations are sufficient for statistical convergence. We use \(\eta\) and \(\epsilon\) from (12) and Lemma 6.2, unless mentioned otherwise. We use the following four baseline algorithms (i.e., policies) to compare the performance of HIL-F and HIL-N. 1. **Genie** - a non-causal policy, where only those images that are misclassified by S-ML are offloaded. 2. \(\mathbf{\theta}^{*}\) - an optimal fixed-\(\theta\) policy. We compute this cost by running a brute-force grid search over all \(\theta\). 3. **Full offload** - all images are offloaded to the ES. 4. **No offload** - all images are processed locally. Before we go to the figures, we show the number of images offloaded and the number of images misclassified by different policies for the Imagenette dataset, with a total of 3925 images, in TABLE 2. These results correspond to the data point with \(\beta=0.5\) from Fig. 3(a) (explained later). We can immediately infer from the table that HIL-F achieves an offloading rate and misclassification rate very close to those of the optimum fixed-\(\theta\) policy. Further, HIL-F offloads approximately the same number of images as the optimum fixed-\(\theta\) policy and achieves a top-\(1\) accuracy of \(92.3\%\). Contrast this with the much lower accuracy of \(43.2\%\) achieved by the chosen MobileNet as the S-ML. This also asserts that our framework with the cost structure \(\beta\) and \(Y\) indeed facilitates HI by reducing the number of offloaded images that are correctly classified by S-ML. Note that HIL-N also achieves high accuracy, \(95.2\%\), but it achieves this at the cost of offloading more images, \(18\%\) more than \(\theta^{*}\). This is because HIL-N can only get feedback from L-ML and chooses to offload more images to learn the best threshold. Note that a \(\beta\) of \(0.5\) corresponds to minimizing the sum of the total number of errors and offloads. One can see the optimum in this case visually from Fig. 2 in Section 1, where all images below the threshold and all misclassified images above the threshold add \(1\) to the total cost. In Fig. 3, we compare the two proposed algorithms HIL-F and HIL-N with the baselines for all four datasets by plotting the average cost vs. \(\beta\). Here, Fig. 3(a) through Fig. 3(d) correspond to the Imagenette, Imagewoof, MNIST, and CIFAR-10 datasets, respectively.
Observe that HIL-F performs very close to \(\theta^{*}\), having at most \(6\%\) higher total cost than \(\theta^{*}\) across all four figures, irrespective of the absolute value of the cost or the dataset considered. In Fig. 3(a) we have also added an inset in which we have enlarged a portion of the figure to highlight the distinction between the proposed policies and \(\theta^{*}\). The vertical difference between these two corresponds to the regret. Here, HIL-F achieves a cost very close to that of \(\theta^{*}\), having at most \(4.5\%\) higher total cost than \(\theta^{*}\) throughout the range of \(\beta\). For instance, for the Imagenette dataset with \(\beta=0.5\), this increase is less than \(1.4\%\). HIL-N, on the other hand, is more sensitive to the properties of the considered dataset. It performs much better than the Full offload policy and also follows a trend similar to that of HIL-F. However, for larger values of \(\beta\), the comparative performance of HIL-N with respect to the No offload policy deteriorates. This is because, even when offloading is not optimum, HIL-N offloads with a fixed probability \(\epsilon>0\) to learn the ground truth \(Y\). Furthermore, we can see by comparing the four figures that the lower the accuracy of the S-ML - for instance, in Fig. 3(b) - the larger the range of \(\beta\) for which HIL-N performs better than both the No offload and Full offload policies. In Fig. 4 we show the dependency of the algorithms on the learning rate parameter \(\eta\) by plotting the average regret obtained by the proposed algorithms vs. the number of images for \(\beta=0.7\) and different values of \(\eta\). We show the plots for the theoretical bound-optimizing \(\eta\), and for HIL-F we also show the plots for a few other values of \(\eta\) for comparison. First, note that HIL-N learns more slowly than HIL-F, which is intuitive behavior because HIL-N cannot learn from those images that are not offloaded. Also, note that the difference in regret incurred by using \(\hat{\lambda}_{\min}=1/(n+1)\) as an approximation of \(\lambda_{\min}\) is minimal - on the order of \(10^{-3}\). Recall that the optimum \(\eta\) that we proposed is an optimum for the regret bound, but not necessarily for the regret itself. Hence, it is worth noting that, while using a larger \(\eta\) is slightly beneficial in this particular dataset, it turns out to be deleterious for the regret bound, which is valid for any given dataset. Further, too large an \(\eta\) will give too large weights to the thresholds that achieved lower costs in the past, making the algorithm resemble a deterministic algorithm that cannot guarantee performance [6]. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Images & Genie & Full offload & No offload & \(\theta^{*}\) & HIL-F & HIL-N \\ \hline \hline Offloaded & 2230 & 3925 & 0 & 2588 & 2626 & 3056 \\ \hline Misclassified & 0 & 0 & 2230 & 303 & 304 & 191 \\ \hline \end{tabular} \end{table} Table 2: Number of images offloaded and misclassified for different policies on Imagenette with \(\beta=0.5\) and optimal \(\eta,\epsilon\). Figure 3: Average cost incurred by various offloading policies vs. \(\beta\) for different datasets. The bound-optimizing \(\eta\) and \(\epsilon\) are used, assuming prior knowledge of \(\lambda_{\min}\). Note that the curves corresponding to \(\theta^{*}\) and HIL are very close to each other. Figure 4: Average regret vs. number of images for \(\beta=0.7\) using HIL-F and HIL-N on the Imagenette dataset with various \(\eta\). ## 9. Conclusion
We considered an ED embedded with S-ML and an ES running L-ML and explored the idea of HI, where the ED benefits by offloading only those samples for which S-ML outputs an incorrect inference. Since an ideal implementation of HI is infeasible, we proposed a novel meta-learning framework where the ED decides to offload or not to offload after observing the maximum probability \(p\) in the probability mass function output by S-ML. For the full feedback scenario, we proposed HIL-F, which assigns exponential weights to decision thresholds \(\theta\in[0,1]\) based on past costs and probabilistically chooses a threshold, based on \(p\), to offload or not. For the no-local-feedback scenario, we proposed HIL-N, which uses an unbiased estimator of the cost, generates an additional Bernoulli random variable \(Z\), and always offloads if \(Z=1\). A novel and unique aspect of the proposed algorithms is that we use non-uniform discretization, i.e., create new intervals in each round based on \(p\) and use these intervals as experts. We proved that HIL-F and HIL-N have sublinear regret bounds \(\sqrt{n\ln(1/\lambda_{\min})/2}\) and \(O\left(n^{2/3}\ln^{1/3}(1/\lambda_{\min})\right)\), respectively, and have runtime complexity \(O\left(\min\{t,1/\lambda_{\min}\}\right)\) in round \(t\). Here, it is worth noting that the term \(1/\lambda_{\min}\) acts similarly to the number of experts in PEA as far as regret bounds are concerned, and we have explained simple methods to approximate it. To verify the results, we generated values of \(p\) for four datasets, namely, Imagenette, Imagewoof, MNIST, and CIFAR-10, and compared the performance of HIL-F and HIL-N with four different baseline policies, including the _fixed-\(\theta\)_ policy. The cost achieved by the proposed algorithms is always lower than that of the _Full offload_ and _No offload_ policies and is close to the cost achieved by the optimum fixed-\(\theta\) policy for a wide range of \(\beta\). More importantly, the algorithms achieve much higher accuracy compared to S-ML while offloading a marginally higher number of images compared to the optimum fixed-\(\theta\) policy.
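To make the threshold-learning mechanics concrete, the following minimal sketch re-implements an HIL-F-style update from the description above. It is our own illustrative reconstruction rather than the authors' reference code: the per-round cost model \(\beta\cdot\mathbb{1}\{p<\theta\}+Y\cdot\mathbb{1}\{p\geq\theta\}\) and the interval bookkeeping are inferred from the text.

```python
import numpy as np

def hil_f(p_seq, y_seq, beta, eta, seed=0):
    """Illustrative sketch of HIL-F-style exponential weighting.
    Assumed cost of threshold theta in a round with confidence p and
    local-error indicator y: beta if p < theta (offload), else y."""
    rng = np.random.default_rng(seed)
    bounds = np.array([0.0, 1.0])  # breakpoints of the non-uniform grid
    loss = np.array([0.0])         # cumulative cost per interval (expert)
    realized = 0.0
    for p, y in zip(p_seq, y_seq):
        k = int(np.searchsorted(bounds, p))
        if bounds[k] != p:         # split the interval containing p
            bounds = np.insert(bounds, k, p)
            loss = np.insert(loss, k, loss[k - 1])
        # weight of an interval = its length times exp(-eta * past cost)
        w = np.diff(bounds) * np.exp(-eta * loss)
        q = w[k:].sum() / w.sum()  # mass of thresholds theta > p => offload
        realized += beta if rng.random() < q else y
        loss[:k] += y              # thresholds theta <= p would stay local
        loss[k:] += beta           # thresholds theta > p would offload
    return realized / len(p_seq)   # realized average cost
```

Splitting only the interval containing each observed \(p\) keeps the number of experts at most \(\min\{t,1/\lambda_{\min}\}\) in round \(t\), consistent with the runtime complexity stated above.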
2305.06111
Joint Falsification and Fidelity Settings Optimization for Validation of Safety-Critical Systems: A Theoretical Analysis
Safety validation is a crucial component in the development and deployment of autonomous systems, such as self-driving vehicles and robotic systems. Ensuring safe operation necessitates extensive testing and verification of control policies, typically conducted in simulation environments. High-fidelity simulators accurately model real-world dynamics but entail high computational costs, limiting their scalability for exhaustive testing. Conversely, low-fidelity simulators offer efficiency but may not capture the intricacies of high-fidelity simulators, potentially yielding false conclusions. We propose a joint falsification and fidelity optimization framework for safety validation of autonomous systems. Our mathematical formulation combines counterexample searches with simulator fidelity improvement, facilitating more efficient exploration of the critical environmental configurations challenging the control system. Our contributions encompass a set of theorems addressing counterexample sensitivity analysis, sample complexity, convergence, the interplay between the outer and inner optimization loops, and regret bound analysis. The proposed joint optimization approach enables a more targeted and efficient testing process, optimizes the use of available computational resources, and enhances confidence in autonomous system safety validation.
Ali Baheri, Mykel J. Kochenderfer
2023-05-10T12:55:40Z
http://arxiv.org/abs/2305.06111v1
# Joint Falsification and Fidelity Settings Optimization for Validation of Safety-Critical Systems: A Theoretical Analysis

###### Abstract

Safety validation is a crucial component in the development and deployment of autonomous systems, such as self-driving vehicles and robotic systems. Ensuring safe operation necessitates extensive testing and verification of control policies, typically conducted in simulation environments. High-fidelity simulators accurately model real-world dynamics but entail high computational costs, limiting their scalability for exhaustive testing. Conversely, low-fidelity simulators offer efficiency but may not capture the intricacies of high-fidelity simulators, potentially yielding false conclusions. We propose a joint falsification and fidelity optimization framework for safety validation of autonomous systems. Our mathematical formulation combines counterexample searches with simulator fidelity improvement, facilitating more efficient exploration of the critical environmental configurations challenging the control system. Our contributions encompass a set of theorems addressing counterexample sensitivity analysis, sample complexity, convergence, the interplay between the outer and inner optimization loops, and regret bound analysis. The proposed joint optimization approach enables a more targeted and efficient testing process, optimizes the use of available computational resources, and enhances confidence in autonomous system safety validation.

Keywords: Falsification, Fidelity Optimization, Safety-Critical Systems

## 1 Introduction

In the development of autonomous systems, such as autonomous vehicles (AVs), ensuring their safe and efficient operation is critical. AVs must navigate various complex urban driving scenarios, including intersections, highway merges, and lane changes, with control systems increasingly based on learning-enabled policies. These policies must undergo rigorous testing and verification before deployment. Simulators that can generate different traffic scenarios are employed for testing the AV control systems. However, extensive tests using high-fidelity simulators can be computationally expensive and time-consuming, and may not cover all possible scenarios [8]. The joint falsification and simulator optimization approach addresses these challenges by introducing a joint learning framework that streamlines the exploration process for identifying potential failure scenarios, concentrating on the most critical environmental configurations that pose difficulties for the control system. By concurrently optimizing simulator fidelity and carrying out falsification, the framework facilitates more effective use of computational resources during the search process. One key advantage of jointly learning falsification and simulator optimization is the adaptive control of simulator fidelity settings. The simulator can adjust its fidelity based on the specific scenario or region of the environment, leading to more targeted and efficient testing. As the simulator's fidelity increases, it becomes better at replicating the behavior of the high-fidelity simulator, allowing for a more accurate representation of the environment. This enables the falsification process to focus on the regions of the environment space where the system is more likely to fail, which are the most critical areas to explore. The joint learning framework integrates the search for failure scenarios with the enhancement of simulator fidelity.
This synergy allows for efficient exploration by utilizing information gathered from both processes. For instance, if a low-fidelity simulator displays a substantial discrepancy compared to the high-fidelity simulator in a particular region, a joint optimization approach can prioritize refining the simulator fidelity in that area. This not only aids in more accurate identification of potential failure scenarios but also conserves computational resources by preventing unnecessary exploration in less relevant regions. Generalization is another important aspect to consider when developing a joint optimization method. By incorporating multiple tasks during the optimization process, such an approach can identify fidelity settings that excel across various scenarios. This ensures that the optimized fidelity settings are not overly specialized for a single task, but rather strike a balanced trade-off between computational efficiency and accuracy over a wider range of situations.

**Related Work.** Falsification of learning-enabled systems has garnered considerable interest recently due to the growing complexity and safety-critical nature of such systems [1]. The primary goal of falsification is to pinpoint scenarios that could lead to system failure or breaches in safety specifications. Various techniques have been proposed to tackle the falsification problem, including optimization-based methods [11, 19, 6], search-based algorithms [12, 15, 20], and reinforcement learning approaches [18, 16, 10]. These methods strive to efficiently explore the state and parameter space in order to discover potential failure scenarios, ultimately enabling system design refinement and enhanced safety assurances. Researchers have also recently developed algorithms that consider the fidelity of simulators when identifying failure scenarios, motivated by the computational expense of high-fidelity simulators [9, 13, 4]. These methods trade the accuracy of high-fidelity simulators off against the computational efficiency of low-fidelity simulators to decrease the overall cost of safety validation. While there has been notable progress in falsification of learning-enabled systems, the literature on joint falsification and fidelity setting optimization remains scarce. Our work aims to bridge this gap by presenting an approach that _jointly_ conducts falsification alongside simulation fidelity optimization. By combining these two aspects, our objective is to improve the efficiency of safety validation in learning-enabled decision-making systems.

**Contributions.** We present a mathematical formulation for joint falsification and fidelity setting optimization for the safety validation of autonomous systems. Our primary contribution is the development of a theoretical framework that _unifies_ the two key aspects of the problem: falsification of learning-enabled systems and optimization of simulator fidelity settings. The main contributions of this paper are as follows:

* We propose a mathematical formulation that jointly addresses falsification and fidelity setting optimization, enabling a more efficient exploration strategy to search for potential failure scenarios.
* We prove six key theorems that establish the fundamental properties and relationships in the joint optimization problem.
These theorems cover a range of important aspects, including sensitivity analysis, sample complexity, convergence, the interplay between the outer and inner loops, and the regret bound analysis when employing Bayesian optimization for the outer loop. The insights gained from these theorems provide a foundation for the joint falsification and fidelity optimization framework.

## 2 Problem Formulation

Our objective is to efficiently combine falsification and fidelity optimization for safety-critical systems, aiming to minimize computational cost while preserving accuracy. We aim to identify environment configurations that violate safety specifications while simultaneously optimizing the fidelity settings to minimize discrepancies between low-fidelity and high-fidelity simulators. This joint optimization problem is formulated as a nested optimization framework with two components: an inner loop and an outer loop optimization [5].

### Inner Loop Optimization (Falsification)

The inner loop optimization aims to identify environment configurations that minimize the robustness value of a given safety specification \(\varphi\) under a specific fidelity setting \(f\). The simulator operates within a given environment configuration \(e\in\mathcal{E}\); it takes a configuration \(e\) as input and produces a finite-horizon trajectory denoted by \(\xi\). If a trajectory satisfies the safety specification, the robustness function \(\rho_{\varphi}\) evaluates to a positive value; otherwise, it returns a negative value. As a result, the falsification problem can be formulated as the following optimization problem:

\[e^{*}(f)=\operatorname*{argmin}_{e\in\mathcal{E}}\,\rho_{\varphi}(e;f) \tag{1}\]

The goal is to search for an environment configuration \(e^{*}(f)\) that minimizes the robustness value within the considered environment space \(\mathcal{E}\), given the fidelity setting \(f\).

### Outer Loop Optimization (Fidelity Setting Optimization)

The outer loop optimization focuses on finding the optimal fidelity settings \(f^{*}\) that minimize discrepancies between high-fidelity and low-fidelity simulators across a variety of tasks while considering the environment configurations obtained from the inner loop optimization. To efficiently explore the search space, we sample from the space of tasks and their parameters. Let \(T\) denote the number of sampled tasks, and \(M_{i}\) represent the number of sampled parameter configurations for each task. For each task \(t_{i}\), we have access to a high-fidelity simulator generating ground-truth trajectories, denoted as \(\xi_{i}^{H}\left(t_{i};p_{ij}\right)\). Additionally, a low-fidelity simulator produces approximate trajectories, represented by \(\xi_{i}^{L}\left(t_{i};p_{ij},f\right)\). The outer loop optimization problem can be formulated as:

\[f^{*}=\arg\min_{f\in\mathcal{F}}\sum_{i=1}^{T}\sum_{j=1}^{M_{i}}\ell\left(\xi_{i}^{H}\left(t_{i};p_{ij}\right),\xi_{i}^{L}\left(t_{i};p_{ij},f\right)\right), \tag{2}\]

where \(\mathcal{F}\) represents the set of possible simulator fidelity settings, \(t_{i}\) is the \(i\)th sampled task, and \(p_{ij}\) is the \(j\)th sampled parameter configuration for task \(i\). The optimization objective is to minimize the discrepancy, measured by the loss function \(\ell\), between the high-fidelity simulator trajectories \(\xi_{i}^{H}\) and the low-fidelity simulator trajectories \(\xi_{i}^{L}\) for the sampled tasks and parameter configurations. The loss function \(\ell(\cdot,\cdot)\) measures discrepancies between the high-fidelity and low-fidelity simulators.
One option for the loss function is the mean squared error (MSE) between the two sets of trajectories over a fixed time interval:

\[\ell(\xi^{H},\xi^{L})=\frac{1}{N}\int_{0}^{N}\left|\xi^{H}(t;p)-\xi^{L}(t;p,f)\right|^{2}dt \tag{3}\]

where \(N\) is the length of the time interval over which the MSE is computed.

## 3 Theoretical Insights and Results

After establishing the problem formulation for the joint optimization of falsification and fidelity settings, we now delve deeper into the theoretical results that guide our approach. In this section, we present a series of theorems that offer insights into the joint optimization framework, providing an understanding of the interplay between the inner and outer loop optimizations, sensitivity analysis of counterexamples, sample complexity, and convergence properties.

### Lipschitz continuity of inner and outer loop objectives

In this section, we investigate the Lipschitz continuity of the inner and outer loop objectives. Lipschitz continuity is a crucial property that guarantees the stability of an optimization algorithm and enables us to derive convergence guarantees. We begin by introducing a theorem that establishes Lipschitz continuity for both inner and outer loop objectives under certain conditions.

**Theorem 1.**_Let \(\rho_{\varphi}(e;f)\) and \(\ell(\xi^{H},\xi^{L})\) be the inner and outer loop objective functions, respectively. Then, under the smoothness assumptions discussed in the proof, there exist constants \(L_{\rho}>0\) and \(L_{\ell}>0\) such that_

\[|\rho_{\varphi}(e_{1};f)-\rho_{\varphi}(e_{2};f)|\leq L_{\rho}\|e_{1}-e_{2}\|, \tag{4}\]

\[|\ell(\xi_{1}^{H},\xi_{1}^{L})-\ell(\xi_{2}^{H},\xi_{2}^{L})|\leq L_{\ell}(\|\xi_{1}^{H}-\xi_{2}^{H}\|+\|\xi_{1}^{L}-\xi_{2}^{L}\|), \tag{5}\]

_for any \(e_{1},e_{2}\in\mathcal{E}\), \(f\in\mathcal{F}\), and \(\xi_{1}^{H},\xi_{2}^{H},\xi_{1}^{L},\xi_{2}^{L}\in\mathcal{X}\), where \(\mathcal{X}\) represents the space of all possible trajectories generated by the high-fidelity and low-fidelity simulators._

Proof. The proof of Theorem 1 follows from the definitions of Lipschitz continuity and the properties of the inner and outer loop objective functions. We need to show that the conditions stated in the theorem hold for the given objective functions. For the inner loop objective function \(\rho_{\varphi}(e;f)\), we assume that it is Lipschitz continuous with respect to the environment configurations \(e\). This property can be established by showing that the specification robustness value changes smoothly with respect to changes in the environment configurations, given a fixed fidelity setting \(f\). This assumption is typically valid when the system behavior is continuous with respect to the environment configurations. Similarly, for the outer loop objective function \(\ell(\xi^{H},\xi^{L})\), we assume that it is Lipschitz continuous with respect to the trajectories \(\xi^{H}\) and \(\xi^{L}\). The Lipschitz continuity of \(\ell\) implies that the discrepancy measure changes smoothly with respect to the trajectories obtained from high-fidelity and low-fidelity simulators. The basis for this property lies in the smooth dynamics of the simulator and the continuous dependency of the discrepancy measure on the trajectories. Assuming the Lipschitz continuity of both inner and outer loop objective functions, we can establish Theorem 1. \(\square\)

The Lipschitz continuity of the inner and outer loop objectives, as established in Theorem 1, has significant implications for the convergence properties of the joint optimization algorithm.
In particular, it enables us to derive convergence guarantees for both the inner and outer loop optimization problems, which we will explore in the following sections.

### Convergence of Joint Optimization

Now we study the convergence properties of the joint optimization problem for falsification and fidelity optimization. We present a theorem that shows the convergence of the joint optimization problem under specific conditions, leveraging the Lipschitz continuity properties from Theorem 1.

**Theorem 2.**_Suppose that the inner and outer loop objectives are Lipschitz continuous with constants \(L_{\rho}\) and \(L_{\ell}\), respectively, as stated in Theorem 1. Under suitable conditions on the optimization algorithm, the joint optimization problem converges to an optimal solution._

Proof. The proof of Theorem 2 relies on the properties of the optimization algorithm and the Lipschitz continuity of the inner and outer loop objectives. For the inner loop optimization problem, we assume that the optimization algorithm converges to a stationary point under suitable conditions. This is a standard assumption for many optimization algorithms, such as gradient-based methods, when applied to Lipschitz continuous objective functions. Since the inner loop objective function \(\rho_{\varphi}(e;f)\) is Lipschitz continuous, the convergence of the inner loop optimization can be guaranteed under suitable conditions. In a similar vein, for the outer loop optimization problem, we assume that the optimization algorithm converges to a stationary point under standard conditions. The Lipschitz continuity of the outer loop objective function \(\ell(\xi^{H},\xi^{L})\) ensures that the optimization algorithm converges when applied to this objective function. By combining the convergence properties of the inner and outer loop optimization problems, we can establish the convergence of the joint optimization problem to an optimal solution. \(\square\)

Theorem 2 provides a convergence guarantee for the joint optimization problem, which is essential for the practical application of the proposed joint optimization framework. The convergence properties ensure that the algorithm will find an optimal solution, given that the optimization algorithm and the objective functions satisfy the required conditions.

### Interplay between Inner and Outer Loop Optimization

The joint optimization framework for falsification and fidelity optimization involves a nested structure, with an inner loop optimization focused on finding counterexamples and an outer loop optimization aiming to identify optimal fidelity settings. In this section, we discuss the interplay between these two optimization problems and the implications for the design and analysis of joint optimization algorithms. The nested optimization dynamics of the inner and outer loop problems are intrinsically linked due to their shared dependence on environment configurations and fidelity settings. The outer loop relies on the counterexamples generated by the inner loop to evaluate the performance of different fidelity settings, as shown in the objective function. In turn, the fidelity settings chosen by the outer loop influence the search space and complexity of the inner loop optimization, as reflected by the inner loop objective function \(\rho_{\varphi}(e;f)\). As a result, the interplay between these two optimization problems creates a complex search process, where improvements in one loop can potentially impact the performance of the other.
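As a concrete, deliberately simplified illustration of this nested structure, the sketch below pairs a random-search inner falsification loop with a random-search outer loop over fidelity settings. The callables `rho`, `disc`, `sample_env`, and `sample_fid` are hypothetical placeholders for the robustness function, the discrepancy loss, and the two samplers, and the task/parameter sum of Eq. (2) is collapsed to a single evaluation for brevity; none of these choices are prescribed by the paper.

```python
import numpy as np

def joint_optimize(rho, disc, sample_env, sample_fid,
                   n_outer=20, n_inner=200, seed=0):
    """Sketch of the nested loops: the inner loop falsifies (minimizes the
    robustness rho(e; f) over environment configurations e), and the outer
    loop scores each fidelity setting f by the discrepancy disc(f, e*) at
    the counterexample it produced, keeping the best f seen so far."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.inf)          # (f, e*, outer objective)
    for _ in range(n_outer):
        f = sample_fid(rng)              # candidate fidelity setting
        envs = [sample_env(rng) for _ in range(n_inner)]
        e_star = min(envs, key=lambda e: rho(e, f))   # inner loop, Eq. (1)
        score = disc(f, e_star)          # proxy for the loss in Eq. (2)
        if score < best[2]:
            best = (f, e_star, score)
    return best
```

Replacing the outer random search with a model-based optimizer (e.g., the GP-UCB procedure analyzed later) is the natural refinement of this skeleton.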
The joint optimization framework must balance the need for exploration and exploitation in both the inner and outer loop optimization problems. In the inner loop, exploration involves searching for new environment configurations that can potentially lead to counterexamples, while exploitation focuses on refining the current counterexamples to maximize their impact on the outer loop optimization. Similarly, in the outer loop, exploration entails experimenting with different fidelity settings to identify promising configurations, whereas exploitation aims to fine-tune the fidelity settings to minimize the discrepancy between high-fidelity and low-fidelity simulations, as measured by the loss function \(\ell\). The interplay between the inner and outer loop optimization problems also enables the development of _adaptive fidelity management_ strategies. By monitoring the progress of the inner loop optimization and the quality of the generated counterexamples, the outer loop can adaptively adjust the fidelity settings to focus on regions of the search space where the discrepancies between high-fidelity and low-fidelity simulations are the most significant. This adaptive fidelity management can lead to more efficient joint optimization algorithms that _dynamically_ allocate computational resources to the most critical aspects of the problem. Understanding the interplay between the inner and outer loop optimization problems is crucial for the design and analysis of joint optimization algorithms for falsification and fidelity optimization. Leveraging the insights gained from the interplay between the inner and outer loop objectives, such as the Lipschitz continuity established in Theorem 1 and the convergence properties from Theorem 2, enables the development of algorithms that effectively balance exploration and exploitation. These algorithms can adaptively manage fidelity settings, leading to more efficient and effective solutions for safety-critical systems. The relationship between the quality of counterexamples and fidelity settings, as analyzed in Theorem 3, along with the relationship between fidelity settings and counterexample quality, as explored in Theorem 4, further enhances our understanding of the complex dynamics present in the joint optimization framework.

### Sensitivity analysis of counterexamples to fidelity settings

We analyze the relationship between the quality of counterexamples and fidelity settings in the context of the joint optimization framework for falsification and fidelity optimization. The quality of a counterexample is typically characterized by the robustness of the system specification violation, as measured by the robustness value \(\rho_{\varphi}(e;f)\). We aim to understand how the choice of fidelity settings affects the quality of counterexamples generated by the inner loop optimization.

**Theorem 3.**_Given a set of fidelity settings \(f\in\mathcal{F}\) and environment configurations \(e\in\mathcal{E}\), there exists a constant \(C>0\) such that:_

\[|\rho_{\varphi}(e_{1};f_{1})-\rho_{\varphi}(e_{1};f_{2})|\leq C\|f_{1}-f_{2}\|, \tag{6}\]

_for any \(e_{1}\in\mathcal{E}\) and \(f_{1},f_{2}\in\mathcal{F}\)._

Proof. The proof of this theorem relies on the Lipschitz continuity of the inner loop objective function \(\rho_{\varphi}(e;f)\) with respect to the fidelity settings, a condition analogous to those established in Theorem 1.
Given this Lipschitz continuity property, the difference in robustness values between two fidelity settings \(f_{1}\) and \(f_{2}\) can be upper-bounded by a constant \(C\) times the distance between the fidelity settings in the fidelity space. This result highlights the sensitivity of counterexample quality to the choice of fidelity settings, which has important implications for the joint optimization process. Let us denote the Lipschitz constant of the inner loop objective function with respect to fidelity settings as \(L_{f}>0\). Then, according to the Lipschitz continuity of \(\rho_{\varphi}(e;f)\), we have:

\[|\rho_{\varphi}(e_{1};f_{1})-\rho_{\varphi}(e_{1};f_{2})|\leq L_{f}\|f_{1}-f_{2}\|, \tag{7}\]

for any \(e_{1}\in\mathcal{E}\) and \(f_{1},f_{2}\in\mathcal{F}\). This inequality establishes an upper bound on the difference in robustness values for a fixed environment configuration \(e_{1}\) and two different fidelity settings \(f_{1}\) and \(f_{2}\). Now, let \(C=L_{f}\), where \(C>0\). Then, we can rewrite the inequality as:

\[|\rho_{\varphi}(e_{1};f_{1})-\rho_{\varphi}(e_{1};f_{2})|\leq C\|f_{1}-f_{2}\|, \tag{8}\]

for any \(e_{1}\in\mathcal{E}\) and \(f_{1},f_{2}\in\mathcal{F}\). This completes the proof of Theorem 3. \(\square\)

This theorem highlights the sensitivity of counterexample quality to the choice of fidelity settings by providing an upper bound on the difference in robustness values for different fidelity settings.

### Sensitivity analysis of fidelity settings to counterexamples

This section focuses on understanding the sensitivity of the counterexamples obtained by the inner loop optimization to changes in the fidelity settings. This sensitivity analysis will provide insights into how the joint optimization process is affected by the fidelity settings and the trade-offs between fidelity and counterexample quality.

**Theorem 4.**_Given the Lipschitz properties of the inner loop objective function \(\rho_{\varphi}(e;f)\) and the outer loop objective function \(\ell\), the sensitivity of the counterexamples obtained by the inner loop optimization to changes in the fidelity settings can be characterized by the sensitivity function \(S(f)\)._

Let \(S(f)\) be a sensitivity function defined as:

\[S(f)=\frac{\partial\rho_{\varphi}(e^{*}(f);f)}{\partial f}, \tag{9}\]

where \(e^{*}(f)\) represents the optimal environment configuration obtained by the inner loop optimization for a given fidelity setting \(f\). The sensitivity function \(S(f)\) quantifies the rate of change of the robustness value with respect to the fidelity settings. A high sensitivity indicates that the quality of counterexamples is significantly affected by changes in fidelity settings, whereas a low sensitivity implies that the counterexamples are relatively insensitive to such changes. To understand the relationship between the sensitivity function and the optimization process, we can analyze the gradient of the outer loop objective function with respect to the fidelity settings:

\[\frac{\partial\ell}{\partial f}=\sum_{i=1}^{T}\sum_{j=1}^{M_{i}}\frac{\partial\ell\left(\xi_{i}^{H}\left(t_{i};p_{ij}\right),\xi_{i}^{L}\left(t_{i};p_{ij},f\right)\right)}{\partial f}. \tag{10}\]

Studying the gradient of the outer loop objective function and its relation to the sensitivity function \(S(f)\) provides insights into the influence of changes in fidelity settings on the optimization process and the quality of counterexamples produced by the inner loop optimization.
This information can be useful for understanding the trade-offs between fidelity and counterexample quality in the joint optimization framework.

### Sample Complexity of Joint Optimization

In this section, we analyze the sample complexity of the joint optimization problem, focusing on the relationship between the number of samples and the convergence properties of the optimization process. Sample complexity is an important consideration in optimization problems, as it quantifies the number of samples required to achieve a desired level of accuracy or convergence.

**Theorem 5.**_Given the strong convexity and Lipschitz continuity properties of the inner and outer loop optimization problems, the total number of samples \(N\) required for the joint optimization problem can be expressed as a function of the number of iterations in both loops and the number of samples per iteration: \(N=nK_{1}K_{2}\)._

The number of iterations required for the inner and outer loop optimization problems to converge depends on the strong convexity and Lipschitz continuity properties of the objective functions. However, we cannot directly derive a closed-form expression for \(K_{1}\) and \(K_{2}\) based on these properties. To determine the total number of samples required, we consider the number of iterations in both the inner and outer loop optimization problems. Suppose the inner loop optimization takes \(K_{1}\) iterations to converge, and the outer loop optimization takes \(K_{2}\) iterations to converge. Then, the total number of iterations, \(K\), is the product of the iterations in both loops: \(K=K_{1}K_{2}\). Now, if we assume that the number of samples per iteration is constant and equal to \(n\), then the total number of samples required, \(N\), can be expressed as a function of the total number of iterations, \(K\). We have: \(N=nK=nK_{1}K_{2}\). Now we apply concentration inequalities to bound the deviation between the true objective function and its empirical estimate. For simplicity, we will assume that both the inner and outer loop optimization problems have finite domains, and their objective functions are Lipschitz continuous. Let \(\hat{\rho}_{\varphi}(e;f)\) and \(\hat{\ell}\) be the empirical estimates of the inner loop objective function and the outer loop objective function, respectively, computed using \(n\) samples. By Lipschitz continuity, we have:

\[|\rho_{\varphi}(e;f)-\hat{\rho}_{\varphi}(e;f)|\leq L_{\rho}\|e-\hat{e}\| \tag{11}\]

\[|\ell\left(\xi^{H},\xi^{L}\right)-\hat{\ell}\left(\xi^{H},\xi^{L}\right)|\leq L_{\ell}\|\xi^{H}-\hat{\xi}^{H}\|+L_{\ell}\|\xi^{L}-\hat{\xi}^{L}\| \tag{12}\]

Applying Hoeffding's inequality, we can bound the probability that the deviation between the true objective function and its empirical estimate is larger than a given threshold. Specifically, we can show that:

\[\mathbb{P}\left(|\rho_{\varphi}(e;f)-\hat{\rho}_{\varphi}(e;f)|>\epsilon\right)\leq 2\exp\left(-\frac{n\epsilon^{2}}{2L_{\rho}^{2}}\right) \tag{13}\]

\[\mathbb{P}\left(\left|\ell\left(\xi^{H},\xi^{L}\right)-\hat{\ell}\left(\xi^{H},\xi^{L}\right)\right|>\epsilon\right)\leq 2\exp\left(-\frac{n\epsilon^{2}}{2L_{\ell}^{2}}\right) \tag{14}\]

To achieve an \(\epsilon\)-approximate solution with probability at least \(1-\delta\), we can set the right-hand side of these inequalities to be less than or equal to \(\delta\) and solve for \(n\).
This gives us:

\[n\geq\frac{2L_{\rho}^{2}}{\epsilon^{2}}\log\left(\frac{2}{\delta}\right)\quad\text{ and }\quad n\geq\frac{2L_{\ell}^{2}}{\epsilon^{2}}\log\left(\frac{2}{\delta}\right) \tag{15}\]

These bounds can be used to inform the choice of the number of samples per iteration, \(n\). However, we cannot directly derive a closed-form expression for the total number of samples from these bounds. Instead, we can use these bounds as guidelines to choose the number of samples per iteration, and then use the relationship \(N=nK_{1}K_{2}\) to compute the number of samples required for joint optimization.

## 4 Regret Bounds Analysis for Bayesian Optimization

Building upon the theoretical foundations discussed previously, we will now further explore the performance of our approach, with a particular emphasis on using Bayesian optimization for the outer loop optimization problem. The selection of an optimization algorithm can greatly impact the efficiency of the proposed joint optimization framework [7, 14]. Bayesian optimization's use of a probabilistic model to estimate the objective function and an acquisition function to guide the search makes it particularly effective when dealing with costly or noisy evaluations. This has led to its successful application in various domains, including hyperparameter tuning in machine learning [17], design optimization in engineering [3], and decision-making under uncertainty [2].

**Theorem 6.**_When using Bayesian optimization with the GP-UCB acquisition function for the outer loop optimization (fidelity settings optimization), the optimization process converges to the optimal fidelity settings \(f^{*}\) with high probability, and the cumulative regret after \(T\) iterations is bounded by \(\mathcal{O}(\sqrt{T})\)._

Proof. We begin by stating the GP-UCB acquisition function as follows:

\[\alpha_{t}(f)=\mu_{t}(f)+\sqrt{\beta_{t}}\sigma_{t}(f) \tag{16}\]

where \(\mu_{t}(f)\) and \(\sigma_{t}^{2}(f)\) are the posterior mean and variance of the Gaussian process at fidelity settings \(f\) after \(t\) iterations, and \(\beta_{t}\) is the exploration parameter. We define the instantaneous regret at iteration \(t\) as the difference between the optimal objective function value and the value obtained at the chosen fidelity settings:

\[r_{t}=\ell\left(f^{*}\right)-\ell\left(f_{t}\right) \tag{17}\]

where \(f^{*}\) is the optimal fidelity setting and \(f_{t}\) is the fidelity setting chosen by Bayesian optimization at iteration \(t\). The cumulative regret after \(T\) iterations is given by \(R_{T}=\sum_{t=1}^{T}r_{t}\). To bound the cumulative regret, we use the following inequality based on the GP-UCB acquisition function:

\[r_{t}\leq\sqrt{\beta_{t}}\sigma_{t}\left(f_{t}\right)+\frac{1}{2}\left(\mu_{t}\left(f^{*}\right)-\mu_{t}\left(f_{t}\right)\right) \tag{18}\]

This inequality follows from the fact that the GP-UCB acquisition function balances exploration and exploitation. By summing both sides of this inequality over \(t=1,\ldots,T\), we obtain a bound on the cumulative regret:

\[R_{T}\leq\sum_{t=1}^{T}\left(\sqrt{\beta_{t}}\sigma_{t}\left(f_{t}\right)+\frac{1}{2}\left(\mu_{t}\left(f^{*}\right)-\mu_{t}\left(f_{t}\right)\right)\right) \tag{19}\]

Now, we use the following properties of Gaussian processes:

1. The posterior variance of the Gaussian process at the optimal fidelity settings \(f^{*}\) decreases monotonically with the number of iterations: \(\sigma_{t+1}(f^{*})\leq\sigma_{t}(f^{*})\).
2. The posterior mean of the Gaussian process converges to the true objective function value at the optimal fidelity settings: \(\lim_{t\rightarrow\infty}\mu_{t}\left(f^{*}\right)=\ell\left(f^{*}\right)\).

Using these properties, we can show that \(\sum_{t=1}^{T}\left(\sqrt{\beta_{t}}\sigma_{t}(f_{t})+\frac{1}{2}\left(\mu_{t}(f^{*})-\mu_{t}(f_{t})\right)\right)\) converges to a finite value as \(T\rightarrow\infty\). Specifically, we can upper-bound the sum by \(\mathcal{O}(\sqrt{T})\). This implies that the cumulative regret is bounded by:

\[R_{T}\leq\mathcal{O}(\sqrt{T}) \tag{20}\]

This result shows that, with high probability, the Bayesian optimization process converges to the optimal fidelity settings \(f^{*}\), and the cumulative regret is bounded by \(\mathcal{O}(\sqrt{T})\) after \(T\) iterations.

## 5 Additional Insights

In this section, we will delve further into the insights we have gained from our nested optimization framework. Specifically, we will explore three key areas: adaptive fidelity management, stability analysis, and robustness analysis. Together with the theorems we have discussed, these insights help us better comprehend the intricate dynamics at play within our joint optimization framework.

### Adaptive Fidelity Management

One important feature of our joint optimization approach is its ability to dynamically adjust fidelity settings. During the optimization process, our algorithm adapts the fidelity based on information from both the inner and outer loop optimizations. This flexibility helps the algorithm balance between exploring new options and making the most of known options, all while keeping computational costs low. In practice, this means the algorithm focuses on parts of the search space that seem promising or uncertain.

### Stability Analysis

Stability analysis offers further insight into the convergence behavior of our joint optimization framework. The insights from Theorem 1 and Theorem 2, which deal with Lipschitz continuity and convergence properties, help us understand the stability of our proposed approach. With these insights, we can study the stability of both the inner and outer loop optimization processes under different conditions, such as changes in fidelity settings and different environment configurations. In the end, this analysis helps us create algorithms that are more robust against uncertainties.

### Robustness Analysis

Robustness analysis is about evaluating how our joint optimization framework performs when faced with varying levels of uncertainty and environmental noise, both in terms of configurations and simulator dynamics. By studying how our framework behaves under these conditions, we can pinpoint potential vulnerabilities and bolster its robustness. To carry out this analysis, we assess the impact of noise and uncertainty on the performance of both the inner and outer loop optimization processes. This might involve deriving robustness bounds or establishing worst-case performance guarantees, as well as exploring how fidelity settings affect the sensitivity of the optimization process to noise and uncertainty. Through comprehensive robustness analysis, we can build confidence that our proposed approach is well-equipped to handle uncertainties in environment configurations and simulator dynamics.

## 6 Conclusions

We presented a mathematical formulation for joint falsification and fidelity setting optimization, which addresses the challenge of efficiently validating the safety of autonomous systems.
The proposed framework brings together the two critical aspects of the problem, namely, the falsification of learning-enabled systems and the optimization of simulator fidelity settings. Our approach enables a more efficient exploration strategy for searching potential failure scenarios by focusing on the most critical environmental configurations that challenge the control algorithms. We have derived a set of six key theorems to establish the fundamental properties and relationships in the joint optimization problem. These theorems encompass a range of important aspects, including sensitivity analysis, sample complexity, convergence, the interplay between the outer and inner loops, and the regret bound analysis when employing Bayesian optimization. The insights gained from these theorems provide a foundation for the development of efficient algorithms in this domain. As a future direction, we aim to conduct extensive empirical evaluations of our approach on various autonomous systems to demonstrate its practical applicability and effectiveness in improving safety validation.
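Although the paper stops short of an implementation, the GP-UCB outer loop analyzed in Theorem 6 is straightforward to prototype. The sketch below uses scikit-learn's Gaussian process regressor with the acquisition written as a lower confidence bound, since the outer objective is minimized; the scalar fidelity parameterization, the RBF kernel, and the fixed \(\beta\) are our assumptions, not choices made in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ucb_outer(objective, candidates, n_iter=25, beta=2.0, seed=0):
    """Sketch of the GP-UCB outer loop: fit a GP to observed (f, loss)
    pairs and query the candidate minimizing mu - sqrt(beta) * sigma."""
    rng = np.random.default_rng(seed)
    X = [candidates[rng.integers(len(candidates))]]  # random initial point
    y = [objective(X[0])]
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                  normalize_y=True)
    for _ in range(n_iter - 1):
        gp.fit(np.asarray(X).reshape(-1, 1), y)      # assumes scalar f
        mu, sigma = gp.predict(np.asarray(candidates).reshape(-1, 1),
                               return_std=True)
        f_next = candidates[int(np.argmin(mu - np.sqrt(beta) * sigma))]
        X.append(f_next)
        y.append(objective(f_next))
    k = int(np.argmin(y))
    return X[k], y[k]                                # best fidelity, loss
```

In the joint framework, `objective` would wrap the inner falsification loop, so each outer query triggers a counterexample search at the queried fidelity setting.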
2305.09222
Touch Sensing on Semi-Elastic Textiles with Border-Based Sensors
This study presents a novel approach for touch sensing using semi-elastic textile surfaces that does not require the placement of additional sensors in the sensing area, instead relying on sensors located on the border of the textile. The proposed approach is demonstrated through experiments involving an elastic Jersey fabric and a variety of machine-learning models. The performance of one particular border-based sensor design is evaluated in depth. By using visual markers, the best-performing visual sensor arrangement predicts a single touch point with a mean squared error of 1.36 mm on an area of 125mm by 125mm. We built a textile only prototype that is able to classify touch at three indent levels (0, 15, and 20 mm) with an accuracy of 82.85%. Our results suggest that this approach has potential applications in wearable technology and smart textiles, making it a promising avenue for further exploration in these fields.
Samuel Zühlke, Andreas Stöckl, David C. Schedl
2023-05-16T06:58:11Z
http://arxiv.org/abs/2305.09222v2
# Touch Sensing on Semi-Elastic Textiles with Border-Based Sensors

###### Abstract

This study presents a novel approach for touch sensing using semi-elastic textile surfaces that does not require the placement of additional sensors in the sensing area, instead relying on sensors located on the border of the textile. The proposed approach is demonstrated through experiments involving an elastic Jersey fabric and a variety of machine-learning models. The performance of one particular border-based sensor design is evaluated in depth. By using visual markers, the best-performing visual sensor arrangement predicts a single touch point with a mean squared error of 1.36 mm on an area of 125 mm by 125 mm. We built a textile-only prototype that is able to classify touch at three indent levels (0, 15, and 20 mm) with an accuracy of 82.85%. Our results suggest that this approach has potential applications in wearable technology and smart textiles, making it a promising avenue for further exploration in these fields.

Keywords: Textile Sensor, Touch Interaction, Machine Learning, Smart Textiles and Applications, Technical Textiles

## Introduction

The field of wearable technology and smart textiles has seen rapid growth and development in recent years. A key trend in this field is the use of flexible and tangible surfaces to facilitate user interactions. Traditionally, sensors such as capacitive or resistive sensors are directly placed on the sensing area to detect touch inputs [10]. For instance, capacitive textile touch sensors rely on a 2D matrix of wires to detect touch, which can disrupt the texture and surface structure and potentially alter the behaviour of the textile [1]. Such alterations can compromise the functional qualities, structural integrity, and aesthetics of the textile, limiting the scope of its applications. To address this issue, we propose a novel approach for touch sensing on semi-elastic textile surfaces without the need to alter or place additional sensors in the sensing area. Our approach involves placing sensors on the border of the textile (cf. Figure 1(a)), leaving the interaction area completely unaltered and free of sensors. The sensors on the border detect stretching caused by interactions in the sensing area, which is measured and used for classification using machine-learning algorithms. Our approach eliminates the need for resistive or capacitive measurements on the textile surface within the touch area, preserving its original texture and surface structure. Furthermore, it allows for a wide range of applications in wearable technology and smart textiles, providing a seamless and unobtrusive way for user interaction. Our research aims to explore the technical challenges involved in developing this border-based approach and evaluate its performance, as well as investigate potential applications and limitations.

## Related Work

In recent years, there has been significant progress in the development of tactile sensing systems in multiple fields (Chi et al., 2018; Pyo et al., 2021). For example, image sensors have been used to track visual markers within soft synthetic tissue for robotic grip detection. Together with techniques such as Voronoi segmentation and artificial intelligence, they have been used to improve tactile sensing (Cramphorn et al., 2018; Shimonomura, 2019; Yuan et al., 2017). In the field of robotics and damage detection, electrical resistance tomography (ERT) is used.
ERT-based tactile sensors with distributed electrodes can be used in robotic skin to conform to a curved surface (Lee et al., 2021; Park et al., 2020). Similar studies have explored the potential of electrical impedance tomography (EIT) as a method for soft and stretchable sensor applications, structural damage localisation in composite parts, low-cost and large-area touch sensing using conductive fabric, and its application as a robotic skin (Baltopoulos et al., 2013; Duan et al., 2019; Russo et al., 2017; Silvera-Tawil et al., 2015). In fabric sensing, new algorithms are used to improve the touch localisation accuracy of knitted or embroidered capacitive and resistive touch sensing systems, whereas textile mutual-capacitive sensors using resistive and capacitive yarn achieve continuous input of up to three degrees of freedom (Aigner et al., 2021, 2022; Hamdan et al., 2018; Parzer et al., 2018; Pointner et al., 2020, 2022; Vallett et al., 2020). Utilising the shapeable nature of fabrics, deformable displays together with user-defined gestures have been proposed, and intelligent robotic manipulation, sensing principles, typical designs, common issues, and applications have been explored (Bacim et al., 2012; Mlakar et al., 2021; Tegin et al., 2005; Troiano et al., 2014).

Figure 1: Overview of our border-based sensor prototype: A Jersey textile is stretched over a frame and 12 stretch-sensitive patches (a single patch is highlighted in red) are mounted on its border around the touch area (highlighted in yellow) (a). Interactions, like finger presses, lead to a 3D deformation of the fabric as illustrated in (b). By measuring tension at the borders of the fabric we can reconstruct touch points. Different positions and different touch depths lead to varying strain on the border of the sensor, indicated by the brightness of the patch (c-e).

Overall, a wide range of sensing modalities and technologies can be employed for tactile sensing. While border-based measurement has been explored together with ERT and EIT, and the deformability or stretchability of fabrics has been utilised previously, we believe we are among the first to combine the two paradigms, pairing border-based sensing on non-resistive and non-capacitive textiles with artificial-intelligence techniques for accurate and comprehensive tactile sensing.

## Results

To implement and validate the proposed border-based approach for touch sensing on semi-elastic textile surfaces, we designed a comprehensive experimental setup. First, to demonstrate the working principle of our sensor, we used a vision-based approach and simulations. We painted a 7 by 7 grid of highly reflective points on the surface of the textile to track its movement and enable reliable and semi-automatic data collection and labelling. In later prototypes, the density of points was increased, and a grid of 14 by 14 points was used. Additionally, we utilised a customised CNC milling machine to create indentations at touch points with predefined depths and random locations on the 80 by 80 mm sensor area. The initial step in our study involved an analysis of motion capture data gathered from the fabric, which we utilised to construct a preliminary digital model. We observed that the movement of the fabric closely resembled a linear surface in three dimensions, leading us to develop a mathematical model that could simulate measurements.
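The paper does not give the functional form of this preliminary model; one plausible reading of a "linear surface in three dimensions" is a piecewise-linear tent pinned at the clamped border, sketched below purely as our illustration (the tent-radius heuristic is likewise our own, not the authors').

```python
import numpy as np

def tent_surface(points_xy, touch_xy, depth):
    """Hypothetical piecewise-linear ('tent') deformation on the unit
    square: displacement decays linearly from the indent depth at the
    touch point to zero at a radius set by the nearest clamped border.
    This is an illustrative guess at the model form, not reported code."""
    d = np.linalg.norm(points_xy - np.asarray(touch_xy), axis=1)
    r = min(touch_xy[0], 1.0 - touch_xy[0],
            touch_xy[1], 1.0 - touch_xy[1])      # heuristic tent radius
    r = max(r, 1e-6)                             # guard touches at the rim
    return -depth * np.clip(1.0 - d / r, 0.0, None)
```

Feeding such a model the tracked marker grid yields synthetic stretch values that can stand in for measurements, which is how a simulated sensor is used in the comparisons that follow.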
We compared our simulated points (from our linear model) to tracked real-world data points over four experiment runs with a total of 2000 frames and found that the overall error across the entire surface is 1.7%, measured as root mean squared (RMS) error. The error between simulation and measurements varies from 0.5% to 2% across the surface, as displayed in Figure 2(a).

### Single Indent Localisation

The recorded real-world data together with simulations were the basis for an assessment of different border-based sensor arrangements for single-indent localisation (as required for single-touch interactions). Therefore, we reconstructed the location of indentations on the fabric by measuring only the stretch between two surface points (i.e., the sensor). The optimal number of sensors and their placement on the surface was the subject of subsequent experiments. To determine the most suitable sensor configuration, several manually defined arrangements were tested, as illustrated in Figure 2(c). The resulting stretch values for each sensor, for both the physical and mathematical models, were then used as input features for machine learning to reconstruct the indent location. To assess performance, elementary machine-learning models such as random forest, linear regression, and polynomial regression were employed. The data was split randomly into training and test sets at a ratio of 66.6 to 33.3. The graph in Figure 2(b) shows the performance of several configurations with a random forest model for simulated and real-world measurements in mean absolute error (MAE). The experimental results allowed us to assess varying sensor configurations and machine-learning models for accurately reconstructing touch inputs from border-based sensors. Note that the high MAE visible for the mathematical model with four sensors in Figure 2(b) is due to the utilised sensor arrangement: the sensors are arranged in a cross in the centre of the area, where the average RMS distance for the mathematical model is the highest, as can be seen in Figure 2(a). Due to the concentration of sensors in one area, coinciding with the area of the greatest distance between visual and mathematical data, the mathematical model performs worse in this sole instance. While arrangements with four or fewer sensors were found to be imprecise, regardless of their placement, the use of six or more sensors yielded better results. A total of 26 different arrangements, ranging from 3 to 36 sensors, were tested and evaluated. In terms of practical production considerations, using fewer sensors minimises the disruption to the fabric structure, and therefore, we selected the best-performing 12-sensor configuration (mean squared error of 1.36 mm) for our textile-sensor prototype.

### Textile-Sensor Prototype

Based on the findings from the previous experiments (i.e., simulations and optical measurements), we mounted 12 textile sensors on the border of our prototype (cf. Figure 1(a) and Figure 3(a)). The sensors are rectangular patches of conductive fabric that change resistance when stretched. The change in resistance during experiments (random touch points on the sensor area) was recorded and used as input into a machine-learning model. Several models were evaluated, and the best-working model for the textile, mathematical, and visual implementations is a simple multi-layer perceptron (MLP).
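A model of the kind selected here (detailed in the Method section below as two 64-unit ReLU hidden layers feeding a 4-unit softmax output) can be sketched in a few lines of Keras. The 9-dimensional input, reflecting the 12 border sensors minus the 3 defective ones, as well as the optimizer and loss, are our assumptions rather than reported settings.

```python
import tensorflow as tf

# Illustrative reconstruction of the described MLP; input width assumed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(9,)),                      # 9 working sensors
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # indent-level classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# e.g. model.fit(train_x, train_y, validation_data=(val_x, val_y))
```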
Figure 2: Comparison of the mathematical model to real-world measurements: The RMS error of the mathematical surface model in comparison to the measured one across the sensor area (a). Note that the white pixels indicate missing optical tracking information. The mean absolute error from the random forest sensor-pattern evaluation, in relation to the entire touch area, is presented in (b). Various sensor configurations are evaluated, ranging from 3 to 36 sensors in 26 different arrangements (6 are shown) (c).

The training was performed with a set of 997 touch points and the model was evaluated on 499 test points. The accuracy of touch classification at three different indent levels (0, 15, and 20 mm) from any given touch event using sensor data is 82.85%. In comparison, the simulated sensor achieves a validation accuracy of 91.17% for classifying the indentation depth (0, 15, or 20 mm).

## Method

In developing our sensor prototype, we selected a knit Jersey fabric as the primary material, which was stretched over a rectangular frame with a sensing area of 125 by 125 mm. The textile sensors themselves were created from rectangular pieces of EeonTex(tm) Conductive Stretchable Fabric, which were cut and sewn onto the Jersey using highly conductive, polyamide silver-plated yarn from Madeira. To facilitate repeated and precise touch events, an industrial embroidery frame was utilised, which enabled the prototype to be mounted onto a customised CNC milling machine with a stylus-based touch-point attachment for testing purposes. All sensors share a common ground and are individually connected to a measurement unit. To extract measurement data, the hardware prototyping kit CY8CPROTO-063-BLE from Infineon is used, with the data subsequently recorded in a CSV file for further analysis. This setup allowed us to measure and validate the performance of our proposed approach for touch sensing on semi-elastic textile surfaces. For tracking the surface, a set of six motion-capture cameras was used to capture the textile behaviour in three dimensions, allowing us to measure the stretching of the sensor's border caused by touch inputs in the sensing area at a rate of 100 frames per second. We used Flex-3 cameras from Optitrack and analysed the tracking data with Optitrack's Motive software in version 2.2.

Figure 3: The sensor arrangement in the sensor area is illustrated, along with the touch area and three indentations (a). Additionally, the time plots for sensor data from sensors 0, 5, and 6 over the duration of the indentations are displayed in (b). The performance of the MLP in predicting the indent levels, as compared to the actual indent levels of a test matrix, is presented in the form of a confusion matrix (c). The raw sensor data stream as it is received from the sensors is depicted in (d). Note that at the time of measurement 3 of the 12 sensors were defective and recorded noise data; thus, they were excluded from the experiments and graphs.

To enable the accurate representation of the fabric's movement during indentation and its behaviour in response to touch, we transformed the 3D coordinates into a scaled coordinate system, resulting in the rotated, scaled, and translated points being represented within a zero-to-one range on each axis. The position of the reflective points relative to the resting state enables the calculation of surface parameters and the amount of stretch between measured coordinates.
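The normalization and stretch computation just described can be summarized in a short sketch; representing each "sensor" as a pair of tracked markers and defining stretch as relative elongation is our reading of the procedure, not code released with the paper.

```python
import numpy as np

def normalize(points, rest):
    """Map tracked 3D points (Nx3) into the zero-to-one range per axis,
    using the resting-state extent as the reference frame."""
    lo, hi = rest.min(axis=0), rest.max(axis=0)
    return (points - lo) / (hi - lo)

def sensor_stretch(rest, deformed, pairs):
    """Relative elongation of each virtual sensor, modeled as the distance
    change between its two endpoint markers (rows of the Nx3 arrays);
    'pairs' lists one (i, j) marker-index pair per sensor."""
    out = []
    for i, j in pairs:
        d0 = np.linalg.norm(rest[i] - rest[j])
        d1 = np.linalg.norm(deformed[i] - deformed[j])
        out.append(d1 / d0 - 1.0)   # 0 => unstretched
    return np.array(out)
```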
Additionally, the CNC milling machine's head was also tracked optically for precise measurements in the same coordinate system as the surface. The stretching properties of both the physical fabric and a corresponding mathematical surface model were measured at various locations, including at potential sensor locations. An artificial sensor was placed between two selected points on a grid, and the degree of stretch was recorded. To determine the most suitable sensor configuration, several different arrangements were tested, as illustrated in Figure 2(c). The resulting stretch values for each sensor, for both the physical and mathematical models, were then used as features in a machine-learning approach. To assess the performance, elementary machine-learning models such as random forest, linear regression, and polynomial regression were employed. Alongside these, different standard models from TensorFlow (Martin Abadi et al., 2015), Scikit-learn (Pedregosa et al., 2011), and PyTorch (Paszke et al., 2019) were tested. Ultimately, a TensorFlow Keras Sequential model was chosen. The MLP model used in the code has two hidden layers, each with 64 neurons and a ReLU activation function, and an output layer with 4 neurons and a SoftMax activation function, resulting in a total of 132 neurons across these layers.

## Conclusion

We propose a border-based approach for touch sensing on semi-elastic textile surfaces. By placing sensors on the border of the textile, we leave the interaction area completely unaltered and free of sensors. The sensors on the border detect stretching caused by interactions in the sensing area, which is then measured and classified using machine-learning algorithms. In our experiments, we show that a simple linear surface model is precise enough for designing optimal sensor configurations and verify this experimentally with simulations and optical tracking. For our experiments, we utilised an elastic Jersey fabric stretched over a rectangular frame with a sensing area of 125 by 125 mm. For this setup, we found an optimal arrangement with 12 border-based sensors that is capable of reconstructing touch points with an error of 1.36 mm and classifying three indentation levels with an accuracy of 91.17% on the simulated mathematical data and 82.85% on the sensor data. Furthermore, we built the first border-based-sensing, textile-only prototype that is able to classify the indent of touch points. The lower classification accuracy of our prototype can be attributed to the presence of noise and errors that may have resulted from hand-cutting the sensor patches. Additionally, our physical prototype uses resistance measurements as input to the machine-learning model, while our simulation and tracking experiments employ distance measurements directly. Therefore, the resistance data might introduce an additional level of complexity when compared to our preliminary analysis. In the future, we want to investigate whether the machine-learning models can be further optimised and how our prototype will perform in real-world applications with more complex touch inputs and varying environments. Based on the findings of our study, additional attention will be devoted to the detection of the point of interaction, rather than solely relying on the indent level. Further exploration is necessary to determine whether alternative sensor arrangements or configurations may be advantageous in this regard.
Nevertheless, the results of our experiments demonstrate that the proposed approach is effective and might be applied in various wearable technology and smart textile applications in the future. Our approach allows a seamless and unobtrusive way for users to interact with textile surfaces. ## Acknowledgment The authors would like to acknowledge the valuable contributions of our colleagues Roland Aigner, Andreas Pointner and Thomas Preindl for their technical assistance and collaborative efforts, which were instrumental in achieving our research objectives.
2306.10223
Machine learning search for stable binary Sn alloys with Na, Ca, Cu, Pd, and Ag
We present our findings of a large-scale screening for new synthesizable materials in five M-Sn binaries, M = Na, Ca, Cu, Pd, and Ag. The focus on these systems was motivated by the known richness of M-Sn properties with potential applications in energy storage, electronics packaging, and superconductivity. For the systematic exploration of the large configuration space, we relied on our recently developed MAISE-NET framework that constructs accurate neural network interatomic potentials and utilizes them to accelerate ab initio global structure searches. The scan of over two million candidate phases at a fraction of the typical ab initio calculation cost has uncovered 29 possible intermetallics thermodynamically stable at different temperatures and pressures (1 bar and 20 GPa). Notable predictions of ambient-pressure materials include a simple hP6-NaSn$_2$ phase, fcc-based Pd-rich alloys, tI36-PdSn$_2$ with a new prototype, and several high-temperature Sn-rich ground states in the Na-Sn, Cu-Sn, and Ag-Sn systems. Our modeling work also involved ab initio (re)examination of previously observed M-Sn compounds that helped explain the entropy-driven stabilization of known Cu-Sn phases. The study demonstrates the benefits of guiding structure searches with machine learning potentials and significantly expands the number of predicted thermodynamically stable crystalline intermetallics achieved with this strategy so far.
Aidan Thorn, Daviti Gochitashvili, Saba Kharabadze, Aleksey N. Kolmogorov
2023-06-17T01:17:35Z
http://arxiv.org/abs/2306.10223v3
# Machine learning search for stable binary Sn alloys with Na, Ca, Cu, Pd, and Ag ###### Abstract We present our findings of a large-scale screening for new synthesizable materials in five M-Sn binaries, M = Na, Ca, Cu, Pd, and Ag. The focus on these systems was motivated by the known richness of M-Sn properties with potential applications in energy storage, electronics packaging, and superconductivity. For the systematic exploration of the large configuration space, we relied on our recently developed MAISE-NET framework that constructs accurate neural network interatomic potentials and utilizes them to accelerate _ab initio_ global structure searches. The scan of over two million candidate phases at a fraction of the typical _ab initio_ calculation cost has uncovered 29 possible intermetallics thermodynamically stable at different temperatures and pressures (1 bar and 20 GPa). Notable predictions of ambient-pressure materials include a simple hPG-NaSn\({}_{2}\) phase, fcc-based Pd-rich alloys, tI36-PdSn\({}_{2}\) with a new prototype, and several high-temperature Sn-rich ground states in the Na-Sn, Cu-Sn, and Ag-Sn systems. Our modeling work also involved _ab initio_ (re)examination of previously observed M-Sn compounds that helped explain the entropy-driven stabilization of known Cu-Sn phases. The study demonstrates the benefits of guiding structure searches with machine learning potentials and significantly expands the number of predicted thermodynamically stable crystalline intermetallics achieved with this strategy so far. ## I Introduction _Ab initio_ screening of vast chemical spaces has become an integral part of materials discovery. The AFLOW [1], Materials Project [2], OQMD [3], and other open repositories contain _ab initio_ results for hundreds of thousands of compounds in observed structure types and demonstrate that density functional theory (DFT) approximations offer a reliable determination of materials' stability. Mining these databases for new synthesizable compounds or materials with targeted properties has led to numerous interesting predictions [4; 5; 6; 7]. For example, correlations established with machine learning and data mining methods helped identify new oxides [8], perovskites [9], and intermetallics [10]. Global structure optimization methods have expanded the exploration beyond known prototypes and resulted in prediction and confirmation of unfamiliar motifs in various materials classes [11; 12; 13; 14; 15]. Unfortunately, the high cost of _ab initio_ calculations limits the scope of unconstrained searches. Evaluation of structure stability with less expensive and fairly accurate machine learning potentials (MLPs) has shown great promise for accelerating _ab initio_ searches [16] but successful predictions of stable compounds remain scarce [17; 18; 19; 20]. In particular, our recent re-examination of the Li-Sn binary with a MLP has uncovered several stable alloys with large unit cells not detected in _ab initio_ searches [19]. In the present study extended to five metal-tin binaries, we aim to demonstrate the applicability and benefit of the developed predictive strategy on a larger scale in a materials class abundant with potential applications. Tin is a post-transition metal observed in various elemental and multicomponent crystal structure phases [21; 22; 23; 24; 25; 26]. At ambient temperature and pressure, 'white tin' crystallizes in the \(\beta\)-Sn structure and is known as a soft, malleable, and ductile metal. 
Below 13degC, 'grey tin' adopts the \(\alpha\)-Sn diamond structure and exhibits a semimetallic behavior. The allotropic \(\beta\rightarrow\alpha\) transformation, a so-called 'tin pest' process turning sil-very tin objects into grey powder, has slow kinetics due to the high activation energy associated with the change in the atomic coordination from 6 to 4 and the volume expansion by 27% [26]. Under high pressures and room temperature, tin undergoes a series of transformations to more close-packed structures: \(\beta\)-Sn\(\rightarrow\)bct\(\rightarrow\)bco\(\rightarrow\)bco+bcc\(\rightarrow\)bcc at about 11, 32, 40, and 70 GPa, respectively [27; 23]. The high sensitivity of tin's ground state to the external temperature and pressure conditions can be traced back to the element's particular placement in the periodic table. Within group XIV, tin's position defines the boundary between the covalent bonding for the lighter elements and the metallic bonding for the heavier lead. The propensity of the elements to form covalent bonds can be quantified with a ratio between the \(sp^{3}\) bond formation energy and the \(s\)\(\rightarrow\)\(p\) promotion energy cost. The steady 2.8:1.4:1.15:1.02:0.8 decrease of the ratio in the C:Si:Ge:Sn:Pb set [28] has been linked to the decreasing bond integral strength between the \(s\) and \(p\) states [28; 29; 30]. The competitiveness of different bonding mechanisms in pure tin contributes to the element's readiness to form alloys. In this respect, tin shares a lot of traits with boron which has been the subject of our past work [31]. This metalloid with three valence electrons also occupies a borderline spot between an insulator (carbon) and a metal (beryllium), assumes several elemental configurations (\(\alpha\)-B, \(\beta\)-B, and \(\gamma\)-B [32; 33; 34; 35; 36]) due to the frustrated electronic structure, mixes with the majority of metals, and forms extended 2D or 3D covalent networks. The distinction between the two classes is tin's versatility to be either a hosting or an alloying element in compounds which defines the materials' remarkable suite of demonstrated and possible functions. Tin's large size and tendency to form extended frameworks give rise to applications as a battery anode material [37; 38; 39]. Tin alloy anode materials in general are more conductive and safer than graphite-based anodes, and certain tin binary phases have been found to have larger theoretical specific capacities than their commercially-available counterparts (Li\({}_{22}\)Sn\({}_{5}\)[40] and Na\({}_{15}\)Sn\({}_{4}\)[41] have 992 mA h g\({}^{-1}\) and 847 mA h g\({}^{-1}\) theoretical specific capacities, respectively, while LiC\({}_{6}\) has 372 mA h g\({}^{-1}\)). However, the large volume change of over 250% (420%) upon Li (Na) insertion/extraction leads to anode pulverization over just a few cycles [42; 43; 44]. Tin alloys have been extensively investigated as non-toxic alternatives to Pb-free solders and durable joint materials in electronics interconnects [45; 46; 47; 25; 48; 49; 50; 26]. The efforts have focused on finding tin intermetallics that can balance high mechanical stability, high thermal conductivity, resistance to Sn-whisker formation, cost effectiveness, and other factors important for next-generation integrated circuits [45; 47]. A number of tin-based materials have been studied for their non-trivial topological behavior. 
They range from pure \(\alpha\)-Sn [51] and Sn-Te/Pb [52; 53; 54; 55] 3D materials to atom-thick stanene with a buckled honeycomb morphology [56; 57; 22; 58]. Nevertheless, a known BaSn\({}_{2}\) compound synthesized first almost a decade ago [59] received little attention until its potential as a strong TI with a wide 200-meV band gap has been demonstrated in our previous studies [60; 61]. A recent experimental study provided insights into synthesis and stability of the BaSn\({}_{2}\) compound [62]. Given a large body of research dedicated to tin alloys over the past few decades [63; 64; 41; 22; 40; 42; 43; 44; 45; 46; 25; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173; 174; 175; 176; 177; 178; 179; 180; 181; 182; 183; 184; 185; 186; 187; 188; 189; 190; 191; 192; 193; 194; 195; 196; 197; 198; 199; 200; 201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211; 224; 225; 226; 227; 228; 231; 232; 240; 241; 242; 243; 244; 245; 246; 247; 248; 249; 250; 251; 252; 253; 254; 255; 256; 257; 258; 259; 260; 261; 262; 263; 264; 265; 266; 267; 268; 269; 270; 271; 272; 273; 274; 275; 276; 277; 278; 279; 280; 281; 282; 283; 284; 285; 286; 287; 288; 289; 290; 287; 289; 288; 289; 291; 289; 292; 293; 294; 295; 296; 297; 298; 299; 300; 301; 302; 303; 304; 305; 306; 307; 308; 309; 310; 311; 312; 313; 314; 315; 316; 317; 318; 319; 320; 321; 322; 323; 324; 325; 326; 327; 328; 329; 333; 334; 335; 336; 337; 338; 339; 340; 341; 342; 343; 344; 345; 346; 347; 348; 359; 360; 361; 362; 363; 364; 365; 366; 367; 368; 369; 370; 371; 372; 373; 374; 375; 376; 377; 378; 379; 38; 380; 381; 382; 383; 384; 385; 386; 387; 388; 388; 389; 390; 391; 392; 393; 394; 395; 396; 397; 398; 399; 400; 401; 402; 403; 404; 405; 406; 407; 408; 409; 411; 41; 412; 413; 414; 42; 435; 44; 45; 46; 47; 48; 49; 425; 47; 49; 409; 426; 40; 43; 40; 41; 42; 43; 44; 44; 45; 46; 47; 48; 49; 40; 43; 44; 41; 44; 44; 45; 46; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 91; 92; 93; 94; 95; 96; 97; 98; 99; 101; 11; 12; 133; 14; 15; 16; 17; 18; 19; 19; 18; 19; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 41; 42; 43; 44; 45; 46; 47; 48; 49; 51; 53; 54; 55; 56; 57; 58; 59; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 72; 74; 75; 76; 77; 78; 79; 81; 92; 94; 93; 95; 97; 82; 98; 99; 99; 99; 100; 11; 12; 13; 14; 15; 16; 17; 19; 20; 21; 23; 24; 25; 26; 27; 28; 29; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 52; 54; 53; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 71; 80; 82; 83; 84; 85; 86; 87; 89; 99; 91; 92; 93; 94; 95; 96; 97; 101; 11; 13; 14; 15; 16; 17; 18; 19; 19; 20; 21; 23; 24; 25; 26; 27; 28; 29; 31; 33; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 52; 56; 57; 58; 59; 60; 61; 63; 64; 65; 66; 67; 68; 69; 70; 73; 74; 75; 76; 77; 78; 79; 82; 83; 85; 86; 87; 89; 99; 91; 92; 94; 95; 96; 97; 98; 99; 101; 11; 12; 13; 14; 15; 16; 17; 18; 19 output). 
The binary M-Sn NNs with a 145-10-10-1 architecture and 1,880 adjustable parameters (the total of \((8_{\text{MSn}}+43_{\text{MMSn}}+43_{\text{MSnSn}})\times 10_{\text{M}}+(8_{ \text{SnM}}+43_{\text{SnSnM}}+43_{\text{SnMM}})\times 10_{\text{Sn}}\) interspecies weights connecting the input symmetry functions with 10 neurons in the first hidden layer of each elemental NN) were fitted to datasets of only binary structures. Information about the dataset sizes, NN root-mean-square errors, and error distributions is given in Table 1 and Figs. S1-S5. As discussed in our previous studies [11; 19], the relatively modest accuracy of about 10 meV/atom results from the inclusion of unfamiliar configurations identified during NN-based structure search test runs at the end of each MAISE-NET cycle. The addition of diverse high-energy configurations generally increases the total error by about a factor of two but greatly reduces the number of artificial minima. This strategy has been found to ensure a desired balance between accuracy and robustness for the intended NN application in global structure searches. The Na-Sn system proved to be the most challenging binary to model because the experimentally observed ground states have large unit cell sizes outside our standard sampled range and the NN could not learn how to accurately describe the exotic Sn frameworks [60]. Incorporation of the equation-of-state data for the nine structures into the training set resulted in only a marginal NN improvement, as the original 19-96 meV/atom errors decreased to 4-68 meV/atom for this subset. Considering that this study focuses on Sn alloys, we examined the description of pure Sn in more detail. Fig. S6 displays the relative stability of relevant Sn phases in the 0-20 GPa range evaluated with the reference PBE flavor, our NN potential, and the most accurate modified embedded atom model (MEAM) developed for this element [81]. First, it is worth reviewing how the DFT predictions agree with experimental observations. The likely overestimation of the \(\Delta E_{\beta-\alpha}\) energy difference in common DFT approximations along with the limitations of the standard harmonic approximation for evaluating free energies leads to notably higher estimates of the \(T_{\alpha\to\beta}\) transition temperature [82; 81; 21; 30] K in our case versus measured 286 K [83]. This discrepancy makes it difficult to make definitive predictions about possible high-\(T\) ground states at the Sn-rich end of the phase diagram. The PBE results at \(T=0\) K also indicate stability of hcp-Sn in the 5.5-15 GPa window not detected experimentally and favorability of bco over bct up to about 20 GPa in disagreement with reported data [27]. Our NN model reproduces \(\Delta E_{\beta-\alpha}\) within 3.5 meV/atom and predicts \(T_{\alpha\to\beta}=530\) K but does not differentiate bco, bct, and bcc above 14 GPa. Overall, the NN offers a satisfactory description of relative stabilities in the considered pressure range given that all structures were fully relaxed with the corresponding methods and that the absolute error doubles in the calculation of enthalpy differences between two phases. The MEAM agrees well with DFT describing phases at ambient pressure. Since it was not parameterized to model compressed Sn configurations, it is not unexpected to see its less accurate performance resolving competing phases under high pressures. 
At the same time, the MEAM significantly disfavors the viable but elusive \(\gamma\)-Sn with the simple hexagonal structure [21] and our MEAM-based evolutionary searches uncovered an artificial ground state (\(\text{ol4-}Immm\), \(a=3.2485\) A, \(b=4.5366\) A, \(c=8.3219\) A, and a single \(4j\) (1/2,0, 0.3276) Wyckoff position) 11 meV/atom below \(\alpha\)-Sn at 0 K and 0 GPa. Finally, an interesting hybrid model, a combination of EAM and rapid artificial NN potential, has been recently developed and demonstrated to have a much-improved description of Sn phases in a wide range of temperatures and pressures [82]. ### Evolutionary structure searches Global structure optimizations were performed for each binary system with our MAISE package [11]. Evolutionary searches were carried out for selected fixed M\({}_{1-x}\)Sn\({}_{x}\) compositions between \(0.125\leq x\leq 0.875\). We considered up to 8 formula units and limited the structure sizes to 24 atoms per unit cell. Randomly generated populations of 32 members were evolved for up to 168 generations with standard evolutionary operations. The number of generations was set to \(g=N+N^{2}/4\) as a function of the unit cell size that has been found to be usually sufficient to reach search convergence in our previous study on Li-Sn [19]. Evolutionary operations consisted of mutations of 4 single parents (random atom displacements, atom swaps, and unit cell distortions), injections of 4 new random structures, and crossovers of 24 pairs of parents (combination of two roughly equal parts obtained with planar cuts) [84; 11]. Child structures were locally relaxed with the NN potentials for up to 300 Broyden-Fletcher-Goldfarb-Shanno minimization steps and assigned a fitness based on the final enthalpy. Our fingerprint method based on the radial distribution function [84; 85; 11] was used to identify and eliminate \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline & \multicolumn{2}{c}{Training set} & \multicolumn{2}{c}{Testing set} & \multicolumn{2}{c}{Testing errors} \\ & \(E\) & \(F\) & \(E\) & \(F\) & \(E\) & \(F\) \\ & size & size & size & size & meV/at. & meV/Å \\ \hline Na & 4584 & 34059 & 509 & 3882 & 0.7 & 3 \\ Ca & 4483 & 3327 & 498 & 4011 & 6.2 & 30 \\ Cu & 4316 & 32199 & 479 & 4053 & 1.8 & 10 \\ Pd & 4398 & 38225 & 488 & 4101 & 4.6 & 40 \\ Ag & 4421 & 35889 & 491 & 4416 & 1.9 & 11 \\ Sn & 4346 & 29694 & 482 & 3687 & 8.3 & 35 \\ Na-Sn & 6953 & 33501 & 772 & 2610 & 12.5 & 43 \\ Ca-Sn & 7211 & 29376 & 801 & 3327 & 14.5 & 65 \\ Cu-Sn & 7685 & 69579 & 853 & 7578 & 9.5 & 45 \\ Pd-Sn & 5507 & 46272 & 61 & 4812 & 9.6 & 51 \\ Ag-Sn & 5410 & 37548 & 601 & 4236 & 8.4 & 33 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset sizes and root-mean-square errors of constituent elemental and combined binary NN models. similar structures. For select compositions, we performed additional searches with our hybrid NN+DFT approach that ensured more reliable convergence to putative ground states in the study of Au nanoparticles [79]. The method involves structural relaxations with a NN model followed by static DFT calculations to better resolve candidate's favorability during evolutionary runs. None of the 21 hybrid searches carried out for the Na-Sn and Pd-Sn binaries improved on the best candidates obtained with the standard NN-based runs. ### Vibrational property analysis Most of the phonon calculations were performed in the harmonic approximation with the finite displacement method as implemented in Phonopy [86]. 
We used symmetry-preserving expansions of primitive or conventional unit cells to generate supercells with 72-256 atoms and applied 0.1 A displacements. The Gibbs free energy corrections due to the vibrational entropy were included via summation over \(20\times 20\times 20\) grids in the Brillouin zone approximating the integral \(\Delta F_{\rm vib.}=k_{b}T\int_{0}^{\infty}d\omega g(\omega)ln[2sinh(\hbar \omega/2k_{b}T)]\). In some cases, we performed a quasi-harmonic approximation (QHA) calculations to determine the significance of anharmonic effects at elevated temperatures. For a given structure, the procedure involves (i) creation of a uniform grid of expanded and compressed volumes about the equilibrium; (ii) relaxation of each structure with VASP under the constant-volume constraint; (iii) phonon calculation in the harmonic approximation at each volume using Phonopy and VASP; (iv) fitting the free energy points at each temperature with a third-order polynomial; and (v) finding the minimum of the free energy as a function of volume at each temperature. ## III Results ### The Na-Sn binary Investigation of the compound-rich Na-Sn binary system extends over a century. In 1920, the existence of four alloys at the 4:1, 2:1, 1:1, and 1:2 compositions was demonstrated with electrochemical experiments [87]. Further synthesis and characterization work uncovered complex phases at or near the 15:4, 5:2, 7:3, 9:4, 7:12, 5:13, 1:3, 1:4, and 1:5 stoichiometries [88]. The recent research on these alloys has been motivated primarily by their potential use in energy storage applications based on abundant and inexpensive sodium [37; 39]. As the graphite anode materials utilized in commercial lithium-ion batteries are not suitable for storing larger sodium ions, Na-Sn alloys have been considered as promising anode alternatives with a high theoretical capacity of about 847 mA h g\({}^{-1}\) when converting Sn into Na\({}_{15}\)Sn\({}_{4}\)[66]. A new layered NaSn\({}_{2}\) material has also been predicted to be a topologically nontrivial metal [60]. Stratford _et al._[66] detected this phase and proposed solutions to previously unidentified intermediates in their comprehensive experimental and DFT study of the full Na-Sn composition range. The presence of phases with large unit cells, partial occupancies, and unique morphologies made the identification of the full set with unconstrained structure searches impractical. Following the strategy used in our Li-Sn study, we performed the thermodynamic stability analysis of the Na-Sn binary by combining the previously observed phases and low-enthalpy ones found in our NN-based evolutionary searches (see Figs. 1 and 2). At ambient pressure and zero temperature, the resulting convex hull is comprised of the cI76-Na\({}_{15}\)Sn\({}_{4}\)[89], hP60-Na\({}_{7}\)Sn\({}_{3}\)[90], and tI64-NaSn [91] ordered ground states and an AP76-Na\({}_{7}\)Sn\({}_{12}\) representation [60] of the disordered monoclinic phase [92]. As discussed in our previous study [60] and below, the related ordered mS48-NaSn\({}_{2}\) and mP78-Na\({}_{5}\)Sn\({}_{3}\) variants [66] are actually metastable by 2.6 and 1.8 meV/atom, respectively. The known hR21-Na\({}_{5}\)Sn\({}_{2}\)[93; 94], oS52-Na\({}_{9}\)Sn\({}_{4}\)[89], and oS288-Na\({}_{5}\)Sn\({}_{13}\)[95] phases hhover about 5, 7, and 23 meV/atom above the corresponding tie-lines. 
Our screening for high-\(T\) ground states identified three new viable candidates, mP20-Na\({}_{4}\)Sn, oI12-NaSn\({}_{5}\), and hP6-NaSn\({}_{2}\), that are within 0.4 meV/atom, 32.6 meV/atom, and 2.8 meV/atom of stability at 0 K in our PBE calculations (see Table S1). With vibrational entropy corrections included, mP20-Na\({}_{4}\)Sn stabilizes at 250 K and lies 1.7 meV/atom below the hcp-Na+\(\alpha\)Cl76-Na\({}_{15}\)Sn\({}_{4}\) tie-line at 650 K, which is the approximate melting temperature at this Na-Sn composition [88]. The experimentally observed tP12-NaSn\({}_{5}\) phase [96] is stabilized at 240 K (note that the choice of the metastable \(\beta\)-Sn in the calculation of formation energies makes this alloy stable at \(T=0\) K [66]). Our proposed oI12-NaSn\({}_{5}\) polymorph becomes the ground state at this stoichiometry above 450 K. We did not attempt to calculate phonons for the metastable oS288-Na\({}_{5}\)Sn\({}_{13}\) or stable aP76-Na\({}_{7}\)Sn\({}_{12}\) due to their exceptionally large sizes and/or low symmetry. We did approximate the free energy of the latter by adding a linear interpolation of the vibrational entropy terms evaluated for the related mS48-NaSn\({}_{2}\) and mP78-Na\({}_{5}\)Sn\({}_{8}\) at the adjacent compositions. This allowed us to estimate that the new hP6-NaSn\({}_{2}\) phase should become the ground state at 70 K and, in turn, destabilize aP76-Na\({}_{7}\)Sn\({}_{12}\) above 240 K. The configuration entropy contribution in Na\({}_{7}\)Sn\({}_{12}\) is expected to be insignificant because only \(M=4\) out of 30 Na atom sites in the \(N=78\)-atom unit cell have fractional occupancies. As shown in Fig. 4(b), the \(\frac{M/N}{1-x_{d}M/N}kT[x_{d}\ln(x_{d})+(1-x_{d})\ln(1-x_{d})]=-kT/19\ln(2)\) correction evaluated for half-filled \(4g\) sites (\(x_{d}=0.5\)) in the assumption that all configurations are equiprobable shifts the hP6 stabilization temperature up by just \(\sim 15\) K. It has been discussed that the Na-Sn and related Li Si/Ge/Sn binaries exhibit similar morphological trends, as different intercalated covalent frameworks appearing in Sn-rich alloys give way to Sn-Sn dimers and eventually to isolated Sn atoms with the increase of the alkali-metal concentration [60; 66; 97; 98]. The tP12-NaSn phase features a complex 3D framework with square Sn nets crosslinked by a percolating web of Sn bonds, from 4 to 7 per atom, that has well-defined channels conducive for Na migration (see Fig. 3). In fact, Stratford _et al._[66] identified metastable NaSn\({}_{3}\) and NaSn\({}_{4}\) phases with similar 3D morphologies, which could explain the reported but not identified alloy at the latter composition [88]. Our predicted oI12-NaSn\({}_{5}\) is comprised of fused elongated hexagonal bipyramids that effectively trap Na ions. In tI64-NaSn, the Sn framework consisting of tetrahedra about 3.70 A apart was found with a combination of _ab initio_ molecular dynamics and a reverse Monte Carlo refinement to be described better with an amorphous structure [66]. The predicted Na-rich mP20-Na\({}_{4}\)Sn phase has a fairly uniform distribution of Sn atoms with Na\({}_{9}\) and Na\({}_{11}\) local coordinations. The phases at and around the 1:2 composition deserve a closer look. 
It has been discussed previously that the mS48-NaSn\({}_{2}\), aP76-Na\({}_{7}\)Sn\({}_{12}\), and mP78-Na\({}_{5}\)Sn\({}_{8}\) phases are closely related and can be obtained by different population of the available Na sites between disjointed polyanion Sn layers stacked along the \(c\)-axis [60]. The most stable aP76 structure was obtained from mP78 by removing two \(4g\) Na atoms that results in the largest separation between the vacancies. Our original motivation for studying NaSn\({}_{2}\) was to examine whether the compound could crystalize in the AlB\({}_{2}\) structure with weakly interacting honeycomb layers and be subsequently exfoliated into stanene. We found that the phase does stabilize over the known phases at elevated temperatures and pressures but owes its stability to strong interlayer covalent bonds that prevents the material from being a good precursor for stanene synthesis. The following _ab initio_ analysis by Stratford _et al._[66] supported our conclusions regarding the hP3-NaSn\({}_{2}\) stability and showed that a derived metastable NaSn\({}_{3}\) phase with partial substitution of Na for Sn is consistent with experimental observations. The new hP6-NaSn\({}_{2}\) ground state identified in our unconstrained searches has an unexpectedly simple CaIn\({}_{2}\) prototype. It can be constructed from hP3 by doubling the unit cell and distorting the Sn honeycomb framework in and out of the basal plane. Our examination of the transformation path along the corresponding phonon mode in Fig. 4 shows that both hexagonal structures represent local minima at the PBE and SCAN levels, which Figure 2: Calculated stability of Na-Sn intermetallics at 20 GPa. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The symbol and line styles are the same as in Fig. 1. Figure 1: Calculated stability of Na-Sn intermetallics at ambient pressure. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The solid, hollow, and crossed diamonds correspond to phases found in our calculations to be stable at 0 K, stabilizing at elevated temperatures, and metastable at all considered temperatures, respectively. The green, blue, and red colors denote experimentally observed, previously considered, and our predicted phases. is further supported with a phonon dispersion analysis that indicates their dynamical stability (see Fig. S7). The presence and depth of the hP6 minimum do depend on the level of theory. Namely, previous PBE calculations revealed a slow convergence of the out-of-plane phonon mode in hP3 [66], our NN model predicts a 7 meV/atom drop along the barrier-free transition path, while the LDA produces a plateau 7 meV/atom above the starting configuration. Our further PBE calculations demonstrated that thermodynamic corrections evaluated in the QHA have little effect on the relative free energy (see Fig. 4(b) and Fig. S8). Hydrostatic compression, on the other hand, disfavors the hP6 minimum at 0.4 GPa and the distorted structure relaxes back to hP3 above 2 GPa (see Fig. S9). If the proposed phase does form it should be easily distinguishable from hP3 because the distortion induces noticeable changes in structural and vibrational properties. 
The contraction of the \(a\) lattice constant by 2.4% and the expansion of the \(c\)-axis by 9.1% results in considerable shifts of powder XRD peak positions (see Fig. S10). The change in local Sn atom environments from three 3.102-A in-plane and two 3.255-A out-of-plane bonds in hP3 to one out-of-plane 3.056-A and three 3.071-A bonds in hP6 softens the highest optical modes by 17%. At 20 GPa pressure, we find that the convex hull is defined by nine phases (Fig. 2). Some of the ambient-pressure ground states, _e.g._, tI64-NaSn and mS48-NaSn\({}_{2}\), destabilize by more than 0.1 eV/atom and only one, hR21-Na\({}_{5}\)Sn\({}_{2}\), remains stable under this pressure. The eight new ground states found in our searches exhibit compact morphologies more favorable under compression. On the Na-rich side, the hR15-Na\({}_{4}\)Sn, hR42-Na\({}_{11}\)Sn\({}_{3}\), hP12-Na\({}_{3}\)Sn, and hR21-Na\({}_{5}\)Sn\({}_{2}\) phases with large \(c/a\) ratios resemble the Li-rich bcc alloys with different stacking sequences along the [111] direction investigated in our previous study [19]. According to our notation that specifies the separation between Sn layers along the \(c\) axis [19], these structures can be described simply as \(|5|\), \(|455|\), \(|345|\), and \(|34|\), respectively. In the middle of the composition range, the mP6-NaSn phase isostructural to mP6-LiSn [99] is a different bcc-based alloy that exhibits zig-zag Sn nets. At the Sn-rich end, oS16-Na\({}_{3}\)Sn\({}_{5}\) displays square Sn nets bridged by rows of Sn atoms and a cP4-NaSn\({}_{3}\) phase has the common L\({}_{12}\) prototype. The observed conformity between several compressed Na-Sn and ambient-conditions Li-Sn ground state structures indicates the similarity of effective size ratios between the alkali metal and tin ions under the corresponding 20 GPa and 1 bar pressures. Figure 3: Structures of select Na-Sn phases. The polyhedra in (a,d) show the local environments of the minority species in the predicted high-temperature oS20-Na\({}_{4}\)Sn and oI12-NaSn\({}_{5}\) ground states, respectively. Figure 4: Stability analysis of NaSn\({}_{2}\) phases. (a) Relative energy of the hexagonal NaSn\({}_{2}\) structures along the hP3 to hP6 transformation path following an out-of-plane Sn phonon mode. At each point, the magnitude of the Sn layer distortion was kept fixed while the unit cell shape was optimized. (b) Free energies of NaSn\({}_{2}\) polymorphs relative to hP3 as a function of temperature. The solid points correspond to free energies with vibrational entropy contributions calculated in the harmonic approximation. The crossed red hexagons show the significance of the quasi-harmonic (QH) corrections on the relative stability between hP6 and hP3. The crossed black circles illustrate the effect of the configurational entropy in the disordered aP76-Na\({}_{7}\)Sn\({}_{12}\) on the distance to the aP76-Na\({}_{7}\)Sn\({}_{12}\leftrightarrow\alpha\)-Sn tie-line. ### The Ca-Sn binary Early work on the Ca-Sn binary materials indicated the existence of oP12-Ca\({}_{2}\)Sn, o88-CaSn, cP4-CaSn\({}_{3}\), and tI204-Ca\({}_{31}\)Sn\({}_{20}\) phases [100; 101; 102]. In 2000, Palenzona and Fornasini [103] systematically re-examined the binary system in the full composition range with a combination of differential thermal analysis, metallographic analysis, and single-crystal and powder X-ray diffraction. 
The study led to the discovery of new oP52-Ca\({}_{7}\)Sn\({}_{6}\), tP118-Ca\({}_{36}\)Sn\({}_{23}\), and tI32-Ca\({}_{5}\)Sn\({}_{3}\) intermetallics and the compilation of the most complete phase diagram to date. Structural, bonding, electronic, topological, and other properties of Ca-Sn compounds have been analyzed in a number of detailed DFT studies. Ohno _et al._[104] used a combination of DFT and CALPHAD methods to obtain the thermodynamic model of the binary system. Yang _et al._[105] compared _ab initio_ and previously measured heats of formation, constructed the convex hull, and analyzed the intermetallics' electronic and elastic properties. Engelkemier _et al._[106] used their DFT-chemical pressure approach to rationalize how the atomic sizes determine the favorability of Ca-Sn superstructures derived from the W\({}_{5}\)Cr\({}_{3}\) prototype. A recent demonstration of the Ca\({}_{7}\)Sn\({}_{6}\) function as an anode material with a high working voltage of 4.45 V, excellent cyclability with 95% retention after 350 cycles, and good capacity of 85 mA h g\({}^{-1}\) for Ca-ion batteries [38] spurred computational studies of the binary compounds' electrochemical, stability, and elastic properties [107; 108] and further experimental investigations of the Ca-Sn materials' energy storage potential [109]. CaSn\({}_{3}\) with the Cu\({}_{3}\)Au-type (L1\({}_{2}\)) structure has been shown to have nontrivial topological and superconducting properties [110; 111; 112; 113; 114], while CaSn has been found to be a nodal-line semimetal with potential for topological superconductivity [115]. Figs. 5 and 6 summarize the outcome of our Ca-Sn screening under different \((T,P)\) conditions. In line with our modeling of the other binaries, we relied on prior experimental data to include phases with more than 24 atoms per primitive cell but reproduced all other known binary intermetallics with our evolutionary structure searches. In agreement with previous DFT findings [104; 105; 107], the ambient-pressure convex hull in our calculations is defined by four oP12-Ca\({}_{2}\)Sn, tI204-Ca\({}_{31}\)Sn\({}_{20}\), o88-CaSn, and cP4-CaSn\({}_{3}\) phases, while the remaining three, oP52-Ca\({}_{7}\)Sn\({}_{6}\), tP118-Ca\({}_{36}\)Sn\({}_{23}\), and tI32-Ca\({}_{5}\)Sn\({}_{3}\), are metastable by less than 10 meV/atom. The compositional and formation energy proximity of the tI32-Ca\({}_{5}\)Sn\({}_{3}\), tI204-Ca\({}_{31}\)Sn\({}_{20}\), and tP118-Ca\({}_{36}\)Sn\({}_{23}\) phases arises from their morphological connection discussed in previous studies [103; 106]. The last two are members of the R\({}_{5n+6}\)(T,M)\({}_{3n+5}\) family with \(n=5\) and \(n=6\) representing intergrown segments of different Figure 5: Calculated stability of Ca-Sn intermetallics at ambient pressure. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The symbol and line styles are the same as in Fig. 1. Figure 6: Calculated stability of Ca-Sn intermetallics at 20 GPa. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The symbol and line styles are the same as in Fig. 1. lengths from the parent W\({}_{5}\)Cr\({}_{3}\) prototype (see Fig. 7(a)). In addition to being metastable, the 36:23 phase was determined previously to be mechanically unstable with \(C_{44}=-122.7\) GPa [105]. 
The existence of the stoichiometric tl32-Ca\({}_{5}\)Sn\({}_{3}\) phase itself seems to not have been established conclusively, and its appearance has been attributed to hydrogen-impurity and/or entropy-driven stabilization [103; 104; 106]. Our phonon calculations indicate that the three related phases are dynamically stable and that the vibrational entropy corrections stabilize the 36:23 alloy above 1010 K but change the free energy distance of the 5:3 alloy from 7.8 meV/atom at 0 K to 5.4 meV/atom above the convex hull at 1400 K. All new Ca-Sn phases based on \(\alpha\)-Sn, \(\beta\)-Sn, and fcc-Ca unit cells proposed in Ref. [108] are at least 22 meV/atom above the convex hull boundary (_e.g._, the blue points at \(x=0.25\), 0.375, and 0.5 in Fig. 5). Only one of the ambient-pressures phases, cP4-CaSn\({}_{3}\), remains thermodynamically stable at 20 GPa, which offers an opportunity to better understand and tune its intriguing topological and superconducting properties via compression. The new set of ground states identified with our global searches consists of hP8-Ca\({}_{3}\)Sn, hP6-Ca\({}_{2}\)Sn, and tP2-CaSn phases with simple Ni\({}_{3}\)Sn, Ni\({}_{2}\)In, and \(\delta\)-CuTi prototypes (see Fig. 7 (c,d)). At the 2:1 composition, the known hP6 structure comprised of closely spaced heteronuclear Ca-Sn honeycomb layers intercalated with Ca is isoelectronic and morphologically related to the C32 structure adopted by MgB\({}_{2}\) but not expected to exhibit the iconic quasi-2D superconductivity defined by hole-doped B states and hard B phonon modes. The high-pressure hP6 polymorph does help appreciate the morphology of its ambient-pressure oP12 counterpart known to be a derivative of the honeycomb lattice [116; 117]. Fig. 8 illustrates that the frequency of the phonon mode transforming hP6 to oP12 obtained in our linear response calculations becomes imaginary below 1.9 GPa, while the linear slope of the squared frequency dependence on pressure points to a common soft-mode phase transition described by the Landau theory [118; 85]. The considerable distortions reaching 0.7 A for out-of-plane Ca displacements at 0 GPa dramatically change the local atomic environments, _e.g._, changing the number of Ca neighbors around Sn atoms from 3 in hP6 to 7 in oP12 within the 3.5-A radius cutoff and from 11 in hP6 to 9 in oP12 within the 3.7-A radius cutoff. The eigenmode displacements in hP6 and the remnant honeycomb connections in oP12 displayed in Fig. 8 help visualize the pressure-induced structural changes in Ca\({}_{2}\)Sn. ### The Cu-Sn binary In 1990, Saunders _et al._ published a detailed review of the Cu-Sn phase diagram [119] with all (meta)stable phases discovered since the beginning of the 20th century: \(\beta\) (cI2-Cu\({}_{17}\)Sn\({}_{3}\)), \(\delta\) (cF416-Cu\({}_{41}\)Sn\({}_{11}\)), \(\gamma\) (cf16-Cu\({}_{3}\)Sn), \(\zeta\) (hP26-Cu\({}_{10}\)Sn\({}_{3}\)), \(\epsilon\) (oS80-Cu\({}_{3}\)Sn), \(\eta\) (hP4-Cu\({}_{6}\)Sn\({}_{5}\)), and \(\eta^{\prime}\) (mS44-Cu\({}_{6}\)Sn\({}_{5}\)). In 1995, Larsson _et al._[120] indexed a phase appearing above 350\({}^{\circ}\)C as a monoclinic \(\eta^{6}\) (mS54-Cu\({}_{5}\)Sn\({}_{4}\)). In 2009, a hP8-Cu\({}_{3}\)Sn (D0\({}_{19}\)) phase, labeled here as \(\epsilon^{\prime}\), was observed by Sang _et al._[121], which was much simpler than the previously synthesized Cu\({}_{3}\)Ti type eight- [122] and ten-fold [123] hcp-based superstructures. 
Other notable investigations of the binary system with an updated phase diagram were reported in 2013 [124; 125]. A year later, Muller and Lidin [126] performed a comprehensive characterization of Cu\({}_{3}\)Sn and proposed an off-stoichiometry modulated structure to explain missing reflections in the XRD data. In our recent work [127], we analyzed the thermodynamic stability of the previously reported phases at and near the 3:1 composition and found hP8 to have the lowest energy at \(T=0\) K. The binary system has found several technological applications due to the excellent mechanical and electronic properties of Cu-Sn intermetallics. The high melting temperature, electrical conductivity, resistance to elec Figure 7: Select structures of Ca-Sn phases observed under ambient pressure (a,b) and predicted to be stable at 20 GPa (c,d). The Ca and Sn atoms are shown as large and small spheres, respectively. The tI32 structure has pure Sn layers shown with coordination polyhedra and mixed Sn-Ca layers shown with shaded planes. The oS8 structure features stretched honeycomb layers linked via zig-zag Sn-Sn bonds. Figure 8: Calculated \(\omega^{2}\) dependence on pressure for a soft phonon mode defining the hP6 to oP12 transformation. The red arrows in the orthorhombic representation of the hP6 structure illustrate the atomic displacements along the eigenvector. tromigration, and other beneficial characteristics have made the Cu\({}_{3}\)Sn and Cu\({}_{6}\)Sn\({}_{5}\) compounds prime candidates for Pb-free interconnects in high-performance electronic devices [49, 127, 128, 129, 130, 131, 132, 133]. The Cu-Sn alloys have also been investigated as negative electrode candidate materials in lithium-ion batteries [134, 135, 136, 137]. Our constructed convex hull in Fig. 9 agrees with the current results in the Materials Project and AFLOW databases that the ordered hP4-CuSn is the only ground state with \(E_{\rm form}=-24\) meV/atom under ambient pressure and zero temperature in this binary system. All other observed alloys are located at least 5 meV/atom above the fcc-Cu\(\leftrightarrow\)hP4-CuSn tie-line, and only a handful of phases have negative formation energies. Given that only two Cu-Sn compounds, around the 6:5 and 3:1 compositions, have been reported to have stability regions extending down to the lowest displayed temperature of 373 K [119, 124], it is evident that entropy plays an important role in stabilizing the known Cu-rich alloys. The presence of the disordered \(\beta\) and \(\eta\) phases and the high sensitivity of transition temperature estimates to the DFT approximations [60, 61] make it difficult to accurately map out the phase diagram. The following analysis focuses on assessing the importance of different contributions for the relative phase stability in this complex binary. We find that the vibrational contributions indeed lower the formation free energies of all examined Cu-Sn binary phases. The results in Fig. 9 indicate that cF416-Cu\({}_{41}\)Sn\({}_{11}\) and hP26-Cu\({}_{10}\)Sn\({}_{3}\), metastable by 25 meV/atom and 22 meV/atom at \(T=0\) K without ZPE, move more than half-way toward the convex hull at 550 K but do not become stable at the respective experimentally established \(\sim 623\) K and \(\sim 860\) K transition temperatures [124]. The cF16-Cu\({}_{3}\)Sn phase undergoes only a minor stabilization, from 75 meV/atom above the convex hull at 0 K down to 67 meV/atom at 600 K, and is certainly metastable under typical synthesis conditions. 
The best phase at this composition, hP8 shown in Fig. 11(b), has a positive formation energy of 3.6 meV/atom at 0 K but becomes a true ground state at 550 K. As discussed in our recent study [127], the oS64 or oS80 superstructures with long-period anti-phase boundaries (APBs) remain a few meV/atom above hP8 at all considered temperatures and their formation is likely defined by kinetic factors. Our searches did not produce any particularly favorable bcc-based phases around \(x=0.15\) to explain the stabilization of the \(\beta\) alloy with entropic terms. At the Sn-rich side, the high-\(T\) screening yielded a possible tP10-CuSn\({}_{4}\) ground state stabilizing near the reported \(\eta\leftrightarrow\)liquid boundary at 500 K [124]. It belongs to a family of layered structures featuring metal-intercalated Sn building blocks and has the A\({}^{+}\square^{-}\) stacking in our notation described in the Pd-Sn subsection. Figure 10: Calculated stability of Cu-Sn intermetallics at 20 GPa. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The symbol and line styles are the same as in Fig. 1. Figure 9: Calculated stability of Cu-Sn intermetallics at ambient pressure. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The symbol and line styles are the same as in Fig. 1. The known phases just below the 1:1 composition have a common morphology of the NiAs structure with Cu linear chains arranged on a triangular lattice shown in Fig. 11(c-f). The mP36-Cu\({}_{5}\)Sn\({}_{4}\), ms44-Cu\({}_{6}\)Sn\({}_{5}\), and hP4*-Cu\({}_{6}\)Sn\({}_{5}\) derivatives of the never-observed stoichiometric CuSn have different distributions of additional Cu atoms in octahedral interstices. The inclusion of the vibrational entropy does stabilize the ordered monoclinic \(\eta^{\prime}\) above 410 K and \(\eta^{8}\) above 760 K as illustrated in Fig. 12. The high-\(T\) boundaries for these phases depend on the free energy of the disordered \(\eta\) that benefits from an additional configurational entropy term. Its evaluation proved to be a challenge because of the sizable interaction between defects. Indeed, every CuSn unit cell with four atoms can accommodate up to two interstitial Cu atoms, which means that the compositional shifts down to \(x\sim 0.45\) correspond to the octahedral site occupancies of \(\sim 30\%\). The results in Fig. 12(a) for \(T=0\) K demonstrate that at this occupancy level the average defect formation free energies, \(\Delta F_{\rm d}\), spread over a significant \(0.096-0.213\) eV/defect range. Seeing that the most stable monoclinic decorations have fairly uniform defect distributions (Fig. 12), it is apparent that only a fraction of possible configurations, \(f\), effectively contribute to the partition function. Moreover, the values change dramatically if the vibrational entropy contributions are included: for three representative configurations, \(\Delta F_{\rm d}\) was determined to decrease from 0.158, 0.154, and 0.111 eV/defect at 0 K down to 0.042, 0.024, and 0.003 eV/defect at 800 K, respectively. 
In light of these findings, we chose to approximate the hP4* free energy correction as \(F_{\rm conf}=\Delta F_{\rm d}(1-2x)+k_{\rm B}\frac{1}{(1+f+x_{d})}\{x_{d}\ln{(x_ {d})}+(1-x_{d})\ln{(1-x_{d})}\}\), where \(x_{d}=\frac{(1-2x)}{2f}\), \(\Delta F_{\rm d}\approx 0.024\) eV/defect, and \(f\) is treated as an adjustable parameter to check correspon Figure 11: Structures of known and simulated Cu-Sn phases. (a,b) Observed hcp-based Cu\({}_{3}\)Sn polymorphs, with anti-phase boundaries between blocks shaded in blue and grey. hP8 is shown in an orthorhombic representation to illustrate its relation to OS80. (c,d) Monoclinic derivatives of hP4-CuSn with ordered populations of interstitial sites. (e,f) Representation of ordered (hP4) and disordered (hP4*) phases. Figure 12: Stability analysis of Cu-Sn phases. (a) Formation free energies at 0 K (in black) and 800 K (in olive). The solid lines mark the boundaries of the convex hulls; the dashed lines are linear fits to data sets representing hP4* disordered phases with interstitial Cu defects; the dotted line connects particular interstitial configurations leading to the known mS44 monoclinic phase; the dash-dotted lines show the configurational entropy contribution with different fractions \(f\) of available interstitial sites in hP4* at 800 K; and the short dash line is the tangent to the free energy curve for \(f=1/6\). (b) Free energy distances to the convex hull for select Cu-Sn phases with the arrows illustrating the estimated temperature ranges of stability. Due the difficulty of modeling the disordered hP4* phase, the free energy and estimated transition temperatures were evaluated for a fixed-composition hP4*-Cu\({}_{6}\)Sn\({}_{5}\) phase with \(f=1/7\) occupation factor explained in the main text. dence with experiment. The sample curve and corresponding tangent constructed for \(f=1/6\) at 800 K illustrate that hP4* can indeed easily destabilize both monoclinic phases at elevated temperatures. While this analysis does not allow us to make accurate estimates of transition temperatures, it helps rationalize the kinetics of the ordered \(\eta^{\prime}\) and \(\eta^{8}\) phase formation from the disordered \(\eta\) precursor. Namely, it appears likely that only selected octahedral sites are effectively available for interstitial Cu occupation and the well-spaced defects do not need to migrate far to precipitate in the favorable ordered monoclinic configurations. The excess Cu atoms apparently do not diffuse out of the lattice at low temperatures, which may explain the absence of the stoichiometric hP4. The application of pressure reduces the number of viable ground states down to two at the 3:1 and 1:2 compositions as shown in Fig. 10. It is interesting to see that the oS80-Cu\({}_{3}\)Sn phase commonly obtained in ambient-pressure experiments but only metastable in DFT calculations [127] does stabilize over the hP8 polymorph above 10 GPa. Compression is the only factor identified in our studies so far that promotes the formation of the specific long-period superstructure. At the Sn-rich end, our NN-guided searches identified a thermodynamically stable tI12-CuSn\({}_{2}\) phase. This layered structure type is discussed further in the Pd-Sn and Ag-Sn subsections. ### The Pd-Sn binary The Pd-Sn binary has been attractive primarily due to its applications in nanocatalysis [138; 139; 140; 141]. 
In the bulk crystalline form, Pd-Sn alloys have been observed at the 3:1, 2:1, 1:1, 5:7, 1:2, 1:3, and 1:4 compositions [142; 143; 144; 145; 146; 147]. The oS20-PdSn\({}_{4}\) phase with space group \(Ceca\) has received most attention due its unusual topological properties. This Sn-rich phase has the same crystal structure and electron count as PtSn\({}_{4}\)[147; 148], known for its extremely large magnetoresistance [149] and Dirac node arcs [150; 151], but features gapped out Dirac node arcs [151]. Our screening and modeling of Pd-Sn alloys indicate that the binary may have several additional phases synthesizable at both ambient and high pressures, as illustrated in Figs. 13 and 14. The known phases defining parts of the ambient-pressure convex hull are oP8-PdSn discovered in 1946 [142], cP4-Pd\({}_{3}\)Sn and oP12-Pd\({}_{2}\)Sn reported in 1957 [143], and oS20-PdSn\({}_{4}\) observed in 2004 [147]. The mP24-Pd\({}_{5}\)Sn\({}_{7}\) phase was shown in 2010 to have partial occupancies [144]. We simulated six different decorations of the 4h sites in unit cells with Sn atoms and observed that the fully relaxed structures lie 24-40 meV/atom above the tie-line. The tI48-PdSn\({}_{2}\)[145] and oS32-PdSn\({}_{3}\)[146] phases observed in the 1950s were found to be metastable by about 1 and 5 Figure 14: Calculated stability of Pd-Sn intermetallics at 20 GPa. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The symbol and line styles are the same as in Fig. 1. Figure 13: Calculated stability of Pd-Sn intermetallics at ambient pressure. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The symbol and line styles are the same as in Fig. 1. meV/atom, respectively. The majority of new competitive compounds found in our searches are located at the Pd-rich end of the composition range. The tI18-Pd\({}_{8}\)Sn phase matches the bf0bcb581de49d90 entry listed in AFLOW but not discussed previously. It is based on the Pd bcc lattice with Sn substitutions that lead to a \(c/a=1.055\) tetragonal distortion and a \(-21.3\) meV/atom stabilization with respect to the fcc-Pd\(\leftrightarrow\)cP4-Pd\({}_{3}\)Sn tie-line. Although 8:1 was outside the range of standard compositions we chose to scan for the Sn binaries, the presence of the tI18 putative ground state prompted us to perform a search at this stoichiometry as well. Our resulting best candidate, aP9-Pd\({}_{8}\)Sn, is an improvement on tI18 by 1.9 meV/atom. The identified aP8-Pd\({}_{7}\)Sn, nR21-Pd\({}_{6}\)Sn, and mS12-Pd\({}_{5}\)Sn phases happen to be different decorations of the fcc lattice that break the fcc-Pd\(\leftrightarrow\)cP4-Pd\({}_{3}\)Sn tie-line by similar \(-24.3\), \(-27.0\), and \(-22.2\) meV/atom, and lie 0.6, \(-3.8\), and \(-1.2\) meV/atom from the aP9-Pd\({}_{8}\)Sn\(\leftrightarrow\)cP4-Pd\({}_{3}\)Sn tie-line, respectively. The mS12-Pd\({}_{5}\)Sn phase found in our searches is 8.1 meV/atom more stable than the proposed mS12-Pd\({}_{5}\)Sn reported by Wang _et al._[152]. The relative energies are consistent in other DFT approximations (see Table S4), which suggests that at least some of these four ground states might be obtained via standard synthesis routes. 
We also find that an mP32-Pd\({}_{5}\)Sn\({}_{3}\) phase with isolated Sn atoms breaks the oP12-Pd\({}_{2}\)Sn \(\leftrightarrow\) oP8-PdSn tie-line by \(-1.7\) meV/atom and a tI36-PdSn\({}_{2}\) phase is below tI48-PdSn\({}_{2}\) experimental phase by \(-1.3\) meV/atom. Figs. 13(c) shows that the vibrational contributions do not cause any changes in the convex hull up to 600 K. In order to better understand stable structural motifs defining the Sn-rich ground states, we compared the proposed and known phases with tetragonal (tI36-PdSn\({}_{2}\) and tI48-PdSn\({}_{2}\)) or near-tetragonal symmetry (oS32-PdSn\({}_{3}\) and oS20-PdSn\({}_{4}\)) featuring eight-fold coordinated Pd atoms (see Fig. 15). According to the detailed analysis of MSn\({}_{n}\) (\(n=2-4\)) alloys by Nylen _et al._[147], observed Sn-rich phases in several binary systems can be described as sequences of closely related building blocks. The Sn layers appear as either unrotated squares (4\({}^{4}\) Kepler nets in Schlafli notation) or a combination of rotated squares by 15-20\({}^{\circ}\) and rhombi (3\({}^{2}\)434 Kepler nets in Schlafli notation). Blocks formed by adjacent Sn layers along the stacking axis can also be shifted by (1/2,0), (0,1/2), or (1/2,1/2) within the basal plane. The stacking sequences in such phases can be represented conveniently with an alternative notation that explicitly specifies the location and orientation of the Pd and Sn layers. The "A", "B", and "Tl" symbols denote A-centered, B-centered, and missing Pd layers, respectively (see Fig. 15). The "+", "\(-\)", and "o" superscripts indicate clockwise, counterclockwise, and null rotations of Sn squares, respectively. The lateral placement of Sn squares is defined by the centering of the adjacent Pd layer(s) because metal atoms are never found directly above or below the Sn rhombi. The rotation sign is defined for Sn squares centered at (1/2,0) or (1/2,1/2). With this convention, the new tI36-PdSn\({}_{2}\) member of the PdSn\({}_{n}\) family can be represented as A\({}^{\circ}\)B\({}^{\circ}\)A\({}^{-}\)A\({}^{\circ}\)B\({}^{\circ}\)A\({}^{+}\) (Fig. 15(a)), which illustrates the presence of near-cubic \({}^{\circ}\)B\({}^{\circ}\) Sn coordinations around Pd atoms not seen in the known Pd-Sn prototypes. Considering the ubiquity of observed MSn\({}_{2}\) compounds, it seemed fitting to examine the relative stability of the competing prototypes across the block of noble metals. We chemically substituted and fully relaxed binary Sn alloys with Ru, Rh, Pd, Ag, Os, Ir, Pt, and Au metals in the tI36 and tI48 unit cells. Our re Figure 16: Relative stability of MSn\({}_{2}\) polymorphs referenced to the known tI48 structure. The red solid diamonds correspond to the proposed tI36 structure. The green solid circles denote the most stable known phase for each existing MSn\({}_{2}\) compound. The red hollow circle marks the predicted tI12-AgSn\({}_{2}\) discussed in the main text. Figure 15: Layer stacking of Pd-rich phases at ambient pressure. In this notation, A or B determines if the Pd atoms are in the (0,0) and (1/2,1/2) positions or in the (1/2,0) and (0,1/2) positions, while an empty square symbol denotes the absence of a Pd layer. For the Sn layers, the ”o” superscript denotes a layer of squares (4\({}^{4}\) Kepler nets), while “+” and ”–” superscripts specify the square rotation direction within the layer of squares and rhombi (3\({}^{2}\)434 Kepler nets) as described in the text. sults presented in Fig. 
Our results presented in Fig. 16 show that tI36 becomes favored over tI48 at the electron-rich end of both the \(4d\) and \(5d\) sets. Comparison of these phases to the ground states appearing in the Materials Project database [2] shows that only PdSn\({}_{2}\) has the potential to form in the tI36 configuration and that RhSn\({}_{2}\) [145] has virtually identical oS24 and tI24 energies due to a close relationship between the two prototypes with A\({}^{\circ}\)B\({}^{+}\)B\({}^{\circ}\)A\({}^{+}\) and A\({}^{\circ}\)B\({}^{+}\)B\({}^{\circ}\)A\({}^{+}\)A\({}^{\circ}\)B\({}^{-}\)B\({}^{\circ}\)□\({}^{-}\) representations. The hydrostatic compression to 20 GPa induces few changes in the calculated convex hull. Our predicted ambient-pressure mS32-Pd\({}_{5}\)Sn\({}_{3}\) ground state is replaced with a new one, oS20-Pd\({}_{3}\)Sn\({}_{2}\), at a nearby composition. At the 1:2 stoichiometry, the tI12-PdSn\({}_{2}\) polymorph stabilizes over tI48 by 58.4 meV/atom. This structure with the A\({}^{+}\)A\({}^{-}\) sequence (see Fig. 19) is also predicted to be favored at elevated pressure for CuSn\({}_{2}\) (Fig. 10) and at elevated temperature for AgSn\({}_{2}\) (Fig. 17). On the Sn-rich side, the known oS32-PdSn\({}_{3}\) phase remains metastable and the PdSn\({}_{4}\) compound is no longer favored.

### The Ag-Sn binary

The most comprehensive Ag-Sn phase diagram was compiled by Karakaya and Thompson in 1987 [153]. It summarized an extensive body of research on Ag-Sn alloys carried out since the 1920s. The accrued data indicated that the binary system has a high solubility of Sn in fcc-Ag up to \(x=0.115\), a disordered \(\zeta\) alloy between \(x=0.118\) and \(x=0.228\), an ordered Ag\({}_{3}\)Sn compound \(\epsilon\) with a relatively narrow stability range, and a eutectic point at the Sn-rich end with \(x=0.962\) and \(221^{\circ}\)C. The \(\zeta\) alloy was identified as an hcp-based solid solution with a near-ideal \(c/a=1.626\). Ag\({}_{3}\)Sn has also been determined to have an underlying hcp lattice, but its exact crystal structure has been the subject of a long debate. Structural solutions proposed since the compound's discovery in 1926 [154] include a disordered hcp phase [155], an orthorhombic displacive homeotype of \(\beta\)-TiCu (\(Cmcm\)) [156], and the D0\({}_{a}\)-Cu\({}_{3}\)Ti prototype (\(Pmmn\)) [157]. The latest studies [158; 159; 48; 50; 160] point to a general consensus that the equilibrium \(\epsilon\) phase has the ordered Cu\({}_{3}\)Ti-type structure (oP8). The study of Ag-Sn compounds is of significant importance in the ongoing development of Pb-free solders for high-performance electronic devices [161; 162; 48; 163; 164; 165]. The binary eutectic intermetallic is an integral part of the Sn-rich multicomponent alloys with Cu, Sb, Bi, and In optimized for robust operation under mechanical and thermal stresses. In particular, the extensive work discussed in Ref. [50] has demonstrated that joints' resistance to creep and thermal fatigue is strongly affected by the morphology of Ag\({}_{3}\)Sn forming during solder solidification.
Figure 17: Calculated stability of Ag-Sn intermetallics at ambient pressure. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The symbol and line styles are the same as in Fig. 1.
Figure 18: Calculated stability of Ag-Sn intermetallics at 20 GPa. The three panels show (a) the convex hull constructed at \(T=0\) K, (b) energy distances to the tie-line at \(T=0\) K, and (c) temperature ranges of thermodynamic stability. The symbol and line styles are the same as in Fig. 1.
A recent transport study has also indicated that Ag\({}_{3}\)Sn is a topological material with a nontrivial Berry phase and a possible candidate for valleytronics [160]. Our screening and modeling findings are consistent with the experimental observations. The unconstrained searches identified a number of hcp-based Ag-Sn phases in the \(0.12<x<0.23\) range that lie within 1-2 meV/atom of the fcc-Ag\(\leftrightarrow\)oP8-Ag\({}_{3}\)Sn tie-line. At the 7:1 composition, for instance, a single Sn substitution for Ag in the hexagonal \(2\times 2\times 1\) hcp-Ag supercell (\(c/a=1.638\)) produced an hP8 structure (\(c/a=1.620\)) located essentially on the tie-line (0.2 meV/atom in PBE, \(-1.1\) meV/atom in LDA, and 0.7 meV/atom in SCAN). An alternative monoclinic decoration resulted in a nearly degenerate mP16-Ag\({}_{7}\)Sn phase (0.4 meV/atom in PBE, \(-1.8\) meV/atom in LDA, and 1.2 meV/atom in SCAN). At the 7:2 composition, a monoclinic mP18 phase was found to be similarly close to stability (0.2 meV/atom in PBE, \(-0.7\) meV/atom in LDA, and 1.5 meV/atom in SCAN). It is evident that the hcp derivatives have comparable energies, which explains their appearance as a solid solution promoted by the configurational entropy in this Ag-rich part of the phase diagram. In agreement with the previous _ab initio_ analysis of the random alloy, the most stable hcp configurations have fairly uniform distributions of Sn atoms [159]. To better understand the favorability of oP8 at the 3:1 stoichiometry, we compared the relative stability of competing hcp supercells for the binary M\({}_{3}\)Sn compounds with Cu, Ag, and Au. Fig. S11 illustrates that only Cu\({}_{3}\)Sn benefits from the appearance of APBs and has a slight preference, by 2.4 meV/atom, for the hP8 configuration with the highest APB number per formula unit over oP8. In the M\({}_{3}\)Sn compounds with the larger Ag and Au metals, the trend is reversed and oP8 is favored over hP8 by 4.9 meV/atom and 19 meV/atom, respectively. We also considered the simplest orthorhombic representation of hcp at the Ag\({}_{3}\)Sn composition with the reported unit cell dimensions of \(a=2.98\) Å, \(b=5.15\) Å, and \(c=4.77\) Å [156]. In the ordered form, this four-atom unit cell cannot have the \(Cmcm\) symmetry. We found the fully relaxed oP4 structure with the \(Pmm2\) symmetry to be less stable than oP8 by 43 meV/atom. The inclusion of the vibrational entropy has little effect on the relative free energies for phases with \(x\leq 0.25\) but stabilizes two new phases at the Sn-rich end in our calculations. The tI12-AgSn\({}_{2}\) phase, shown to be metastable by Saleh _et al._ [159], is found 29 (6) meV/atom above the oP8-Ag\({}_{3}\)Sn \(\leftrightarrow\) \(\alpha\)-Sn (\(\beta\)-Sn) tie-line in our \(T\)=0 K calculations as well. This structure has the familiar A\({}^{+}\)A\({}^{-}\) stacking (see Fig. 19(c)) appearing as the high-\(P\) ground states in the Cu-Sn and Pd-Sn binaries. In the tI12-AgSn\({}_{2}\) case, the entropic contribution makes it the ground state above 360 K.
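Decorations like the 7:1 supercell above can be generated programmatically. The sketch below (our own illustration using ASE, with an approximate lattice constant; it is not the paper's actual workflow) builds the \(2\times 2\times 1\) hcp-Ag supercell and substitutes one Sn atom to produce an eight-atom Ag\({}_{7}\)Sn candidate:

```python
# Requires the Atomic Simulation Environment (pip install ase)
from ase.build import bulk

a = 2.95                                 # assumed hcp-Ag lattice constant (angstrom)
atoms = bulk("Ag", "hcp", a=a, c=1.638 * a).repeat((2, 2, 1))  # 8-atom supercell
atoms[0].symbol = "Sn"                   # single Sn-for-Ag substitution
print(atoms.get_chemical_formula())      # -> Ag7Sn, an hP8 candidate
```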
The tI10-AgSn\({}_{4}\) phase breaks the tI12-AgSn\({}_{2}\) \(\leftrightarrow\) \(\beta\)-Sn tie-line at a much higher temperature of 570 K, which falls between the \(\sim 500\) K and \(\sim 680\) K boundaries that define the stability range of the \(\epsilon\)-liquid mixture at this composition. The tall tetragonal unit cell, with \(a=3.310\) Å and \(c=23.51\) Å, is comprised of five stacked bct blocks and has two Ag atoms regularly spaced along the \(c\)-axis. The application of 20 GPa compression destabilizes the two Sn-rich phases and leads to a minor reordering of relative enthalpies, by 1-2 meV/atom, for competing hcp derivatives with \(0.12<x<0.25\). Some of these ordered phases, such as mP16 and oI40, become marginally stable at elevated pressures and temperatures.

## IV Summary

The NN-guided materials prediction approach introduced in our previous study and tested on a single Li-Sn binary system [19] has been used in this work to screen a larger configuration space of five M-Sn binaries. Over 2.0 million candidate phases were fully optimized with NN models in our evolutionary searches, requiring over 260,000 CPU hours, and almost 14,000 of them were further re-examined with DFT, costing about 0.98 million CPU hours. Compared to our previous study of Li-Sn alloys [19], the NN simulations for similar-sized structures were cheaper because of the lower number of nearest neighbors within the cutoff radius for the Sn binaries with larger metals, while the DFT calculations were more expensive because of the higher number of included (semicore) electrons. Therefore, an equivalent DFT-level scan of the M-Sn phases with the considered structure sizes would have necessitated about 100 million CPU hours, which indicates that the utilization of NN potentials accelerates global structure searches at \(T=0\) K by two orders of magnitude. An even more considerable benefit was seen in the performed screening for high-\(T\) ground states, as our phonon calculations at the NN level for over 6,000 phases (230 CPU hours) helped narrow the pool down to 173 viable candidates for subsequent phonon calculations at the DFT level (1.2 million CPU hours). The systematic scan has uncovered a surprisingly large number of possible thermodynamically stable M-Sn intermetallics, summarized in Table 2. A total of 11 phases have been shown to be below tie-lines formed by known alloys at ambient pressure, and an additional 18 phases have been determined to define convex hulls at 20 GPa. In terms of structural complexity, the identified ground states range from 2- and 3-atom known prototypes with high symmetry (tP2-CaSn and hP3-Na\({}_{2}\)Sn) to 20- and 22-atom unknown low-symmetry unit cells (mP20-Na\({}_{4}\)Sn and mS44-Ag\({}_{9}\)Sn\({}_{2}\)). According to our experience [11; 164; 165; 166], the identification of configurations with over 16 atoms per primitive unit cell is challenging, and conventional global searches performed directly at the DFT level could have easily missed the largest ones located in this study.
Figure 19: Structures of relevant Ag-Sn phases. (a) oP8-Ag\({}_{3}\)Sn is the only experimentally observed phase in the Ag-Sn binary system and a subject of multiple studies to date. (b) mP18-Ag\({}_{7}\)Sn\({}_{2}\) is one of the ordered representations of the disordered \(\zeta\) phase. (c) A\({}^{+}\)A\({}^{-}\) tI12-AgSn\({}_{2}\) is a predicted high-\(T\) ground state with the stacking notation explained in the Pd-Sn subsection.
A common approach for dealing with the problem is to collect relevant known prototypes and carry out a high-throughput chemical substitution screening. We have checked the AFLOW library [1; 167] and found that only 12 out of the 29 predicted structures match the known prototypes in this widely used database. Phases with particular underlying lattices, such as the proposed fcc-based Pd-rich Sn alloys, could also be found with the cluster expansion method, but an exhaustive screening of possible configurations becomes computationally demanding for large unit cells [19]. The main findings for each M-Sn binary include the following. In the Na-Sn system, we uncovered three possible high-\(T\) ground states at ambient pressure. The identification of the stable hP6-NaSn\({}_{2}\) derivative of the hP3-NaSn\({}_{2}\) phase is particularly unexpected because the system had been previously explored with global structure searches [66]. The Ca-Sn binary was the only considered system that did not show any viable ground states at 1 bar. Application of 1.9 GPa pressure induces a transformation of Ca\({}_{2}\)Sn from oP12 to a more symmetric hP6 polymorph with Ca-Sn honeycomb layers. With the known cP4-CaSn\({}_{3}\) remaining stable up to at least 20 GPa, it would be interesting to probe the response of its topological and superconducting properties to compression. In the Cu-Sn binary, we identified a possible high-\(T\) ambient-pressure tP10-CuSn\({}_{4}\) ground state and observed that the known oS80-Cu\({}_{3}\)Sn, metastable at ambient conditions [127], becomes favored at 20 GPa. We also performed a detailed _ab initio_ analysis of the known alloys and were able to demonstrate how vibrational and configurational entropy terms stabilize off-stoichiometry derivatives of the hP4-CuSn phase at high temperatures. In the Pd-Sn binary, we uncovered several viable ground states across the full composition range, including three fcc-based phases around \(x = 0.15\) and tI36-PdSn\({}_{2}\). To rationalize the stability of the latter, we introduced a notation illustrating its morphology in terms of Pd and Sn building blocks and examined the favorability trend of this A\({}^{\circ}\)B\({}^{\circ}\)A\({}^{-}\)A\({}^{\circ}\)B\({}^{\circ}\)A\({}^{+}\) MSn\({}_{2}\) configuration across a set of noble metals. In the Ag-Sn system, we detected several ordered representations of the known disordered alloy in the \(0.12<x<0.23\) composition range and identified two possible Sn-rich phases thermodynamically stable at high temperatures. We find it important to note that the reliability of these predictions depends strongly on the accuracy of the PBE functional within the GGA chosen for this study. Our additional LDA and SCAN tests presented in Tables 1-5 illustrate that most of our conclusions are consistent across the considered DFT approximations. Nevertheless, the unexpectedly high number of phases found to be stable in our calculations but never observed experimentally in these extensively studied binaries may still be a consequence of the limited accuracy of standard DFT flavors. Therefore, the dramatically expanded scope of NN-based searches can be instrumental for detecting potential DFT approximation artifacts. Further improvement of NN models may also be needed to ensure a robust identification of all viable candidates.
The 8-15 meV/atom accuracy of NN potentials appears to have been appropriate for the considered set of Sn alloys, as our NN+DFT hybrid tests for select Na-Sn and Pd-Sn compositions did not reveal any missed ground states, but more studies should be carried out for materials with diverse bonding types. The encouraging performance of our NN-guided structure prediction approach in the exploration of simple bulk alloys [17, 80], metal nanoparticles [79, 168], and more complex Sn alloys studied in Ref. [19] and here indicates that it may also be suitable for broader materials classes.

| Tin alloy | Pearson symbol | Space group | \(T\) range (K), 0 GPa | \(T\) range (K), 20 GPa | AFLOW prototype |
| --- | --- | --- | --- | --- | --- |
| Na\({}_{4}\)Sn | mP20 | 11 | 250 - | | - |
| NaSn\({}_{2}\) | hP6 | 194 | 70 - | | CaIn\({}_{2}\) |
| NaSn\({}_{5}\) | oI12 | 71 | 450 - | | - |
| Na\({}_{4}\)Sn | hR15 | 166 | | 0 - | - |
| Na\({}_{11}\)Sn\({}_{3}\) | hR42 | 166 | | 0 - | - |
| Na\({}_{7}\)Sn\({}_{2}\) | oS36 | 65 | | 0 - | - |
| Na\({}_{3}\)Sn | hP12 | 156 | | 0 - 630 | - |
| Na\({}_{2}\)Sn | hP3 | 191 | | 0 - | Os\({}_{2}\)Pt |
| NaSn | mP6 | 10 | | 0 - | LiSn |
| Na\({}_{3}\)Sn\({}_{5}\) | oS16 | 65 | | 0 - 420 | - |
| NaSn\({}_{3}\) | cP4 | 221 | | 0 - | Cu\({}_{3}\)Au (L1\({}_{2}\)) |
| Ca\({}_{3}\)Sn | hP8 | 194 | | 0 - | Ni\({}_{3}\)Sn (D0\({}_{19}\)) |
| Ca\({}_{2}\)Sn | hP6 | 194 | | 0 - | Ni\({}_{2}\)In (B8\({}_{2}\)) |
| CaSn | tP2 | 123 | | 0 - | \(\delta\)-CuTi (L2\({}_{a}\)) |
| CuSn\({}_{4}\) | tP10 | 125 | 550 - | | PtPb\({}_{4}\) (D1\({}_{d}\)) |
| CuSn\({}_{2}\) | tI12 | 140 | | 0 - | Al\({}_{2}\)Cu (C16) |
| Pd\({}_{8}\)Sn | aP9 | 2 | 0 - | 0 - | - |
| Pd\({}_{6}\)Sn | hR21 | 148 | 0 - | 0 - | - |
| Pd\({}_{5}\)Sn | mS12 | 12 | 0 - 660 | 0 - | - |
| Pd\({}_{5}\)Sn\({}_{3}\) | mS32 | 15 | 0 - | | - |
| PdSn\({}_{2}\) | tI36 | 140 | 0 - | | - |
| Pd\({}_{3}\)Sn\({}_{2}\) | oS20 | 36 | | 0 - 710 | - |
| PdSn\({}_{2}\) | tI12 | 140 | | 0 - | Al\({}_{2}\)Cu (C16) |
| AgSn\({}_{2}\) | tI12 | 140 | 360 - | | Al\({}_{2}\)Cu (C16) |
| AgSn\({}_{4}\) | tI10 | 139 | 570 - | | Al\({}_{4}\)Ba (D1\({}_{3}\)) |
| Ag\({}_{7}\)Sn | mP16 | 13 | | 0 - 880 | - |
| Ag\({}_{6}\)Sn | mS28 | 15 | | 0 - | - |
| Ag\({}_{9}\)Sn\({}_{2}\) | mS44 | 15 | | 100 - | - |
| Ag\({}_{4}\)Sn | oI40 | 44 | | 0 - | - |

TABLE 2: Compilation of all new thermodynamically stable M-Sn phases predicted in this study. The columns from left to right denote the composition, Pearson symbol, space group number, stable temperature ranges at ambient and elevated pressures, and the prototype if available in the AFLOW database.

## Data availability statement

The M-Sn NN models, IDs 002BD87A, 0552A1B2, 01121374, 036C3A42, 016C1E4A, can be downloaded at [https://github.com/maise-guide/maise/](https://github.com/maise-guide/maise/). Relevant M-Sn structures are given in the supplementary material. Other data supporting the findings of this study are available from the corresponding author upon request.

## Code availability statement

The MAISE and MAISE-NET codes are freely available for download at [https://github.com/maise-guide/](https://github.com/maise-guide/).

## Acknowledgements

We acknowledge the NSF support (Award No. DMR-1821815) and the Extreme Science and Engineering Discovery Environment computational resources [169] (NSF Award No. ACI-1548562, Project No. TG-PHY190024).

## Competing interests

The authors declare no competing interests.
## Additional information

**Supplementary information** The online version contains supplementary material.
2307.00319
A Survey on Explainable AI for 6G O-RAN: Architecture, Use Cases, Challenges and Research Directions
The recent O-RAN specifications promote the evolution of RAN architecture by function disaggregation, adoption of open interfaces, and instantiation of a hierarchical closed-loop control architecture managed by RAN Intelligent Controllers (RICs) entities. This paves the road to novel data-driven network management approaches based on programmable logic. Aided by Artificial Intelligence (AI) and Machine Learning (ML), novel solutions targeting traditionally unsolved RAN management issues can be devised. Nevertheless, the adoption of such smart and autonomous systems is limited by the current inability of human operators to understand the decision process of such AI/ML solutions, affecting their trust in such novel tools. eXplainable AI (XAI) aims at solving this issue, enabling human users to better understand and effectively manage the emerging generation of artificially intelligent schemes, reducing the human-to-machine barrier. In this survey, we provide a summary of the XAI methods and metrics before studying their deployment over the O-RAN Alliance RAN architecture along with its main building blocks. We then present various use-cases and discuss the automation of XAI pipelines for O-RAN as well as the underlying security aspects. We also review some projects/standards that tackle this area. Finally, we identify different challenges and research directions that may arise from the heavy adoption of AI/ML decision entities in this context, focusing on how XAI can help to interpret, understand, and improve trust in O-RAN operational networks.
Bouziane Brik, Hatim Chergui, Lanfranco Zanzi, Francesco Devoti, Adlen Ksentini, Muhammad Shuaib Siddiqui, Xavier Costa-Pérez, Christos Verikoukis
2023-07-01T12:10:18Z
http://arxiv.org/abs/2307.00319v3
A Survey on Explainable AI for 6G O-RAN: Architecture, Use Cases, Challenges and Research Directions

###### Abstract

The recent O-RAN specifications promote the evolution of RAN architecture by function disaggregation, adoption of open interfaces, and instantiation of a hierarchical closed-loop control architecture managed by RAN Intelligent Controllers (RICs) entities. This paves the road to novel data-driven network management approaches based on programmable logic. Aided by Artificial Intelligence (AI) and Machine Learning (ML), novel solutions targeting traditionally unsolved RAN management issues can be devised. Nevertheless, the adoption of such smart and autonomous systems is limited by the current inability of human operators to understand the decision process of such AI/ML solutions, affecting their trust in such novel tools. eXplainable AI (XAI) aims at solving this issue, enabling human users to better understand and effectively manage the emerging generation of artificially intelligent schemes, reducing the _human-to-machine_ barrier. In this survey, we provide a summary of the XAI methods and metrics before studying their deployment over the O-RAN Alliance RAN architecture along with its main building blocks. We then present various use-cases and discuss the automation of XAI pipelines for O-RAN as well as the underlying security aspects. We also review some projects/standards that tackle this area. Finally, we identify different challenges and research directions that may arise from the heavy adoption of AI/ML decision entities in this context, focusing on how XAI can help to interpret, understand, and improve trust in O-RAN operational networks.

6G, AI, ML, O-RAN, Survey, Trust, XAI

## I Introduction

### _Context and Motivation_

6G wireless networks are poised to revolutionize the way we connect, communicate, and share information, catalyzing smart services and innovative applications [1, 2, 3, 4]. 6G is expected to transform mobile communication networks from the Internet of Things (IoT) to "connected intelligence", by leveraging Artificial Intelligence (AI) techniques and connecting billions of devices and people [5, 6, 7, 8]. The promise of immense numbers of connected devices, ultra-low latency, low energy footprints, and extremely high data rates is expected to enhance the sustainability, connectivity, and trustworthiness of the next-generation mobile network and support the development of innovative applications, such as truly immersive eXtended Reality (XR), smart grid 2.0, high-fidelity mobile holograms, and Industry 5.0 [9, 10, 11, 12]. The co-existence of such a variety of applications, along with their specific requirements, demands a versatile mobile network capable of accommodating and guaranteeing the expected performances by means of accurate and smart management of network components and resources [13, 14, 15] across different technological domains, i.e., the Radio Access Network (RAN), core network, cloud, and edge. To this end, both industry and academia are leveraging the Network Slicing (NS), Software Defined Networking (SDN), and Network Function Virtualization (NFV) paradigms to transform the mobile ecosystem into a more intelligent, energy-efficient, virtual, and software-focused ecosystem [16, 17, 18, 19].
In this context, a global initiative was formed, comprising over \(200\) companies from the telecommunications industry, which collaborated under the umbrella of the Open Radio Access Network (O-RAN) Alliance to introduce a novel RAN architectural design for the forthcoming generation of mobile networks (B5G and 6G) [20][21]. The core concept of O-RAN revolves around the disaggregation of traditional RAN system functionalities and their conversion into software components, known as Virtual Network Functions (VNFs), which are interconnected through standardized and open interfaces. Additionally, O-RAN introduces a novel hierarchical RAN Intelligent Controller (RIC) architecture [22], which includes two main building blocks, namely the Non Real-Time RAN Intelligent Controller (Non RT RIC) [23] and the Near Real-Time RAN Intelligent Controller (Near RT RIC) [24], designed to enhance the capabilities and flexibility of the RAN ecosystem. The Non RT RIC is responsible for executing non-time-critical functions and tasks, such as policy management, network optimization, and long-term analytics, while the Near RT RIC focuses on time-critical operations and tasks that require low latency and quick decision-making. AI is widely expected to play a critical role in the development and implementation of future network management operations, pursuing better network performance, cost savings, and an enhanced customer experience [25][26][27]. In this context, O-RAN envisions RIC entities supporting programmable functions and logic, featuring heavy usage of AI techniques, in particular Machine Learning (ML) and Deep Learning (DL), to ease the development of intelligent and flexible RAN applications and reduce operational complexity [28]. Among others, the AI-based RICs aim to tackle traditionally hard-to-solve aspects of the RAN domain, such as spectrum management, mobility, radio resource assignment and scheduling, admission control, link management, and power allocation [29, 30]. This is particularly beneficial in the 6G landscape when considering various vertical industries and their corresponding networking requirements. Despite a promising future in radio management scenarios, the widespread adoption of AI techniques in O-RAN raises security threats and requires a deep understanding of such technologies before their deployment in real systems, including a characterization of how these techniques make decisions and behave based on the knowledge they acquire from the input data [31]. Mainly due to a lack of trust, transparency, and explainability caused by the opaque nature of black-box AI models [32], network operators are reluctant to deploy AI-based applications at the RAN level. Moreover, an erroneous adoption of AI/ML-based actuators could expose infrastructure providers to Service Level Agreement (SLA) violations [33] or, in the worst case, lead to failures. It is therefore urgent to clearly identify the operational boundaries of AI/ML models, characterize and understand their behaviour, and prioritize faithful and trustworthy decision-making processes to enable automated network service management while leaving the quality of service unaffected. On that account, new approaches are required to provide explainable and understandable decisions [34]. eXplainable AI (XAI) is an emerging paradigm that aims to shed light on the decision process performed by closed (black-box) AI models.
The main objective of XAI is to create a transparent and human-understandable model (white box) that clarifies the internal processes of AI models, e.g., by determining the contribution of each input feature to an AI decision or prediction [35]. XAI is crucial to demonstrate the accuracy, fairness, and transparency of AI models that drive decisions and operations in the network, thereby instilling trust and confidence in businesses and organizations deploying AI-powered components in the O-RAN ecosystem [36, 37].

List of Acronyms

3GPP: 3rd Generation Partnership Project
4G: Fourth Generation
5G: Fifth Generation
6G: Sixth Generation
A2C: Advantage Actor Critic
AI: Artificial Intelligence
ANN: Artificial Neural Network
API: Application Programming Interface
B5G: Beyond Fifth-Generation
BBU: Baseband Unit
BS: Base Station
CAPEX: CAPital EXpenditures
CI/CD: Continuous Integration and Delivery
CNN: Convolutional Neural Network
CP: Control Plane
CT: Continuous Training
CU: Central Unit
DAG: Directed Acyclic Graph
DARPA: Defense Advanced Research Projects Agency
DevOps: DEvelopment and IT Operations
DeepLIFT: Deep Learning Important FeaTures
DL: Deep Learning
DNN: Deep Neural Network
DQN: Deep Q-Network
DRL: Deep Reinforcement Learning
DU: Distributed Unit
eMBB: enhanced Mobile Broadband
eNB: eNodeB
ENI: Experiential Networked Intelligence
ETSI: European Telecommunications Standards Institute
FL: Federated Learning
GAN: Generative Adversarial Network
gNB: gNodeB
GNN: Graph Neural Network
HE: Horizon Europe
IEEE: Institute of Electrical and Electronics Engineers
IG: Integrated Gradients
IoT: Internet of Things
ISG: Industry Specification Group
KL: Kullback-Leibler
LIME: Local Interpretable Model-Agnostic Explanations
LLM: Large Language Model
LO: Log-Odds
LSTM: Long Short Term Memory
MAC: Medium Access Control
MDP: Markov Decision Process
MEC: Multi-access Edge Computing
MLOps: ML system operations
ML: Machine Learning
mMTC: massive Machine Type Communications
MM: Mobility Management
MNO: Mobile Network Operator
MR: Machine Reasoning
MVNO: Mobile Virtual Network Operator
Near RT RIC: Near Real-Time RAN Intelligent Controller
NFV: Network Function Virtualization
NG RAN: New Generation RAN
Non RT RIC: Non Real-Time RAN Intelligent Controller
NR-MAC: New Radio Medium Access Control
NS: Network Slicing
O-Cloud: Open Cloud
O-CU-CP: Open RAN Central Unit Control Plane
O-CU-UP: Open RAN Central Unit User Plane
O-CU: Open RAN Central Unit
O-DU: Open RAN Distributed Unit
O-RAN: Open Radio Access Network
O-RU: Open RAN Radio Unit
OPEX: Operational EXpenditures
OSC: Open RAN Software Community

### _Review of Existing Related Surveys_

Several studies have already addressed the novel O-RAN architecture, highlighting its novel approach and investigating potential benefits and drawbacks. In [39], the authors provided a short review of both the advantages and limitations of O-RAN, focusing on the O-RAN architecture and its main modules. The authors conducted a community survey on the benefits of O-RAN among \(95\) researchers from all around the world. Most of them agreed that O-RAN will be the foundation of next-generation networks. Finally, the authors discussed the benefits, current shortcomings, and future research directions of O-RAN. Similarly, [40] described the O-RAN architecture and its key concepts. In addition, the authors presented a novel DL-based scheme for radio resource assignment, validating its performance using data collected from real mobile network deployments.
The authors concluded their work by discussing open challenges and future research opportunities. Another review study is provided by [41]. The authors showcase how a DL-based scenario can be deployed on top of the O-RAN architecture, highlighting the main advantages and shortcomings of O-RAN. The evolution of RAN architectures towards the O-RAN proposal, both in terms of functionality and implementation, is discussed in [42][43]. In the same context, the support of B5G key concepts, such as network slicing and MEC, by the O-RAN architecture is elaborated in [44][52][53][54]. In our previous work [45], we proposed a survey study on the O-RAN architecture, discussing the evolution of RAN architectures and comparing different studies from various perspectives. We focused our review on existing AI-based schemes dealing with RAN challenges and showed how these schemes can be supported by O-RAN by considering the deployment of two realistic DL-based case studies. Similarly, in [38], the authors provided a tutorial on the O-RAN framework, describing recent specifications in terms of architecture, design, and open interfaces. They also discussed the main open research challenges and the new innovation possibilities in the O-RAN architecture, focusing on AI and deep learning. Besides, the XAI topic is attracting interest from both research and industry. Currently, XAI is one of the main programs of the Defense Advanced Research Projects Agency (DARPA), expected to efficiently design the "third-wave AI systems" [55]. In [46][47], the authors reviewed and analyzed several XAI approaches, focusing on algorithmic aspects, classifications, and application domains, and identifying several still-open challenges and key future research directions. The main principles and practices of XAI are summarized in [37]. In particular, the authors target the specific pattern recognition models of machine learning in order to enhance the understanding of such models for industry practitioners (data scientists). In [48], the authors discussed a set of key measurement metrics that can help evaluate explainable AI systems. In the 6G context, the authors of [35] discussed the use of XAI, targeting different 6G use cases (e.g., Industry \(5.0\)). Similarly, in [50] the authors highlight existing tools and their use in dealing with 6G network challenges, discussing how to integrate XAI into the 6G network architecture through a real mobile traffic prediction use-case and validating their findings on realistic traffic data. Conversely, the authors of [49] focused on XAI methods in the low protocol layers of mobile networks, e.g., the Physical (PHY) and Medium Access Control (MAC) layers. In the same context, the authors of [51] describe the application of XAI to security aspects, discussing how XAI can improve the interpretation of AI-based models for a wide range of security use-cases related to B5G/6G networks. Table I summarizes the main topics discussed across the above works and compares their contributions with respect to our work, in order to provide an easy understanding of the differentiating features with respect to the state of the art. Despite the presence of several survey papers discussing XAI and O-RAN, there is a lack of comprehensive surveys jointly investigating XAI and O-RAN aspects that can effectively explore the potential of XAI for developing a responsible, trustworthy, and transparent AI-powered O-RAN architecture.
In addition, although the integration of XAI with B5G networks has been addressed, e.g., in [35][49], such studies focus neither on the RAN part nor on the novel O-RAN architecture. Therefore, a comprehensive survey of XAI and its potential in designing the future O-RAN is greatly needed to guide practitioners as well as researchers.

### _Main Contributions_

The contributions of this paper can be summarized as follows:

* _Bridging the gap between O-RAN and XAI_: Existing surveys on O-RAN focused on its enabling technologies, such as the hierarchical RAN Intelligent Controllers, open interfaces, and programmable functions. To the best of our knowledge, there is no survey addressing the potential of human and O-RAN interactions through XAI systems. Similarly, existing surveys on XAI targeted different XAI approaches and their taxonomies, and more recently their applications to B5G/6G networks. However, discussions on the potential of XAI for O-RAN are still missing. Therefore, this survey paper aims to bridge this gap by jointly exploring the key benefits of the introduction of XAI to O-RAN.
* _A comprehensive survey of XAI deployment on top of O-RAN_: Existing works studied O-RAN and XAI separately, i.e., no work has combined both paradigms in its study. Hence, in this survey paper, we study the promising deployment of XAI on top of the AI-enabled O-RAN. This includes the O-RAN architecture as well as O-RAN use cases. Furthermore, we study the mapping of existing O-RAN research solutions to XAI-supported solutions.
* _An in-depth analysis of XAI automation for O-RAN:_ We provide an exhaustive analysis of how to automate the whole XAI pipeline on top of O-RAN, in order to ensure stable performance of the deployed XAI techniques. To the best of our knowledge, no existing work has discussed the automation of the XAI process for O-RAN. We also design new architectures showing the automated deployment of XAI for different levels of automation.
* _O-RAN security aspects and XAI:_ We present the key findings of the official security risk assessments conducted on the O-RAN environment. We explore the potential of XAI to significantly improve the security layer of O-RAN, and how it could be used to build interpretable security threat detection mechanisms. Additionally, we discuss how XAI can help establish trust among stakeholders.
* _Identifying new XAI-related issues and promising research directions:_ Integrating XAI with O-RAN will raise new issues, which should also be considered in future research studies. Thus, we exhaustively discuss new open challenges, along with future research directions.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline **Works** & **AI** & **XAI** & **O-RAN** & **O-RAN** & **O-RAN** & **O-RAN** & **Summary** \\ \hline \hline [38] & L & L & H & H & L & H & A tutorial on the O-RAN framework, describing recent specifications in terms of architecture, design, and open interfaces. \\ \hline [39] & L & L & H & H & L & H & A short survey on O-RAN's architecture, benefits, shortcomings, and future directions. \\ \hline [40] & M & L & H & H & H & A concise paper on the O-RAN architecture. It designed a DL-based resource allocation scheme and discussed future directions. \\ \hline [41] & M & L & H & H & H & A short survey on O-RAN's architecture, benefits, and future directions. It showed the deployment of DL-based scenarios in O-RAN. \\ \hline [42][43] & L & L & H & H & L & L & Short review papers that discussed the evolution of RAN architectures towards O-RAN in terms of functionality and implementation. \\ \hline [44] & L & L & H & M & L & L & A concise paper that discussed the integration of emergent B5G concepts with O-RAN, such as network slicing and Multi-access Edge Computing (MEC). \\ \hline [45] & H & L & H & H & H & A survey on DL/ML-based solutions for RAN/O-RAN. It includes an O-RAN architecture description along with its use cases as well as future directions and open challenges. \\ \hline [46][47] & H & H & M & L & L & H & A review of XAI approaches, in terms of their algorithmic aspects, classifications, application domains, and future research directions. \\ \hline [37] & H & H & M & L & L & H & A review of the main principles and practice of XAI. In particular, the specific pattern recognition models of machine learning are targeted. \\ \hline [48] & H & H & M & L & L & H & A review of a set of key measurement metrics, which can help to measure and evaluate any explainable AI system. \\ \hline [35] & H & H & H & L & L & M & A survey on the use of XAI for B5G/6G networks. It addresses how to design XAI systems for B5G use cases, as well as future research directions in this context. \\ \hline [49] & H & H & H & L & L & L & A review of DL-based solutions in the PHY and MAC layers and their performance vs. XAI trade-off. \\ \hline [50] & H & H & H & L & L & L & A review of existing XAI techniques and their applicability to deal with the 6G network challenges. \\ \hline [51] & H & H & H & L & L & M & A survey on the application of XAI to the security aspects of B5G networks as well as future research directions. \\ \hline **This survey** & H & H & H & H & H & H & **A comprehensive survey on the use of XAI to design a transparent and trustworthy O-RAN architecture, covering architectural aspects, use cases, projects, standardization approaches, and future research directions.** \\ \hline \end{tabular} \end{table} TABLE I: Existing surveys on O-RAN, XAI, and XAI for B5G. **H: High**, **M: Medium**, and **L: Low**.

Fig. 1: The taxonomy of the article.

### _Paper Organization_

As shown in Fig. 1, the survey is organized as follows. Sec. II provides background information related to the topics considered in the survey. Sec. III describes how XAI methods and models can be deployed on top of the O-RAN architecture, considering three realistic deployment scenarios and communication interfaces. Sec. IV gives a literature review of existing related works in the field, focusing on AI techniques targeting RAN optimization and highlighting how XAI could enhance their performance. Sec. V provides an overview of O-RAN use cases taken from the literature and standard documentation, highlighting how XAI could bring benefits to the considered scenarios. Sec. VI details an automation pipeline for XAI model training and deployment in the O-RAN context, involving multiple architectural components and communication interfaces. Sec. VII provides an overview of security issues related to the O-RAN architecture, focusing on XAI-related aspects. Sec. VIII presents the main ongoing projects and standards that are working to promote the adoption of AI/ML techniques in O-RAN, and shows how they can be enhanced by XAI. Sec. IX highlights and discusses open challenges along with future research directions to address them. Finally, Sec. X concludes this paper. Note that the acronyms used in this paper are described in the _List of Acronyms_, in alphabetical order, for ease of reference.
## II Background

This section provides background information on the AI, XAI, and O-RAN topics required to fully understand the potential of XAI techniques in the O-RAN domain. Firstly, we introduce the two main branches of AI, namely machine and deep learning, and their characteristics. Secondly, we describe the main concepts, techniques, and emergent applications of XAI. Finally, we present the O-RAN architecture along with its main modules as designed by the O-RAN alliance.

### _Artificial Intelligence (AI)_

AI aims to reproduce human intelligence in machines through rules and algorithms. It combines various fields, including planning, reasoning, communication, learning, interaction, and perception. In our study, we focus on ML/DL as the AI branches that require XAI for explanation and interpretation.

#### II-A1 Machine Learning (ML) Classes

Machine learning is a sub-field of AI that is based on mathematical and statistical models to process data and build inferences from data patterns. Usually, machine learning algorithms are categorized into three main classes:

* _Supervised Machine Learning:_ It uses labeled data to build a mapping function from input variable(s) to output data. It can deal with two different problems: classification, to predict the class a particular data observation belongs to, and regression, to predict a real (continuous) value. In this machine learning class, a wide range of algorithms have been designed: _linear and logistic regression_ learn the relationship between inputs and outputs through a straight line and a curve, respectively. Note that logistic regression is typically used for binary classification since its outputs lie in the range \([0,1]\) [56]. _Random Forests_, or _random decision forests_, are ensembles of decision trees built for classification and regression. Random Forests belong to a class of learning techniques named ensemble learning, which also comprises techniques such as AdaBoost and Gradient Boosting Machines [57]. _Support Vector Machines (SVM)_ are another supervised learning algorithm for classification and regression that builds learning models from a statistical point of view. SVM is mainly based on Vapnik-Chervonenkis theory to determine the best hyperplane separating the data instances [58]. _Artificial Neural Networks (ANN)_ emulate the human brain by connecting a set of neurons with each other through edges with associated weights. An ANN aims to adjust the edge weights, improving the learning accuracy and minimizing the loss function. Note that deep learning is also based on advanced ANNs with more hidden layers [59].
* _Unsupervised Machine Learning:_ It processes unlabeled data to deduce common patterns/information. Unsupervised learning includes clustering, association, and dimensionality reduction. _K-means_ is a clustering algorithm that groups data into \(k\) different clusters based on the distance to the centroid of each group. The distance may be computed using Euclidean, dynamic time warping, or Manhattan metrics [60]. _Association rules_ enable us to determine associations among data observations, which then helps to understand the relationships within the data, for instance, establishing associations between shoppers according to their purchasing or browsing histories [61]. _Principal Component Analysis (PCA)_ is a dimensionality reduction algorithm used to reduce the dataset dimension while keeping the main information.
PCA is based on geometrical projections to project data onto new components, named principal components [60].
* _Reinforcement Learning (RL):_ It enables an agent (or a set of agents) to perceive and interpret its environment, take actions, and learn via trial and error. In particular, the agent gets positive or negative rewards based on its actions. Thus, the agent aims to devise the optimal policy that maximizes the cumulative reward. The most popular reinforcement learning algorithms are _Q-learning_, _Deep Q-learning_, and advantage actor-critic, where "Q" stands for Quality [62]. _Q-learning_ consists of finding the optimal action for each state based on stochastic transitions. Q-learning builds a Q-table containing the Q-values that correspond to each transition from one state to another. _Deep Q-learning_ replaces the Q-table with an ANN. _Advantage Actor Critic (A2C)_ is another reinforcement learning algorithm that builds two different networks, called actor and critic. The actor chooses optimal actions, while the critic network evaluates their quality [63].
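As a concrete illustration of the tabular Q-learning update just described (a toy sketch with a made-up two-state environment, unrelated to any O-RAN task):

```python
import numpy as np

# Toy environment: 2 states, 2 actions; rewards and transitions are made up
R = np.array([[0.0, 1.0], [1.0, 0.0]])   # reward for (state, action)
P = np.array([[1, 0], [0, 1]])           # next state for (state, action)

Q = np.zeros((2, 2))                     # the Q-table
alpha, gamma, eps = 0.1, 0.9, 0.1        # learning rate, discount, exploration
rng = np.random.default_rng(0)

s = 0
for _ in range(5000):
    # epsilon-greedy action selection
    a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
    r, s_next = R[s, a], P[s, a]
    # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
print(Q)  # learned state-action values
```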
#### II-A2 Deep Learning Algorithms

Deep learning algorithms are considered advanced machine learning algorithms and thus a sub-class of the machine learning field. Deep learning first enables automated data pre-processing without human intervention. It also consists of extracting unknown knowledge from input data through multiple levels. Some widely used deep learning algorithms are as follows:

* _Convolutional Neural Network (CNN):_ It is a deep learning algorithm mostly used for image/object classification and recognition. A CNN is based on multilayer neural networks comprising an input layer, a set of hidden layers, and an output layer. The intermediate (hidden) layers can be composed of different types of layers, including convolutional, pooling, normalization, and fully connected layers [64].
* _Sequence Algorithms_: These algorithms are used to deal with problems related to sequential data and time series, e.g., language translation and speech recognition. _Recurrent Neural Networks (RNN)_ and _Long Short Term Memory (LSTM)_ are widespread deep learning algorithms belonging to this class. Their basic idea is to leverage internal memory units known as the cell state, which is the long-term memory of the system, and the output of the previous point in time, known as the hidden state, which is the short-term memory. The output depends on the cell state, the hidden state, and the current input. At the same time, the long-/short-term memory depends on past LSTM experience and greatly influences its performance [65]. For example, if the phrase "Artificial Neural Network" is part of an LSTM model's training data, the model would probably predict the word "Network" after seeing the input "Artificial Neural".
* _Generative Adversarial Network (GAN)_: It is based on CNNs to enable generative modelling. In other words, a GAN is applied in unsupervised learning to learn new knowledge and common properties in data, based on previous information. A GAN comprises two modules: a generator module to create new data, and a discriminator module to evaluate the quality of the newly generated data.
* _Auto-encoder_: It is another unsupervised deep learning algorithm applied to the task of data representation. It aims to learn an efficient coding model of unlabeled data. An auto-encoder is composed of two main branches: an encoder to map input data to a, typically compressed, encoded representation of it, and a decoder to reconstruct the inputs from this encoded representation. It can be used in different applications, including data compression, noise reduction, and anomaly detection.
* _Transformers_: They are advanced models that use self-attention mechanisms to capture dependencies between sequence elements, surpassing traditional recurrent neural network-based models. By weighing element importance based on relevance to others, transformers achieve superior contextual understanding. They excel in tasks like machine translation, text generation, sentiment analysis, and question answering. Transformers [66, 67] capture long-range dependencies, generating coherent and contextually rich text, which makes them a strong component in building Large Language Models (LLMs) [68].
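The self-attention operation at the heart of transformers can be written in a few lines; the sketch below (our own minimal numpy illustration) also returns the attention weights, which are exactly what the attention-based XAI methods discussed later inspect:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core transformer operation."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Three token embeddings of width 4 (made-up numbers)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, attn = scaled_dot_product_attention(X, X, X)   # self-attention
print(attn)  # each row sums to 1: which tokens each position attends to
```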
### _eXplainable AI (XAI)_

In this subsection, we provide the background on XAI and its main concepts, applications, and ongoing studies.

#### III-B1 Definitions and Key Concepts

XAI is a set of methods and tools that help human users to interpret, understand, and trust results produced by AI-based models [46][37]. XAI is used to describe AI models, e.g., ML/DL models, their possible biases as well as expected impacts. In other words, XAI aims to build a white-box model, which gives information on the inner working of the underlying ML/AI black-box model. Thus, it helps characterize model fairness, accuracy, and transparency in AI-enabled decisions. XAI is then vital for businesses and organizations in providing confidence and trust when deploying AI models [37]. More specifically, the XAI model exposes information about the inner working of the AI models, leveraging the concepts of _explainability_ and _interpretability_. Explainability involves explaining the behavior of complex models using model-agnostic methods, and it is considered an active characteristic of a model, referring to any function or action taken by a model in order to describe or clarify its internal functions. In contrast, interpretability, also known as transparency, involves observing a model's inner mechanics to understand its behavior. In other words, interpretability is defined as a passive characteristic of a model that reflects the degree to which a particular model makes sense from the point of view of a human observer. XAI models incorporate the so-called _explanation user interface_ to generate a user-understandable explanation and/or interpretation of the rationale behind decisions taken by the model. Most AI models can be translated into an equivalent XAI counterpart, at the expense of integrating additional layers supporting the explanation user interface on top of the deployed model. Based on the design of the explanation user interface, the XAI model can provide both explainability and interpretability or only one, depending on the target human user [46].

#### III-B2 Taxonomy of XAI Techniques, Applications, and Stakeholders

There are several existing taxonomies in the XAI realm, which can complement and/or overlap each other. Table II describes an XAI taxonomy that is mainly inspired by [37, 46].

| Model type | Explainability basis | Technique/algorithm | Pros and cons | Reference |
| --- | --- | --- | --- | --- |
| Black-box models | Attributions (gradient) | Saliency Maps | Pros: simplicity, visual interpretability, widely applicable. Cons: lack of context, sensitivity to input perturbations, limited to input gradients. | [69, 70] |
| Black-box models | Attributions (gradient) | Gradient x Input | Pros: simplicity, direct relevance, feature importance ranking. Cons: input scaling sensitivity, limited to linear relationships, potential for misleading interpretations. | [71, 72] |
| Black-box models | Attributions (gradient) | Integrated Gradients | Pros: baseline comparison, path-based attribution, completeness and sensitivity. Cons: computationally intensive, baseline selection challenge, linearity assumption. | [73, 74] |
| Black-box models | Attributions (gradient) | Smooth Gradient | Pros: noise reduction, robustness to adversarial examples, gradient visualization. Cons: interpretation challenges, hyperparameter sensitivity, computational overhead. | [75, 76] |
| Black-box models | Attributions (gradient) | epsilon-LRP | Pros: deep model interpretability, conceptual clarity, attribution preservation. Cons: complexity, parameter tuning, vulnerability to network architecture. | [77, 78] |
| Black-box models | Attributions | SHAP | Pros: theoretical grounding based on game theory, global and local interpretability, consistency. Cons: computational complexity, high-dimensional data challenge, model approximation dependency. | [79, 80] |
| Black-box models | Attributions | DeepLIFT | Pros: model-agnostic, captures interactions, relevance conservation. Cons: computational overhead, baseline selection challenge, interpretation complexity. | [81] |
| Black-box models | Attributions (perturbation) | Occlusion | Pros: intuitive visual interpretation, robustness to model architecture, spatial localization. Cons: computational expense, coarseness of occlusion, interpretation subjectivity. | [82, 83] |
| Surrogates | Local techniques | LIME | Pros: model-agnostic, local interpretability, simplicity. Cons: interpretability limitation, instability, assumes linearity. | [85, 86] |
| Surrogates | Global techniques | TREPAN | Pros: decision tree interpretability, human-readable explanations, transparent model behavior. Cons: limited to decision tree models, model-specific, interpretation scalability. | [87] |
| Surrogates | Rule-based | RuleFit | Pros: combines decision trees and linear regression to provide interpretable insights into the model's decision-making process. Cons: may struggle to model highly intricate or complex nonlinear patterns in the data. | [88, 89] |
| RL/DRL agents | Reward | Reward Shaping | Pros: provides explicit guidance to the RL/DRL agent, allowing it to focus on desired behaviors. Cons: can introduce biases if the reward shaping is not carefully designed. | [90, 91] |
| RL state | Model-based | Attention Mechanisms | Pros: offers transparency by showing which parts of the input state the RL/DRL agent attends to. Cons: attention mechanisms do not explicitly explain the agent's internal reasoning or decision-making process. | [92, 93] |
| — | — | Machine Reasoning | Pros: provides human-interpretable explanations for model decisions and shows explicit reasoning behind decisions, enhancing transparency. Cons: requires expertise, computationally expensive, and may struggle with uncertain or probabilistic information. | [94, 95] |
| Transformers' attention heads | — | Attention flow analysis | Pros: provides interpretability, enables fine-grained analysis, helps improve models, offers domain-specific insights. Cons: complexity, lack of unique interpretations, limited context, challenges in generalization. | [96] |
| — | Visual | SCM | Pros: causal understanding, intuitive visualization, identifying confounding variables. Cons: limited to causal modeling, simplified representation, expert knowledge required. | [97, 98] |
| — | Text | Text explanations | Pros: contextual understanding, language comprehension, multimodal interpretation. Cons: subjectivity and ambiguity, lack of fine-grained control, reliance on training data. | [99, 100] |
| Transparent models | Intrinsic | — | Pros: explainability, trust and accountability, debugging and error analysis. Cons: performance limitations, vulnerability to adversarial attacks. | [101, 102] |

TABLE II: XAI Taxonomy.

The taxonomy is based on the following three main criteria:

* _Model Transparency:_ XAI models can be classified based on the transparency of the target ML models. In this regard, models are classified as interpretable or complex. Interpretable models are by themselves understandable for human users. In other words, such models are able to provide the rationale behind their decisions in an interpretable way to users [46]. Several proposed works have succeeded in interpreting relatively low-complexity ML models, including logistic/linear regression, decision trees, K-Nearest neighbors, rule-based learners, etc. [46]. On the other hand, more complex models such as deep neural networks, in order to be interpretable, have to be approximated by generating simpler surrogate models that ease the explanation task, by means of a technique known as _post-hoc explainability_ [110]. Model complexity is a widely considered aspect in the literature related to XAI and is generally adopted to classify XAI approaches [46].
* _Model Agnosticity_: This criterion targets complex ML/DL models, where XAI models can be categorized based on the nature of their target explanations [46][37]. Indeed, XAI models may consider the internal structure and functioning of the learning models, such as model weights, to build their explanations. In such cases, they target a specific ML/DL model. Alternatively, XAI models can give their interpretations without considering the internal functioning and structure of the studied learning models, but only based on their predictions.
* _Explainability Methods:_ When ML/DL models are considered complex models, dedicated techniques should be devised to interpret them. Thus, XAI models rely on several explanation types to describe how these ML/DL models produce their predictions for any input data. * Explanations by simplification refer to the techniques that simplify a complex model and approximate it with an interpretable model, which is easier to explain [85]. * Feature relevance explanations study and quantify the impact of each input feature, to explain a given ML model's prediction [111]. * Local explanations focus on a single or particular prediction (output) of ML models to generate explanations [85]. * Visual explanations aim to generate explanations in a visual way, describing the inner functioning of ML/DL models [97]. For instance, they could reveal which set of pixels is the most relevant to recognize content in image classification tasks. Visual explanations rely on several tools, e.g., graphs, heatmaps, scatter plots, etc. * Text explanations generate symbolic interpretations of learning models using, for example, natural language text, to explain their results [99]. For instance, they could be used to highlight which words (or forms) are leveraged in automatic email spam filtering. Based on the above taxonomy criteria, several XAI approaches have been proposed in the literature. In what follows, we present the most popular ones, highlighting their main features: * _SHapley Additive exPlanations (SHAP):_ This approach relies on feature relevance explanation to interpret a particular prediction of supervised ML/DL models [120]. It computes an additive feature importance score with respect to a set of required properties (e.g., accuracy, consistency, and missingness). Hence, SHAP determines feature influence by applying the Shapley values method, which enables estimating the marginal contribution of one feature to the final reward function (a usage sketch is given below). In addition, combining several predictions can also be considered to build a global explanation. Several variants of SHAP have been proposed in the literature in order to optimize its computational complexity, such as DeepSHAP [120] and TreeSHAP [121]. * _Deep Learning Important FeaTures (DeepLIFT) [81]:_ The purpose of DeepLIFT is to clarify the output of a neural network by calculating the significance of each input feature to the output. This is accomplished by comparing the activation of each neuron in the network for a particular input to the activation that would have been obtained if a reference input had been used. DeepLIFT measures the difference in activations between the input and the reference to compute the contribution of each input feature to the output. The resulting contribution score can be used to understand how the network reached its conclusion and to identify the most relevant input features. DeepLIFT has been effective in explaining the behavior of different neural network models, such as convolutional and recurrent neural networks, and has been applied to various fields, including drug discovery, image classification, and speech recognition.
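To make the attribution-based approaches concrete, the following is a minimal, illustrative sketch (not code from the surveyed works) of computing SHAP attributions for a tree-ensemble classifier via the TreeSHAP variant mentioned above; the dataset and model choices are assumptions for demonstration only.

```python
# Hedged sketch: SHAP attributions for a scikit-learn classifier.
# Assumes the `shap` and `scikit-learn` packages are installed.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer implements the efficient TreeSHAP variant for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:50])  # local per-feature attributions

# Aggregating |SHAP| values over samples yields a global importance ranking.
shap.summary_plot(shap_values, X.iloc[:50])
```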
* _Local Interpretable Model-Agnostic Explanations (LIME):_ It is one of the best-known solutions, relying on local and simplification explanations to explain supervised ML/DL models [85]. LIME is a model-agnostic approach targeting different types of data, e.g., tabular, text, graphs, and images. LIME aims to approximate the learning models by developing locally linear models, which replace the black-box models to explain their individual predictions. * _Integrated Gradients (IG):_ also known as Path-Integrated Gradients or Axiomatic Attribution for Deep Networks. IG is an XAI technique that assigns an importance value to each input feature using the gradients of the model output [122]. Specifically, it is a local method that accumulates the gradients at points sampled at uniform spacing along a straight line between the input and the baseline (a NumPy sketch follows the SCM item below). This procedure avoids obtaining null gradients when, e.g., the deep learning model is flat in the proximity of the input feature. The method yields the specific positive or negative attributions of the input features. * _Graph Neural Network (GNN) Explainer:_ It is a technique that explains the predictions of Graph Neural Networks (GNNs) for graph-structured data. It identifies the most important nodes and edges contributing to the output by generating explanation vectors using an additional neural network. This produces an attention map that shows the relative importance of each node and edge. GNN Explainer can be applied to various GNN architectures and input graphs, without requiring changes to the model or training data. It is useful for understanding how GNNs make predictions and identifying potential issues [84]. * _Reward Shaping:_ It entails altering the reward function of the agent to offer supplementary feedback or incentives. This adjustment assists in steering the agent's learning process by molding the reward signal [90]. * _Attention Mechanism:_ It enhances interpretability by identifying and highlighting the crucial elements of the input that significantly impact the agent's decision-making process, shedding light on the specific features that capture the agent's attention and influence its decisions [92]. * _Machine Reasoning (MR):_ It utilizes logical reasoning and inference techniques to offer insights into the decision-making process of AI models, thereby improving transparency and trust. It generates explanations that are easily comprehensible to humans, fostering a deeper understanding and acceptance of AI systems. Nevertheless, applying machine reasoning in XAI necessitates expertise in logic and reasoning, and it may encounter difficulties when dealing with uncertain or probabilistic information. Nonetheless, the incorporation of machine reasoning in XAI contributes to the advancement of interpretable and accountable AI systems [94]. * _Attention Flow Analysis:_ It assesses the individual contribution of attention heads in the encoder to the overall performance of the transformer model. Specifically, it examines the roles played by these attention heads, with a particular focus on the most important and confident ones. These heads often exhibit consistent and linguistically interpretable roles, providing valuable insights into the model's decision-making process [96]. * _Structural Causal Models (SCM):_ It is another method that targets reinforcement learning models, aiming to show the causal links between the data variables. In [98], the authors leverage the SCM method to explain the behavior of a reinforcement learning model. SCMs rely on visual explanations through a Directed Acyclic Graph (DAG), where the nodes and edges reflect the model states and actions, respectively. By exploring the DAG, one can extract which actions to take to move from one state to another. Once the DAG is created, regression models are built to approximate the relationships using the minimum number of variables. Analyzing the DAG's variables then helps to generate the explanations, in order to answer the question: "Why action X and not Y?".
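Since Integrated Gradients is defined by a simple path integral, it can be sketched in a few lines of NumPy. The snippet below is an illustrative implementation under stated assumptions: a smooth toy model, a zero baseline, and finite-difference gradients standing in for automatic differentiation.

```python
# Hedged sketch of Integrated Gradients: accumulate gradients along the
# straight line between a baseline and the input, then scale by (x - baseline).
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    # Central finite differences; a stand-in for autodiff in this toy sketch.
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        grad[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return grad

def integrated_gradients(f, x, baseline, steps=50):
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.mean(
        [numerical_grad(f, baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * grads  # per-feature positive/negative attributions

# Toy differentiable model: a smooth scalar score over 3 features (assumed).
f = lambda x: np.tanh(x @ np.array([0.5, -1.2, 2.0]))
x = np.array([1.0, 0.3, -0.7])
print(integrated_gradients(f, x, baseline=np.zeros_like(x)))
```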
* _Caption generation:_ It is a class of methods that aims to generate text interpretations to explain the outputs of DL models. In [100], the authors combined a Convolutional Neural Network (CNN) model and a bi-directional LSTM encoder/decoder model. The LSTM encoder helps to extract video features, which are then used by the LSTM decoder to generate textual video captions.
* _Knowledge Graphs:_ To produce human-understandable explanations, it is necessary to represent ideas in terms of concepts rather than numeric values. Concepts and the connections between them make up what is called a _knowledge graph_. It is a powerful way of representing data because knowledge graphs can be built automatically and can then be explored to reveal new insights about the domain, especially to find inferred concepts that were not asserted, along with being able to trace back all the steps, making it fully explainable [102]. As anticipated before, the selection of suitable explainability methods depends both on the complexity of the targeted model to be explained and on the target audience. Indeed, the type of explanation exposed and its level of detail depend mainly on the people receiving such information. In this context, different user profiles may be targeted by XAI models, and XAI models' explanations should differ from one user to another [46]. Table III shows the different objectives of XAI explainability, expected by different user profiles.

TABLE III: XAI Users

| XAI Users | Needs | Key Application Areas | Reference |
|---|---|---|---|
| Data Scientists and Machine Learning Researchers | They require XAI techniques to understand and debug complex models, identify biases, and improve model performance | Model development, debugging, and optimization across various domains such as telecommunications, healthcare, finance, natural language processing, and computer vision | [85, 112] |
| End Users and Consumers | They need explanations to trust and understand AI systems in applications like recommender systems, personalized marketing, and decision support tools | E-commerce and personalized recommendation systems, healthcare decision support tools, financial advice platforms, and autonomous vehicles | [113, 114] |
| Managers and Decision Makers | They require transparent and interpretable AI models to make informed decisions, assess risks, and gain insights into the AI system's behavior | Business intelligence and analytics, risk assessment and management, fraud detection, and regulatory compliance across industries such as finance, healthcare, and manufacturing | [115] |
| Developers and Engineers | They need tools and methods to troubleshoot network faults/SLA violations, build explainable AI systems, ensure reliability, and meet regulatory requirements | Building interpretable machine learning models, developing explainable AI frameworks and libraries, ensuring model reliability and security in domains like cybersecurity, telecommunications, and autonomous systems | [34, 116] |
| Auditors and Compliance Officers | They require XAI to assess the fairness, accountability, and compliance of AI systems and to identify potential biases or risks | Assessing the fairness and legality of AI systems in finance, hiring practices, loan approvals, credit scoring, and regulatory compliance in sectors such as finance and human resources | [117] |
| Legal Professionals and Judges | They need explanations to understand AI decisions, assess legal implications, and ensure transparency and fairness in legal proceedings | Interpreting AI-driven legal decisions, evaluating algorithmic fairness, ensuring transparency and accountability in legal proceedings, and addressing ethical concerns in areas like criminal justice and civil rights | [118] |
| Regulators and Policy Makers | They require XAI to establish guidelines, standards, and regulations around AI ethics, transparency, and accountability | Establishing guidelines, standards, and regulations for trustworthy AI in sectors including healthcare, finance, autonomous systems, and data privacy to protect public interests and ensure ethical AI deployment | [119] |

For instance, users of the models seek to trust as well as understand how the model works, while users affected by the models' decisions aim to understand those decisions and the main reasons behind them. Besides, developers and data scientists expect explanations related to the AI models' performance, in order to optimize them over time. However, both regulatory and manager users aim to get more details related to the compliance of AI models with the legislation in force, so as to check and assess them. ### _XAI Metrics_ XAI metrics are used to measure the quality of the original AI model and its explanation. We summarize relevant XAI metrics in Table IV, and compile a list of them in the following.

TABLE IV: Taxonomy of XAI metrics.

| XAI Method | Basis | Metric | Type of Problem | Reference |
|---|---|---|---|---|
| Attributions-based | Feature mutation/masking | Confidence/Faithfulness | Classification and Regression | [123] |
| Attributions-based | Feature mutation/masking | Log-odds | Classification | [124] |
| Attributions-based | Feature mutation/masking | Comprehensiveness | Classification | [125] |
| Attributions-based | Feature mutation/masking | Sufficiency | Classification | [125] |
| Attributions-based | Raw features | Interpretability | Regression | [126] |
| Attributions-based | Raw features | Uncertainty | Decision/Action | [127] |
| Surrogates-based | Perturbation | Robustness/Sensitivity | Regression and Decision/Action | [109] |
| Surrogates-based | Perturbation | (In)fidelity | Regression | [128, 129] |
| Surrogates-based | Local fit | LIME Explainer R2 Score | Classification and Regression | [85] |
| Surrogates-based | Prediction agreement | Relative Consistency | Regression | [130] |
| Surrogates-based | Feature recovery | Explainer Recall | Classification | [131] |
| Surrogates-based | Feature recovery | Explainer Precision | Classification | [131] |

* _Confidence/Faithfulness:_ A common approach to measuring the confidence of the explanation relies on the notion of feature relevance. Specifically, observing the effect of muting, i.e., replacing a feature with a baseline value (generally zero), helps to measure the effect on the prediction in both classification and regression tasks [123]. For instance, for a probabilistic classification model, we can obscure or remove features according to a policy defined as follows \[\hat{x}_{i,k}=x_{i,k}\times(1-p),\ p\sim\mathcal{B}(1,\pi_{i,k})\] (1) where \(\pi_{i,k}\) is a probability distribution over the features that can be computed as \[\pi_{i,k}=\frac{\exp\Bigl\{\bigl|a_{i,k}/x_{i,k}\bigr|\Bigr\}}{\sum_{l=1}^{N}\exp\Bigl\{\bigl|a_{l,k}/x_{l,k}\bigr|\Bigr\}},\ i=1,\ldots,N,\] (2) where \(a_{i,k}\) is the attribution of feature \(i\) in a sample of class \(k\). It is obtained using any attribution-based XAI method, such as IG or SHAP. The confidence score in this case is \[c_{k}=\frac{\Delta_{k}^{(c)}}{\Delta_{k}},\] (3) where \(\Delta_{k}^{(c)}\) is the number of samples that conserve their class \(k\) after the mutation of the dataset and \(\Delta_{k}\) stands for the original count of samples with class label \(k\). For regression tasks, however, the classes are replaced with the notion of groups, which are defined by comparing the continuous prediction output with one or several thresholds.
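As a worked illustration of Eqs. (1)-(3), the following hedged NumPy sketch computes the confidence score for an arbitrary classifier; the classifier `clf`, the attribution matrix `a` (e.g., from SHAP or IG), the data `X`, and the predictions `y_pred` are assumed inputs, not names from the cited works.

```python
# Hedged sketch of the masking-based confidence score of Eqs. (1)-(3).
import numpy as np

def confidence_score(clf, X, a, y_pred, rng=np.random.default_rng(0)):
    # Eq. (2): map |a/x| to a per-sample probability distribution (softmax).
    z = np.abs(a / (X + 1e-12))
    pi = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Eq. (1): mute each feature with probability pi (Bernoulli draw).
    mask = rng.binomial(1, pi)
    X_mut = X * (1 - mask)
    # Eq. (3): fraction of samples whose predicted class is conserved.
    return np.mean(clf.predict(X_mut) == y_pred)
```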
* _Log-Odds (LO):_ Similarly to the confidence, this score is defined as the average difference of the negative logarithmic probabilities of the predicted class before and after masking the top \(p\%\) features with zero padding [124]. Given the attribution scores generated by an explanation algorithm, we select the top \(p\%\) features based on their attributions and replace them with zero padding. More concretely, for a dataset with \(L\) samples, it is defined as: \[\log\text{-odds}(p)=-\frac{1}{L}\sum_{i=1}^{L}\log\frac{\Pr\left(\hat{y}|\mathbf{x}_{i}^{(p)}\right)}{\Pr\left(\hat{y}|\mathbf{x}_{i}\right)}\] (4) where \(\hat{y}\) is the predicted class, \(\mathbf{x}_{i}\) is the \(i\)th sample, and \(\mathbf{x}_{i}^{(p)}\) is the modified sample with the top \(p\%\) features replaced with zero padding. Lower scores are better. * _Comprehensiveness:_ is the average change in the predicted class probability before and after removing the top \(p\%\) features. Similar to log-odds, this measures the influence of the top-attributed features on the model's prediction. It is defined as [125]: \[\mathrm{Comp}(p)=\frac{1}{L}\sum_{i=1}^{L}\left[\Pr\left(\hat{y}|\mathbf{x}_{i}\right)-\Pr\left(\hat{y}|\mathbf{x}_{i}^{(p)}\right)\right]\] (5) Here \(\mathbf{x}_{i}^{(p)}\) denotes the modified sample with the top \(p\%\) features deleted. Higher scores are better. * _Sufficiency:_ is defined as the average change in the predicted class probability before and after keeping only the top \(p\%\) features. This measures the adequacy of the top \(p\%\) attributions for the model's prediction. It is defined in a similar fashion to comprehensiveness, except that \(\mathbf{x}_{i}^{(p)}\) is defined as the sample containing only the top \(p\%\) features. Lower scores are better [125]. * _Robustness/Sensitivity:_ A crucial property that interpretability methods should satisfy to generate meaningful explanations is that of robustness with respect to local perturbations of the input. This is not the case for popular interpretability methods; even adding minimal white noise to the input introduces visible changes in the explanations [109]. To formally quantify the stability of an explanation generation model, one can estimate the Lipschitz constant \(\lambda\) for a given input \(x_{i}\) and a neighborhood \(B_{\epsilon}\) of size \(\epsilon\) as \[\lambda(x_{i})=\operatorname*{arg\,max}_{x_{j}\in B_{\epsilon}(x_{i})}\frac{\|\Phi(x_{i})-\Phi(x_{j})\|_{2}}{\|x_{i}-x_{j}\|_{2}},\] (6) where the evaluation of the explaining function \(\Phi\) for methods like LIME and SHAP is expensive, as it involves model estimation for each query. In contrast, gradient-based attribution methods present a lower complexity. On the other hand, computing (6) for post-hoc explanation frameworks is much more challenging, since they are not end-to-end differentiable. Thus, one needs to rely on black-box optimization instead of gradient ascent. This continuous notion of local stability in (6) might be inadequate for discrete inputs or settings where adversarial perturbations are overly restrictive. In such cases, one can instead define a (weaker) sample-based notion of stability: for any \(x\) in a finite sample \(X=\{x_{i}\}_{i=1}^{n}\), one replaces \(B_{\epsilon}(x)\) with an \(\epsilon\)-neighborhood within \(X\), i.e., \[\mathcal{N}_{\epsilon}(x)=\{x^{\prime}\in X\,|\ \|x-x^{\prime}\|\leq\epsilon\}.\] (7)
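The sample-based notion of stability just described can be estimated directly from data. The sketch below is one possible NumPy implementation of Eqs. (6)-(7) restricted to a finite sample; the attribution function `explain` is an assumed input.

```python
# Hedged sketch: sample-based Lipschitz stability of explanations, Eqs. (6)-(7).
import numpy as np

def sample_lipschitz(explain, X, eps):
    lam = np.zeros(len(X))
    Phi = np.array([explain(x) for x in X])   # precompute all explanations
    for i, x in enumerate(X):
        d_in = np.linalg.norm(X - x, axis=1)
        nbrs = (d_in <= eps) & (d_in > 0)     # Eq. (7): eps-neighborhood in X
        if nbrs.any():
            d_out = np.linalg.norm(Phi[nbrs] - Phi[i], axis=1)
            lam[i] = np.max(d_out / d_in[nbrs])  # Eq. (6), restricted to X
    return lam
```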
* _Uncertainty:_ An explanation links the input features to the output prediction/decision with high certainty when large attributions (in absolute value) are concentrated on a few features, as opposed to a less certain, near-uniform distribution. Indeed, let \(N\) denote the number of features. If we map the attributions to a probability space (using, e.g., Eq. (2)), the resulting entropy, \[\mathcal{H}_{k}=-\sum_{i=1}^{N}\pi_{i,k}\log(\pi_{i,k}),\] (8) measures the uncertainty of the output (prediction or decision) with respect to the input (features or states) [127]. On the other hand, when the number of features is very high, one can characterize the uncertainty by comparing the distributions of the attributions and a reference uniform probability density function. This can be done by invoking the discrete Kullback-Leibler (KL) divergence: the larger the KL divergence, the higher the certainty yielded by the XAI method. * _Infidelity:_ In XAI surrogate methods, i.e., the schemes that approximate the original model with a lower-complexity, more interpretable surrogate such as LIME, the fidelity of the surrogate to the original model can be quantified. Indeed, given a black-box function \(f\), an explanation functional \(\Phi\), and a random variable \(\mathbf{I}\in\mathbb{R}^{n}\) with probability measure \(\mu_{\mathbf{I}}\), which represents meaningful perturbations of interest, the explanation infidelity can be defined as [128] \[\mathcal{I}(\Phi,f,\mathbf{x})=\mathbf{E}_{\mathbf{I}\sim\mu_{\mathbf{I}}}\left[\left(\mathbf{I}^{T}\Phi(f,\mathbf{x})-\left(f(\mathbf{x})-f(\mathbf{x}-\mathbf{I})\right)\right)^{2}\right]\] (9) where \(\mathbf{I}\) represents significant perturbations around \(\mathbf{x}\) and can be specified in various ways, such as the difference to a baseline, \(\mathbf{I}=\mathbf{x}-\mathbf{x}_{0}\). * _Fidelity and Soundness:_ Two metrics can be applied to evaluate fidelity. Firstly, [85] used recall (\(\mathcal{R}\)) as a measure of fidelity, defined as \[\mathcal{R}=\frac{|\mathcal{T}\cap\mathcal{E}|}{|\mathcal{T}|}\] (10) where the True Features \(\mathcal{T}\) represent the relevant features as extracted directly from the white-box model and the Explanation Features \(\mathcal{E}\) represent the features characterized as most relevant by the explanation [129]. This measure indicates how well the explanation captures the most relevant features of the predictive model, i.e., it serves as a measure of the completeness of the explanation. Additionally, to understand how well the explanation excludes irrelevant features (soundness of the explanation), precision (\(\mathcal{P}\)) can be measured: \[\mathcal{P}=\frac{|\mathcal{T}\cap\mathcal{E}|}{|\mathcal{E}|}\] (11)
* _R-squared (R2) Score:_ Behind the workings of LIME lies the assumption that every complex model is linear on a local scale. LIME tries to fit a simple model around a single observation that mimics how the global model behaves at that locality. The simple model can then be used to explain the predictions of the more complex model locally. In this respect, the R-squared (R2) score is used to measure the performance of the surrogate local model. * _Relative Consistency:_ Let \(f_{i}\) denote a predictor trained over dataset \(\mathcal{D}_{i}\). Explanations arising from different predictors are said to be consistent if they are close when the predictions agree with one another, i.e., given the sets \[\begin{split}\mathcal{S}^{\prime}=\{\delta_{i,j}(x)\,|\,f_{i}(x)=y\wedge f_{j}(x)=y\}\\ \mathcal{S}^{\prime\prime}=\{\delta_{i,j}(x)\,|\,f_{i}(x)=y\oplus f_{j}(x)=y\},\end{split}\] (12) where \(\delta_{i,j}(x)\) is a similarity measure between the explanations produced for \(f_{i}(x)\) and \(f_{j}(x)\), and \(\gamma\) is a fixed threshold. We aim at making the gap between the set of consistent explanations \(\mathcal{S}^{\prime}\) and inconsistent ones \(\mathcal{S}^{\prime\prime}\) visible. In this respect, we invoke the true positive rate, \[\mathrm{TPR}(\gamma)=\frac{|\{\delta\in\mathcal{S}^{\prime}:\delta\leq\gamma\}|}{|\{\delta\in\mathcal{S}:\delta\leq\gamma\}|},\] (13) where \(\mathcal{S}=\mathcal{S}^{\prime}\cup\mathcal{S}^{\prime\prime}\). In addition, we also consider the true negative rate, \[\mathrm{TNR}(\gamma)=\frac{|\{\delta\in\mathcal{S}^{\prime\prime}:\delta>\gamma\}|}{|\{\delta\in\mathcal{S}:\delta>\gamma\}|}.\] (14) The quality of these explanations can be assessed independently of the accuracy of the predictor via the _Relative Consistency (ReCo)_ metric [130]: \[\mathrm{ReCo}=\max_{\gamma}\mathrm{TPR}(\gamma)+\mathrm{TNR}(\gamma)-1,\] (15) with a score of \(1\) indicating perfect consistency of the predictors' explanations, and a score of \(0\) indicating complete inconsistency.
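To make the masking-based metrics concrete, the following hedged sketch evaluates the log-odds score of Eq. (4) for a generic probabilistic classifier; `predict_proba` and the attribution matrix `a` are assumed inputs, not names from the cited works.

```python
# Hedged sketch of the log-odds score of Eq. (4): mask the top p% features
# (by attribution) with zeros and average the change in log-probability
# of the predicted class.
import numpy as np

def log_odds(predict_proba, X, a, p=0.2):
    n_top = max(1, int(p * X.shape[1]))
    idx = np.argsort(-a, axis=1)[:, :n_top]       # top p% features per sample
    X_mask = X.copy()
    np.put_along_axis(X_mask, idx, 0.0, axis=1)   # zero padding
    proba = predict_proba(X)
    y_hat = proba.argmax(axis=1)
    p_orig = proba[np.arange(len(X)), y_hat]
    p_mask = predict_proba(X_mask)[np.arange(len(X)), y_hat]
    return -np.mean(np.log(p_mask / p_orig))      # lower is better
```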
### _O-RAN Alliance Specifications_ The O-RAN Alliance aims to lead the telecom industry toward designing an intelligent and open RAN [20][21], leveraging and extending the 3rd Generation Partnership Project (3GPP) reference RAN architecture towards greater flexibility in network deployment and enabling scalability for new services. By disaggregating software from hardware and establishing open, interoperable interfaces, the O-RAN Alliance aims to foster a more modular and flexible RAN ecosystem. This approach allows for greater compatibility and interchangeability among different vendors' equipment, enabling network operators to avoid vendor lock-in and embrace a wider range of technology solutions. As shown in Fig. 2, the O-RAN Alliance has created \(11\) Working Groups (WGs) and three focus groups, for RAN cloudification, automation, and disaggregation. A technical steering committee coordinates the workgroups, where each one focuses on a part of the O-RAN architecture. For instance, WG1 specifies the Service Management and Orchestration (SMO) part, while WG2 and WG3 specify the Non RT RIC and Near RT RIC, respectively.

Fig. 2: O-RAN Alliance Technical Steering Committee structure.

The new O-RAN architecture leverages NFV and SDN technologies to define new open interfaces and disaggregate the RAN functional blocks, allowing the deployment of new services and applications. O-RAN divides the Baseband Unit (BBU) of the RAN into three functional blocks: Central Unit (CU), Distributed Unit (DU), and Radio Unit (RU). To support control/user plane separation, the CU block is further divided into a control plane CU-Control Plane (CP) and a user plane CU-User Plane (UP) sub-block. Fig. 3 shows the reference architecture of the O-RAN Alliance.

Fig. 3: O-RAN Architecture.

The radio frequency signals are received, transmitted, amplified, and digitized at the RU, which is located near the antenna, while the CU and DU represent the base station's computation parts and are in charge of transmitting the digitized radio signal to the network. Note that the DU block may be deployed near or at the RU block, while the CU block may be deployed near the core network part. It is also worth noting that 3GPP has defined different RAN deployment scenarios and functional split options, which are described in [45][132]. The two main components introduced by the O-RAN architecture are summarized below: * _Non Real-Time RAN Intelligent Controller (Non RT RIC):_ it supports non-RT functions (i.e., with a time granularity greater than 1s) such as policy-based guidance. The Non RT RIC is located at the SMO and comprises two sub-functions: Non RT RIC Applications (rApps) and the Non RT RIC framework. The latter is in charge of providing all required services to rApps via the R1 interface, whether from the Non RT RIC framework or the SMO, while rApps leverage the functionality provided by the Non RT RIC framework, such as data monitoring via the O1 interface (stored in a database), to perform intelligent RAN optimization functions at non-RT scale. Such functionality enables rApps to get information and trigger actions, e.g., re-configuration and policies. Hence, the Non RT RIC enables exposing an intelligent RAN policy to the Near RT RIC, through the A1 interface, based mainly on data analytics and ML/DL inference. * _Near Real-Time RAN Intelligent Controller (Near RT RIC):_ it is in charge of controlling and optimizing the O-RAN nodes (CU and DU) and their resources through fine-grained data monitoring and actions over the E2 interface, at a near-RT scale (i.e., from \(10\) ms to \(100\) ms). It hosts several Near RT RIC Applications (xApps), which may collect near-RT information (e.g., on a per-UE or per-cell basis) through the E2 interface and provide value-added services, with respect to the Non RT RIC's policies received via the A1 interface. xApps include Mobility Management (MM), Resource Management (RM), Spectrum Management (SM), etc.
## III XAI Deployment on O-RAN In this section, we describe how XAI methods can be deployed in the O-RAN framework and architecture by means of three realistic reference scenarios. ### _Introduction and Motivation_ As described in Sec. II-D, the basic idea of O-RAN is not only to disaggregate RAN functions by exploiting the flexibility brought by virtualization techniques, but also to design RICs that locally host specific RAN applications (e.g., rApps and xApps), addressing several heterogeneous control tasks such as handover management, energy management, fault detection, and radio resource allocation. The O-RAN framework has been devised to natively support heavy usage of machine/deep learning (ML/DL) techniques to enhance the development and operation of intelligent RAN applications, paving the way for future B5G network services. For instance, as shown in [29], enabling cooperation among several xApps can help to optimize network performance, both in terms of data throughput and packet delivery ratio. However, one of the main challenges of AI-based O-RAN management is the lack of transparency of the decision-making processes that govern AI algorithms, which makes it difficult for network operators and engineers to diagnose problems and further optimize the network behavior. Therefore, there is a pressing need to integrate XAI into O-RAN management operations, so as to gain more detailed information about the decision-making processes of ML and DL algorithms. Specifically, XAI techniques should be incorporated into the running AI-based rApps/xApps to provide transparent explanations of their outputs. This would not only improve the accuracy and transparency of the decisions made by these systems but also increase the trust of network operators and engineers in the performance of the network. ### _Local Interpretable AI Deployment_ The availability of open interfaces and the distributed nature of RAN deployments allow for the design and implementation of advanced federated and distributed schemes that aim to overcome traditional RAN management scalability issues. As mentioned before, in the O-RAN context, xApps and rApps deployed at the Near RT RIC and the Non RT RIC, respectively, oversee the RAN management operations supporting heterogeneous network services. To achieve this, supervised learning has been widely leveraged, either in a centralized or distributed (federated) way [133][134]. Indeed, Federated Learning (FL) aims to generate learning models in a collaborative way while preserving the privacy of the involved learners [135][136]. FL is a promising technique also for the O-RAN context, especially for rApps/xApps that belong to different network operators (or vendors), or that are spatially distributed, in order to jointly preserve data privacy and confidentiality and gain knowledge from heterogeneous local scenarios so as to build a global view. Fig. 4 depicts an example scenario that considers the possibility to train and deploy both AI and XAI models in a federated way.
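As a toy illustration of Steps 1-3 (detailed next), the sketch below shows the FedAvg-style weight aggregation such a federated AI/XAI pipeline typically relies on; the linear local models and synthetic cell data are illustrative assumptions, not O-RAN code.

```python
# Hedged sketch: FedAvg-style aggregation of local models trained per cell.
import numpy as np

def train_local(X, y):
    # Toy local model: least-squares linear weights (stand-in for an xApp model).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def fedavg(weights, n_samples):
    # Aggregation at the Non RT RIC: average local weights by data size.
    n = np.asarray(n_samples, dtype=float)
    return np.average(np.stack(weights), axis=0, weights=n / n.sum())

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0])
cells = []
for _ in range(3):  # three Near RT RICs with local cell datasets (Step 1)
    X = rng.normal(size=(100, 4))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    cells.append((X, y))

local_w = [train_local(X, y) for X, y in cells]          # Step 1, local training
global_w = fedavg(local_w, [len(X) for X, _ in cells])   # Steps 2-3, O1 up / A1 down
print(global_w)
```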
To avoid communication overhead and significant operational costs, as well as to accommodate specific latency requirements, it is always beneficial to process data locally, where they are made available by dedicated monitoring functions. Therefore, raw control-plane information generated by end-users at a given cell (or multiple cells) can be processed locally, in the Near RT RIC, and used to train AI/ML-based models and their corresponding local XAI models (**Step 1**). In order to gain from the distributed nature of RAN deployments, such local information can be transferred to the Non RT RIC, exploiting the O1 interface (**Step 2**). By combining multiple local models trained over a particular portion of the input space, the Non RT RIC aims to provide more generalized and advanced models to the Near RT RIC and the corresponding xApp. This information can be provided as feedback (**Step 3**) via the A1 interface. Hence, leveraging the data collected from distributed nodes via the O1 interfaces, predictions along with their corresponding explanations can be performed in real time. Favored by a continuous learning process, both AI and XAI outputs should be considered to perform management decisions and improve network performance. For instance, such outputs can help to update the users' scheduling policy or the radio resource assignment. To do so, such decisions are communicated from the Near RT RIC to the corresponding O-RAN modules, i.e., the Open RAN Central Unit (O-CU) or Open RAN Distributed Unit (O-DU), via the E2 interface (**Step 4**). In this context, different XAI techniques can be leveraged. For instance, RuleFit is one of the most used XAI techniques [88][89]. Its fundamental idea is to capture interactions between the original dataset features in order to create new features in the form of decision rules. Then, RuleFit learns a new transparent learning model using the original features, as well as a number of new features that are decision rules (see the sketch after this subsection). Furthermore, the XAI explainability (outputs) may target different user profiles (**Step 5**). For instance, users of the models may want to trust and understand how the model works, while explanations related to the AI models' performance are sent to developers and data scientists to optimize their behavior over time. In addition, more details about the AI models' compliance with the legislation in force should be communicated to both regulatory and manager users in order to check and assess them. A similar XAI-assisted deployment can also map state-action pairs and guide a DRL agent to select the best and most explainable actions for specific network state values.
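As an illustration of the RuleFit idea mentioned above, the following hedged sketch (not the original RuleFit implementation) extracts rule features from the leaves of a shallow boosted-tree ensemble and fits a sparse linear model over raw features plus rules; it assumes scikit-learn 1.2+ for the `sparse_output` flag, and the data are synthetic.

```python
# Hedged sketch of the RuleFit idea: tree leaves -> binary rule features ->
# sparse (Lasso) linear model keeps only a few interpretable terms.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = 2.0 * ((X[:, 0] > 0) & (X[:, 1] < 1)) + 0.5 * X[:, 2] \
    + rng.normal(scale=0.1, size=300)

# Shallow boosted trees; each leaf corresponds to a candidate decision rule.
gb = GradientBoostingRegressor(n_estimators=20, max_depth=2, random_state=0).fit(X, y)
leaves = gb.apply(X)[:, :, 0]                    # leaf index per tree per sample

# Binarize leaf membership into rule features.
rules = OneHotEncoder(sparse_output=False).fit_transform(leaves)

# Sparse linear model over raw features + rules yields a transparent model.
fit = Lasso(alpha=0.01).fit(np.hstack([X, rules]), y)
print(int((np.abs(fit.coef_) > 1e-6).sum()), "active terms out of", fit.coef_.size)
```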
### _Explanation-Aided Confident Federated Learning Deployment_ Explanation-aided confident FL is a type of machine learning that combines federated learning with human-interpretable explanations. In FL, data are collected and processed locally on individual devices, and only the necessary information is shared with a central server for model training [140][141]. The goal of explanation-aided confident FL is to enable individuals and organizations to collaborate on training models while maintaining privacy and security. To achieve confident FL-based resource allocation/prediction, the local learning is performed iteratively with a run-time explanation, as detailed in [142]. The overall working principle of the scheme is shown in Fig. 6.

Fig. 6: Deployment of explanation-aided confident federated learning in O-RAN.

For each local epoch, the dataset collected through the E2 interface is used to train a local resource allocation model via constrained optimization, which yields the features and the corresponding predictions to the XAI xApp, where an _explainer_ generates the feature attributions using one of the feature-attribution XAI methods (e.g., SHAP, Integrated Gradients). The _confidence mapper_ then converts these attributions into a soft probability distribution and translates it into a confidence metric according to Eq. (3), feeding it back to the optimizer as an additional constraint in the local optimization. Moreover, the confidence metric is sent via the NG-c interface to the peer O-CUs. In this respect, each O-CU uses the gathered set of confidence scores to assess its priority, and only the \(K\) O-CUs with the largest confidence scores out of the available \(N\) O-CUs take part in the FL training, so as to guarantee better confidence. Upon termination of the local optimization, the model weights are reported to the federation layer, located at the Non RT RIC, which performs model aggregation and broadcasts the result via the A1-P interface.
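The confidence-gated participation step just described can be sketched as follows; `confidence_score` refers to the function sketched in the metrics subsection, while the per-O-CU scores and the value of \(K\) are illustrative assumptions.

```python
# Hedged sketch: confidence-gated client selection for the FL round.
import numpy as np

def select_confident_ocus(scores, K):
    # Keep the K O-CUs with the largest confidence scores out of the N available.
    return np.argsort(-np.asarray(scores))[:K]

scores = [0.91, 0.62, 0.88, 0.45]    # one Eq. (3) score per O-CU, shared over NG-c
participants = select_confident_ocus(scores, K=2)
print(participants)                   # indices of O-CUs joining the FL round
```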
## IV XAI for Literature Solutions Targeting O-RAN In this section, we first review existing works that leverage AI (ML/DL) techniques on top of the O-RAN architecture in order to optimize RAN functions. We then discuss how these works can be mapped to XAI methods. ### _Existing AI-driven O-RAN Works_ * _User Access Control:_ The user access control, or user association, challenge is addressed in [143][144], in order to ensure load balancing among Base Stations (BSs) and avoid frequent handovers. The authors designed a federated deep reinforcement learning scheme: the UEs collaboratively trained their local models, which were then aggregated at the RIC level. The designed model succeeded in maximizing the overall UEs' throughput while reducing frequent handovers. * _Total Cell Throughput:_ An online training environment for a reinforcement learning model is deployed at the RIC level in [145]. The developed model controlled function parameters in the DU in order to maximize total cell throughput. Thanks to the deployed learning model, the total cell throughput increased by \(19.4\%\). * _Function Placement:_ The O-RAN architecture leverages virtualization and the disaggregation of RAN functionalities among three key units (RU, DU, and CU). The authors of [146] studied the placement of the resource allocation function based on service requirements, by dynamically selecting CU-DU units. They generated two reinforcement learning models based on Actor-Critic: the first is used to assign resource blocks to UEs according to traffic types, delay budgets, and UE priorities, while the second is leveraged to optimize function placement and hence the resource allocation decisions. The authors showed that through this dynamic placement, both latency and throughput are greatly improved. * _Resource Allocation:_ In [147][148][149], the authors studied multi-agent team learning deployment on top of the O-RAN architecture by deciding on each agent's placement and the required AI feedback. As a case study, the authors addressed the challenge of coordinating several running and independent xApps in O-RAN. They designed two xApps, a resource allocation xApp and a power control xApp, and then used federated deep reinforcement learning to enhance learning efficiency as well as network performance in terms of throughput and latency. Similarly, in [150], the authors aimed to deal with the conflicts that may occur between running xApps when these xApps are deployed by different vendors. Leveraging Q-learning, they proposed a team learning algorithm for resource allocation, to increase cooperation between xApps and hence optimize network performance. Another distributed RL model was generated in [151], in order to manage RAN slice resource orchestration on top of the O-RAN architecture. The distributed RL architecture is composed of multiple intelligent agents, one for each network slice, that perform local radio allocation decisions. Similarly, in [153], the authors leveraged federated distributed RL to manage radio resource allocation among multiple Mobile Virtual Network Operators (MVNOs) for two different network slices (Ultra Reliable Low Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB)). In [152], the challenge of optimally assigning DU resources to various RUs is studied. A deep reinforcement learning model is built to achieve efficient management of RU-DU resources. Experimental results showed that the proposed scheme greatly improves resource usage efficiency. ### _How XAI can Help_ Integrating ML/DL-based algorithms with RAN functionalities has been found to address many challenging tasks (power control, user access control, handover, resource management, etc.), which accordingly helps to optimize the performance of the RAN part. This was highly motivated in O-RAN, especially with the introduction of the RIC modules. Indeed, RAN functions are usually formulated as a Markov Decision Process (MDP) [154], which explains the wide application of reinforcement learning, e.g., Q-learning, Deep Q-Network (DQN), and Actor-Critic, either in a centralized or a federated way, in order to derive the optimal policy for the corresponding RAN function. In addition, team learning is an emerging paradigm to optimize the coordination and control of the running xApps at the O-RAN RICs. It is worth noting that resource management is the most studied RAN function using feature engineering approaches, such as feature extraction and feature selection, in addition to reinforcement learning algorithms (DQN and Q-learning). With this strategy of RAN function analysis, it is possible to determine the contribution of every feature, e.g., higher-order cumulants, to the RAN performance. This helps to adjust the features' values in order to optimize the ML/DL predictions. In fact, human understanding of Q-learning models is limited to small scenarios involving a small number of states and actions. These models may become complex, especially with a high number of features, states, and actions, making them less interpretable by humans. The challenge here is the accuracy-interpretability trade-off: the greater the accuracy, the less interpretable the model, and vice versa. For example, the Q-learning model can improve the overall performance of radio resource management by exploiting more descriptive frequency and time features, but its complexity increases when considering more features, including network density, network/user power, service requirements, and the trust and security of the wireless communications; this introduces more states and actions into the system. Besides, despite being a more advanced algorithm than Q-learning, DQN yields black-box models that lack explainability. For instance, radio resource allocation using a DQN can introduce many ambiguous points that should be explained, such as which layers/neurons in the DQN architecture help to improve the accuracy, and why some UEs get the same number of radio blocks as others, even with different service requirements (URLLC, eMBB, massive Machine Type Communications (mMTC)). In this context, XAI is highly recommended since it provides profound insights into automated decisions and predictions. These details can help different users, as well as the network operators, to deal with unexpected problems, whether related to the ML/DL models or to the corresponding xApps of the O-RAN RICs. Therefore, the performance of the different RAN functions can be greatly enhanced. ### _Mapping to XAI-enabled Works_ In Table V, we compare the existing AI-driven O-RAN works according to several criteria, including the addressed RAN function and the leveraged AI techniques. In addition, we illustrate how XAI can be deployed, as xApps, on top of these works, in order to explain their AI-based decisions.

TABLE V: Mapping of Existing AI-based O-RAN works to XAI-enabled Solutions.

| Work | Addressed RAN Function | AI Technique | XAI Metric | O-RAN Block | Functional Procedure | Interfaces |
|---|---|---|---|---|---|---|
| Yang et al. [143] | User access control | Federated deep reinforcement learning | — | — | UE and gNB procedure management | — |
| Hoejoo et al. [145] | Total cell throughput | Deep reinforcement learning | State-action certainty | O-DU | Resource assignment (NR-MAC) | — |
| Shahram et al. [146] | Resource allocation and function placement | Actor-Critic learning | — | — | Resource assignment (NR-MAC) | — |
| River et al. [147] | Resource allocation and power control | Federated deep reinforcement learning | — | — | Resource assignment (NR-MAC) and PDSCH | — |
| Han et al. [150] | Resource allocation | Q-learning | — | — | — | — |
| Farhad et al. [151] | Resource allocation | Distributed deep reinforcement learning | — | — | — | — |
| Wang et al. [152] | Resource allocation | Deep reinforcement learning | — | — | — | — |
| Abouaomar et al. [153] | Resource allocation | Federated deep reinforcement learning | — | — | — | — |

We clearly observe that most of the existing works are based on reinforcement learning to manage RAN functions, especially power and resource allocation, user access control, function placement, and total cell throughput. According to [155], two main groups of XAI techniques can be applied to reinforcement learning strategies, in order to give both local and global explanations. * _Reactive XAI techniques_ imply that explanations are given over an immediate time horizon. This group includes three main approaches: _i) Policy simplification_, which aims to simplify the policy or transition function into a form that is interpretable by itself, for instance using decision trees and fuzzy rule-sets [156].
_ii) Reward decomposition_ into understandable components, which aims to better understand the reasons behind certain reward values [157]. _iii) Feature contribution and visual methods_, which determine the features' contribution to the decision-making process and then generate explanations based on that contribution. Examples of such techniques are LIME [85] and SHAP [120]. * _Proactive XAI techniques_ focus on a longer time horizon to provide the required explanations. These techniques can be classified into four main classes: _i) Structural causal model_, which aims to learn the relationships between variables/features. This technique generates human-like explanations since it views the world through a causal lens [98][158]. _ii) Explanation in terms of consequences_, which enables agents in reinforcement learning to answer questions about their policy in terms of consequences [159]. In other words, it makes it possible to determine what each agent can get from a state, and which outputs it expects from a visited state or corresponding action. _iii) Hierarchical policy_, which decomposes a given task into multiple sub-tasks located at different abstraction levels [160]. A prerequisite for executing a subsequent task is then given as an interpretation of a particular action. _iv) Relational reinforcement learning_, which is based on a set of rules providing background knowledge to the decision agent [161]. In this way, actions, states, and policies are represented in a relational language, which helps to understand the reinforcement learning model's outputs. Moreover, the deployed XAI techniques can be evaluated according to several metrics, as shown in Table V. Specifically, in DRL-based resource management use-cases, the state-action mapping _certainty_ can be measured via the entropy computed from the attributions of the input state features. Moreover, the _confidence and log-odds_ metrics serve to quantify the trust in AI predictions/decisions by using the input XAI attributions as a basis to mask the impactful features in either offline or DRL-based on-the-fly datasets, and to measure the corresponding deviation of the output, which is fed back to the optimizer/agent for accountability. It is noteworthy that different data types can be monitored at the Non RT RIC from Open RAN Radio Unit (O-RU) modules, via the **O1 interfaces**, in order to build both AI and XAI models. These models are then deployed at the Near RT RIC as xApps through the **A1 interface**. In addition, the decisions made with respect to both AI and XAI outputs are sent and executed at different levels (O-DU and O-CU), via the **E2 interface**. ## V XAI for O-RAN Use-cases In the following, we collect a list of use-cases in the context of O-RAN and network slicing, highlighting how they would benefit from the introduction of XAI methods. ### _Quality of Experience (QoE) Optimization_ Modern applications in the Fifth Generation (5G) ecosystem demand large bandwidth and low-latency communication to provide an adequate level of QoE, which can hardly be achieved by current semi-static Quality of Service (QoS) frameworks devoted to concurrently supporting heterogeneous broadband applications as in the Fourth Generation (4G) era. Radio fluctuations impair radio transmission capabilities, especially when adopting higher carrier frequencies such as millimeter waves, leading to variable application requirements even within the same communication session.
In order to improve QoE, estimation and prediction tasks performed at the application level can help in dealing with such a dynamic environment, favoring both user experience and efficient use of RAN resources [162]. The open interfaces introduced by the O-RAN architecture significantly ease per-user flow modification and configuration by means of proactive closed-loop network optimization. Fig. 7 depicts a possible deployment addressing this use-case. It involves the Non RT RIC, the Near RT RIC, E2 nodes, and external applications running on the UE. The open interface allows external applications to interface with the O-RAN domain, which, empowered by ad-hoc optimization logic, would be capable of dynamically re-configuring the networking settings in response to real-time triggering events. In this case, XAI methods would help in understanding the reasons for specific reconfiguration decisions both at the Non RT RIC and the Near RT RIC.

Fig. 7: Use case: User-Centric QoE Optimization. Adapted from [162].

### _Traffic Steering_ Imbalances in the traffic load across cells of different access technologies may lead to performance degradation. 5G systems aim to seamlessly support different combinations of access technologies, e.g., LTE, 5G NR, and Wi-Fi, with the 5G Core development converging towards a unified domain able to support a wide variety of applications and requirements, combining different Radio Access Technologies (RAT) and enhancing the mobile network offloading capabilities [162]. In this context, the O-RAN A1 interface allows enforcing the desired optimization policies and utilizing the appropriate performance criteria to proactively manage user traffic across different access technologies. The Non RT RIC monitors the user experience through UE-level performance measurements at the cell level and may decide to relocate one or more users to other cells based on global optimization objectives, e.g., fairness in bandwidth sharing, QoE maximization, and load balancing. In all these scenarios, XAI methods can be applied to guide the definition of such optimization criteria. ### _RAN Slice Service Level Agreement (SLA) Assurance_ The 5G infrastructure has been designed to cope with highly diverse performance requirements coming from heterogeneous services and vertical applications. In this context, network slicing arises as a key technology to efficiently support tailored end-to-end connectivity satisfying specific business requirements. In general, the business parties and the infrastructure provider define the set of networking capabilities required to successfully run the service in an SLA, e.g., in terms of data rate, latency, and resource availability [164]. Perhaps not surprisingly, this has introduced the need for ad-hoc mechanisms able to efficiently measure and expose such information to \(3^{rd}\)-party entities traditionally alien to the telecommunication market. In this context, O-RAN's open interfaces and AI/ML-based architecture will enable such mechanisms, allowing operators to take full advantage of the business opportunities brought by the network slicing concept. More in detail, specific slice configuration settings derived from the SLA requirements can be easily enforced by initiating the procedure from the SMO layer, and finely adjusted over time thanks to measurement feedback and zero-touch XAI-based mechanisms applied at the different layers of the architecture, especially at the Non RT RIC and Near RT RIC. Fig. 8 summarizes the main workflow to achieve the solution.

Fig. 8: Use case: Explainable slice SLA assurance. Adapted from [163].
### _Multi-vendor Slices_ The coexistence of different network functions provided by different vendors to instantiate operators' services is one of the key enablers for flexible and efficient use of radio resources and for CAPital EXpenditures (CAPEX)/OPerational EXpenditures (OPEX) optimization. To this extent, the O-RAN architecture enables the deployment of multiple slices comprising functions provided by different vendors, offering a variety of virtual Open RAN Distributed Unit (vO-DU) and virtual Open RAN Central Unit (vO-CU) options, specifically optimized to meet the requirements of a certain service. This brings several advantages, such as more flexible, time-to-market slice deployment, where operators can select from the available options the most suitable vO-DU and vO-CU to deploy their services, as well as huge business opportunities for the vendors. To deploy multi-vendor slices, vO-DUs and vO-CUs must coordinate to coexist, share the radio environment efficiently, and avoid conflicts among the deployed services [163]. Fig. 9 depicts three possible ways of coordination: \(i\)) _loose coordination_, where there is no direct coordination between deployed services, and the radio resource is fully controlled by the RICs through the O1, A1, and E1 interfaces; \(ii\)) _moderate coordination_, where different network functions are allowed to communicate with each other through the X2 and F1 interfaces to negotiate radio resources without directly involving the RICs. In this case, the negotiation must cope with the time frame allowed by the X2 interface communication exchange, which is in the order of seconds; \(iii\)) the WG1 and WG4 of the O-RAN Alliance also envision a so-called _tight coordination_ allowing faster radio resource negotiation among slices, which would require a new interface, dubbed _New IF_ in Fig. 9, for direct communication between vO-DUs. In this context, distributed AI/ML models are particularly suitable to smartly perform the negotiation task. Indeed, an XAI-enabled component can be deployed to take control of the coordination and negotiation of resources between different vendors. We report in the figure an example deployment of this component, suitable for both the moderate and tight coordination cases.

Fig. 9: Use case: Multi-vendor deployment with explainable coordination. Adapted from [162].

### _Resource Allocation Optimization_ The need to concurrently support multiple heterogeneous slices characterized by service-tailored networking requirements complicates the design of efficient and dynamic resource allocation mechanisms able to cope with highly different spatio-temporal traffic distributions. For example, end-user mobility towards public events causes spatially localized traffic peaks in eMBB-type slices, while IoT smart sensors sporadically generate data volumes from highly distributed deployments in mMTC settings. Compared to traditional RAN deployments characterized by monolithic architectures and proprietary management interfaces, the O-RAN paradigm allows for easier and more flexible control of the radio resources. In addition, the possibility to devise data-driven ML-based optimization algorithms helps to automate the process, exploiting the closed-loop management framework and historical information to perform the best allocation decisions.
Additionally, AI/ML models can be used to perform proactive management of the radio resources, predicting recurrent traffic demand patterns of 5G networks in different time epochs and spatial locations, and for each network slice, thereby anticipating the slice's networking needs, favoring better end-user QoE, and limiting the overall energy consumption. All these methods are traditionally based on RL algorithms and agents interacting with the environment and learning by trial and error. More advanced solutions adopt federated learning techniques to improve the performance of the agents, gaining global knowledge of the system from the collection of multiple locally-trained models [151]. Such enriched information is then sent back to the single agents, improving their training and speed, and allowing more general management policy definitions. In both these scenarios, XAI methods can further extend the potential of the RL management solutions. On the one hand, they allow better control of the learning procedure and guide the agent towards a safe decision process by adding confidence and trust in the devised management policies. On the other hand, they may help in limiting the information exchange required by the federated learning approach [165][166]. In fact, being able to uniquely map the context-decision space would allow sharing with the federation layer only those local models actually carrying insightful information, while filtering out erroneous or redundant items. Fig. 10 depicts two possible deployment options, one assuming the main optimization and computing effort runs within the Non RT RIC entity, and the other envisioning such a task running within the Near RT RIC. The final deployment choice depends on multiple factors, including the type of use-case and machine learning model to be run, its timescale and complexity, and the different computing capabilities of the RICs.

Fig. 10: Use case: Explainable RAN slice resource allocation decisions taken at Non-Real Time and Near-Real Time RICs.

### _User Access Control_ The O-RAN vision aims at evolving the current RAN architecture, providing open interfaces and ML-based optimization to attract new business entities, ease overall management, and reduce operational costs. Current RAN deployments are composed of thousands of nodes [167]. In such complex deployments, the network is expected to assign each UE to a serving BS, maximizing both the overall throughput and the single end-user QoE. This problem is also known as _user access control_. Traditional user access control schemes imply that user associations are based on networking metrics such as the Received Signal Strength (RSS), which steers the UE towards the base station providing the best channel. The handover ping-pong effect and load balancing have been identified as the two main issues brought by RSS-based schemes [168]. ## VI Automation of AI/XAI Pipeline for O-RAN XAI aims to create equivalent learning models that approximate the ML/DL decision process, hence turning closed black-box models into human-understandable content [46][37]. In addition, as discussed in Subsec. III-A, XAI models will be built and deployed at the same level as the ML/DL models on top of the O-RAN architecture, in order to interpret their decisions and target different user profiles.
In such a context, the critical challenge is how to train/deploy both AI and XAI models while providing stable life-cycle performance. In fact, evolving data profiles may cause performance degradation of the AI/XAI learning models [169].

Fig. 10: Use case: Explainable RAN slice resource allocation decisions taken at Non-Real Time and Near-Real Time RICs.

Thus, both models' performance degradation and new data profiles should be considered and studied to ensure good performance of the intelligent RAN functions over time. Hence, it is required not only to perform continuous monitoring of both model and data profiles, but also to automate the whole AI/XAI learning model development, including data collection/extraction, model training, validation, and deployment [170]. The DevOps paradigm includes a set of practices that combines software development (Dev) and IT operations (Ops). DevOps aims not only to reduce the systems' development life cycle but also to provide continuous software delivery with high quality, by leveraging paradigms and concepts like Continuous Integration and Delivery (CI/CD). When dealing with machine learning operations and the automation of the learning process, the paradigm is also called ML system operations (MLOps) [170]. It is worth noting that the O-RAN specification [171] introduces three control loops that facilitate the deployment of AI/ML functionalities within the O-RAN framework. These control loops are designed to operate at different time scales, enabling efficient integration and utilization of AI/ML capabilities in the network. Loop 1 is deployed at the O-DU level to deal with per-Transmission Time Interval (TTI) scheduling and operates at a timescale of the TTI or above; Loop 2 is deployed at the Near RT RIC and operates within the range of 10-500 ms and above; Loop 3 runs at the Non RT RIC at timescales greater than 500 ms (ML/DL training, orchestration, etc.). In what follows, we focus on loops 2 and 3 for XAI model training, inference, and performance monitoring. Three main levels of automation have been categorized [172]: manual (no MLOps), training pipeline automation, and CI/CD pipeline automation. A typical architecture integrating XAI with an MLOps pipeline is introduced in [127].

### _Manual Pipeline_

This corresponds to the basic level of maturity, where all the ML steps, including data collection and preparation, model training, and validation, are performed manually (cf. Fig. 11). Hence, it is called no MLOps. At this level, data scientists usually use a rapid application development tool, such as Jupyter Notebooks, to build learning models. In this case, the different ML steps are carried out at the Non RT RIC module (ML), while the trained models are deployed at the Near RT RIC through the A1 interface in order to provide prediction services (Ops). Note that the transitions from one step to another are also performed manually, driven by source code developed interactively until an executable model is created.

Fig. 11: XAI-driven Automated Continuous Integration and Delivery Pipeline.

In practice, this pipeline corresponds to learning models that are rarely updated and often break when deployed in the real world.
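This fragility is exactly what the automated levels below address. Purely as a sketch, a minimal data-profile monitor could compare a live feature distribution against the training-time reference with a two-sample Kolmogorov-Smirnov test and trigger retraining on significant drift (the threshold is an assumed, deployment-specific choice):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = rng.normal(loc=0.30, scale=0.1, size=2000)  # feature profile at training time
live = rng.normal(loc=0.45, scale=0.1, size=2000)       # profile observed in production

P_THRESHOLD = 0.01  # assumed sensitivity; tuning it is deployment-specific

stat, p = ks_2samp(reference, live)
if p < P_THRESHOLD:
    print(f"drift detected (KS={stat:.3f}, p={p:.1e}) -> trigger retraining pipeline")
else:
    print("no significant drift; keep serving the current model")
```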
The performance of learning models in the RAN environment may indeed degrade, due mainly either to the evolution of the data profiles describing the environment or to the very dynamic changes that may occur in the radio access environment itself. Hence, automating the whole learning process becomes essential.

### _Training Pipeline Automation_

This level introduces continuous training of the models and thus performs the model training steps automatically. In particular, when new data profiles are detected, the model retraining process is triggered. This process also includes data and model validation phases to achieve continuous delivery of the learning models. This level introduces two new components: a _feature store_, a centralized repository to store features and enable access to new features for training and serving, and _machine learning metadata_, to store information about the execution of the ML pipeline (cf. Fig. 11).

### _Continuous Integration and Delivery Pipeline Automation_

At this level, a complete CI/CD system is introduced to enable reliable and fast learning model deployments in production. This level thus achieves the highest degree of automation in MLOps, enabling data scientists and developers to efficiently explore new ideas about feature engineering, model hyperparameters, and architecture. The main difference from the previous level is that the data, the learning models, and the model-training pipeline components are built, validated, and deployed automatically. Fig. 11 shows the automation of the ML pipeline using CI/CD in the O-RAN context, featuring both automated ML pipelines and automated CI/CD routines.

## VII O-RAN Security Aspects and XAI

Due to the central role of the 5G network in providing communication to backbone society infrastructures, security and security risk awareness play a key role in network deployment. The O-RAN Alliance has recently created a Security Working Group (WG11), which has identified a list of stakeholders responsible for ensuring the security of the RAN. This goes beyond the parties involved in traditional 4G and 5G networks, such as vendors, operators, and system integrators. In fact, operators will play a greater role in securing the infrastructure, given the platform's openness and the use of multi-vendor components, which allows them to customize and secure the infrastructure. This also enables them to evaluate and verify the security of the open components introduced in the network, which may not be possible in fully vendor-driven closed architectures. In addition, according to [173], network function and virtualization platform vendors, third-party xApp and rApp developers, Open Cloud (O-Cloud) providers, and the administrator profiles that manage virtualized and disaggregated components are all new stakeholders. Furthermore, the orchestrator, which manages the SMO, also has a responsibility to ensure that the network's operations are secure. However, due to the plethora of heterogeneous components forming the O-RAN ecosystem, and the extensive use of AI-driven network management and third-party components running services, securing the O-RAN infrastructure is still a challenge. In this regard, XAI can strongly enhance security in O-RAN deployments by providing insights, explanations, and transparency into the decision-making process of AI models.
Moreover, XAI helps with threat detection, model transparency, accountability, analysis of training data, and human-in-the-loop security, leading to more reliable threat detection, increased trust, and compliance with security regulations. However, it could also be the target of cyber attacks that could nullify its benefits.

### _Distributed Architecture_

The open architecture defined in the O-RAN specifications has been identified as a possible security issue due to its distributed nature, which expands the attack surface exposed to malicious entities. The WG11 has recently identified the possible vulnerabilities arising from the openness of the platform and classified them into different threat categories [174]. Such categories include: threats against the O-RAN system, related to new architectural elements and interfaces that can be compromised through different attacks, potentially compromising the availability, data, infrastructure integrity, and data confidentiality of the infrastructure; threats against the O-Cloud, which encompasses virtualized environments, where attacks could compromise virtual network functions, misuse containers or virtual machines, or spoof underlying networking or auxiliary services; threats against open-source code, which could contain backdoors intentionally introduced by trusted developers or upstream libraries; physical threats against wireless functionalities; threats against the protocol stack; and threats against AI/ML. The latter include poisoning attacks that exploit unregulated access to the data stored in the O-RAN system to inject altered and misleading data. It is worth pointing out that AI/ML models could themselves be deployed by malicious entities [175, 176]. In order to counteract such threats, security principles have been defined in [174]. Such principles are intended to be developed into security requirements, recommendations, and potential countermeasures, which will define what is required, recommended, or optional. They include mutual authentication (i.e., the _zero-trust_ paradigm), access control, secure cryptography, trusted communication, secure storage, secure boot and self-configuration, secure update, recoverability and backup, and security management of risks in open-source components. At the time of writing, there is no explicit mention of the exploitation of XAI for enhancing security in O-RAN. Nonetheless, XAI can provide transparency and insights into the decision-making process of AI models, helping stakeholders better understand how these security principles are applied in practice. This can lead to improved validation, compliance, and trust in the security measures implemented in O-RAN deployments, ultimately enhancing the overall security posture of the network.

### _Risk Assessments_

The WG11 has carried out security analyses for the Non RT RIC, the O-Cloud, and the Near RT RIC frameworks [177, 178, 179]. Such assessments addressed the likelihood of attacks and their impact on protection goals, which can be summarized as: \(i\)) _Confidentiality_, which refers to preventing unauthorized entities from accessing sensitive information; \(ii\)) _Integrity_, which refers to the risk of data manipulation by unauthorized entities and ensures that data is neither corrupted nor outdated; \(iii\)) _Availability_, which refers to the availability of data, information, and services to authorized entities.
Specifically, the Non RT RIC Security Technical Report identifies 26 threats to the Non RT RIC Framework, rApps, R1 interface, and A1 interface, with corresponding recommended security controls [177]. The technical report on the security analysis of the O-Cloud covers critical services, cloud service and deployment models, stakeholder roles and responsibilities, threat models, and best practices for countering threats [178]. The Near RT RIC and xApps Security Technical Report addresses 11 key issues and provides 13 solutions, including changes to existing documents and specifications maintained by WG3, with a mapping table indicating which solutions correspond to which key issues [179]. As interest in O-RAN deployment increases, security risk assessments have also been carried out by third parties and government entities. In [180], the aforementioned integrity, availability, and confidentiality are evaluated alongside two additional protection goals, namely accountability, which refers to the possibility of attributing an action to a given entity, and privacy, which refers to the protection of sensitive data, i.e., anonymity, unlinkability, and unobservability. From the analysis it emerged that the O-RAN specifications lack a "security/privacy by design/default" approach, resulting in a system with multiple security risks and highlighting the importance of revising the O-RAN specifications with a stronger security focus before the first production applications are implemented. In [181], the risks due to the potential for multiple suppliers, new network functions, and additional interfaces that increase the attack surface are highlighted. Moreover, particular attention is given in [181] to the risks stemming from the implementation of network functions with AI and ML, which could negatively impact the network. Additionally, using cloud platforms to run base station software in O-RAN could increase dependency on cloud service providers and lead to vulnerabilities, especially if different Mobile Network Operators (MNOs) use the same cloud provider. At the time of writing, and to the best of our knowledge, there are no risk assessments specifically targeting security threats introduced by the use of XAI, nor suggesting XAI-based solutions/recommendations to enhance security in open networks. Nonetheless, we discuss them shortly in the upcoming subsections.

### _XAI to Improve O-RAN Security_

The utilization of XAI in the security domain of O-RAN could help enhance the transparency and comprehensibility of the operations and decision-making processes of third-party deployed components. This is particularly helpful in enabling stakeholders to fully understand the decision process of such elements, helping to catch malicious behaviors and thus ensuring accountability and reducing the risk of errors or malicious actions. This is particularly relevant considering the high number of AI- and ML-driven components that will be deployed in the network, which, due to their black-box nature, pose a significant challenge to revealing malicious behavior and security threats. As a result, ensuring that AI remains accountable and trustworthy is of utmost importance [175]. By implementing XAI techniques, the complex algorithms used in ML-/AI-based systems can be made more interpretable, allowing stakeholders to better understand the factors influencing the automated decision-making processes.

Fig. 12: O-RAN architecture with additional XAI-based security components. Adapted from [175].
This enhances transparency and enables stakeholders to identify potential biases or shortcomings in the system, allowing for continuous improvement and optimization. In the O-RAN architecture, AI/ML models are mainly deployed in the RICs as xApps/rApps [171], as depicted in Fig. 12. Such elements enable the autonomous operation of several vital network functions, including mobility management, resource allocation, etc. Hence, the deployment of malicious AI/ML models, or the manipulation of benign ones by attackers, could disrupt RAN node functionalities, resulting in severe network failures [38]. To overcome this issue, XAI-enabled security engines can be deployed in the Non RT RIC and Near RT RIC as additional functional blocks performing interpretable monitoring of xApp/rApp operation and detection of malicious components [175]. Nonetheless, the employment of XAI techniques in O-RAN will require additional effort to build pipelines that generate and communicate explanations in the components hosting the XAI models. Additionally, it will call for more computational power and resources to run both the ML and XAI models. However, these added costs are justifiable by the enhancement of O-RAN security and management capabilities [175].

### _Security threats related to XAI_

While XAI methods can be employed to improve security in O-RAN deployments, as XAI becomes an emerging trend, cyberattacks specifically targeting XAI models are rising [182]. In [183], attacks on the XAI layer are proposed wherein the underlying ML model and the XAI interpreter are simultaneously corrupted. The attack aims to construct adversarial samples that are sparse, interpretable, and close to the model's training data distribution, by using a manifold approximation algorithm on a small set of test data to find the data and explanation distributions, and by inducing minimal distortions on the input distribution to move the explanation distribution towards the target (distorted) explanation. Similar attacks can nullify the security advantage of an overlying XAI layer [184]. For example, adversarial and data poisoning attacks involve intentionally modifying input data to mislead an AI system's predictions and explanations, aiming to bias the outcome of the model and leading to intentional misinterpretation of the model's behavior [183]. Inference attacks involve revealing sensitive information through the explanations or interpretations provided by an AI system. This can be achieved by using the explanations to infer sensitive information about individuals, such as their health status, financial situation, or other private information, even if the AI system was not explicitly trained on such data [184]. Finally, social engineering attacks involve manipulating users or human interpreters of the AI system's explanations to make incorrect or biased interpretations. This can be achieved by providing misleading or persuasive explanations that influence the human interpreter's decision-making or perception of the AI system's behavior [185]. However, research in the area of security attacks specifically targeting XAI models is still in its infancy.

## VIII Projects/Standards on XAI for O-RAN

XAI is increasingly becoming critical for the adoption of ML/DL in O-RAN. To achieve trustworthiness and transparency of ML/DL models in O-RAN, there are ongoing standardization activities and research projects targeting XAI and O-RAN aspects. Some of them include:

* _O-RAN Alliance:_ As we describe in Subsec.
II-D, the O-RAN Alliance is a global organization working to promote an intelligent and open RAN for mobile cellular networks. As shown in Fig. 2, the O-RAN Alliance comprises 11 WGs and three focus groups for RAN cloudification, automation, and disaggregation. In particular, WG2 in [171] describes the lifecycle management of AI/ML models in O-RAN, including learning model design, composition, training, runtime, and deployment solutions. It also highlights the main criteria for determining multiple ML training and inference host deployment options. In this context, the focus of WG2 can be extended to implementing XAI in O-RAN. To promote XAI adoption in O-RAN, WG2 can work on various initiatives, including the creation of XAI platforms and tools, the development of interfaces and standards for XAI, and the promotion of XAI best practices.

* _IEEE P2894 and P2976:_ These standards aim to deliver specifications on XAI in order to facilitate its adoption in real-world scenarios. The IEEE P2894 standard aims to design an architectural framework and define application guidelines for XAI, including the definition and description of XAI, the main classes of XAI techniques, the main application scenarios of XAI techniques, and performance evaluations of XAI in real systems such as telecommunication networks [186]. The IEEE P2976 standard, in turn, is working to achieve interoperability and clarity of AI system design by leveraging XAI techniques [187]. Specifically, IEEE P2976 defines optional and mandatory constraints and requirements that should be satisfied for an AI algorithm, method, or system to be considered explainable. In this context, these specifications can be leveraged by O-RAN standardization bodies such as the O-RAN Alliance in order to develop and advance the adoption of XAI in the O-RAN ecosystem.

* _ETSI Experiential Networked Intelligence (ENI):_ The ETSI ENI Industry Specification Group (ISG) is working on defining a cognitive network management architecture, based on context-aware policies and leveraging AI techniques, to adjust provided services in 5G networks and beyond in response to changes in business goals, environmental conditions, and user requirements. It thus aims to provide automated service operation, provisioning, and assurance, as well as efficient resource management and orchestration. Recently, ETSI released its first specification on O-RAN, called "O-RAN Fronthaul Control, User and Synchronization Plane Specification v7.02" [188]. This specification focuses on the Open Fronthaul, one of the interfaces in the O-RAN architecture. It specifies the synchronization plane protocols, user plane, and control plane used over the fronthaul interface to link the O-DU and O-RU components. Note that this specification was submitted to ETSI as a publicly available specification (PAS) produced by O-RAN WG4 and approved by the ETSI Technical Committee. Given this first ETSI specification on O-RAN, the ETSI ENI ISG could also focus on adopting XAI on top of the designed cognitive network architecture in order to create an AI framework that is explainable and transparent, and that can thus be used to ensure the accountability of AI-enabled systems in O-RAN.
* _6G-Bricks:_ A Horizon Europe project that explores novel unified control paradigms based on explainable AI and machine reasoning, to be delivered in the form of reusable components with open Application Programming Interfaces (APIs), termed "bricks" [189]. Initial integration with O-RAN will be performed, aiming for the future-proofing and interoperability of 6G-BRICKS outcomes.

* _NANCY:_ The acronym of _An Artificial Intelligent Aided Unified Network for Secure Beyond 5G Long Term Evolution_, a Horizon Europe project which partly investigates the design of an XAI engine to provide transparency and trustworthiness [190]. It also aims to identify the key factors that affect the system's local and overall performance.

* _Nokia Project:_ Nokia has opened a testing center in the United States and initiated a new O-RAN collaboration to support the development of partnerships among O-RAN vendors, which will help with the introduction, verification, and launch of O-RAN-compliant solutions to market [191]. Specifically, vendors will be able to perform end-to-end and interoperability tests for O-DU/O-RU Open Fronthaul, as well as xApp testing for Nokia's Near RT RIC. The project is the latest in Nokia's continued commitment to Virtual RAN (vRAN) and O-RAN innovation. This project can also be leveraged to implement, deploy, and validate XAI-based xApps on top of the O-RAN Near RT RIC.

Overall, these standards and projects are working to promote the adoption of AI techniques, in particular machine learning and deep learning, in O-RAN, while ensuring that these technologies are interpretable, accountable, and transparent. By doing so, they can help build trust in AI systems deployed in O-RAN, and thus encourage competition and innovation in the telecommunications industry.

## IX Open Challenges and Future Research Directions

In this section, we analyze different open questions and research topics for an efficient deployment of XAI in a future-proof 6G O-RAN architecture, while pointing out various challenges in this area.

### _Explainability-Performance Trade-Off_

The flip side of highly performing yet complex AI models, such as DNNs and Transformers, is that they are ill-disposed to direct interpretation. They consist of numerous layers and billions of parameters, making it difficult to explain how specific inputs are transformed into outputs. In contrast, simpler models like decision trees or linear regressors are more interpretable, as their decision-making processes are more transparent. There is therefore a trade-off between performance and explainability, especially when the type of training data justifies the use of complex models [192, 46]. This observation is also valid when it comes to model optimization. To exemplify this, we plot the confidence metric described in Subsec. II-C in two scenarios: \(i\)) in each round, an FL model is optimized via a vanilla loss function such as the mean squared error, and a post-hoc explanation in the form of attributions is then generated and used to calculate the confidence metric; and \(ii\)) in each round, an FL model is trained through a constrained optimization approach, where the confidence metric is evaluated at run time and jointly enforced as a constraint alongside the original loss during training. According to Fig. 13, the model confidence degrades as the model gradually converges in the post-hoc scenario, which highlights the abovementioned trade-off.
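The exact confidence metric is the one defined in Subsec. II-C and studied in [142], which we do not reproduce here; purely as a hypothetical stand-in, a round-wise proxy could measure how concentrated gradient-times-input attributions are on a single feature:

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.array([2.0, -1.0, 0.2])    # toy logistic-model weights after some FL round
X = rng.normal(size=(256, 3))     # local validation batch

p = 1.0 / (1.0 + np.exp(-(X @ w)))          # predicted probabilities
grad = (p * (1 - p))[:, None] * w[None, :]  # d p / d x_i for a logistic model
attr = grad * X                             # gradient-times-input attributions

# Hypothetical proxy: share of each sample's attribution mass carried by its
# single most influential feature (1.0 = fully concentrated explanation).
conc = np.abs(attr).max(axis=1) / np.abs(attr).sum(axis=1)
print(f"round confidence proxy: {conc.mean():.3f}")
```

Logged once per round, such a proxy would trace out the kind of trend plotted in Fig. 13.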
Fig. 13: Confidence vs. FL rounds.

Interestingly, the _in-hoc_ strategy succeeds in maintaining the model confidence across the training rounds, as thoroughly studied in [142]. Striking a balance between performance and interpretability is therefore an open research direction, especially to guarantee a successful deployment of AI in critical 6G use cases under O-RAN architectures.

### _LLMs in O-RAN: An Explainability Perspective_

As anticipated in [193], the potential of LLMs to transform the Telecom domain lies in their ability to harness generative capabilities and leverage the multimodal nature of wireless network data, thereby enhancing contextual, situational, and temporal awareness. This advancement holds the promise of significantly improving wireless network operations, including localization, beamforming, power allocation, handover, and spectrum management, while also eliminating the requirement for task-specific AI models. Although this is still far from being achieved, a future-proof 6G O-RAN might incorporate LLMs into the design of the different radio functions. In such a scenario, the complex architecture of LLMs, mainly based on transformers with billions of parameters, raises new challenges with respect to their explainability and opens a new research line to tackle them.

### _Lack of Standardization_

One of the still-open challenges related to the adoption of XAI in O-RAN is the lack of standardization efforts across different components and interfaces, apart from the O-RAN Alliance and some ongoing projects initiated by network operators [20], [21], [191] that focus mainly on the core O-RAN components. This makes it difficult to design XAI tools that can be deployed across different O-RAN components. Indeed, standardization is critical for enabling interoperability among the various components of O-RAN and for facilitating the development and deployment of XAI tools on top of O-RAN. In this context, the research and industry communities should work towards developing common standards for open interfaces, data formats, APIs, etc.

### _Privacy Concerns of Distributed XAI Models for the Complex Multi-Vendor O-RAN_

As mentioned before, one of the main features of O-RAN is to disaggregate the RAN functions and manage them through xApps running at the Near RT RIC, to fulfill the QoS requirements of the envisioned B5G network services. In addition, the different O-RAN components are supplied and supported by various isolated vendors/operators. However, XAI tools typically need large amounts of data to train and test their models for O-RAN systems, and such data may be limited or difficult to access due to security and privacy concerns in a multi-vendor context. Therefore, these vendors/operators should collaborate with each other, not only to ensure stable RAN performance/cost but also to deal with the limited available data. In this context, distributed/collaborative deep learning is expected to be widely leveraged. For instance, FL is one of the promising collaborative learning techniques, generating learning models in a collaborative way while preserving the privacy of the involved learners, such as vendors/operators [135]. However, generating learning models (XAI or AI) in a federated way is still challenging, as it presents its own privacy concerns.
Indeed, even though FL avoids sharing learners' private data, it has been demonstrated that FL is still vulnerable to attacks such as poisoning and Byzantine attacks, and that sharing model updates during the training process can reveal private information [194]. Privacy-preserving XAI techniques can be used to provide explanations without revealing sensitive information. These techniques include methods such as differential privacy, blockchain, and homomorphic encryption [195].

### _Interoperable XAI Models for the Complex Multi-Vendor O-RAN_

Introducing open interfaces and RAN function disaggregation has led to splitting the gNB into multiple CUs and DUs, which may belong to different vendors/operators and are connected through the F1 interface. However, designing XAI models while ensuring interoperability among multiple vendors can be very challenging, as each vendor may have different implementations, capabilities, and requirements. In this context, a potential solution is to leverage collaborative and distributed learning techniques in building XAI models, as described in Subsec. III-D. This enables the generation of local XAI models specific to each vendor, while aggregating them at the central Non RT RIC level to build a global and interoperable XAI model. Note that O-RAN WG5 has launched activities aiming to achieve multi-vendor interoperability through open interfaces such as the E1 and F1 interfaces between base station functions, the Xn interface between gNBs, and the X2 interface between the gNB and the eNodeB (eNB).

### _Complexity of the O-RAN Systems_

O-RAN systems can be highly complex, including several layers of software and hardware components. This complexity can hinder the development of efficient XAI tools that provide concise interpretations and explanations of the decisions made by AI-based O-RAN systems. One way to deal with the complexity of AI models is to simplify them by applying techniques such as rule-based systems, decision trees, or linear models. These models are easier to explain and more transparent than complex deep neural networks; however, this may come at the cost of lower performance and accuracy. Furthermore, model-agnostic XAI approaches can also be leveraged to provide explanations for any AI model, regardless of its structure and complexity. These approaches include local surrogate models, partial dependence plots, and feature importance. The development and deployment of such approaches will enable XAI for a wider range of AI-based models deployed on top of the O-RAN system.

### _Real-Time Constraints_

O-RAN systems are often designed to operate in real time in order to provide the required services at the different levels of the RAN. This implies that XAI tools should be able to provide explanations efficiently and quickly. Indeed, some XAI approaches can be designed to operate in real time, which is critical for O-RAN systems. These approaches include model compression, feature selection, and online learning. The in-depth development of such approaches will enable the deployment of real-time XAI models on top of O-RAN systems without compromising their performance.

### _Heterogeneity of target audiences in XAI_

One of the main challenges related to XAI models is to provide understandable explanations and interpretations to different user profiles (data scientists, developers, managers, etc.).
One way to deal with this issue is to design human-centered XAI models, which are easy to understand and use for different end users. This can be achieved by developing interactive explanations, visualization tools, and user-friendly interfaces. Moreover, human-centered XAI can also involve co-design and user studies with end users to ensure that the outputs of XAI models meet their expectations and needs. Overall, the future of XAI for O-RAN will require interdisciplinary collaboration and research between network engineers, human-computer interaction experts, AI researchers, and data scientists. Potential solutions will consist of developing further standardized data formats and interfaces, using simpler and more interpretable/explainable AI models, and developing real-time XAI models that can operate within the time constraints of O-RAN systems. By addressing these open challenges of XAI for O-RAN, we can ensure that these systems are accountable, trustworthy, and transparent.

## X Conclusion

By providing insights into how AI-driven network management systems work and making their decisions more transparent, XAI can help to improve the reliability, performance, and security of future O-RAN networks, playing a crucial role in their development and helping mobile network operators to build and manage more effective and efficient networks. In this survey paper, we presented a comprehensive overview of XAI techniques in the context of O-RAN networks. First, we described how XAI methods can be deployed in the O-RAN framework and architecture by means of three realistic reference scenarios. We then gave a literature review of existing works that leverage AI (ML/DL) techniques on top of the O-RAN architecture in order to optimize RAN functions, and discussed how these works can be mapped to XAI-enabled solutions. In addition, we collected a list of use cases in the context of O-RAN and network slicing, highlighting how they would benefit from the introduction of XAI methods. Furthermore, to ensure good performance of the intelligent RAN functions over time, we showed how to not only perform continuous monitoring of both model and data profiles, but also automate the whole AI/XAI learning model development, including data collection/extraction, model training, validation, and deployment. Moreover, we explored the potential of XAI to significantly improve the security layer of O-RAN, and how it could be used to build interpretable security threat detection mechanisms. We also described ongoing standardization activities and research projects targeting XAI and O-RAN aspects. Finally, we discussed the main open challenges related to XAI for O-RAN and suggested potential solutions to these challenges. By means of this work, we hope to foster and inspire research on XAI, aiming at a new level of interpretable and human-understandable O-RAN network management operations.
2304.01557
Direct in situ determination of the surface area and structure of deposited metallic lithium within lithium metal batteries using ultra small and small angle neutron scattering
Despite being the major cause of battery safety issues and detrimental performance, a comprehensive growth mechanism for metallic lithium deposited at electrode surfaces in lithium metal batteries remains elusive. While lithium surface morphology is often derived indirectly, here, detailed information is directly obtained using in situ small and ultra-small angle neutron scattering, in bulk and non-destructively. Features of 1-10 um and 100-300 nm are identified; the latter contribute to most of the surface area and their size inversely correlates to applied current density. Surface area per unit volume increases continuously during charging from 1-4 h at 2 mA/cm2 but more slowly during discharge. Comparatively higher values are reached after just 1 h at 20 mA/cm2 which remain constant in subsequent cycles. Such quantitative insight into the processes of metallic lithium growth within batteries may enable the development of safer high performance lithium metal batteries.
Christophe Didier, Elliot P. Gilbert, Jitendra Mata, Vanessa Peterson
2023-04-04T06:28:45Z
http://arxiv.org/abs/2304.01557v1
Direct _in situ_ determination of the surface area and structure of deposited metallic lithium within lithium metal batteries using ultra small and small angle neutron scattering

## Abstract

Despite being the major cause of battery safety issues and detrimental performance, a comprehensive growth mechanism for metallic lithium deposited at electrode surfaces in lithium metal batteries remains elusive. While lithium surface morphology is often derived indirectly, here, detailed information is directly obtained using _in situ_ small and ultra-small angle neutron scattering, in bulk and non-destructively. Features of 1-10 \(\upmu\)m and 100-300 nm are identified; the latter contribute most of the surface area, and their size inversely correlates with applied current density. Surface area per unit volume increases continuously during charging from 1-4 h at 2 mA/cm\({}^{2}\), but more slowly during discharge. Comparatively higher values are reached after just 1 h at 20 mA/cm\({}^{2}\), which remain constant in subsequent cycles. Such quantitative insight into the processes of metallic lithium growth within batteries may enable the development of safer, high-performance lithium metal batteries.

## 1 Introduction

Considerable effort has been devoted to the improvement of lithium-ion batteries (LIBs) for the past 30 years [1], enabling the use of portable electronics and electric vehicles. There is interest in replacing commonly used LIB electrodes such as graphite with lithium metal because of its order-of-magnitude larger specific capacity; however, rechargeable lithium metal batteries (LMBs) are often plagued by low efficiency and rapid capacity fade [2, 3]. These LMB performance issues are attributed primarily to the formation of high-surface-area microstructures at the lithium surface, eventually creating short circuits between electrodes or irreversibly separating from the electrode after partial dissolution, resulting in electrochemically inactive "dead lithium" [2, 4, 5]. Many factors influence the morphology of deposited lithium in a LMB [6]; however, despite considerable research, the mechanism of lithium deposition and microstructure development in LMBs is still poorly understood, partly because observing deposited lithium is experimentally challenging. Historically, deposited lithium is examined _post mortem_ after extraction from the LMB, a mechanical process potentially changing the electrode surface. _In situ_ techniques, where lithium is examined within the LMB, have enabled remarkable progress in understanding the parameters that influence lithium growth, notably using optical and electron microscopies [3, 7, 8, 9, 10, 11, 12, 13]; however, those methods require model cells that may not accurately represent the chemical environment within typical LMBs [14]. Electron and optical microscopy studies have revealed a range of deposited lithium morphologies, with the most commonly reported structures in LMBs with liquid electrolytes being so-called whiskers, mosses, and dendrites, while noting a lack of naming convention [6, 8, 9, 10, 15]. Whiskers are reported to initially appear as needles approximately 100 nm wide and up to 10 \(\upmu\)m long (Figure 1). The term mossy lithium describes a porous layer up to several hundreds of microns thick comprising interconnected objects with diameters of approximately 0.1 to 10 \(\upmu\)m.
Mossy lithium is reported to arise from the interweaving and broadening of whiskers [7, 10, 13]; however, it is unclear whether all whiskers become mossy lithium or whether both coexist. Dendrites are reported as 100-500 \(\upmu\)m long fractal-like filaments approximately 1 \(\upmu\)m thick, sometimes forming dense bushes [9]. Small- and ultra-small-angle neutron scattering (SANS and USANS) are techniques that can be used to study the morphology of structures within objects such as battery materials on length scales typically ranging from 1 to 10000 nm, falling within the range reported for deposited lithium structures. These techniques are sensitive to neutron scattering length density (SLD) inhomogeneities in a sample, with the scattered intensity proportional to the amount of inhomogeneities and to the contrast around them, given by the square of the SLD difference. SLD inhomogeneities arise from elemental and isotopic density variations, as found at the interface between two phases. Scattering intensity variations with scattering vector \(Q\) depend on the spatial distribution of such inhomogeneities, which can be related to the size and shape of the scattering objects. There are large advantages to studying LIB components, including LMBs, using SANS and USANS, where the high penetration of neutrons easily permits full-transmission _in situ_ measurements of components within typical electrochemical cells, with information averaged over the entire cell, in contrast to microscopy studies.

Figure 1: Representative microscopy images of A) lithium whiskers observed using bright-field cryo-transmission electron microscopy, adapted with permission from Xu et al. [15], copyright 2020 American Chemical Society; B) surface (top) and cross-section obtained using a focussed ion beam (bottom) of mossy lithium interconnected structures at 0.1 MPa using cryo-scanning electron microscopy, reprinted from Harrison et al. [16], copyright 2021, with permission from Elsevier.

Despite these advantages, only a limited number of _in situ_ SANS studies of batteries have been performed, and none with USANS. Relatively good sensitivity to electrode surface changes has been observed in several cases using SANS, such as lithium sulfide deposition within porous carbon [17], SEI formation at the surface of lithium titanate [18], or interfaces between lithiated graphite phases [19, 20]. A symmetrical lithium metal pouch cell was studied using SANS [20] and an increase of the total integrated intensity after cycling was reported, confirming the sensitivity of SANS to lithium electrode changes, although details of the lithium morphology were not derived. _In situ_ SANS from a custom lithium metal cell with the solid-state electrolyte LLZNO also showed a small increase in scattering intensity and the formation of lithium features 1-10 nm in size [21]. Here, we assess the applicability of _in situ_ SANS and USANS to study the morphology and growth process of metallic lithium deposited within a symmetrical LMB. We first characterise the signal from the individual components to guide the construction of the cell that we use; we subsequently evaluate simple models to describe the _in situ_ SANS and USANS data and derive, for the first time, parameters such as the surface area and particle size of deposited lithium structures after applied galvanostatic cycling, over an electrode area that is representative of the whole cell.
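Throughout this work, reduced intensities are fitted with power laws of the form \(I(Q)=\mathrm{A}\times Q^{-n}+\mathrm{B}\). As a non-authoritative illustration of that step (synthetic data, with the noise level chosen arbitrarily), such a fit could be performed as:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(q, A, n, B):
    """I(Q) = A * Q^-n + B, the functional form fitted throughout this work."""
    return A * q ** (-n) + B

# Synthetic stand-in for a reduced SANS curve (I in 1/cm, Q in 1/angstrom).
q = np.logspace(-3, -1, 40)
I_true = 1.7e-7 * q ** (-3.8) + 0.02
I_obs = I_true * (1 + 0.05 * np.random.default_rng(3).normal(size=q.size))

popt, pcov = curve_fit(power_law, q, I_obs, p0=(1e-7, 4.0, 0.01),
                       sigma=0.05 * I_obs)   # 5% uncertainties assumed for weighting
A, n, B = popt
print(f"n = {n:.2f} +/- {np.sqrt(pcov[1, 1]):.2f}")  # n -> 4 signals Porod surface scattering
```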
## 2 Individual cell component scattering and cell construction

A flat laminated pouch cell construction, similar to that used in other work [19, 20], was chosen for its relatively simple and flexible assembly (Figure 1). Because the neutron beam passes through all cell components, all components may contribute to neutron scattering and attenuation. Therefore, battery components were measured individually on both the Kookaburra (USANS) and Quokka (SANS) instruments at ACNS, ANSTO [22, 23], and selected for the construction of the _in situ_ pouch cell based on the following criteria: isotropic scattering (since anisotropic scattering leads to irreproducible data in slit-geometry instruments, e.g. Bonse-Hart type USANS instruments such as Kookaburra), low coherent scattering by electrochemically inactive components, low neutron attenuation, and negligible multiple scattering. Four current collectors were compared: nickel mesh, copper mesh, electrodeposited copper foil with one rough surface, and roll-annealed copper foil with smooth surfaces. Current collectors typically have high surface roughness on one side to promote electrode adherence [24]. Both metal mesh current collectors exhibited anisotropic and relatively intense coherent scattering (Figure S1), consistent with other metal mesh materials [25], and were considered unsuitable for the _in situ_ cell. Scattering per unit area (scattering per unit volume multiplied by the thickness) from the rough electrodeposited foil was greater by an order of magnitude than that from the smooth roll-annealed foil, despite the former being a third of the thickness (Figure S1); this behaviour presumably arises from surface scattering. SANS between scattering vectors \(Q\) of \(10^{-3}\) and \(10^{-1}\) Å\({}^{-1}\) follows a slope of \(Q^{-3.838(15)}\) for the 'rough' copper foil and \(Q^{-3.548(18)}\) for the 'smooth' copper foil. A slope close to \(Q^{-4}\) (Porod's law) suggests surface scattering, as has previously been reported for scratched metal plates [26], as expected for the 'rough' foil. The slope close to \(Q^{-3.5}\) for the 'smooth' foil may indicate a mixture of scattering arising from dislocations within the bulk, where \(Q^{-3}\) variation is expected, and scattering by surfaces, pores, or impurities with \(Q^{-4}\) variation, as observed in bulk copper and other metals [26, 27, 28]. We note that the terms 'rough' and 'smooth' here refer to macroscopically observed characteristics of the foils; this is not to be confused with the concepts of smooth and rough surfaces at the nanoscale, which exhibit Porod power-law scattering with exponents of -4 and greater than -4 (e.g. -3.5), respectively. Smooth foil was selected as the current collector for the _in situ_ cell considering its relatively small and isotropic coherent scattering and negligible neutron attenuation (Table S1).

Figure 1: A) Side-view schematic, with exaggerated distances, of the symmetrical lithium pouch cell. B) Front-view schematic with the top laminated pouch omitted for clarity; distances are in cm. The 1.2 cm diameter neutron beam corresponds to that on the Kookaburra instrument. C) Photograph of the _in situ_ cell covered with quartz slides held by bulldog clips (behind tape), mounted on the metallic sample holder. D) Photograph of the electrolyte-wet PVDF sealed in a laminated pouch between quartz slides held by bulldog clips.
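Because Bonse-Hart instruments such as Kookaburra use slit geometry, the measured curves are slit-smeared, and the "desmeared" data shown throughout are obtained by deconvolution. A minimal sketch of the forward operation, smearing a model intensity with an assumed infinite-slit resolution (the half-width value below is hypothetical), is:

```python
import numpy as np

def slit_smear(q, model, dq_v=0.05, n=400):
    """Infinite-slit smearing: I_s(Q) = (1/dq_v) * int_0^dq_v I(sqrt(Q^2 + u^2)) du."""
    u = np.linspace(0.0, dq_v, n)
    return np.array([np.trapz(model(np.sqrt(qi**2 + u**2)), u) / dq_v for qi in q])

porod = lambda q: 1e-9 * q ** -4.0          # toy surface-scattering model
q = np.logspace(-4.5, -3.0, 30)
print(slit_smear(q, porod)[:3])             # smeared Porod slope flattens toward Q^-3
```

This is why a smeared power law appears roughly one power shallower than its desmeared counterpart, and why fitted exponents are quoted against either smeared or desmeared data in the figure captions that follow.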
The scattering from three separators was compared: Celgard polypropylene, polyvinylidene fluoride (PVDF), and quartz glass microfibre, with relatively strong scattering expected from all of them as a result of their porosity. Separators that have been wetted with electrolyte experience pore deformation [29] and modification of the SLD contrast at the pore surface; to capture data representative of separators in the _in situ_ cell, data for each separator impregnated with approximately 200 \(\mu\)L of electrolyte (1 M LiPF\({}_{6}\) in ethylene carbonate/dimethyl carbonate) in a sealed laminated pouch were collected and the data for an empty pouch subtracted (Figures 31 and 32) according to Equation 9 of §7.3 of the Methods section. We note that scattering from the pouch is substantially lower than that from the electrolyte-wet separator (Figure 33). Scattering from the electrolyte was measured but not subtracted from these data, noting negligible scattering from the electrolyte volume.

Figure 3: Desmeared USANS (open symbols) and SANS (closed symbols) differential scattering cross-section per unit volume of A) rough electrodeposited and smooth roll-annealed copper foils. Power-law exponents were extracted by fitting the function \(\mathrm{A}\times Q^{-n}+\mathrm{B}\) over the SANS data between \(10^{-3}<Q<10^{-1}\) Å\({}^{-1}\). Refined values were A = 1.66(14)\(\times 10^{-7}\), B = 0.04(6) and n = 3.858(15) for rough foil and A = 2.06(19)\(\times 10^{-7}\), B = 0.0188(14) cm\({}^{-1}\) and n = 3.548(18) for smooth foil. B) Celgard, glass microfibre, and polyvinylidene fluoride (PVDF) membranes measured in air. C) Electrolyte-wet Celgard, glass microfibre, and PVDF membranes, and the 1 M LiPF\({}_{6}\) in ethylene carbonate/dimethyl carbonate electrolyte, shown after subtraction of scattering from the laminated pouch, with anisotropic two-dimensional data for Celgard shown inset. D) Lithium metal after subtraction of scattering from the laminated pouch, and laminated pouch data. Power-law exponents for lithium foil were extracted by fitting the function \(\mathrm{A}\times Q^{-n}\) over SANS and smeared USANS data within the \(Q\) ranges shown in the legend. Refined values were A = 5(2)\(\times 10^{-9}\) and n = 3.69(4) for USANS and A = 1.6(4)\(\times 10^{-6}\) and n = 3.07(5) for SANS. USANS data before de-smearing are given in Figure 33.
Data for lithium in the pouch contain two approximately linear regions of different slopes on a log-log scale, with \(Q^{3.69(4)}\) variation in the USANS region between 4\(\times\)10-5 and 4\(\times\)10-4 A-1 and \(Q^{3.07(5)}\) variation in the SANS region between 2\(\times\)10-3 and 4\(\times\)10-2 A-1. Data resemble that for other bulk metals [26, 27], where the \(Q^{3}\) variation at low \(Q\) is typical of dislocation scattering and \(Q^{3.5}\) variation at high \(Q\) corresponds to a mixture of dislocation scattering and scattering by pores, impurities or surfaces. Two identical _in situ_ pouch cells were produced and the reproducibility of the USANS signal before cycling confirmed (Figure 3I). The calculated coherent SLD value of each component is given in Table 3. Neutron transmission after attenuation from coherent scattering (\(T_{S\&S}\)), absorption and incoherent scattering (\(T_{\_{A,I}}\)) and the three effects combined (\(T_{\_{A,I,C}}\)) were estimated from USANS data at a wavelength of 4.74 A [31, 32]. Transmission through select components and the _in situ_ cell are given in Table 3. Multiple coherent scattering is negligible for the _in situ_ cell before cycling (\(T_{\_{S\&S}}\!=\!88.2\%\), where multiple scattering is considered significant for T\({}_{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\}{\_}{\_}{\_}{\_}{\_}{\_}{\}{\_}{\_}{\_}{\_}{\_}{\}{\_}{\_}{\_}{\_}{\_}{\_}{\}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\}\{\_ The differential scattering cross-section per unit area obtained from equation 1 of SS7.3 of the Methods section, representative of the relative scattering intensity from each component as opposed to data per unit volume, is shown for the individual components and the _in situ_ cell in Figure 4A. 
Before cycling, lithium contributes significantly at \(Q<10^{-4}\) Å\({}^{-1}\); however, in the remaining \(Q\) range, the majority of the scattering originates from the electrolyte-wet PVDF, by about an order of magnitude compared to that from lithium. The contribution from the lithium foils to data in the USANS region is comparable to that of the copper foils, and to that of the laminated pouch in the SANS range. A high incoherent background is observed at \(Q>10^{-2}\) Å\({}^{-1}\), arising from the electrolyte-wet PVDF and the laminated pouch, as expected from their hydrogen-rich composition. The scattering from lithium in the _in situ_ cell before cycling, after subtracting scattering from the other components, is shown in Figure 4B. The statistical precision of the scattering for lithium before cycling is poor due to the relatively low scattering compared to that from the inactive components (Figure 4A), and future experiments may improve this by contrast-matching separator and electrolyte via deuteration. Scattering from lithium in the uncycled cell and from lithium in a pouch are comparable (Figure 4B), suggesting the scattering does not originate from the surface, where a ten-fold increase of scattering is expected from the change of SLD contrast at the interface upon contact with electrolyte (\(\Delta\rho^{2}=0.67\times 10^{-12}\) Å\({}^{-4}\) in argon compared with \(6.5\times 10^{-12}\) Å\({}^{-4}\) in electrolyte). A small increase in intensity at approximately \(Q=10^{-3}\) Å\({}^{-1}\) is observed for lithium in the cell relative to lithium alone; however, the statistical confidence of this feature is low. Data for lithium in the cell before cycling follow a \(Q^{-3.64(10)}\) trend in the USANS region between \(Q=4\times 10^{-5}\) and \(4\times 10^{-4}\) Å\({}^{-1}\), and a \(Q^{-2.96(16)}\) trend in the SANS region between \(Q=2\times 10^{-3}\) and \(4\times 10^{-2}\) Å\({}^{-1}\) (Figure 4B), comparable to those determined for lithium alone (Figure 3D). The similarity of the scattering from lithium in the uncycled _in situ_ cell and that from lithium alone suggests that lithium is not substantially affected by contact with the electrolyte before cycling, where, in the latter case, the lithium surface reacts with the electrolyte to form a several-nm-thick SEI before cycling [33, 34].

Figure 4: Desmeared USANS (open symbols) and SANS (closed symbols). A) Differential scattering cross-section per unit area from the _in situ_ cell before cycling, the individual components, and the sum of all components. B) Differential scattering cross-section per unit volume from lithium in the _in situ_ cell before cycling after subtraction of scattering from inactive components, and from lithium in a pouch after subtraction of scattering from the pouch. The power-law exponent for lithium in the cell before cycling was extracted by fitting the function \(\mathrm{A}\times Q^{-n}\) over smeared USANS data and \(\mathrm{A}\times Q^{-n}+\mathrm{B}\) over SANS data within the \(Q\) ranges in the legend. Refined values were A = 1.3(12)\(\times 10^{-8}\) and n = 3.64(10) for USANS and A = 1.4(10)\(\times 10^{-6}\), B = 0.062(16) and n = 2.96(16) for SANS. Corresponding data before desmearing are shown in Figure 4A.

## 3 Post cycling SANS and USANS

Two nominally identical symmetrical lithium metal _in situ_ cells each underwent electrochemical cycling at a different applied current density, which influences the rate at which lithium is deposited on and extracted from the electrode surfaces. Current densities of 2 mA/cm\({}^{2}\) and 20 mA/cm\({}^{2}\) were compared.
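The lithium masses exchanged per step quoted in the next paragraph follow directly from Faraday's law; a quick, self-contained check:

```python
# Faraday's-law estimate of lithium plated per 1 h galvanostatic step (z = 1).
F = 96485.0   # C/mol
M_LI = 6.94   # g/mol
for j in (20e-3, 2e-3):                # current densities in A/cm^2
    m_mg = j * 3600 * M_LI / F * 1e3   # mg of Li per cm^2 per 1 h step
    print(f"{j * 1e3:.0f} mA/cm^2 -> {m_mg:.2f} mg/cm^2")
# -> 5.18 and 0.52 mg/cm^2, consistent with the values quoted in the text.
```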
At both current densities, the formation of dendritic lithium is expected to occur in a reasonable time, with the transition from mossy to dendritic lithium expected to occur earlier at the higher current density [8]. For each cell, a constant current was applied for 1 h, followed by the collection of 3-5 h USANS measurements at open-circuit voltage. One cell underwent two cycles (alternating "charge" and "discharge" processes) at 20 mA/cm\({}^{2}\), and the other cell underwent four consecutive "charges" followed by four consecutive "discharges" at 2 mA/cm\({}^{2}\), with a final "discharge" at 20 mA/cm\({}^{2}\). The applied current and measured voltage are shown in Figure 3. The calculated amount of lithium exchanged after each galvanostatic step is \(5.2\pm 0.4\) and \(0.52\pm 0.04\) mg/cm\({}^{2}\) at 20 and 2 mA/cm\({}^{2}\), respectively, and the calculated mass of lithium in each electrode is shown in Figure S11, assuming no loss. After cycling, both cells showed tighter compression against the quartz slides, consistent with lithium volume expansion from increased surface porosity [35]. Although the lithium surface on each side of the cell may differ following alternating lithium deposition and extraction, the data contain information from both electrodes. All scattering data are shown as a differential cross-section per unit volume and consider the constant initial volume of lithium in the cell (\(2\times 200\)\(\mu\)m \(\times\) beam area), ignoring volume changes. USANS data measured between each galvanostatic step for the cell cycled at 2 mA/cm\({}^{2}\) are shown in Figure 5A for the first four charges and in Figure 5B for the following four 2 mA/cm\({}^{2}\) discharges and the discharge at 20 mA/cm\({}^{2}\). USANS data for the cell charged and discharged twice at 20 mA/cm\({}^{2}\) are shown in Figure 5C. The overall scattered intensity from the cell charged at 2 mA/cm\({}^{2}\) increases gradually during the first three charges, with only minor variation after the maximum, including after the discharge at 20 mA/cm\({}^{2}\). At 20 mA/cm\({}^{2}\), the maximum scattering intensity is reached after the first cycle, with little variation thereafter. Both cells have a similar maximum scattering intensity, an order of magnitude greater than before cycling, and the initial intensity is never recovered, confirming the irreversibility of the lithium surface transformations. Overall, USANS data of the cell after galvanostatic cycling follow similar trends, with an initial power-law decrease in intensity at \(Q<10^{-4}\) Å\({}^{-1}\) that becomes a gentler slope at intermediate \(Q\) and a steeper slope at \(Q>10^{-3}\) Å\({}^{-1}\). USANS and SANS data for lithium in the cell before and after two cycles at 20 mA/cm\({}^{2}\) are shown in Figure 5D. Subtraction of the inactive components does not substantially change the scattering pattern after cycling (Figure 51), as a result of the order-of-magnitude increase of scattering from lithium after galvanostatic cycling; this larger increase than previously reported for _in situ_ SANS data of lithium metal cells [20, 21] is possibly a result of the higher relative current density. The USANS transmission through the cell decreased substantially after cycling (Table 3) as a result of this increased scattering.
Transmission after attenuation from coherent scattering (T\({}_{\rm SAS}\) = 53%) suggests some degree of multiple scattering, which may slightly decrease the intensity at lower \(Q\), noting a relatively small effect for samples with T\({}_{\rm SAS}\) \(\approx\) 50% measured on the same instrument [32].

Figure 5: Slit-smeared USANS scattering shown as the differential cross-section per unit volume of lithium after subtraction of scattering from electrochemically inactive components, from an _in situ_ cell A) before cycling and after one, two, three and four consecutive “charges” at 2 mA/cm\({}^{2}\); B) after four “charges” followed by one, two, three and four consecutive “discharges” at 2 mA/cm\({}^{2}\), and an additional “discharge” at 20 mA/cm\({}^{2}\); and C) from an _in situ_ cell before cycling and after alternating “charges” and “discharges” at 20 mA/cm\({}^{2}\). Arrows are visual guides emphasizing the intensity change. D) Desmeared USANS (open symbols) and SANS (closed symbols) scattering shown as a differential cross-section per unit volume for lithium in the cell before and after two cycles at 20 mA/cm\({}^{2}\). Power-law exponents for lithium in the cell after two cycles were extracted by fitting the function A \(\times\) \(Q^{-n}\) over smeared USANS data and A \(\times\) \(Q^{-n}+B\) over SANS data within the \(Q\) ranges given. Refined values were A = 1.5(2)\(\times\)10\({}^{-3}\) and n = 2.665(16) for USANS and A = 5.33(18)\(\times\)10\({}^{-7}\), B = 0.130(4) and n = 4.082(7) for SANS. Corresponding raw data before desmearing are shown in Figure S3.
Data show a distinct change in shape before and after cycling (Figure 5D), with post-cycling data exhibiting two approximately power-law decreases separated by a broad shoulder in the intermediate \(Q\) range, having a \(Q^{-2.665(16)}\) slope in the USANS region from \(Q=\) 4\(\times\)10\({}^{-5}\) to 2\(\times\)10\({}^{-4}\) Å\({}^{-1}\) and a \(Q^{-4.082(7)}\) slope in the SANS region from \(Q=\) 3\(\times\)10\({}^{-3}\) to 10\({}^{-1}\) Å\({}^{-1}\). The \(Q^{-4}\) variation at high \(Q\) is consistent with smooth interfacial scattering and increased surface area post cycling, as observed with other techniques [36, 37]. We postulate that scattering from lithium post cycling originates from the lithium-electrolyte interface; the scattering from the bulk foil volume, which dominates pre-cycling, is negligible in comparison.

## 4 Surface area from Porod's law

In non-particulate, non-uniform systems, scattering can originate from interfaces between volumes of different SLD, known as phases [38, 39], and we therefore consider that scattering from the cell post cycling originates from the lithium-electrolyte interface, with lithium metal and electrolyte taken as separate homogeneous volumes in which SLD fluctuations are negligible. The \(Q^{-4}\) slope at high \(Q\) indicates a sharp change of SLD at the boundary [38, 39, 26], and the increase of scattering in the cell post cycling is consistent with an increased surface area [36, 37].
The \(Q^{-4}\) slope from the cell post cycling is sustained at \(Q>\) 2\(\times\)10\({}^{-3}\) Å\({}^{-1}\) (\(Q^{-1}<\) 50 nm), suggesting the absence of particles or porosity smaller than 50 nm, noting that features smaller than 100 nm are rarely observed for deposited lithium despite the complex micrometre-scale morphology shown by optical and scanning electron microscopy [6, 8, 9, 10, 13, 15]. In a non-particulate system with homogeneous regions of different SLD, the differential scattering cross-section per unit volume from “smooth” surfaces is expected to follow Porod's law at sufficiently high \(Q\)[38, 39]:

\[\frac{d\Sigma}{d\Omega}=P\times Q^{-4}+B \tag{1}\]

with \(B\) accounting for the background and the Porod constant \(P\) the contribution from all surfaces:

\[P=2\pi\sum_{i}\left[(\Delta\rho_{i})^{2}\times\frac{S_{i}}{V}\right] \tag{2}\]

where \(S_{i}\) is the surface area at interface \(i\) between two phases, \(\Delta\rho_{i}\) is the SLD contrast between the phases on each side of the interface, and \(V\) is the volume occupied by the phases. The model can be extended to any number of phases. The modelling of scattering from the lithium-electrolyte interface is complicated by the presence of the SEI, a component of complex and debated composition. The SLD of the SEI is probably intermediate between that of lithium (\(\rho_{\rm L}\) = -0.82 \(\times\) 10\({}^{-6}\) Å\({}^{-2}\)) and the electrolyte (\(\rho_{\rm E}=1.73\times\) 10\({}^{-6}\) Å\({}^{-2}\)), with neutron reflectometry suggesting a value \(\rho_{\rm S}\approx 0.8\times 10^{-6}\) Å\({}^{-2}\)[40]; alternatively, the SLD of the SEI may be very close to that of either lithium or the electrolyte, as postulated in previous SANS experiments [20]. Where a uniformly thick SEI coats the lithium electrode, all surfaces are equivalent and, consequently, the surface area determined in the three-phase system is 1.86 times that of the two-phase system.

Porod's law description of the differential scattering cross-section per unit volume of the cell after two cycles at 20 mA/cm\({}^{2}\) in the SANS range 2\(\times\)10\({}^{-3}\) \(<Q<\) 10\({}^{-1}\) Å\({}^{-1}\) (Figure S6B) yields P = 78.0(3)\(\times\)10\({}^{-8}\) Å\({}^{-4}\)·cm\({}^{-1}\) and B = 0.118(4) cm\({}^{-1}\), corresponding to a surface area per unit volume S\({}_{\rm V}\) = 19.08(7)\(\times\)10\({}^{3}\) cm\({}^{2}\)/cm\({}^{3}\) in the two-phase system and S\({}_{\rm V}\) = 35.57(14)\(\times\)10\({}^{3}\) cm\({}^{2}\)/cm\({}^{3}\) in the three-phase system. Comparison with other experiments is not straightforward for cycled lithium. The differential scattering cross-section is usually divided by the irradiated volume to obtain the differential scattering cross-section per unit volume, allowing comparison of the same material with variable thicknesses and beam areas.[41] Following convention, scattering from lithium in the cell is presented per unit volume (in “absolute” units of cm\({}^{-1}\)), where data are divided by the lithium volume in the as-prepared cell, containing 2\(\times\)200 \(\upmu\)m thick foils within the beam area. This approach assumes a uniform distribution of SLD heterogeneities within the sample volume; however, such a condition is not anticipated for lithium in the cell post cycling, where scattering is conjectured to predominantly originate from the lithium-electrolyte interface, which is segregated to the lithium foil surface.
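To make the two-phase arithmetic explicit, here is a minimal sketch inverting eq. (2) for a single lithium-electrolyte interface, using the fitted Porod constant quoted above; the 1.86 three-phase factor is applied exactly as stated in the text:

```python
import math

P = 78.0e-8                             # Å^-4 cm^-1, fitted Porod constant
drho_sq = (1.73e-6 - (-0.82e-6)) ** 2   # Å^-4, Li/electrolyte SLD contrast

# Two-phase system: P = 2*pi * drho^2 * S/V  =>  S/V = P / (2*pi * drho^2)
S_V = P / (2 * math.pi * drho_sq)       # cm^2 per cm^3 of lithium
print(f"S_V (2 phase) = {S_V:.3e} cm^2/cm^3")         # ~1.91e4, cf. 19.08(7)e3

# Uniform-SEI three-phase case: surface area 1.86x the two-phase value
print(f"S_V (3 phase) = {1.86 * S_V:.3e} cm^2/cm^3")  # ~3.55e4, cf. 35.57(14)e3
```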
In this condition of surface-segregated scattering, the scattering intensity varies linearly with the volume of deposited lithium (including pores), and not with the total volume including the excess, which depends on cell construction. This means that the differential scattering cross-section and, by extension, the surface area per unit volume cannot be compared between cells with different lithium foil thicknesses. This is further complicated by the unaccounted-for volume expansion introduced by porous deposited lithium.[7, 35] A macroscopically uniform distribution of deposited lithium is expected across the electrode surface, where the approximately 6 mm distance between the electrode edges and the area of the cell probed by the beam renders edge effects[42] likely negligible. Therefore, the magnitude of surface scattering is expected to vary linearly with sample area, similar to materials where scattering from a relatively homogeneous bulk is negligible in comparison to that from highly subdivided surfaces,[43, 26, 44] and where the differential scattering cross-section per geometric area (cm\({}^{2}\)/cm\({}^{2}\)) should be comparable between data for cells with the same surface condition but different amounts of excess foil. The conversion of surface area per unit volume \(S_{V}\) to surface area per geometric area \(S_{A}\), which considers both active surfaces, is obtained using equation (11) in §7.3 of the Methods section by straightforward multiplication by 0.02 cm. The surface area determined from Porod's law is compared in Table 1 to quantitative measurements of surface area reported using _in situ_ X-ray tomography[11, 45] and _post mortem_ gas adsorption using Brunauer-Emmett-Teller (BET) theory[36, 37]. Quantitative reports of the surface area of cycled lithium are scarce, with qualitative descriptions from microscopy measurements substantially more common. Taiwo _et al._ measured surface areas per unit volume of 0.05\(\times\)10\({}^{3}\), 0.4\(\times\)10\({}^{3}\) and 0.6\(\times\)10\({}^{3}\) cm\({}^{2}\)/cm\({}^{3}\) within 35, 130 and 180 \(\upmu\)m thick electrodeposited lithium foils using X-ray tomography, corresponding to surface areas per unit area[11] of 0.175, 5.2 and 10.8 cm\({}^{2}\)/cm\({}^{2}\), respectively, below those measured using SANS or BET gas sorption. _In situ_ X-ray tomography severely underestimates the lithium surface area as a consequence of its approximately 1 \(\upmu\)m resolution limit. Description of the USANS data by the Debye-Anderson-Brumberger (DAB) model, described later, confirms that micrometre-scale lithium features contribute to the surface area determined by tomography. Only two reports of gas-adsorption-determined lithium surface areas were found,[36, 37] perhaps due to the experimental need to extract foils from the cell, requiring washing and drying under an inert atmosphere, and the unconventional use of argon as the adsorbent. A comparison of surface areas per unit mass \(S_{M}\) excluding excess lithium is presented in Table 1, and substantial differences are observed between SANS and BET and between the BET reports.
The surface area per unit area \(S_{A}\) obtained from SANS here, which is less prone to thickness uncertainties, and that obtained by Weber _et al._ using gas adsorption are within an order of magnitude, with the smaller \(S_{A}\) reported from gas adsorption, likely as a result of limited probing of internal surfaces [12], as evidenced by the increase in surface area measured for pulverised samples [37].

\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|}
\hline
 & **Surface area per unit volume (S\({}_{V}\), 10\({}^{3}\) cm\({}^{2}\)/cm\({}^{3}\))** & **Surface area per unit mass (S\({}_{M}\), 10\({}^{3}\) cm\({}^{2}\)/g)** & **Surface area per unit area (S\({}_{A}\), cm\({}^{2}\)/cm\({}^{2}\))** \\
\hline
**SANS Porod model** & & & \\
**2 cycles 20 mA/cm\({}^{2}\) (2 phase)** & 19.08(7) & 38.56(15) & 381.7(1.5) \\
**2 cycles 20 mA/cm\({}^{2}\) (3 phase)** & 35.57(14) & 71.8(3) & 711(3) \\
\hline
**BET (Weber _et al._)** & Not reported & & \\
**1 cycle 1.2 and 0.48 mA/cm\({}^{2}\)** & & 30 (foil) & 30 (foil) \\
**4 cycles 1.2 and 0.48 mA/cm\({}^{2}\)** & & 75 (foil) & 75 (foil) \\
**10 cycles 1.2 and 0.48 mA/cm\({}^{2}\)** & & 150 (foil) & 150 (foil) \\
 & & 250 (powder) & 250 (powder) \\
\hline
**BET (Saito _et al._)** & Not reported & & Not reported \\
**1 h discharge 3 mA/cm\({}^{2}\)** & & 25 & \\
**6 cycles 1 and 0.2 mA/cm\({}^{2}\)** & & 132 & \\
**6 cycles 1 and 3 mA/cm\({}^{2}\)** & & 258 & \\
\hline
**X-ray tomography (Taiwo _et al._)** & & Not reported & \\
**10 cycles** & 0.05 & & 0.175 \\
**70 cycles** & 0.4 & & 5.2 \\
**135 cycles** & 0.6 & & 10.8 \\
\hline
\end{tabular}
\end{table}
Table 1: Surface area of deposited lithium calculated from SANS data of the _in situ_ cell after two cycles at 20 mA/cm\({}^{2}\) (Figure 5D) using Porod’s law for two- and three-phase models, alongside the reported surface areas of lithium metal in cells post cycling obtained using gas adsorption and X-ray tomography. Weber _et al._ report _ex situ_ data for lithium after 10 cycles extracted from the cell as foil plated on copper or as powder scraped from copper, indicated as “foil” and “powder”, respectively. Standard uncertainties estimated from the least-squares regression analysis are shown in parentheses.

Insufficient information was given by Saito _et al._ to enable determination of the surface area per unit area. We note experimental differences likely to influence the determined surface area of lithium between this work and those of Saito _et al._ and Weber _et al._, including differences in cycling protocol, electrolyte and cell construction, as well as the substantial amount of lithium remaining attached to the separator after extraction from the cell in the work of Weber _et al._ These results highlight the applicability of SANS for the direct and representative determination of lithium metal surface area during cycling; however, further experiments are needed to discriminate between the suitability of the two- and three-phase models.

## 5 Debye-Anderson-Brumberger modelling

The slope at \(Q<10^{-4}\) Å\({}^{-1}\) and the shoulder around \(10^{-3}\) Å\({}^{-1}\) in the USANS data cannot be modelled by Porod's law, and the Debye-Anderson-Brumberger (DAB) model was therefore used to describe the data in the combined SANS and USANS region [46, 47].
The DAB model considers a non-particulate multi-phase system characterised by a correlation length \(L\) related to the average distance between interfaces, with differential scattering cross-section [46]:

\[\frac{d\Sigma}{d\Omega}=D\times\frac{L^{3}}{[1+(QL)^{2}]^{2}} \tag{3}\]

where \(D\) is a scaling factor and \(D/L\) is related to the surface area similarly to Porod's constant [46]:

\[D/L=2\pi\sum_{i}\left[(\Delta\rho_{i})^{2}\times\frac{S_{i}}{V}\right] \tag{4}\]

where \(S_{i}\) is the surface area at interface \(i\) between two phases, \(\Delta\rho_{i}\) is the SLD contrast between the phases on either side of the interface and \(V\) is the volume occupied by all phases. Derivations of the surface area for two- and three-phase systems follow those for Porod's law. In this model, scattering at large \(Q\) falls as \(Q^{-4}\), consistent with Porod's law, and bends over into a soft shoulder at \(Q=1/L\); the shoulder at approximately \(10^{-3}\) Å\({}^{-1}\) therefore yields a correlation length close to 100 nm. The slope at \(Q<10^{-4}\) Å\({}^{-1}\) in our data is attributed to scattering from SLD heterogeneities larger than \(1/Q_{\min}\approx 2\) \(\upmu\)m, where \(Q_{\min}\) is the smallest experimentally accessible scattering wavevector. This slope can be modelled by a second DAB term with a larger \(L\)[46]. To distinguish between the two contributions, the scaling factors and correlation lengths of lithium features of sizes \(<1\) \(\upmu\)m and \(>1\) \(\upmu\)m are denoted \(D_{nano}\) and \(L_{nano}\), and \(D_{micro}\) and \(L_{micro}\), respectively. The complete model used to describe the _in situ_ data is therefore given by:

\[\frac{d\Sigma}{d\Omega}=D_{nano}\times\frac{L_{nano}^{3}}{[1+(QL_{nano})^{2}]^{2}}+D_{micro}\times\frac{L_{micro}^{3}}{[1+(QL_{micro})^{2}]^{2}}+B \tag{5}\]

with background parameter \(B\). Assuming both micrometric and nanometric inhomogeneities arise from lithium-electrolyte interfaces, the surface area is given by the sum:

\[D_{nano}/L_{nano}+D_{micro}/L_{micro}=2\pi\sum_{i}\left[(\Delta\rho_{i})^{2}\times\frac{S_{i}}{V}\right]\]

The bimodal DAB model for two differently sized inhomogeneities provides a reasonable description of the USANS and SANS data for lithium in the cell post cycling at 20 mA/cm\({}^{2}\) (Figure 6). Refined model parameters (Table 4) reflect information from both electrodes, where lithium is alternately deposited on one side and removed from the other, involving the partial redissolution of previously deposited lithium. The sum of D\({}_{nano}\)/L\({}_{nano}\) and D\({}_{micro}\)/L\({}_{micro}\) is 78.9(7)\(\times\)10\({}^{-8}\) Å\({}^{-4}\)·cm\({}^{-1}\), which compares well with the refined Porod constant P = 78.0(3)\(\times\)10\({}^{-8}\) Å\({}^{-4}\)·cm\({}^{-1}\), confirming the consistency of the surface areas derived using the two methods. 98% of the surface area arises from nanometric features, as \(D_{nano}/L_{nano}\gg D_{micro}/L_{micro}\), with the surface area arising from micrometric features close to S\({}_{\text{V}}\) = 0.4\(\times\)10\({}^{3}\) cm\({}^{2}\)/cm\({}^{3}\) (S\({}_{\text{A}}\) = 8 cm\({}^{2}\)/cm\({}^{2}\)), comparable to the surface area obtained using _in situ_ X-ray tomography (Table 1) of surface-deposited lithium, which is limited by resolution to micrometric porosity [11, 45].
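The following minimal sketch evaluates the bimodal DAB expression of eqs. (3)-(5) and the surface-area sum above. The correlation lengths and the quoted total of 78.9\(\times\)10\({}^{-8}\) Å\({}^{-4}\)·cm\({}^{-1}\) (98% nanometric) are taken from the text; the exact split between terms is illustrative:

```python
import numpy as np

def dab(Q, D, L):
    """Single DAB term, eq. (3): D * L^3 / (1 + (Q L)^2)^2."""
    return D * L**3 / (1.0 + (Q * L) ** 2) ** 2

# Correlation lengths from the refinement, in Å (169.4 nm and 2.06 um)
L_nano, L_micro = 1694.0, 20600.0
# D/L values summing to 78.9e-8 Å^-4 cm^-1, with 98% from nanometric features
DL_nano, DL_micro = 0.98 * 78.9e-8, 0.02 * 78.9e-8
D_nano, D_micro = DL_nano * L_nano, DL_micro * L_micro   # D carries Å^-3 cm^-1

Q = np.logspace(-5, -1, 200)  # Å^-1, spanning the USANS + SANS range
I = dab(Q, D_nano, L_nano) + dab(Q, D_micro, L_micro) + 0.117  # eq. (5), cm^-1

# Surface area from the D/L sum with the two-phase Li/electrolyte contrast
drho_sq = (1.73e-6 + 0.82e-6) ** 2   # Å^-4
S_V = (DL_nano + DL_micro) / (2 * np.pi * drho_sq)
print(f"S_V = {S_V:.3e} cm^2/cm^3")  # ~1.93e4, consistent with the Porod result
```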
The refined correlation lengths correspond to average distances between lithium/electrolyte interfaces separating statistically homogeneous volumes; one can envisage this as relating to the size of the deposited lithium features and of the electrolyte-filled pores between them (Babinet's principle). L\({}_{nano}\) = 169.4(11) nm is similar to the width of so-called whiskers (Figure 11) as well as to the nanoporosity within mossy layers observed using electron microscopy [6, 13, 15, 48]. L\({}_{micro}\) = 2.06(5) \(\upmu\)m is similar to the size of microscopic pores within mossy lithium (Figure 11) and to the width of dendrites seen using optical and electron microscopies [6, 8, 9, 13].

Figure 6: SANS (closed symbols) and desmeared USANS (open symbols) data of lithium in the _in situ_ cell after 2 cycles at 20 mA/cm\({}^{2}\) and the corresponding DAB model calculation. The model was simultaneously refined against SANS and slit-smeared USANS data, with smearing applied to the model; refined parameters are given in Table S4.

\(D_{nano}\) and \(D_{micro}\) are quantitatively related to the lithium/electrolyte interfaces separating statistically homogeneous volumes at distances \(L_{nano}\) and \(L_{micro}\), respectively. As lithium volume expansion is not considered, \(D_{nano}\) and \(D_{micro}\) scale with the areal quantity of surfaces and not with the volume concentration of surfaces conventionally expected. The bimodal DAB model was fitted to the USANS data after each galvanostatic step, with B fixed to 0.117 cm\({}^{-1}\) in line with the negligible background, as shown in Figure S1, with refined parameters given in Table S1. Refined parameter values were in agreement between the combined USANS + SANS data and the USANS-only data after 2 cycles at 20 mA/cm\({}^{2}\) (Table S4), with the exception of the \(D_{nano}\) value, which was 10% larger when extracted using USANS data, as expected given the relative differences in \(Q\) range. Generally, the USANS data were well described by the model, noting the limited data from \(Q=4\times 10^{-5}\) to \(10^{-4}\) Å\({}^{-1}\) where \(L_{micro}\) and \(D_{micro}\) are determined, and where the signature for these features was not clearly present in data from the cell cycled at 2 mA/cm\({}^{2}\) after the 3rd and 4th charges, as well as the 1st and 2nd discharges. In these data, a strong correlation between the \(D_{micro}\) and \(L_{micro}\) values (Figure S1) was identified and, although convergence was achieved with good reproducibility of the refined \(D_{micro}\) and \(L_{micro}\) values, an unphysical, substantial change in \(L_{micro}\) was obtained between the 2nd and 3rd charges at 2 mA/cm\({}^{2}\). Correlations for all other refined parameters were negligible (Figure S1). To account for the correlations between \(D_{micro}\) and \(L_{micro}\), \(L_{micro}\) was fixed to the average value of 2.1 \(\upmu\)m obtained from data that did not present these correlations, noting the inherent introduction of bias in the determined \(D_{micro}\) as a result (Figure S1), and a relatively smaller bias in the determined \(D_{nano}\), \(L_{nano}\) and \(S_{V}\) values (Figure S1). Refinement results where \(L_{micro}\) was fixed are presented in Figure S1 and Table S1.

Figure 7: Refined parameters of the bimodal DAB model for lithium using USANS data of the _in situ_ cell after each galvanostatic step, where “Ch.” and “Dch.” refer to “charge” and “discharge”, respectively.
A) Scale factor for micrometric features \(D_{micro}\), B) correlation length \(L_{micro}\), C) scale factor for nanometric features \(D_{nano}\), and D) correlation length \(L_{nano}\). Error bars are standard uncertainties estimated from the least-squares regression analysis.

Refined parameters of the bimodal DAB model against USANS data with B fixed to 0.117 cm\({}^{-1}\) are plotted in Figure 7 and Figure 8. With increasing cycling, the parameters for the cell cycled at 2 mA/cm\({}^{2}\) approached the values obtained for the cell cycled at 20 mA/cm\({}^{2}\). L\({}_{nano}\) was approximately 240 and 170 nm for the cells cycled at 2 mA/cm\({}^{2}\) and 20 mA/cm\({}^{2}\), respectively, comparable to the width of lithium “whiskers” observed by electron microscopy[10, 15, 33], where a reduction of whisker width has been observed with increasing current density[15, 48]. A relatively slow reduction of \(L_{nano}\) was observed during the 4 h of discharge at 2 mA/cm\({}^{2}\) and the following discharge at 20 mA/cm\({}^{2}\), whereas \(L_{nano}\) remained relatively constant in the cell cycled at 20 mA/cm\({}^{2}\). The quantity of surface between nanometric features (\(D_{nano}\)) increased during the first 4 h of charge at 2 mA/cm\({}^{2}\) and remained relatively constant during the subsequent discharges at 2 mA/cm\({}^{2}\) and at 20 mA/cm\({}^{2}\) past the initial charge capacity. In the cell cycled at 20 mA/cm\({}^{2}\), \(D_{nano}\) remained relatively constant and comparable to the value eventually reached by the cell cycled at 2 mA/cm\({}^{2}\), suggesting the dissolution of deposited lithium is partially reversible. The refined size of micrometric features (\(L_{micro}\)) was between 1 and 2 \(\upmu\)m, although further interpretation of its physical meaning is statistically limited. The quantity of surface around micrometric features (\(D_{micro}\)) increased after cycling at 2 mA/cm\({}^{2}\), noting the limited statistical accuracy (Figure 7), to a value close to that obtained after 1 h at 20 mA/cm\({}^{2}\). The surface area per unit volume \(S_{V}\) depends exclusively on \(D_{nano}\) and \(L_{nano}\), with negligible influence from micrometric lithium. An increase in \(S_{V}\) is observed from 3\(\times\)10\({}^{3}\) to 12\(\times\)10\({}^{3}\) cm\({}^{2}\)/cm\({}^{3}\) after 1 to 4 h of charge at 2 mA/cm\({}^{2}\), followed by a slower increase from 12\(\times\)10\({}^{3}\) to 15\(\times\)10\({}^{3}\) cm\({}^{2}\)/cm\({}^{3}\) after 1 to 4 h of discharge at 2 mA/cm\({}^{2}\) (Figure 8). A higher surface area of 20\(\times\)10\({}^{3}\) cm\({}^{2}\)/cm\({}^{3}\) is reached after just 1 h at 20 mA/cm\({}^{2}\) in the second cell, which remained constant in subsequent cycles.

Figure 8: Refined surface area per unit volume \(S_{V}\), considering two phases (lithium and the electrolyte), calculated from the refined parameters of the bimodal DAB model using USANS data for lithium in the _in situ_ cell after each galvanostatic step. Error bars are standard uncertainties estimated from the least-squares regression analysis.

This corresponds to surface areas per unit area \(S_{A}\) of 60 to 300 cm\({}^{2}\)/cm\({}^{2}\) in the 2 mA/cm\({}^{2}\) cell and around 400 cm\({}^{2}\)/cm\({}^{2}\) in the 20 mA/cm\({}^{2}\) cell.
Cycling history had a major impact on the evolution of surface area: 1 h at 20 mA/cm\({}^{2}\) caused a smaller increase in \(S_{V}\) when the cell had previously been cycled at 2 mA/cm\({}^{2}\) than at 20 mA/cm\({}^{2}\), noting that capacity may have been limited by dendrite-induced short circuits.

## 6 Conclusions

We present a lithium metal battery (LMB) with a relatively simple pouch construction suitable for _in situ_ small-angle neutron scattering (SANS) and ultra-small-angle neutron scattering (USANS) studies of the structural development of deposited lithium within LMBs; characteristics can be evaluated with good precision and much less difficulty than with other techniques such as X-ray tomography, microscopy or gas adsorption. We demonstrate the sensitivity of SANS and USANS to the development of lithium-electrolyte interfaces arising from lithium deposition, and quantify the surface area and average distance between these interfaces using the relatively simple Porod's law and Debye-Anderson-Brumberger models applied to the SANS and USANS data, respectively. Complex variations of the surface area and of the distance between interfaces were observed depending on the cell cycling history. This work paves the way for future investigations probing the influence of parameters such as current density, charge duration and alternating lithium deposition/dissolution processes on the surface area and interfacial distances within deposited lithium; such information is necessary to address the limitations that lithium dendrite growth places on the application of LMB technology.

## 7 Methods

### Battery components

Three separators were investigated: polypropylene (Celgard 2400 Polypore, 25 \(\upmu\)m thickness, 40 nm pore size, 40% porosity), polyvinylidene difluoride (PVDF) (Immobilon-P, Merck, 110 \(\upmu\)m thickness, 450 nm pore size, 70% porosity), and quartz glass microfibre (Whatman QM/A, Sigma-Aldrich, 450 \(\upmu\)m thickness, 2.2 \(\upmu\)m pore size). Four current collectors were investigated: nickel mesh (TOB New Energy, 180 \(\upmu\)m aperture, 50 \(\upmu\)m threads), copper mesh (99.9%, The Mesh Company, 204 \(\upmu\)m aperture, 50 \(\upmu\)m threads), electrodeposited copper foil with one rough and one smooth side (\(>\) 98%, MTI, 9 \(\upmu\)m thickness), and roll-annealed copper foil smooth on both sides (99.9%, Goodfellow, 25 \(\upmu\)m thickness). Current collectors and separators were dried overnight at 80 \({}^{\circ}\)C under vacuum before their introduction into an Ar glove box. The electrolyte was made by dissolving 1 M lithium hexafluorophosphate (LiPF\({}_{6}\)) (99.99%, Sigma-Aldrich) in a 1:1 volume mixture of ethylene carbonate (EC) (anhydrous, 99%, Sigma-Aldrich) and dimethyl carbonate (DMC) (anhydrous, 99.7%, Sigma-Aldrich). Solvents were dried overnight over 4 Å molecular sieves prior to dissolution of LiPF\({}_{6}\) at room temperature for two days within an Ar-filled glove box with \(<\) 1 ppm O\({}_{2}\) and H\({}_{2}\)O. Lithium metal (99.9%, Goodfellow, 200 \(\upmu\)m thickness) was used as electrodes. Although the lithium was stored in an Ar-filled glove box with \(<\) 1 ppm O\({}_{2}\) and H\({}_{2}\)O, a thin and uneven black or white crust, resulting from oxidation or nitridation respectively, was present on the lithium metal; this was removed by abrasion using a rough polypropylene block until a smooth metallic surface was obtained.
Aluminium laminated film (MTI, 115 \(\upmu\)m thickness), referred to as the laminated pouch, was used for isolating air-sensitive battery components; it consists of an inner polypropylene layer facing the battery components and an outer nylon 6,6 polyamide layer, the two layers encasing a layer of aluminium metal attached with an adhesive of unknown composition (not provided by the manufacturer). SANS and USANS data from individual battery components were collected to quantify their contribution to the overall scattering signal from the cell and to inform the construction of a symmetrical pouch cell favourable to the observation of changes in the deposited lithium. The laminated Al pouch, separators and current collectors were handled in air. Lithium metal and separators wetted with approximately 200 \(\upmu\)L of electrolyte were sealed in the laminated Al casing within an Ar glove box. Components, aside from the electrolyte, were maintained flat between two quartz slides and taped to a flat sample holder during SANS and USANS measurements (Figure 21). For USANS measurements, electrolyte was introduced between two quartz slides separated by 300 \(\upmu\)m and sealed with a compressed O-ring; for SANS measurements, it was introduced into a quartz cuvette (Hellma cell) of 1 mm thickness sealed with a Teflon cap. Quartz slide and cuvette scattering measurements were also made and formed part of the empty-cell measurements used in the background subtraction during data processing.

### Preparation of symmetrical cells and electrochemical cycling for _in situ_ SANS and USANS

Two symmetrical lithium metal pouch cells were prepared in an Ar-filled glove box with \(<1\) ppm O\({}_{2}\) and H\({}_{2}\)O. Lithium foil electrodes of 2.5 cm \(\times\) 2.5 cm, comprising approximately 10 mg/cm\({}^{2}\) (38.6 mAh/cm\({}^{2}\)), were placed on 3.0 cm \(\times\) 4.0 cm roll-annealed copper current collectors and aligned on each side of a PVDF separator wetted with approximately 200 \(\upmu\)L of 1 M LiPF\({}_{6}\) in EC/DMC electrolyte. Measurements of the cut lithium square electrodes post abrasion showed a thickness of \(200\pm 10\) \(\upmu\)m, an edge length of \(2.5\pm 0.1\) cm and an initial mass of \(62\pm 4\) mg. The assembly was sealed in a laminated aluminium pouch and electrical connections were made with Ni tabs mechanically welded to the current collectors. A representation of the cell is shown in Figures 22, 23 and 24. Batteries were maintained flat between quartz slides with slight pressure applied by bulldog clips to prevent misalignment of the electrodes. Each battery underwent a different electrochemical cycling program, summarised in Figure 3. Although the terms “charge” and “discharge” have no real meaning in a symmetrical cell where lithium is in excess, they are used here to indicate when the direction of the applied current is reversed. One battery underwent two “charge” and “discharge” cycles at a current density of 20 mA/cm\({}^{2}\) while the other underwent four consecutive “charge” followed by four consecutive “discharge” processes at 2 mA/cm\({}^{2}\), followed by one further “discharge” at 20 mA/cm\({}^{2}\). Each galvanostatic step was applied for 1 h using a PGSTAT302N (Autolab) potentiostat/galvanostat, and the circuit was left open for 3-5 h during which USANS data were measured. SANS data of the 20 mA/cm\({}^{2}\) battery were measured prior to and following the cycling performed during the USANS studies.
The mass of lithium \(m_{Li}\) exchanged between the electrodes after a current \(I\) is applied for a time \(t\) is calculated as \(m_{Li}=\frac{I\times t}{F}\times M_{Li}\), where F = 96485 A·s/mol is the Faraday constant and M\({}_{Li}\) = 6.94 g/mol is the molar mass of lithium. The mass of lithium at each “side” of the battery, as bulk lithium foil and as deposited from electrochemical processes, is plotted in Figure S11, assuming a fixed total inventory with no loss from side reactions and the complete reinsertion of previously deposited lithium on current reversal.

Figure 3: Applied current density to, and measured voltage of, the symmetrical lithium cells at A) 20 mA/cm\({}^{2}\) and B) 2 mA/cm\({}^{2}\) with a final 20 mA/cm\({}^{2}\) galvanostatic step. Green arrows indicate the approximate times of the USANS measurements between steps and orange arrows mark the SANS measurements of the cell before and after cycling at 20 mA/cm\({}^{2}\).
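As a quick numeric check of the Faraday calculation above, a minimal sketch reproducing the per-step masses quoted in §3 (illustration only; constants as given in the text):

```python
F = 96485.0   # A*s/mol, Faraday constant
M_LI = 6.94   # g/mol, molar mass of lithium

def li_mass_mg_cm2(current_density_mA_cm2, hours):
    """Lithium mass (mg/cm^2) exchanged in one galvanostatic step, m = I*t*M/F."""
    coulombs = current_density_mA_cm2 * 1e-3 * hours * 3600.0  # A*s per cm^2
    return coulombs / F * M_LI * 1e3

print(li_mass_mg_cm2(20, 1))  # ~5.18, cf. 5.2 +/- 0.4 mg/cm^2 quoted
print(li_mass_mg_cm2(2, 1))   # ~0.52, cf. 0.52 +/- 0.04 mg/cm^2 quoted
```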
### Small and ultra-small angle neutron scattering measurements

USANS data (approximate \(Q\) range = 3.5\(\times\)10\({}^{-5}\)-10\({}^{-2}\) Å\({}^{-1}\)) were measured on the slit-geometry instrument Kookaburra at the Australian Nuclear Science and Technology Organisation (ANSTO)[22] in high-flux mode with a neutron wavelength of 4.74 Å and a vertical resolution parameter of 0.0586 Å\({}^{-1}\). The beam aperture diameter was 29 mm for individual components and 12 mm for cells. Multiple scattering was estimated from USANS data using the beam transmission \(T_{SAS}\) method,[31, 32] in which neutron counts measured with and without the sample are used to estimate the neutron transmission through the sample after attenuation by coherent scattering, incoherent scattering and absorption combined (\(T_{A,IC}\)), by absorption and incoherent scattering combined (\(T_{A,I}\)), and by coherent scattering only (\(T_{SAS}\)), the latter being an indication of multiple scattering. SANS data (6\(\times\)10\({}^{-4}\) \(<\) \(Q\) / Å\({}^{-1}\) \(<\) 0.6) were measured on the pinhole-geometry instrument Quokka at ANSTO[23] in four configurations: at a neutron wavelength of 5.0 Å with source-to-sample distances of 20.1, 8.0 and 1.3 m, and at a wavelength of 8.1 Å with a source-to-sample distance of 20.1 m using MgF\({}_{2}\) focussing optics. Quokka features a 1 \(\times\) 1 m\({}^{2}\) area detector, which was used to identify anisotropic scattering from individual cell components. The experimentally measured scattering intensity was converted to the differential macroscopic scattering cross-section per unit volume (absolute calibration) using empty-beam and direct-beam attenuation measurements, taking into account the thicknesses of the components given in §7.1 of the Methods section[41, 49]. For each component \(i\), the differential scattering cross-section per unit area I\({}_{\text{A}}\) (in cm\({}^{2}\)·cm\({}^{-2}\)) and per unit volume I\({}_{\text{V}}\) (in cm\({}^{-1}\)) are related by the thickness t (in cm) of the component:

\[I_{A}(i)=I_{V}(i)\times t(i) \tag{6}\]

When a sample contains several components \(i\) (sample = \(\sum i\)) and the scattering from each component is independent of the others, as demonstrated for those comprising the _in situ_ cell (except for the separator and electrolyte, which are treated together), the scattering intensities per unit area are additive:

\[I_{A}(\textstyle\sum i)=\sum_{i}I_{A}(i) \tag{7}\]

and the scattering per unit volume is obtained after multiplication by the thickness:

\[I_{V}(\textstyle\sum i)\times\sum_{i}t(i)=\sum_{i}[I_{V}(i)\times t(i)] \tag{8}\]

The calculation of the scattering for lithium in the _in situ_ cell is provided as an example in Technical Note 5.3. Data were subsequently processed with Python scripts in Mantid[50] for additions, subtractions and, in the case of USANS data, desmearing. Data additions and subtractions were performed independently for SANS and slit-smeared USANS after spline interpolation of the data to a common \(Q\) grid of evenly spaced points on a log scale. For plotting purposes, USANS data were desmeared as the last step using the Lake algorithm.[51] Differential scattering cross-sections are given with solid angles expressed in steradians, but steradians are omitted from unit labels by convention.[52] In plot axis labels, “differential scattering cross-section” is simplified to “scattering intensity”. Model fitting was done using the program SasView 5.0.5 for slit-smeared USANS and/or SANS data, with horizontal slit-smearing applied to the model when USANS data were included. Standard uncertainties of refined parameters are estimated from the least-squares regression analysis and are noted following the recommendations of the International Union of Crystallography.[53] For calculations of surface areas, the coherent neutron scattering length density (SLD) was calculated using \(SLD=\frac{N_{A}\rho}{M}\times\sum_{i}p_{i}b_{C,i}\), where N\({}_{A}\) is Avogadro's number, \(\rho\) is the bulk density, \(M\) is the molar mass, and \(b_{C,i}\) are the atomic coherent scattering lengths[54] of element \(i\) in atomic proportion \(p_{i}\). The surface area per unit volume \(S_{V}\) (cm\({}^{2}\)/cm\({}^{3}\)) extracted from the coherent scattering cross-section per lithium volume corresponds to the surface contributing to the scattering, \(S_{i}\), divided by the initial volume of lithium introduced into the cell:

\[S_{V}=\frac{S_{i}}{\text{initial Li volume}}=\frac{S_{i}}{\text{beam area}\times\text{foil thickness}\times\text{number of foils}} \tag{9}\]

with two 200 \(\upmu\)m thick lithium foils in our experiment, neglecting thickness changes after cycling. The surface area per unit mass \(S_{M}\) (cm\({}^{2}\)/g) is derived from \(S_{V}\):

\[S_{M}=\frac{S_{i}}{\text{initial Li mass}}=\frac{S_{i}}{\text{beam area}\times\text{mass loading}\times\text{number of foils}}=\frac{S_{V}\times\text{foil thickness}}{\text{mass loading}} \tag{10}\]

with a mass loading of approximately \(9.9\pm 1.4\) mg/cm\({}^{2}\) in our experiment. Both \(S_{V}\) and \(S_{M}\) average the surface of the lithium-electrolyte interface over the total lithium inventory, including excess lithium, precluding comparison between cells with different amounts of lithium.
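These normalisations, eqs. (9) and (10) together with the per-area form of eq. (11) derived in the next paragraph, reduce to simple scalings. A short sketch using the two-phase Porod result from §4 (the input values are the ones quoted in the text and Table 1):

```python
S_V = 19.08e3          # cm^2/cm^3, two-phase Porod result after two cycles
foil_thickness = 0.02  # cm (200 um lithium foil)
mass_loading = 9.9e-3  # g/cm^2, approximate

S_M = S_V * foil_thickness / mass_loading  # eq. (10): cm^2 per g of lithium
S_A = S_V * foil_thickness                 # eq. (11): cm^2 per cm^2 of electrode

print(f"S_M = {S_M:.4g} cm^2/g")     # ~3.855e4, cf. 38.56(15)e3 in Table 1
print(f"S_A = {S_A:.1f} cm^2/cm^2")  # ~381.6, cf. 381.7(1.5) in Table 1
```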
Since surface scattering scales with sample area rather than volume,[26] the surface area per unit area \(S_{A}\) (cm\({}^{2}\)/cm\({}^{2}\)), in which only active lithium surfaces are considered, can be derived from \(S_{V}\), considering one active electrode surface on each foil:

\[S_{A}=\frac{S_{i}}{\text{active Li surface in the beam}}=\frac{S_{i}}{\text{beam area}\times\text{number of foils}}=S_{V}\times\text{foil thickness} \tag{11}\]

## Acknowledgments

Access to the Kookaburra and Quokka instruments was supported by ANSTO beamtime awards (proposals P8690 and DB9219). This work benefited from the use of the SasView program, originally developed under NSF award DMR-0520547. SasView contains code developed with funding from the European Union's Horizon 2020 research and innovation programme under the SINE2020 project, grant agreement No 654000. The authors are grateful to Liliana de Campo for troubleshooting issues with using the SasView program and for useful discussions.
2304.00649
Multilingual Word Error Rate Estimation: e-WER3
The success of the multilingual automatic speech recognition systems empowered many voice-driven applications. However, measuring the performance of such systems remains a major challenge, due to its dependency on manually transcribed speech data in both mono- and multilingual scenarios. In this paper, we propose a novel multilingual framework -- eWER3 -- jointly trained on acoustic and lexical representation to estimate word error rate. We demonstrate the effectiveness of eWER3 to (i) predict WER without using any internal states from the ASR and (ii) use the multilingual shared latent space to push the performance of the close-related languages. We show our proposed multilingual model outperforms the previous monolingual word error rate estimation method (eWER2) by an absolute 9\% increase in Pearson correlation coefficient (PCC), with better overall estimation between the predicted and reference WER.
Shammur Absar Chowdhury, Ahmed Ali
2023-04-02T23:08:11Z
http://arxiv.org/abs/2304.00649v1
# Multilingual Word Error Rate Estimation: E-WER3

###### Abstract

The success of multilingual automatic speech recognition systems has empowered many voice-driven applications. However, measuring the performance of such systems remains a major challenge, due to its dependency on manually transcribed speech data in both mono- and multilingual scenarios. In this paper, we propose a novel multilingual framework, eWER3, jointly trained on acoustic and lexical representations to estimate word error rate. We demonstrate the effectiveness of eWER3 to _(i)_ predict WER without using any internal states from the ASR and _(ii)_ use the multilingual shared latent space to push the performance of closely related languages. We show our proposed multilingual model outperforms the previous monolingual word error rate estimation method (eWER2) by an absolute 9% increase in Pearson correlation coefficient (PCC), with better overall agreement between the predicted and reference WER.

Shammur Absar Chowdhury, Ahmed Ali (Qatar Computing Research Institute, HBKU, Doha, Qatar)

Index terms: Multilingual WER estimation, End-to-End systems

## 1 Introduction

Recent years have witnessed a surge in both mono- and multilingual speech recognition performance, with accuracy comparable to, or even outperforming, human performance on established benchmarks [1, 2]. With such success, automatic speech recognition (ASR) systems have been commoditized as speech processing pipelines in many voice-driven applications, such as personal assistant devices and broadcast media monitoring, among others. However, our means of evaluating the usefulness of the ASR output have remained largely unchanged. Word error rate (WER) is the standard measure for evaluating the performance of ASR systems. To obtain a reliable estimate of the WER, a minimum of two hours of manually transcribed test data is typically required, a time-consuming and expensive process. Voice-driven applications often require quick quality estimation of the automated transcription, which is not feasible with such traditional reference-based measures. Moreover, even with offline applications, it is not always viable to obtain gold references (especially in multilingual scenarios) to evaluate transcription quality. Thus, there is a need to develop techniques that can automatically estimate the quality of the ASR transcription without such manual effort [3, 4] and that can handle multilingual settings. Several studies have explored the automatic estimation of WER. These studies used large sets of extracted features (with or without internal access to the ASR system) to train neural regression or classification models [5, 6, 7]. Some studies proposed a novel neural zero-inflated model [8], while others model uncertainty [9] in predictions to handle different challenges. However, all these studies were conducted with networks directly trained and tested in monolingual settings. In this work, we design a single multilingual end-to-end model capable of estimating WER given the raw audio and the automatic transcription from different (mono- and multilingual) off-the-shelf ASR systems, without access to the ASR's internal feature representation (the concept is shown in Figure 1). For this, we employ large self-supervised pretrained models as feature extractors and exploit the available multilingual corpora. We evaluate our results using _Arabic_, _Italian_, _Spanish_, _English_ and _Russian_ test sets.
We train a monolingual estimator and compare it with our proposed multilingual model to demonstrate the latter's efficacy. Our contributions are: * Design the first multilingual WER estimator that uses no internal features from the ASR (black-box); * Compare our method with previous state-of-the-art results (eWER [6] and eWER2 [7]); * Analyse the effect of an imbalanced WER distribution on the estimator's performance and propose a new sampling technique.

## 2 E2E Multilingual WER Estimator

Figure 2 shows an overview of the end-to-end system architecture designed to estimate speech recognition WER with no gold-standard reference transcription. As input to the estimator, we first pass raw audio along with its automatic transcription obtained from the speech recognition system. We extract speech and lexical representations and utilize these representations jointly to train the multilingual regression model.

Acoustic representation: We use XLSR-\(53\) to extract phoneme-aware speech representations. The XLSR-\(53\) model is a multilingual variant of the wav2vec \(2.0\) model fine-tuned on a cross-lingual phoneme-recognition task [14, 15]. For our study, we remove the output (language model) head and use the representation only. We use XLSR-\(53\) as a feature extractor, which includes a cascaded temporal convolutional network to map raw audio, \(X=\{x_{1},x_{2}...,x_{n}\}\), to the latent speech representation \(Z=\{z_{1},z_{2}...,z_{t}\}\).

Figure 1: Overview of the study concept and proposed framework.

This latent information is then passed through 24 Transformer [16] blocks, with a model dimension of \(1,024\) and \(16\) attention heads, to capture contextual representations, \(C\) (\(g:Z\mapsto C\)). We then pass the frame-wise representations to a bi-directional LSTM and extract the last-step forward and backward representations (\(\overrightarrow{A}\), \(\overleftarrow{A}\)).

Lexical representation: Simultaneously, to extract the lexical embeddings, we pass the ASR transcription to the XLM-RoBERTa-Large model [17], pretrained using 100 different languages. The pretrained model follows the same architecture as BERT [16], with \(24\) layers of transformer modules, a hidden-state size of \(1,024\), and \(16\) attention heads. The model uses a byte-level BPE tokenizer and outputs the sequence of hidden states for the whole input sentence. To obtain the final lexical representation (\(L\)), we average the embeddings over the sequence.

Combining representations: We concatenate the output representations from the acoustic and lexical modules (\(\overrightarrow{A}+\overleftarrow{A}+L\)) and pass them through two fully-connected layers, before the output layer, for the regression task.

## 3 Experimental Setup

### Speech Recognition Systems

To train the estimator, we opt for current state-of-the-art conformer-based [18] end-to-end speech recognition systems (see Table 1). For the Spanish, Italian, and Russian ASR systems, the models are trained using their respective CV train sets. Each model has \(12\) encoder layers and \(6\) decoder layers, each with \(2,048\) encoder/decoder FFN units, and \(4\) attention heads with \(256\) transformation dimensions and \(15\) CNN kernels. For the English ASR, we use a large conformer model with \(12\) encoder and \(6\) decoder layers containing \(8\) attention heads with \(512\) transformation dimensions and \(31\) CNN kernels. This large ASR is trained using the well-known \(960\) hours of LibriSpeech data.
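A compact PyTorch sketch of the estimator architecture described in Section 2 is given below. The checkpoint names, the frozen extractors, the two fully-connected layers (600 and 32 units) and the mean pooling follow the text; the BiLSTM hidden size and the sigmoid used to bound the output to [0,1] are assumptions for illustration. The base XLSR-53 checkpoint is used as a stand-in; the paper's phoneme-fine-tuned variant would be loaded the same way with its head discarded.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, XLMRobertaModel

class EWer3(nn.Module):
    """Sketch of the joint acoustic-lexical WER estimator (not the authors' code)."""
    def __init__(self):
        super().__init__()
        # Frozen self-supervised feature extractors, as stated in the paper
        self.acoustic = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53")
        self.lexical = XLMRobertaModel.from_pretrained("xlm-roberta-large")
        for p in list(self.acoustic.parameters()) + list(self.lexical.parameters()):
            p.requires_grad = False
        # One BiLSTM layer; hidden size 512 is an assumption
        self.blstm = nn.LSTM(1024, 512, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * 512 + 1024, 600), nn.ReLU(),
            nn.Linear(600, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),   # one way to bound the estimate to [0, 1]
        )

    def forward(self, audio, token_ids, attention_mask):
        frames = self.acoustic(audio).last_hidden_state       # (B, T, 1024)
        _, (h, _) = self.blstm(frames)                        # last-step fwd/bwd states
        acoustic_repr = torch.cat([h[0], h[1]], dim=-1)       # (B, 1024)
        tokens = self.lexical(token_ids, attention_mask=attention_mask).last_hidden_state
        lexical_repr = tokens.mean(dim=1)   # simple mean over sequence (padding ignored)
        return self.head(torch.cat([acoustic_repr, lexical_repr], dim=-1)).squeeze(-1)
```

Training would use the mean squared error loss mentioned in §3.2.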
We use a similar architecture for the multilingual Arabic ASR [19], trained with Arabic QASR [11] along with English LibriSpeech data. For training the estimator, we select audio instances with a duration of 10 seconds or less, based on the upper tail of the overall duration distribution.

Imbalanced Distribution: Given the remarkable performance of current end-to-end ASR models, the WER often exhibits an imbalanced distribution, where certain target values have significantly fewer observations than others. In this case, the majority of the training set has a WER of '0', making the training data highly skewed (see Figure 3). Moreover, the dataset tends to have missing data for certain target values, making the task more challenging. In order to handle the abundance of '0' WER scores in the training set, we sample \(n\) instances from each language with \(WER=0\). We determine the value of \(n\) based on the sum of the instances falling under the next two most frequent score groups.

Data Split: For our dev set, we divide the training dataset into 10 bins of target WER with equal intervals, i.e. \([0,10),[10,20),\cdots,[90,100]\). From each bin, we then randomly sample \(\approx\)10% of the instances to create the validation set. The details of the resulting (balanced) split are shown in Table 2.

Estimator Output: As the output score of the estimator, we bound the target value (WER) to the range [0,1] (i.e. 0-100%).1

Footnote 1: If WER \(>100\%\), the value is scaled down to 100.

### WER Estimator Design

Model Parameters: We train the end-to-end WER estimator using an Adam optimizer for \(20\) epochs with a learning rate of \(1e-3\) and a dropout rate of \(0.1\), and freeze the parameters of the pretrained self-supervised models. In the acoustic pipeline, we use one BiLSTM layer, and for the joint training, we opt for two fully-connected layers (\(600\) and \(32\) neurons) with the ReLU activation function. As the loss function, we use the mean squared error. The same architecture and hyperparameters are used to train the mono- and multilingual models with balanced and natural distribution data.

### Evaluation Measures

Given the uneven score distribution (skewed towards small WER values), we use the Pearson correlation coefficient (PCC) as our main evaluation metric. However, we also report the root mean square error (RMSE) to compare with previous studies [6, 7]. Moreover, to estimate eWER3 for the complete test set, we report the duration-weighted WER, \(eWER3=\frac{\sum_{utt}\widehat{WER}_{utt}\times Dur(utt)}{\sum_{utt}Dur(utt)}\), using the utterance-level estimated WER (\(\widehat{WER}_{utt}\)) and the corresponding duration (\(Dur(utt)\)).
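The duration-weighted aggregation above is a one-liner in practice; a minimal sketch with made-up utterance predictions:

```python
def ewer3(pred_wer, durations):
    """Corpus-level estimate: sum(WER_utt * Dur(utt)) / sum(Dur(utt))."""
    total = sum(durations)
    return sum(w * d for w, d in zip(pred_wer, durations)) / total

# Toy usage: per-utterance predictions in [0, 1] and durations in seconds
print(ewer3([0.10, 0.00, 0.45], [5.0, 8.0, 3.0]))  # 0.1156..., i.e. ~11.6% WER
```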
## 4 Results and Discussion

### Monolingual Comparison

We benchmark the proposed framework eWER3 in a monolingual setting (Arabic) and compare it with the previous estimation models, eWER and eWER2. The results, reported in Table 3, show that our model outperforms eWER and eWER2 with absolute increases of \(9\)% and \(3\)% in PCC, and decreases of \(21\)% and \(6\)% in RMSE, respectively. This improvement indicates the estimation power of our architecture without using any additional features from the ASR decoder. Moreover, when the monolingual models (for both Arabic and English, in Table 4) were tested on the cross-lingual Italian dataset, both models' performance (in both correlation coefficient and RMSE) decreased drastically. Yet, it is observed that the Italian test set benefits more from the English monolingual model, with RMSE 0.19 compared to RMSE 0.32 for the Arabic model, indicating the potential advantage of a shared latent space for close languages in multilingual settings.

### Effects of Imbalanced Data and Sampling

We analyse the effect of training the model with sampled data (Model Sampled: \(\psi\)) instead of the natural distribution (Model Natural: \(\varphi\)). With respect to \(\psi\), we noticed that \(\varphi\) has a slightly better correlation coefficient, yet higher RMSE values and a larger difference between the aggregated estimated eWER3 and the Oracle WER. For example, for the ES:CV test set, \(\varphi(PCC)=0.58\), \(\varphi(RMSE)=0.15\), \(\varphi(eWER3)=7.0\%\), whereas \(\psi(PCC)=0.53\), \(\psi(RMSE)=0.14\), \(\psi(eWER3)=13.0\%\).2

Footnote 2: A higher RMSE and overall WER difference is seen for the other datasets when using the natural distribution.

The density curves from the \(\varphi\) and \(\psi\) model predictions (Figure 6) indicate that with the natural distribution the model (\(\varphi\)) learns to predict lower WER better than \(\psi\) does. However, the predictions are scaled down to a lower range (see the shift in the peaks of the two curves), thus increasing the RMSE and the difference between the overall predicted eWER3 and the Oracle WER. This is a potential limitation of the current study, and experimenting with zero-inflated output layers [20] for such a multilingual network is a future endeavor.

## 5 Conclusion

In this study, we propose a novel framework for estimating multilingual WER without the need for manual transcription. Our proposed framework is a joint acoustic-lexical model exploiting the self-supervised learning paradigm. Using a small subset of languages, our results suggest the efficacy of such a model to predict utterance-level and overall WER for the test sets. When compared with the monolingual models, the multilingual framework performs comparably for distant languages (e.g., Arabic) while boosting the performance of close languages (e.g., \(En_{mono}\): 0.62 PCC _vs._ \(En_{multi}\): 0.66 PCC). The current study can be used as a proof of concept for utilizing representation models to design such a predictor for an ASR. We exploit pretrained models as feature extractors for computational feasibility. In the future, we will focus on improving performance by handling the imbalanced target distribution, improving the neural architecture, and covering more languages.

\begin{table} \begin{tabular}{l|c|c|c|c|c} Sets & PCC & RMSE & eWER3 & WER & \# \\ \hline \hline \multicolumn{6}{c}{Monolingual Estimator Model - Arabic} \\ \hline \hline Ar:SUMMA & 0.75 & 0.14 & 16.0\% & 18.0\% & 1410 \\ It:CV & 0.45 & 0.32 & 41.0\% & 17.0\% & 7200 \\ \hline \multicolumn{6}{c}{Monolingual Estimator Model - English} \\ \hline \hline En:TedL & 0.62 & 0.14 & 7.0\% & 12.0\% & 1151 \\ It:CV & 0.49 & 0.19 & 10.0\% & 17.0\% & 7200 \\ \hline \multicolumn{6}{c}{Multilingual Estimator Model} \\ \hline \hline Ar:SUMMA & 0.74 & 0.15 & 15.0\% & 18.0\% & 1410 \\ It:CV & 0.60 & 0.17 & 14.0\% & 17.0\% & 7200 \\ Es:CV & 0.53 & 0.14 & 13.0\% & 11.0\% & 10179 \\ En:TedL & 0.66 & 0.14 & 8.0\% & 12.0\% & 1151 \\ Ru:CV & 0.51 & 0.12 & 6.0\% & 7.0\% & 5748 \\ \hline \end{tabular} \end{table} Table 4: Reported performance of the monolingual and multilingual WER estimators on Arabic (Ar), English (En), Italian (It), Spanish (Es) and Russian (Ru) test sets.
Figure 5: Scatter Plot for test sets with highest PCC 0.74 – Arabic (a); and the lowest PCC 0.51 – Russian. \begin{table} \begin{tabular}{l|c|c|c} _Ar:SUMMA_ & **PCC** & **RMSE** & **Input to the Estimator** \\ \hline eWER & 0.66 & 0.35 & Lexical + Grapheme + Decoder + Numerical [6] \\ \hline eWER2 & 0.72 & 0.20 & MFCC+ Lexical + Phonetic [7] \\ \hline **eWER3 mono** & **0.75** & **0.14** & Raw Audio, Lexical Transcription \\ \hline \end{tabular} \end{table} Table 3: Monolingual (Arabic) transcription quality estimator results on Ar:SUMMA test set. Figure 6: Density curves using estimated WER for the multilingual model trained using sampled distribution (blue line) and natural distribution (orange) train set, showing the effect of imbalanced data labels. x-axis represents WER. The prediction is from aggregated in-language test sets.
2306.07987
Bayesian inference for high-dimensional discrete-time epidemic models: spatial dynamics of the UK COVID-19 outbreak
Stochastic epidemic models which incorporate interactions between space and human mobility are a key tool to inform prioritisation of outbreak control to appropriate locations. However, methods for fitting such models to national-level population data are currently unfit for purpose due to the difficulty of marginalising over high-dimensional, highly-correlated censored epidemiological event data. Here we propose a new Bayesian MCMC approach to inference on a spatially-explicit stochastic SEIR meta-population model, using a suite of novel model-informed Metropolis-Hastings samplers. We apply this method to UK COVID-19 case data, showing real-time spatial results that were used to inform UK policy during the pandemic.
Chris P Jewell, Alison C Hale, Barry S Rowlingson, Christopher Suter, Jonathan M Read, Gareth O Roberts
2023-06-09T16:50:31Z
http://arxiv.org/abs/2306.07987v3
Bayesian inference for high-dimensional discrete-time epidemic models: spatial dynamics of the UK COVID-19 outbreak ###### Abstract Stochastic epidemic models which incorporate interactions between space and human mobility are a key tool to inform prioritisation of outbreak control to appropriate locations. However, methods for fitting such models to national-level population data are currently unfit for purpose due to the difficulty of marginalising over high-dimensional, highly-correlated censored epidemiological event data. Here we propose a new Bayesian MCMC approach to inference on a spatially-explicit stochastic SEIR meta-population model, using a suite of novel model-informed Metropolis-Hastings samplers. We apply this method to UK COVID-19 case data, showing real-time spatial results that were used to inform UK policy during the pandemic. Stochastic epidemic model, spatial epidemic, COVID-19, Bayesian statistics, MCMC ## 1 Introduction During the COVID-19 pandemic, accurate situational awareness and the ability to project case numbers forward in time was an important aspect of adaptive infection control policy. In the UK, as elsewhere, the epidemic was characterised by dramatically fluctuating case numbers over space and time, with population behaviour interacting with pathogen transmission to create a complex and highly-variable disease landscape. Understanding this variability in case numbers became an important aspect of epidemic management, and quantitative modelling of the drivers of infection transmission enabled metrics such as the time-varying reproduction number, intrinsic growth rate, and the impact of intervention strategies to be estimated. Epidemic dynamics are largely driven by the characteristics of the population at risk: the behavioural determinants of how individuals interact, socioeconomics and spatially-varying population density and environment in which people live (Grenfell et al., 2002). In particular, the spatial distribution and mobility of a population has a marked impact on the pattern of disease transmission, and understanding how epidemic behaviour responds to individuals' mobility is a key aspect of outbreak control (Keeling et al., 2001). At both a local and national level, infectious disease transmission may be assumed to be facilitated by individual-to-individual contact, and therefore understanding how contact behaviour spreads infection is central in designing successful control interventions whilst minimising public disturbance (Keeling and Eames, 2005). Stochastic individual-level state-transition models of epidemics have been extensively used to study the effects of spatial population structure on epidemic spread (Keeling et al., 2001). They are a special case of the more general class of state-transition processes, which consider individuals as transitioning between a number of discrete, mutually-exclusive states reflecting the expected natural history of infection according to a stochastic process (Bartlett, 1964). For example, the classic SEIR model (Figure 1) assumes individuals begin as _susceptible_ to infection, and transition sequentially to _exposed_ (i.e. infected but not yet infectious), _infectious_, and finally _removed_ (i.e. dead or recovered with immunity from further infection). 
Of particular interest is the infection rate, or hazard rate of transitioning from _susceptible_ to _exposed_, which given a suitable model can provide valuable insights into how interactions between individuals modulate the propensity for disease transmission as a function of spatial separation, as well as how individual-level covariates affect disease susceptibility and infectivity (e.g. Jewell et al., 2009; Smieszek et al., 2011). These models are particularly popular due to their interpretability and compatibility with decision-making questions (e.g. Tildesley et al., 2006; Palomo-Briones et al., 2022). Though powerful, individual-level models are computationally intensive, with fast forward simulation and inference methods requiring model-specific optimisations, and inference methods requiring application-specific approximations to allow fast computation of likelihood functions (Deardon et al., 2010; Brand et al., 2015; Sellman et al., 2018; Probert et al., 2018). Additionally, whilst such methods have been used successfully for population sizes up to approximately 200,000 (commonly suiting livestock disease applications as in Probert et al. (2018)), individual-level covariate data is rarely available to support modelling at the individual level for national and international human populations. In such situations, the population may be stratified by group-level covariates of interest. Stratification by space leads to the metapopulation model, in which individuals are aggregated into a number of discrete spatial units, with individuals within a spatial unit sharing a set of common covariates (Levins, 1969). The exchangability of individuals within each metapopulation hence offers performance gains in both computation time and storage. For models with modest numbers of metapopulations (low-dimensional models), a popular approach is to represent the state transition model as a system of ordinary differential equations, using a continuous mean-field approximation to the discrete space of numbers of individuals transiting through the state-transition graph (e.g. Lipshtat et al., 2021). However, increasing numbers of metapopulations (high-dimensional models) divides the population into smaller units, such that the continuous state-space approximation of the ODE system fails. In this case, a stochastic implementation is required that captures the random nature of infection transmission among small numbers of individuals within the metapopulation. Whilst metapopulation models can be implemented as a continuous time-inhomogeneous Poisson process (e.g. Minh et al., 2011), a more computationally convenient setup is the discrete-time "chain-binomial" model (Becker, 1981). This has the additional computational performance advantage that complexity may be controlled by packaging transition events into time-quanta, rather than having to compute for each individual transition event in continuous time (Diekmann et al., 2021). In a real-time outbreak response context, the utility of epidemic models is maximised through principled parameter inference, providing important policy information through improved situational awareness and forecasting of the ongoing epidemic in space and time (Jewell et al., 2009; Birrell et al., 2021). Given complete epidemic information, i.e. the epidemiological states and transition times for each individual over the course of the epidemic, the likelihood function of a given state-transition model is tractable and amenable to conventional inference methods. 
However, in all real-world situations inference is complicated by the inability to directly observe individuals' epidemiological states, or certain transition events. For example, whereas the times of individual symptom onset or recovery might be well-recorded, infection events are typically unobserved (Chapman et al., 2018). The demands on inference methods for real-time epidemic model fitting therefore present a considerable statistical challenge: not only must the method be rapid in its own right, but must _also_ be flexible enough to marginalise over potentially many censored transition events and epidemiological states (Swallow et al., 2022). In this paper we shall adopt a Bayesian approach to inference for stochastic metapopulation models, as demonstrated by our COVID-19 application. This has many advantages: it provides a principled and coherent calculus for the measurement of uncertainty both in parameters and predictions of future epidemic scenarios; it gives a natural way of taking into account missing and censored data using data augmentation; it provides a natural framework for experts to input knowledge to inform inference (via prior elicitation); and finally its implementation is facilitated through powerful MCMC techniques. That being said, MCMC methods are often seriously challenged by spatial applications at scale due to the strong correlations inherently present within such models. For individual-level models, data-augmentation methodology has become state-of-the-art for unbiased parameter estimation, allowing efficient marginalisation over the censored data (Neal and Roberts, 2004; Jewell et al., 2009; Probert et al., 2018; Chapman et al., 2018). However, these methods scale poorly to large population sizes, and are therefore unsuitable for the COVID-19 epidemic at hand. Whilst particle filter methods are beginning to show promise for fitting larger-scale epidemic models, they are inherently constrained by poor performance as the dimensionality of the population stratification increases (Rimella et al., 2023). Inspired by existing data-augmentation methodology, we therefore develop a novel MCMC method suited to metapopulation models, and which are amenable to rapid implementation through efficient multi-core computing. Our innovations allow us to analyse national-scale outbreaks in a timely manner, capturing spatiotemporal dependence between disease prevalence and incidence. This then provides estimates of quantities such as the degree of interaction between population strata and reproduction number, and predictions of the ongoing disease trajectory. Between May 2020 and March 2022, the modelling approach to the UK's COVID-19 outbreak we describe in this paper was used to provide information to the UK government, via the Scientific Pandemic Influenza Group on Modelling, Operational sub-group (SPI-M-O) of the Scientific Advisory Group on Emergencies (SAGE). ### Spatial COVID-19 cases in the UK Data on the daily number of new cases of COVID-19 in each of 382 Local Authority Districts (LADs) in the UK is available from the UK Government Coronavirus website (Public Health England, 2020, 2021). This dataset contains the number of people \(y_{it}\) who submitted a positive test sample for COVID-19 on day \(t\) in LAD \(i\), where a positive test is defined as positive by PCR or by LFD followed by confirmatory PCR. 
In practice we further spatially aggregated Cornwall and the Isles of Scilly due to the latter's small population size, and likewise City of London and City of Westminster for a total of 380 discrete spatial units. From May 2020 until March 2022, we ran our analysis daily on an 84 day sliding window. In practice, for an analysis on day \(t\), we used data in the interval \([t-88,t-4)\), discarding the latest 4 days' worth of data due to significant recording lag. For the purposes of exposition in this paper, however, we restrict our results to the 84 day window between 7th June 2021 and 31th August 2021 inclusive. This covered the emergence of the SARS-CoV-2 Delta (B.1.617.2) strain, and was after social distancing regulations had been relaxed. Figure 2 shows the daily case counts and overall incidence map for this period. Connectivity between LADs was informed by freely available Census 2011 commuting volume data aggregated from Middle Super Output Area (MSOA) onto our 380 LAD-based spatial units. These data provide a matrix \(G\) with \(g_{ij}\,i,j=1,\ldots,380\) being the number of journeys made from "residence" \(j\) to "workplace" \(i\). \(G\) is non-symmetric, reflecting commuting behaviour rather than the reciprocity of disease transmission. We therefore calculated a symmetric matrix \(C=G+G^{T}\) of the daily number of journeys Figure 1: SEIR model showing states (boxes), transitions (arrows), and associated transition rates. between each LAD assuming that commuters return to their residence each day, and go from their residence to their workplace and back at most once per day. We also set \(C_{ii}=0,\text{ for all }i\) as within-LAD infection transmissibility is delegated to another part of our model (Section 2). Finally, we introduce LAD-level population estimates and LAD polygon area to inform how transmission varies with population size. In all datasets, two pairs of LADs (Cornwall and Scilly, and City of Westminster and City of London) are merged to allow mapping of MSOAs onto LADs resulting in data geolocated to 380 spatial units. ## 2 Model In the following sections, we construct a model to represent the epidemic in terms of the number of individuals testing positive on each day \(t=0,\ldots,T-1\) within spatial unit \(i=1,\ldots,M\) defined by UK Local Authority Districts. Since our motivation is to measure quantities such as the degree of disease transmission between spatial units and reproduction number, and predict the ongoing disease trajectory, we are interested in capturing spatiotemporal dependence between disease prevalence and incidence. Below, we first describe a linear state transition model that captures the natural history of the COVID-19 disease progression per individual, accounting for segregation Figure 2: Cases of COVID-19 in the UK, as determined by daily numbers of Pillar 2 positive tests. Left: COVID-19 daily case counts between 7th June and 29th August 2021 inclusive, showing marked weekly variation and long-term drift; Right: Spatial distribution of COVID-19 cases as of 29th August 2021, expressed as incidence per 100000 people per day. Concentrations of high incidence are observed in Scotland, Northern Ireland, and Wales. of the population at risk into discrete spatial units. We then describe how this model is set up as a discrete time Markov process, establishing a notation allowing us to describe our inference methodology in the next section. 
### State transition model Within each spatial unit, we model the COVID-19 epidemic by assuming that at any time \(t\) individuals exist in one of 4 mutually exclusive disease states: susceptible, exposed (infected but not yet infectious), infectious, and removed which we denote by S, E, I, and R respectively. We assume that individuals progress sequentially between the disease states, such that the allowed transitions are [SE], [EI], and [IR] as shown in Figure 1. Each transition is associated with a hazard rate, \(\lambda_{i}^{\text{\tiny{SE}}}(t)\), \(\lambda_{i}^{\text{\tiny{E}}}(t)\), and \(\lambda_{i}^{\text{\tiny{In}}}(t)\) respectively giving the transition rate between states for a single individual, and allowing us flexibility to specify how they evolve over space and time. #### 2.1.1 Infection rate The infection hazard rate \(\lambda_{i}^{\text{\tiny{SE}}}(t)\) is space- and time-dependent, parameterised as a product of a log-linear term describing the susceptibility of an individual in \(i\) to infection, and a function of infectious challenge assuming homogeneous mixing of individuals within each spatial unit \(i\) and mixing due to commuting flows between \(i\) and all other spatial units \(j\neq i\). We scale the hazard rate by the local population size \(n_{i}\) following the usual assumption of frequency-dependent disease transmission. Thus \[\lambda_{i}^{\text{\tiny{SE}}}(t)=\frac{\exp(u_{t}+s_{i})}{n_{i}}\left[x_{it }^{\text{\tiny{I}}}+\psi\sum_{j\neq i}\frac{c_{ij}x_{jt}^{\text{\tiny{I}}}}{n _{j}}\right]\] where \(u_{t}\) is a temporally-correlated random effect, \(s_{i}\) a spatially-correlated random effect, \(x_{it}^{\text{\tiny{I}}}\) is the number of infectious individuals in \(i\) at time \(t\) (see below), and \(c_{ij}\) represents the commuting flow to spatial unit \(i\) from spatial unit \(j\). The coefficient \(\psi\) is assumed unknown and is to be estimated given the data. Due to the considerable variability in the infection dynamics of the COVID-19 outbreak we assume a random walk for the temporally-correlated random effect \(\mathbf{u}\), such that \[u_{t}=\begin{cases}\alpha_{0}&\text{if }t=0\\ u_{t-1}+\alpha_{t}&\text{if }t>0\end{cases}\] with \[\alpha_{0} \sim \text{Normal}(0,10)\] \[\alpha_{t} \overset{\text{\tiny{iid}}}{\sim} \text{Normal}(0,\sigma_{u}^{2}),\ t>0\] and \(\sigma_{u}=0.005\) chosen heuristically. Furthermore, we assume that the spatially-correlated random effect follows a Conditional Autoregressive (CAR) process with joint distribution \[\mathbf{s}\sim\text{MVN}\left(\mathbf{0},\Sigma\right).\] The covariance matrix \(\Sigma\) is specified in terms of an adjacency matrix \(W\) with elements \((W)_{ij}=1\iff i\sim j\) and \(0\) otherwise, and a diagonal matrix \(D_{w}\) with element \((D_{w})_{ii}=W\cdot\mathbf{1}\) (i.e. the number of spatial units neighbouring \(i\)) such that \[\Sigma=\sigma_{s}^{2}(D_{w}-\rho W)^{-1}\] with \(\sigma_{s}\) an unknown parameter and correlation parameter \(\rho=0.25\) chosen heuristically to give a smooth posterior surface across LADs in an effort to improve reliability of model fitting by keeping the number of free parameters to a minimum. We note that in a less operational context, a Besag-York-Mollie model might be preferred, though it would require the development of more complex and bespoke MCMC methodology to fit in the presence of latent event times. 
The following prior distributions are imposed on the remaining free parameters \[\psi \sim \text{Gamma}(0,100)\] \[\sigma_{s} \sim \text{HalfNormal}(0.1)\] #### 2.1.2 Latent and infectious periods The latent period (sojourn in E) and infectious period (sojourn in I) are parameterised through the transition rates \(\lambda_{i}^{\text{\tiny{Ei}}}(t)\) and \(\lambda_{i}^{\text{\tiny{H}}}(t)\). Evidence from the literature suggests a mean latent period of 5 - 7 days (Quesada et al., 2021), with a mean infectious period of 5 days starting 2 days before the onset of symptoms (Tian et al., 2020). In the context of our model, we therefore assume \[\lambda_{i}^{\text{\tiny{Ei}}}(t)=0.25\] giving an _effective_ 4 day mean latent period irrespective of time and space. For the infectious period, we assume \[\lambda_{i}^{\text{\tiny{H}}}(t)=e^{\gamma_{0}+\gamma_{1}g_{t}}\] irrespective of space, where \(\gamma_{0}=\log 0.25\) is the (known) mean log hazard rate for removal ([IR]) events, \(\gamma_{1}\) the (unknown) log hazard ratio for being removed on a weekday versus the weekend, and \[g_{t}=\begin{cases}-\frac{2}{7}&\text{if }t\in\{\text{mon},\dots,\text{fri}\}\\ \frac{5}{7}&\text{otherwise}\end{cases}\] to stabilise the inference. We assume that _a priori_\(\gamma_{1}\sim\text{Normal}(0,100)\). ### Discrete-time Markov chain implementation In this section, we implement the SEIR state transition model described in Section 2.1 as a discrete-time stochastic process, more specifically a discrete-time finite state-space Markov chain using the so-called "chain binomial" setup (e.g. Becker, 1981). For timepoint \(t\) and spatial unit \(i\), let \(x_{it}^{q}\) represent the number of individuals resident in state \(q\in\{\text{S},\text{E},\text{I},\text{R}\}\), and \(y_{it}^{qr}\) for transition \([qr]\in\{\text{[SE]},\text{[EI]},\text{[IR]}\}\) represent the number of transitions occurring between each state. Given an initial state \(\mathbf{x}_{0}\), we evolve the epidemic by iterating \[y_{it}^{qr}\sim\text{Binomial}\left(x_{it}^{q},p_{i}^{qr}(t)\right),\qquad[qr] \in\{[\text{SE}],[\text{EI}],[\text{IR}]\} \tag{5}\] and \[x_{i,t+1}^{\text{\tiny{s}}} =x_{it}^{\text{\tiny{s}}}-y_{it}^{\text{\tiny{se}}}\] \[x_{i,t+1}^{\text{\tiny{e}}} =x_{it}^{\text{\tiny{it}}}+y_{it}^{\text{\tiny{se}}}-y_{it}^{\text {\tiny{in}}}\] \[x_{i,t+1}^{\text{\tiny{l}}} =x_{it}^{\text{\tiny{l}}}+y_{it}^{\text{\tiny{ei}}}-y_{it}^{\text {\tiny{IR}}}\] \[x_{i,t+1}^{\text{\tiny{n}}} =x_{it}^{\text{\tiny{n}}}+y_{it}^{\text{\tiny{IR}}} \tag{6}\] where the transition probabilities \(p_{i}^{qr}(t)=1-e^{-\lambda_{i}^{qr}(t)\delta t}\) are assumed constant throughout each timestep of size \(\delta t\). Considering (5) as the data generating model, a realisation of the Markov chain results in the \(T\times M\) state matrices \(\mathbf{x}^{q}\) and event matrices \(\mathbf{y}^{qr}\) giving the state of the Markov chain at each timepoint and transition events responsible for evolving the state. ## 3 Inference In order to fit our model to the spatial timeseries of positive COVID-19 tests across the UK, we make the assumption that the occurrence of a positive test is equivalent to observing a [IR] event. Furthermore, since the [SE] or [EI] events are censored, we adopt the conventional statistical notation \(z_{it}^{\text{\tiny{se}}}=y_{it}^{\text{\tiny{se}}}\) and \(z_{it}^{\text{\tiny{ei}}}=y_{it}^{\text{\tiny{ei}}}\) to denote the presence of censored data \(\mathbf{z}\). 
Given the data generating model given in Equations (5) and (6), and conditional on unknown parameter vector \(\mathbf{\theta}=\{\psi,\sigma_{s},\gamma_{1},\mathbf{\alpha},\mathbf{s}\}\) and initial conditions \(\mathbf{x}_{0}\), the log likelihood of observing transitions \(\mathbf{y}\), censored transitions \(\mathbf{z}\), and states \(\mathbf{x}_{1:T}\) given a set of initial conditions \(\mathbf{x}_{0}\) and \(\mathbf{\theta}\) is \[\ell(\mathbf{y},\mathbf{z},\mathbf{x}|\mathbf{x}_{0},\mathbf{\theta})=\sum_{t=0}^{T-1 }\sum_{i=1}^{M}[z_{it}^{\text{\tiny{se}}}\log p_{i}^{\text{\tiny{se}}}(t)+(x_{ it}^{\text{\tiny{l}}}-z_{it}^{\text{\tiny{se}}})\log(1-p_{i}^{\text{\tiny{ se}}}(t))+\] \[z_{it}^{\text{\tiny{ei}}}\log p_{i}^{\text{\tiny{ei}}}(t)+(x_{ it}^{\text{\tiny{e}}}-z_{it}^{\text{\tiny{ei}}})\log(1-p_{i}^{\text{\tiny{ ei}}}(t))+\] \[y_{it}^{\text{\tiny{IR}}}\log p_{i}^{\text{\tiny{IR}}}(t)+(x_{ it}^{\text{\tiny{l}}}-y_{it}^{\text{\tiny{IR}}})\log(1-p_{i}^{\text{\tiny{ IR}}}(t))]\] We estimate the joint posterior of the parameters and missing data \(\pi(\theta,\mathbf{z}|\mathbf{y},\mathbf{x}_{0})\) by using a Metropolis-within-Gibbs MCMC scheme in which we draw alternately from \(\pi(\mathbf{\theta}|\mathbf{z},\mathbf{y},\mathbf{x}_{0})\) by adaptive Hamiltonian Monte Carlo, and \(\pi(\mathbf{z}|\mathbf{\theta},\mathbf{y},\mathbf{x}_{0})\) using a discrete-space Metropolis Hastings step. A high-level description of our approach is shown in Algorithm 1. ``` [MISSING_PAGE_POST] ### Drawing samples from \(\pi(\mathbf{z}|\mathbf{\theta},\mathbf{y},\mathbf{x}_{0})\) In this section we describe the discrete-space Metropolis-Hastings algorithms used to draw samples from the conditional posterior of the censored events and initial conditions given state transition parameters and data, \(\pi(\mathbf{z},\mathbf{x}_{0}|\mathbf{\theta},\mathbf{y})\). The approach follows the principles for data-augmentation in individual level models (see for example Jewell et al., 2009), which involves two separate Metropolis-Hastings kernels to explore the space of partially-censored and fully-censored events respectively. Unlike in the continuous-time setting, for our discrete model we also introduce a third Metropolis-Hastings kernel which explores the initial conditions space. In the context of our model, the challenge to successful updating of censored event times is to create a proposal mechanism that respects the constraint that \(x_{it}^{q}\geq 0\) for all \(q\in\{\mathrm{S},\mathrm{E},\mathrm{I},\mathrm{R}\}\) and \(t=0,\ldots,T-1\) and \(i=1,\ldots,M\). #### 3.1.1 Updating partially-censored events The proposed Metropolis Hastings kernel operates on a \(M\times T\) censored event matrix \(\mathbf{z}^{qr},\ [qr]\in\{\mathrm{[SE]},\mathrm{[EI]}\}\). The algorithm proceeds by proposing to move a number of events \(w\) from \(z_{it}^{qr}\) to \(z_{it+d}^{qr}\) where we draw \[i,t \sim\mathrm{Discrete}\left(i,t:z_{it}^{qr}>0\right) \tag{7}\] \[d \sim\mathrm{Discrete}\left(0\lor t-d_{\max},\ldots,-1,1,\ldots,T \wedge t+d_{\max}\right)\] \[w \sim\mathrm{Discrete}\left(1,\ldots,B_{1}(z_{it}^{qr})\right)\] where \(d_{max}>0\) is a tuning constant, and with bounding function \[B_{1}(z_{it}^{qr})=z_{it}^{qr}\wedge w_{\max}\wedge\begin{cases}\min(x_{it}^{ r},\ldots,x_{i,t+d-1}^{r})&\text{if }d>0\\ \min(x_{i,t-d}^{q},\ldots,x_{t-1}^{q})&\text{if }d<0\end{cases}\] where \(w_{\max}>0\) is also a tuning constant. 
We then propose \[z_{it}^{qr\star} =z_{it}^{qr}-w\] \[z_{t+\delta,i}^{qr\star} =z_{t+\delta}^{qr}+w\] and accept the proposal with probability \[\alpha(\mathbf{z}^{qr},\mathbf{z}^{qr\star})=\frac{\pi(\mathbf{z}^{qr\star}|\mathbf{\theta},\mathbf{ y},\mathbf{x}_{0})}{\pi(\mathbf{z}^{qr}|\mathbf{\theta},\mathbf{y},\mathbf{x}_{0})}\cdot\frac{ \operatorname{nnz}\left(\mathbf{z}^{qr\star}\right)B_{1}(z_{it}^{qr\star};\mathbf{x} ^{\star})}{\operatorname{nnz}\left(\mathbf{z}^{qr}\right)B_{1}(z_{it}^{qr};\mathbf{x} )}\wedge 1\] where \(\operatorname{nnz}\left(\mathbf{z}^{qr}\right)\) denotes the number of non-zero elements in \(\mathbf{z}^{qr}\). #### 3.1.2 Updating occult events To explore the space of occult events, we again operate on \(M\times T\) censored event matrices \(\mathbf{z}^{qr},\ [\operatorname{qr}]\in\{[\operatorname{SE}],[\operatorname{EI}]\}\), proposing to add or delete a number of events to randomly chosen elements. In our model, occult events are overwhelmingly likely to occur close to the end of the analysis time-window, and so we limit our choice of elements to those in the last 21 days of the timeseries. To add events, we choose an element \(z_{it}^{qr}\) to update via \[i \sim\operatorname{Discrete}(1,\ldots,M)\] \[t \sim\operatorname{Discrete}(T-21,\ldots,T)\] and propose to add \(v\) events by \[v \sim\operatorname{Discrete}(1,\ldots,B_{2}(\mathbf{z}_{it}^{qr})),\] where bounding function \[B_{2}(z_{it}^{qr})=v_{\max}\wedge\min(x_{it}^{q},\ldots,x_{iT}^{q})\] with \(v_{\max}\) a tuning constant. We then update \(z_{it}^{qr\star}=z_{it}^{qr}+v\), and accept the proposal with probability \[\alpha(\mathbf{z}^{qr},\mathbf{z}^{qr\star})=\frac{\pi(\mathbf{z}^{qr\star}|\mathbf{\theta}, \mathbf{y},\mathbf{x}_{0})}{\pi(\mathbf{z}^{qr}|\mathbf{\theta},\mathbf{y},\mathbf{x}_{0})}\cdot\frac {21MB_{2}(\mathbf{z}^{qr})}{\operatorname{nnz}\left(\mathbf{z}^{qr\star}\right)B_{3}( \mathbf{z}^{qr\star})}\wedge 1\] with \(B_{3}(\mathbf{z}^{qr\star})\) defined below. To delete events, we restrict our choice of \(i\) and \(t\) to positive elements of \(\mathbf{z}^{qr}\) as in Equation (7), and propose \[i,t \sim\operatorname{Discrete}(i,t:z_{it}^{qr}>0,t\geq T-21)\] \[v \sim\operatorname{Discrete}(1,\ldots,B_{3}(z_{it}^{qr};\mathbf{x}))\] where bounding function \[B_{3}(z_{it}^{qr})=z_{it}^{qr}\wedge v_{\max}\wedge\min(x_{it}^{r},\ldots,x_{iT} ^{r}).\] We then accept the proposal \(z_{it}^{qr\star}=z_{it}^{qr}-v\) with probability \[\alpha(z^{qr},z^{qr\star})=\frac{\pi(\mathbf{z}^{qr\star}|\mathbf{\theta},\mathbf{y},\mathbf{x }_{0})}{\pi(\mathbf{z}^{qr}|\mathbf{\theta},\mathbf{y},\mathbf{x}_{0})}\cdot\frac{\operatorname {nnz}\left(\mathbf{z}^{qr}\right)B_{3}(\mathbf{z}^{qr})}{21MB_{2}(\mathbf{z}^{qr\star})} \wedge 1.\] #### 3.1.3 Updating initial conditions In the third data-augmentation kernel, we update the initial conditions matrix \(\mathbf{x}_{0}\). Given that we assume a closed population, such that \(\mathbf{x}^{\text{\tiny{$\mathbf{x}$}}}+\mathbf{x}^{\text{\tiny{$\mathbf{x}$}}}+\mathbf{x}^{\text{ \tiny{$\mathbf{x}$}}}+\mathbf{x}^{\text{\tiny{$\mathbf{x}$}}}=\mathbf{N}\) the size vector of the population, we have two options for updating \(\mathbf{x}_{0}\). The first would be to swap individuals between pairs of adjacent epidemiological states. However, since \(\mathbf{x}_{0}\) is _a posteriori_ conditional on the events \(\mathbf{z}\), the small conditional variance \(Var(\mathbf{x}_{0}|\mathbf{z},\mathbf{y},\mathbf{\theta})\) results in slow exploration of the censored data space. 
Instead, we employ a _joint_ update of \(\mathbf{x}_{0}\) and \(\mathbf{z}\) in which we consider a set of left-censored events occurring prior to timepoint \(t=0\) that have given rise to \(\mathbf{x}_{0}\). For a given transition [qr], we begin as before by choosing an element \(z_{it}^{qr}\) to update \[i \sim\text{Discrete}(1,\dots,M)\] \[t \sim\text{Discrete}(0,\dots,6).\] noting that we restrict the choice of \(t\) to a window \([0,7)\) at the beginning of the epidemic - this helps to reduce the possible dimensionality of the discrete random walk, since it is unlikely that left-censored events could be moved to later timepoints without large changes in the posterior leading to a rejected move. We now choose to either move events forwards (\(d=1\)) or backwards (\(d=-1\)) in time with equal probability such that \[d\sim\text{Discrete}(-1,1)\] and a number of events to move such that \[h\sim\text{Discrete}(1,\dots,B_{4}(\mathbf{x}_{0}))\] with \[B_{4}(\mathbf{x}_{0})=h_{\max}\wedge\begin{cases}z_{it}^{qr}\wedge\min(x_{i1}^{q}, \dots,x_{it}^{q})&\text{if }d=-1\\ \min(x_{i0}^{r},\dots,x_{it-1}^{r})&\text{if }d=1)\end{cases}\] and where \(h_{\max}>1\) is a tuning constant. We then let \[x_{i0}^{q\star} =x_{i0}^{q}+dh\] \[x_{i0}^{r\star} =x_{i0}^{r}-dh\] \[z_{it}^{qr\star} =z_{it}^{qr}+dh\] and accept the proposal with \[\alpha(\mathbf{x}_{0},\mathbf{z},\mathbf{x}_{0}^{\star},\mathbf{z}^{\star})=\frac{\pi(\mathbf{x}_ {0}^{\star},\mathbf{z}^{\star}|\mathbf{y},\mathbf{\theta})}{\pi(\mathbf{x}_{0},\mathbf{z}|\mathbf{y}, \mathbf{\theta})}\cdot\frac{B_{4}(\mathbf{x}_{0}^{\star})}{B_{4}(\mathbf{x}_{0})}\wedge 1.\] We note that if \(B_{4}=0\), then \(f_{k}(h)=0\)\(\forall\)\(h\) in Equation 3.1.3 and the move will be rejected. #### 3.2 Definition of \(R_{t}\) We define an approximate stratified temporal reproduction number \(R_{jt}\) as the expected number of further individuals that an individual in stratum \(j\) will go on to infect given the state at time \(t\). We define this by first considering the pairwise force of infection defined by \(\mathbf{K}\) exerted by an individual in \(j\) on a susceptible individual in \(i\) such that \[\mathbf{R}_{t}\approx\frac{1-\exp\left(-\left(\mathbf{X}_{\cdot t}^{\mathrm{s}}\right)^{ T}\cdot\mathbf{K}\right)}{1-\exp(-\gamma_{0})}\] We remark that this is an approximation since both \(\mathbf{X}_{\cdot t}^{\mathrm{s}}\) (and therefore \(\lambda^{\mathrm{sc}}(t)\)) is assumed constant over the course of an individual's infectious period. ### Software implementation The model and sampler code were implemented in Python 3.8 using TensorFlow and TensorFlow Probability computational and probabilistic programming libraries for GPU acceleration (Abadi et al., 2015; Dillon et al., 2017). The Python code implementing this analysis is freely available under the MIT license at [https://gitlab.com/chicas-covid19/covid19uk](https://gitlab.com/chicas-covid19/covid19uk), with the Version 1.0 snapshot available at [https://doi.org/10.5281/zenodo.7715657](https://doi.org/10.5281/zenodo.7715657). ## 4 Sampler optimisation Before addressing the real-world application, we demonstrate that the discrete-space samplers described above are optimised at the conventional accept/reject ratio of 0.23. Here we provide results for tuning \(m\), the number of metapopulations and \(w\), the number of events for the partially-observed event time moves described in Section 3.1.1, as applied to the [SE] and [EI] transitions respectively. 
We first define the mean squared jumping distance to be \[\mathrm{MSJD}=\frac{1}{K}\sum_{k=1}^{K}\sum_{i,t,s}\left(x_{it}^{s(k)}-x_{it}^ {s(k-1)}\right)^{2}.\] We then run the partially-observed event time sampler separately for the [SE] and [EI] transitions respectively, considering all other censored data and parameters fixed at values taken from a randomly-chosen iteration of the converged MCMC chain described in Section 5. For each transition, we run the sampler for \((m,x_{max})\in\{m:1,\ldots,10\}\times\{x_{max}:1,\ldots,100\}\) with \(m\) and \(x_{max}\) as defined in Section 3. For each transition and \((m,\,x_{max})\) tuple, we plot the MSJD against the acceptance ratio in Figure 3. This confirms that the MSJD is maximised at an acceptance ratio of approximately 0.234 irrespective of the transition. However, the way in which \((m,\,x_{max})\) affects the MSJD differs depending on the transition selected. Moreover, we observe that the total number of events moved across all selected metapopulations is conserved, such that as long as \(mx_{max}\approx 30\) we achieve the maximum MSJD. ## 5 Application to the UK COVID-19 epidemic In this section we apply the model described in Section 2 to the UK COVID-19 spatial case timeseries shown in Section 1.1. The MCMC algorithms were run for \(k=40000\) Figure 4: Mean squared jumping distance (left) and Metropolis Hastings acceptance probability (right) for the _move_ algorithm for both the \(\left[\mathrm{SE}\right]\) (top) and \(\left[\mathrm{EI}\right]\) (bottom) transitions. Figure 3: Mean squared jumping distance versus acceptance probability for the \(\left[\mathrm{SE}\right]\) (left) and \(\left[\mathrm{EI}\right]\) (right) optimisation studies respectively (Section 4). Blue dots represent each combination of \(m\) and \(x_{max}\), and red vertical line indicates theoretically optimal acceptance rate. iterations, with \(l=380\) censored event updates per iteration (Algorithm 1) with tuning constants as in Table 1. Three independent MCMC algorithms were run, initialising each Markov chain using a random draw from the prior distribution, and discarding the first 10000 iterations for any calculated quantities (\(R_{t}\), riskmaps, and predictive distributions). Traceplots of the scalar quantities \(\exp(\alpha_{0})\), \(\exp(\gamma_{1})\), \(\psi\) are shown in Figure 5, superimposing the three independent chains and calculating the Brookes-Gelman-Rubin statistics (\(\tilde{R}\), not to be confused with the time-varying reproduction number \(R_{t}\)) for each parameter (Brooks and Gelman, 1998). The algorithm exhibits satisfactory convergence for all three chains, after a conservative 10000 iteration burn-in which is removed to compute the following now-casting and predictive results. The temporal trend in the global time-varying reproduction number is shown in Figure 6, demonstrating marked variation through time either side of \(R_{t}=1\) as expected given the case timeseries trajectory in Figure 2. The credible intervals around \(R_{t}\) are narrow, reflecting the choice of case observation model and the large number of daily cases occurring in the UK as a whole, with variation due to spatial connectivity and risk accounted for by the model. At the LAD-level, Figure 7 (left) shows the mean posterior local reproduction number \(R_{it}\) as of the 31st August 2021. Considerable spatial heterogeneity is seen, driven only in part by the spatial distribution of cases on the same date (Figure 2). 
This reflects the dynamics of the spatial epidemic over the whole time-window, with areas such as East Anglia exhibiting higher \(R_{it}\) values than might be concluded simply by looking at the most recent case data. A convenient way to study the importance of spatial transmission is to plot the attributable fraction of the total infection risk on an individual in LAD \(i\) which is due to between-LAD transmission, defined as \[AF_{it}=\frac{\phi\sum_{j\neq i}\frac{c_{i}jx_{jt}^{\mathrm{t}}}{n_{j}}}{x_{it }^{\mathrm{t}}+\phi\sum_{j\neq i}\frac{c_{i}jx_{jt}^{\mathrm{t}}}{n_{j}}}.\] For a particular LAD of interest, the importance of spatial transmission therefore varies with both connectivity and within-LAD disease prevalence. For the most recent time-point in our case timeseries, we plot the posterior mean AF per LAD. Highly connected \begin{table} \begin{tabular}{l l l r} \hline **Transition** & **Update** & **Tuning constant** & **Value** \\ \hline \([\mathrm{SE}]\) & Partially-censored & \(d_{max}\) & 84 \\ & & \(w_{max}\) & 22 \\ & Fully-censored & \(v_{max}\) & 20 \\ & Initial conditions & \(h_{max}\) & 19 \\ \hline \([\mathrm{EI}]\) & Partially-censored & \(d_{max}\) & 84 \\ & & \(w_{max}\) & 18 \\ & Fully-censored & \(v_{max}\) & 20 \\ & Initial conditions & \(h_{max}\) & 17 \\ \hline \end{tabular} \end{table} Table 1: Tuning constants used for MCMC algorithm to fit the spatial stochastic meta-population model to COVID-19 case time-series data. Figure 5: Traceplots of uni-dimensional parameters \(\exp(\alpha_{0})\) (i.e. \(\alpha_{0}\) expressed as absolute infection risk per individual per day), \(\exp(\gamma_{1})\) (i.e. \(\gamma_{1}\) expressed as relative risk), and \(\psi\) (infection rate per commute visit). Figure 6: Posterior distribution of \(R_{t}\) over the analysis period. Figure 7: Posterior mean spatial reproduction number \(R_{it}\) (left) and Population Attributable Fraction of the infection hazard due to between-LAD mobility (right) as of 31st August 2021. LADs, such as those comprising or close to the major cities, typically have a higher AF than more rural LADs, driven by overall population mobility. Whilst our model captures the national-level baseline transmission rate and inter-LAD connectivity, it also allows local variation in baseline transmission rate through our spatially-correlated term \(\mathbf{s}\). The posterior mean of \(\mathbf{s}\) and exceedance probability \(Pr(s>0)\) is plotted geographically in Figure 8. In general, these results indicate a trend towards higher baseline transmission in less densely populated, rural LADs compared to more populated, urban LADs. Posterior predictive checking of our model was performed by comparing the 90% credible interval of the in-sample smoothing distribution of [IR] transitions against observed case numbers for the last two weeks of our analysis window. Out-of-sample checking was performed by comparing the 90% prediction interval of the 2-week-ahead predictive distribution against the subsequent 2 weeks' worth of case data. These comparisons are plotted for the Lancaster LAD in Figure 9. We see that our model captures both the modest increase and weekly periodicity of the case numbers well, accommodating the apparent "catch-up" in cases seen on a Monday after the propensity to test-and-report less at weekends. 
In practice, these plots are useful to detecting LADs departing from the expected epidemic trajectory early, so as to alert public health authorities to either case surges or unexpectedly low incidence in individual LADs. Figure 8: Posterior mean of \(\mathbf{s}\) (left), and exceedance probability \(q=Pr(s>0)\) (right) for each Local Authority District. ## 6 Discussion In this paper we have developed an MCMC-based method for fitting a discrete-time spatial stochastic SEIR model to geolocated case timeseries data for the COVID-19 pandemic in the UK. We address particularly the challenge of providing local now-casts and short-term forecasts of epidemic spread, based on knowledge of population structure and mobility. Our approach differs from "classic" epidemic models Birrell et al. (e.g. 2021), in that we make no attempt to fit our model to the entire epidemic timeseries (in the case of COVID-19 from the earliest cases in February 2020), but choose to analyse only the most recent 12 weeks of data. This reflects that the fact the epidemic in the UK was highly non-stationary, so that analysing a longer timeseries would likely have provided little further posterior information and run the risk of being misleading. Our model represents a trade-off between its complexity and our ability to fit it to our 380-dimensional timeseries. It was implemented rapidly in response to the pandemic, with the innovation being the constrained-space samplers operating on the censored event data as in Section 3.1. To our knowledge, this is the first time that such an MCMC scheme has been attempted for a discrete-time model such as this. As a principled fully Bayesian stochastic approach to the modelling and inference of spatio-temporal epidemics, the approach has many advantages. The MCMC algorithm works at scale, allowing the fitting of the COVID-19 model at the LAD level whilst incorporating human mobility in a pragmatic way; we remark that the approach is flexible enough to have been trivially extended to using dynamic mobility data, had it Figure 9: Smoothing distribution (left of vertical line) and predictive distribution (right of vertical line) of number of \([\mathrm{IR}]\) events. The smoothing distribution is compared against the observed case data for the Lancaster LAD for the last 2 weeks of the analysis period, 18th June 2021 – 31st August 2021, whilst the predictive distribution is shown for a 2 week prediction horizon with subsequently observed cases superimposed. been available. Importantly, the Bayesian treatment of censored transition event data allows our model to not only provide unbiased parameter inference, but also to provide probabilistic predictions of future case numbers with an improved measure of uncertainty consistent with all sources of modelled noise. Finally, the sole use of publicly available data and open source hardware/operating system-agnostic computational libraries aids portability and automation of the model. Indeed, during the COVID-19 pandemic, we deployed the model as an automatic nightly analysis pipeline requiring no manual intervention other than statistical oversight of the results. Despite the advantages of our approach, however, it is clear that further research is needed to address a number of key limitations as follows. With individual-level epidemic models incorporating a high degree of population heterogeneity, it is common to attempt to estimate the baseline [IR] transition rate \(\gamma_{0}\)(e.g. 
Jewell et al., 2009; McKinley et al., 2009; Chis Ster and Ferguson, 2007). However, it is known that estimating \(\gamma_{0}\) in the presence of censored event times is problematic, requiring an exceptionally efficient sampler and re-parameterisation of the model to approximately orthogonalise \(\gamma_{0}\) with respect to the censored data (Neal and Roberts, 2005). With interpretability of the parameters driving our model construction for this application, however, we chose to fix \(\gamma_{0}=\log 0.25\) based on clinically-derived data. This is a weak assumption compared to other modelling approaches (e.g. Scott et al., 2020), though it limits the capability to detect temporal changes in \(\gamma_{0}\) which might occur as a result of improved case detection or changes in the host-pathogen interaction. Our temporal and spatial random effects, \(\boldsymbol{\alpha}\) and \(\boldsymbol{s}\) respectively, were introduced following the observation that the disease transmission rate appeared to fluctuate across time and space more than could be explained by human mobility and the size of the population at risk. In principle, a spatiotemporal random effect could have been adopted, as is commonly used for disease mapping. However, such methods increase the dimensionality of the latent surface from \(T+M\) to \(TM\) and typically require fast approximate methods to compute the posterior (Schrodle and Held, 2011; Zammit-Mangion et al., 2012). Even if such a method were to be applied to this model, the discrete nature of the censored event space would still require a Metropolis-Hastings approach such as ours, though an extremely efficient proposal would be required to ensure an adequate effective sample size. A further pragmatic assumption adopted by our method is that the case timeseries is a perfect observation of the number of [IR] events per day. Though this assumption might hold incontrovertibly for highly pathogenic disease in small populations (such as foot-and-mouth disease in cattle), for large human populations in which social and demographic factors affect the propensity of an individual to take a test, a stochastic observation process would be preferable. Empirical testing showed that whilst our method could be implemented if the [IR] events were treated as latent - but informed by for example a Binomial model - the resulting algorithm was extremely slow to converge due to the extra censored data and the interactions within the epidemic event space this engendered. The corollary to these limitations is that although our approach certainly has utility in informing disease control policy during an outbreak, the field of epidemic modelling is in urgent need of improved methods for inference in the presence of high-dimensional, correlated, and discrete censored data. Whilst recent developments in particle filtering methods have offered promise for improving inference in individual-level models (e.g. (Ju et al., 2021; Rimella et al., 2023)), dependent stochastic metapopulation models such as ours still present a challenge for high-dimensional importance sampling. In MCMC methodology, recent advances in non-centering to orthogonalise state transition events with respect to the model parameters, these are based on individual-level continuous time models which fail to scale to national-level human population sizes (Pooley et al., 2015, 2019). Finally, all modelling approaches are contingent on reliable and timely data to maximise their utility in prediction and policy advice. 
In this study, we based our analysis on publicly available case incidence data which as argued above is fraught with uncertainty due to changes in individuals' preferences towards testing uptake. However, unbiased prevalence sampling data was collected in the UK during the pandemic, though was unavailable at the LAD-level due to concerns over data privacy (Eales et al., 2022; House et al., 2022). Nevertheless, the ability to use prevalence _as well as_ incidence data for inference conditional on an epidemic model offers the possibility of improving parameter identifiability through observations not only of the 1st-order process (transitions), but also of the 0th-order process (epidemic compartments). Additionally, our approach would have been greatly improved with the addition of real-time human mobility data obtained indirectly through methods such as geolocation via cellular telephony. Such telephony data has been shown to be highly effective as a source of covariate data in epidemic models (Grantz et al., 2020). As it stands, our model is capable of explaining spatial patterns of disease spread via established commuting routes, but cannot separate spatially-homogeneous variation in human mobility due to social determinants (holiday periods, media announcements, etc) from the underlying transmissibility of the virus - the inclusion of accurate time-varying mobility data in place of our fixed commuting data would offer the opportunity to surmount this limitation. We therefore conclude by recommending that due consideration be made to appropriate sharing of these data during a future outbreak emergency, so that the full potential of spatial epidemic models be realised as a resource for policy information and evidence. ## Competing interests The authors declare no competing interests. ACH and CPJ were supported through the Wellcome Trust 'GEM: translational software for outbreak analysis' (grant number UNS73114). ACH, CPJ, JR were supported by MRC through the JUNIPER modelling consortium (grant number MR/V038613/1). CPJ and GOR were supported by EPSRC (grant number EP/V042866/1). We are indebted to the Google Research team, who helped us to implement our model, and fixed bugs in the underlying software libraries swiftly and effectively. The authors would like to thank The High End Computing facility at Lancaster University for providing the facilities required for fitting the models in this paper. The views expressed in this paper are those of the authors and not necessarily those of their respective funders or institutions. ## Data availability All data used in this analysis were public at the time of publication. COVID-19 case data is available from the UK Government Coronavirus website (Public Health England, 2020, 2021). Snapshots of the covariate data used for the analysis may be found in the accompanying software archive at [https://doi.org/10.5281/zenodo.7715657](https://doi.org/10.5281/zenodo.7715657).
2306.16652
TimeClave: Oblivious In-enclave Time series Processing System
Cloud platforms are widely adopted by many systems, such as time series processing systems, to store and process massive amounts of sensitive time series data. Unfortunately, several incidents have shown that cloud platforms are vulnerable to internal and external attacks that lead to critical data breaches. Adopting cryptographic protocols such as homomorphic encryption and secure multi-party computation adds high computational and network overhead to query operations. We present TimeClave, a fully oblivious in-enclave time series processing system: TimeClave leverages Intel SGX to support aggregate statistics on time series with minimal memory consumption inside the enclave. To hide the access pattern inside the enclave, we introduce a non-blocking read-optimised ORAM named RoORAM. TimeClave integrates RoORAM to obliviously and securely handle client queries with high performance. With an aggregation time interval of $10s$, $2^{14}$ summarised data blocks and 8 aggregate functions, TimeClave run point query in $0.03ms$ and a range query of 50 intervals in $0.46ms$. Compared to the ORAM baseline, TimeClave achieves lower query latency by up to $2.5\times$ and up to $2\times$ throughput, with up to 22K queries per second.
K. Bagher, S. Cui, X. Yuan, C. Rudolph, X. Yi
2023-06-29T03:30:53Z
http://arxiv.org/abs/2306.16652v1
# TimeClave: Oblivious In-enclave Time series Processing System ###### Abstract Cloud platforms are widely adopted by many systems, such as time series processing systems, to store and process massive amounts of sensitive time series data. Unfortunately, several incidents have shown that cloud platforms are vulnerable to internal and external attacks that lead to critical data breaches. Adopting cryptographic protocols such as homomorphic encryption and secure multi-party computation adds high computational and network overhead to query operations. We present TimeClave, a fully oblivious in-enclave time series processing system: TimeClave leverages Intel SGX to support aggregate statistics on time series with minimal memory consumption inside the enclave. To hide the access pattern inside the enclave, we introduce a non-blocking read-optimised ORAM named RoORAM. TimeClave integrates RoORAM to obliviously and securely handle client queries with high performance. With an aggregation time interval of \(10s\), \(2^{14}\) summarised data blocks and \(8\) aggregate functions, TimeClave run point query in \(0.03ms\) and a range query of \(50\) intervals in \(0.46ms\). Compared to the ORAM baseline, TimeClave achieves lower query latency by up to \(2.5\times\) and up to \(2\times\) throughput, with up to \(22\)K queries per second. Keywords:Time Series Processing ORAM Intel SGX. ## 1 Introduction Time series data (TSD) are data points collected over repeated intervals, such as minutes, hours, or days. Unlike static data, TSD represents the change in value over time, and analysing it helps understand the cause of a specific pattern or trend over time. Studies have been conducted in various fields on TSD to build efficient time series systems, to name a few, healthcare [1, 2], smart home [1, 3], and smart vehicles [4]. These systems continuously produce massive amounts of TSD that need to be stored and analysed in a timely manner [5, 6, 7], which cannot be efficiently met by general relational database management systems. For this, time series databases (TSDB) have been designed and deployed on cloud platforms to provide a high ingest rate and faster insights [8], such as Amazon TimeStream [9] and InfluxDB [10]. Unfortunately, adopting plaintext TSDBs on cloud platforms to store and process this massive amount of sensitive TSD can lead to critical data breaches, as several incidents have shown that cloud platforms are vulnerable to internal and external attacks [11; 12; 13]. One possible solution to protect TSD in the cloud is to adopt cryptographic protocols such as homomorphic encryption (HE) and secure multi-party computation (MPC) to securely store and process TSD [14; 15; 16; 17; 18; 19]. For example, TimeCrypt [18] adopts partial HE to provide real-time analytics for TSD, while Waldo [19] adopts MPC in a distributed-trust setting. Unfortunately, those solutions have two key limitations. The first is the high computational and network communication cost. As demonstrated in Waldo [19], network communication adds up to \(4.4\times\) overhead to query operations. Similarly, HE is orders of magnitude slower than plaintext processing [20; 21]. The second limitation is that cryptographic protocols, specifically HE, support limited functionalities such as addition and multiplication on integers [21], where performing complex computations on floating-point numbers adds significant overhead and gradually loses accuracy [22]. 
Furthermore, basic functionalities in time series systems (such as \(max\) and \(min\)) require a secure comparison, which is a costly operation using HE and MPC [23]. A more practical solution is to adopt hardware-based approaches such as Intel SGX [24] to process plaintext TSD in a secure and isolated environment in the cloud, i.e., an enclave. Inside the enclave, data is decrypted and processed in plaintext, allowing systems to securely perform arbitrary and complex computations on the data. However, processing TSD within the enclave is not straightforward due to the costly context switch and access pattern leakage. A context switch occurs when a CPU enters or exists the enclave, such as when the enclave requests encrypted data that reside outside the enclave. Several studies report that context switch is up to \(50\times\) more expensive than a system call [25; 26]. Although such overhead can be minimised by adopting Switchless Calls techniques [26], the enclave has to validate the requested data to prevent the untrusted part from tampering with the query. In addition, the enclave needs to decrypt the requested data, which adds a costly overhead, as seen in previous work [27]. SGX-based solutions are also vulnerable to access pattern attacks, where an attacker performs a page table-based attack [28; 29; 30; 31] to observe which memory page is accessed inside the enclave. This leakage allows the attacker to infer which data are being accessed and when [28; 29; 32; 33; 34]. Several studies have shown that such information can recover search queries or a portion of encrypted records [35; 34; 36]. A widely adopted approach to hide access patterns is to store encrypted data in an Oblivious RAM (ORAM). Several solutions combine Intel SGX with ORAM to hide access patterns inside the enclave. For example, Oblix [37] and ZeroTrace [38] deploy the ORAM controller securely inside the enclave while leaving the ORAM tree outside the enclave, which increases the communication between the enclave and the untrusted part. Such communication adds additional overhead to the system due to context switching and additional processing performed by the enclave (e.g., data decryption and validation), which degrades the system's performance. Even when deploying the ORAM inside the enclave, these solutions adopt vanilla ORAMs, which are not optimised to handle non-blocking clients' queries that dominate the workload of read-heavy systems like TSDB. Motivated by the above challenges, we ask the following question: _How can we build an oblivious SGX-based system capable of securely storing TSD and answering clients' queries with high performance while hiding access patterns?_ **TimeClave.** To answer this question, this paper presents TimeClave, an oblivious in-enclave time series processing system that efficiently stores and processes TSD inside the enclave. TimeClave resides entirely inside the enclave and adopts oblivious primitives to provide a fully oblivious in-enclave time series processing system. TimeClave supports oblivious statistical point and range queries by employing a wide range of aggregate and non-aggregate functions on TSD that are widely adopted in the area of time series processing [9, 10, 39], such as \(sum\), \(max\), and \(stdv\). To efficiently protect against access pattern leakage inside the enclave, we introduce Read-optimised ORAM (RoORAM), a non-blocking in-enclave ORAM. As time series systems need to store and query TSD simultaneously, most ORAMs fail to meet such requirements. 
The reason is that ORAMs can perform only one operation at a time, i.e., read or write. Regardless of the required operation, ORAM reads and writes data back to the ORAM tree to hide the operation type. As a result, clients' queries are blocked during a write operation and delayed during read operations, especially in read-heavy systems. An efficient approach to address the previous drawbacks is to decouple read and write operations to handle non-blocking client queries. Existing solutions follow different approaches to support non-blocking read operations and parallel access to the ORAM [40, 41]. However, these solutions require either a proxy server to synchronise the clients with the cloud server [40] or require the client to maintain a locally-cached sub-tree that is continuously synchronised with the cloud server [41]. Such client or third-party server involvement degrades the system's performance and usability, especially in read-heavy systems like time series processing, which require real-time and low-latency query processing. RoORAM adopts and improves PathORAM [42] to handle client queries with better performance. RoORAM achieves this improvement by having separate ORAM trees, a read-only and a write-only tree, to decouple read and write operations. Unlike previous solutions [40, 41], RoORAM is in-enclave and lightweight; hence, it does not require client involvement or a proxy server. RoORAM adopts oblivious primitives inside the enclave to access the ORAM controller obliviously. To avoid the costly context switches, TimeClave integrates RoORAM to efficiently and obliviously store and access TSD inside the enclave, thus eliminating excessive communication with the untrusted part. In detail, TimeClave leverages the fact that TSD is queried and aggregated in an approximate manner to store statistical summaries of pre-defined time intervals inside the enclave. By only storing summarised TSD, TimeClave reduces the size of the ORAM tree, reducing the consumption of enclave memory and the cost of ORAM access. Such a design allows TimeClave to store and process a large amount of TSD with low memory consumption while providing low-latency queries.

**Performance evaluation.** We implemented and evaluated TimeClave on an SGX-enabled machine with 8 cores and 128 GiB RAM (§6). With different query ranges of 8 aggregate functions, TimeClave runs a point query in \(0.03ms\) and a range query of 50 intervals in \(0.46ms\). Compared to the ORAM baseline, TimeClave achieves lower query latency of up to \(2.5\times\) and up to \(2\times\) higher throughput. Finally, TimeClave achieves up to \(17\times\) speed-up when inserting new data compared to the ORAM baseline.

## 2 Background

### Intel SGX

Intel SGX is a set of CPU instructions that allow creating an isolated execution environment, called an enclave. Applications run their critical code and store sensitive data inside the enclave. The enclave is designed to protect sensitive code and data from a malicious host. No other process on the same CPU (except the one that created the enclave), even a privileged one (kernel or hypervisor), can access or tamper with the enclave. SGX-based applications are divided into two parts: untrusted and trusted parts. The two parts communicate with each other using user-defined interface functions, i.e., ECALL and OCALL. An ECALL allows the untrusted part to invoke code inside the trusted part (enclave) and send encrypted data to the enclave. An OCALL allows the enclave to invoke code in the untrusted part, e.g., to send encrypted data out of the enclave.
In addition, SGX provides a remote attestation feature, allowing the client to authenticate the enclave's identity, verify the integrity of the enclave's code and data, and share encryption keys. We refer the readers to [43] for more details about Intel SGX. During remote attestation, the client and the enclave establish a secure channel for secure communication and share their encryption keys.

### ORAM

Oblivious RAM (ORAM) was introduced by Goldreich and Ostrovsky [44] to protect client data on an untrusted remote machine (such as the cloud) against access pattern attacks. The main idea of ORAM is to hide the user's access patterns by hiding the accessed memory addresses on the cloud, hence making the user's accesses oblivious. To achieve this, ORAM continuously shuffles and re-encrypts a portion or all of the user's accessed data in a trusted memory region or on a trusted third-party machine. Since then, many ORAM schemes have been proposed, to list a few, [44, 45, 46, 47, 48, 49]. As of writing this paper, the most notable among them is the widely adopted PathORAM [42]. PathORAM uses a complete binary tree of height \(L=\lceil\log_{2}N\rceil-1\) to store \(N\) encrypted records on untrusted storage, e.g., an untrusted cloud. Every node in the tree is a bucket, where each bucket contains a fixed number (\(Z\)) of encrypted data blocks. The tree contains real blocks (client's blocks) and dummy blocks. Each data block is randomly assigned to a leaf between \(0\) and \(2^{L}-1\). The client maintains a Position Map that tracks the leaf to which each block is mapped in the tree. To access a block in the tree, the client retrieves all blocks on the path from the root node to the leaf node of the required block and stores them in the stash. The client then assigns a random leaf to the accessed/updated block. All blocks are then re-encrypted and inserted into buckets, where empty slots are filled with dummy blocks. The accessed path is then written back to the tree (cloud). As a result, the cloud cannot tell which block is accessed by the client or identify the blocks. In RoORAM, we adopt and enhance PathORAM to support non-blocking queries while achieving high performance with a bit of relaxation on the security guarantee (see §4.9).

### Oblivious Primitives

Oblivious primitives are used to access or move data in an oblivious manner, i.e., without revealing access patterns. For example, accessing the \(i\)-th item in an array can reveal the memory address of the item. However, an oblivious access primitive accesses the item without revealing its address, thereby hiding the memory access pattern. We use a set of data-oblivious primitives based on previous works [50, 51]. We use the following primitives from a library of general-purpose oblivious primitives provided in [52]:

- **Oblivious comparisons**. The following primitives are used to obliviously compare two variables: oless(x,y), ogreater(x,y) and oequal(x,y).
- **Oblivious assignment**. The oassign(cond,x,y) is a conditional assignment primitive, where a value is moved from a source to a destination variable if the condition is true. Typically, oassign is used with the oblivious comparison primitives to compare values and return a result obliviously.
- **Oblivious array access**. The oaccess(op, arr, index) primitive is used to read/write an item in an array without revealing the item's address.
- **Oblivious exists**. The oexists(arr,x) primitive is used to obliviously determine whether a given item exists in a given array or not. oexists achieves this by combining oaccess and oequal.
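To make the semantics of these primitives concrete, the following is a minimal Python sketch. It illustrates only the access pattern (every call touches every candidate location exactly once); the actual primitives in [52] are implemented branchlessly in native code inside the enclave, and CPython offers no constant-time guarantees, so this is an illustration rather than a secure implementation:

```python
# Functional sketches of the oblivious-primitive semantics described above.

def oassign(cond, x, y):
    """Return x if cond else y, without a data-dependent branch (both inputs read)."""
    c = int(bool(cond))
    return c * x + (1 - c) * y

def oequal(x, y):
    return x == y          # stands in for a branchless comparison

def oaccess(arr, index):
    """Read arr[index] by scanning the whole array, hiding which slot is wanted."""
    result = 0
    for i, v in enumerate(arr):
        result = oassign(oequal(i, index), v, result)
    return result

def oexists(arr, x):
    """Oblivious membership test: a full scan combining an oaccess-style sweep with oequal."""
    found = 0
    for v in arr:
        found = oassign(oequal(v, x), 1, found)
    return bool(found)

print(oaccess([10, 20, 30], 1))    # 20
print(oexists([10, 20, 30], 30))   # True
```

Note that oaccess is shown here only in its read form; the library's oaccess(op, arr, index) also supports oblivious writes.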
## 3 System Overview

TimeClave consists of 4 entities: data producers, clients, the server (referred to as the cloud or server), and the SGX enclave running on the server. Data producers, such as sensors or other devices, generate raw time series data and upload them to the server. Clients send encrypted queries to the server. Finally, the server (typically deployed in the cloud) stores the time series data and handles clients' queries.

### Threat Model

We consider a semi-trusted server where an adversary is interested in learning clients' data and queries. Furthermore, we consider an adversary who has full control over the server except for the CPU (Intel SGX). Therefore, the adversary can obtain the encrypted client queries and data, but cannot examine them in plaintext. In addition, the adversary can see the entire memory trace, including the enclave memory (at the page level). We consider trusted data producers and clients; therefore, only authorised users can submit queries to the server. We do not consider DoS attacks or other side-channels on the enclave, e.g., speculative execution [53], voltage changes [54], or cache attacks [55]. We discuss the security of RoORAM in §4.9 and of TimeClave in §5.3.

### Overview of TimeClave

The architecture of TimeClave is depicted in Fig. 1. We can see that TimeClave is partitioned into two parts on the server: untrusted and trusted. The untrusted part runs on the host outside Intel SGX while the trusted part runs inside Intel SGX (referred to as the enclave or enclave code). The role of the untrusted part is to facilitate communication between the client (including the data producer) and the enclave. This is achieved by receiving encrypted requests from the client and sending them to the enclave. To protect data and queries from the cloud, data producers, clients, and the enclave share a secret key \(k\). The raw time series data and queries are encrypted with \(k\) before being sent to the cloud. Note that clients are authenticated and \(k\) is shared during SGX Remote Attestation (RA). Also, clients and data producers authenticate the enclave's identity and verify the code's integrity during RA. As depicted in Figure 1, most of TimeClave's components run inside the enclave. TimeClave stores the data in an ORAM inside the enclave, i.e., RoORAM, to protect against SGX memory access pattern attacks. The reason for deploying TimeClave entirely inside the enclave is that new generations of SGX support up to 1 TB of enclave memory, allowing applications to store more data and run larger applications inside the enclave. TimeClave utilises the enclave's large memory to securely store and process data within the enclave [56]. The trusted and untrusted components of TimeClave cooperate to perform two main functionalities: storing time series data and processing client queries.

Figure 1: TimeClave architecture. The figure illustrates how TimeClave securely and obliviously stores time series data (1), and how it handles clients' queries (2).

**1) Storing time series data**. The data producer generates and encrypts raw time series data and transmits them to the server. A request handler on the untrusted part forwards them to the enclave. The enclave then decrypts the data and stores them temporarily for a pre-defined time interval (\(T\)).
When \(T\) elapses, TimeClave (inside the enclave) calculates the aggregated values (using all supported aggregate functions) for the data points cached in temporary storage and stores the values in a block. Once the block is generated, the raw time series data are discarded, and the block is stored in the ORAM, i.e., RoORAM.

**2) Clients' Query**. The client first generates a query \(\mathcal{Q}=\langle f,(t_{a},t_{b})\rangle\), where \(f\) is the aggregate function and \(t_{a}\) and \(t_{b}\) are the time range. The client then encrypts the query, i.e., \(\mathcal{Q}^{\prime}=\mathcal{E}_{k}(\mathcal{Q})\), where \(\mathcal{E}\) is a symmetric encryption scheme. The encrypted query is then transmitted to the server. Once the request handler receives the query, it sends it to the enclave. The enclave first decrypts the query by computing \(\mathcal{Q}=\mathcal{D}_{k}(\mathcal{Q}^{\prime})\), and then sends it to the query optimiser. The query optimiser is responsible for optimising the number of accessed blocks in the ORAM tree by efficiently accessing and merging blocks from different aggregation intervals (for more details, refer to §5.2). The results are encrypted using the client's key and returned to the client through a secure channel established through RA.

**Supported Aggregate Functions**: TimeClave supports a set of additive aggregates, i.e., sum, count, mean, variance and standard deviation. In addition, TimeClave supports a set of more complex non-additive aggregates, i.e., max and min. TimeClave uses the previous functions on a set of data points to generate summarised blocks for a pre-defined time interval (see §5.1). These functions are used to answer simple and complex queries, where multiple aggregated values are used in the calculations instead of raw data points (see §5.2).

## 4 RoORAM

In this section, we introduce our proposed ORAM, namely RoORAM, which is integrated into TimeClave to provide oblivious data storage inside the enclave capable of handling non-blocking read operations (i.e., clients' queries). Later, we describe how TimeClave efficiently stores time series data in RoORAM and how it is used to realise clients' queries.

### Overview

The main idea of RoORAM is to decouple the eviction process from the read/write operation. This separation allows RoORAM to evict the accessed paths without blocking clients' queries. Inspired by [41], RoORAM performs read operations (queries) on a read-only tree and write operations (writing data and eviction) on a write-only tree. However, unlike [41], RoORAM stores and obliviously accesses the controller and components inside the enclave (as illustrated in Figure 2). Further, RoORAM does not require clients' involvement to maintain a locally-cached sub-tree, nor does it require synchronisation with the server. Such a lightweight design makes RoORAM tailored for in-enclave read-heavy time series processing systems, which require real-time and low-latency query processing. RoORAM uses the dedicated read-only tree to perform query operations only (read operations). This allows RoORAM to perform multiple read operations on the read-only tree (reading multiple paths) before evicting the accessed paths. The retrieved blocks are stored in the stash. After \(\mathcal{R}\) read operations, RoORAM evicts the blocks from the stash to the write-only tree and then synchronises both trees. RoORAM notations are defined in Table 1.
### Structure and Components

**Binary tree**. Similar to PathORAM, RoORAM stores data in a binary tree data structure of height \(L=\lceil\log_{2}N\rceil-1\) and \(2^{L}\) leaves.

**Bucket**. Each node in the tree contains a bucket, where each bucket holds up to \(Z\) real data blocks. Buckets containing fewer than \(Z\) blocks are filled with dummy data.

**Path**. A path represents a set of buckets from the leaf node \(x\) to the root node. \(\mathcal{P}(x)\) denotes the path from leaf node \(x\) to the root and \(\mathcal{P}(x,\ell)\) denotes the bucket at level \(\ell\) along the path \(\mathcal{P}(x)\).

**Block**. A block contains summarised data for a specific pre-defined time interval. Each block is assigned a random path in the tree between \(0\) and \(2^{L}-1\). Accessing a block is achieved by accessing a path \(\mathcal{P}(x)\) on the read-only tree \(Tr_{R}\).

**Stash \(S\)**. When a path is accessed, blocks are stored and kept in the stash \(S\) until batch eviction. During batch eviction, the items in the stash \(S\) are moved to a temporary stash \(S_{tmp}\), allowing query operations to insert blocks into \(S\). The stash and temporary stash have a size of \(O(\log_{2}N)\cdot\mathcal{R}\). Notice that the stash avoids block duplication by storing unique blocks only, while replacing duplicated blocks with dummy data to avoid information leakage.

Figure 2: RoORAM structure and components. The position map (\(pos\)) and the temporary position map (\(pos_{tmp}\)) are stored in recursive PathORAM. \(S\), \(S_{tmp}\), \(S_{L}\) and \(\mathcal{P}_{L}\) are stored in an array and are accessed using the oblivious primitives of §2.3.

**Stash lookup \(S_{L}\).** The stash lookup \(S_{L}\) contains only the IDs of the retrieved blocks and is used to answer whether a block is in the stash or not. The cost of accessing \(S_{L}\) is lower than \(S\), as \(S_{L}\) contains smaller-sized data than \(S\). Similar to \(S\), \(S_{L}\) has a worst-case size of \(O(\log_{2}N)\cdot\mathcal{R}\).

**Position map \(pos\).** The position map stores the path to which each block belongs. The position map is updated every time a block is accessed. RoORAM stores the position map in a recursive PathORAM [42] instead of an array to achieve obliviousness. The reason is that the position map contains a large number of items; therefore, storing these items and accessing them linearly has a high cost compared to a recursive PathORAM.

**Path lookup \(\mathcal{P}_{L}\).** It stores the list of accessed paths \(\mathcal{P}_{L}\) (leaf nodes' IDs) that have been accessed during read operations (queries). \(\mathcal{P}_{L}\) is used during a batch eviction to write the accessed paths back to the tree. RoORAM clears the list after each batch eviction; thus, the maximum size of the list is \(\mathcal{R}\).

### Initialisation

Both the read- and write-trees are initialised with height \(L=\lceil\log_{2}N\rceil-1\). Therefore, each tree contains \(2^{L+1}-1\) buckets, where each bucket is filled with dummy blocks. Position maps are initialised with independent random numbers between \(0\) and \(2^{L}-1\). The stash, temporary stash, and stash lookup are initialised with empty data. The path lookup \(\mathcal{P}_{L}\) is initialised with empty data of size \(\mathcal{R}\).

\begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Notation** & **Meaning** \\ \hline \(N\) & Total number of blocks on the server \\ \hline \(L\) & Height of binary tree, \(L=\lceil\log_{2}N\rceil-1\)
\\ \hline \(B\) & Block size in bytes \\ \hline \(Z\) & Bucket capacity (in blocks) \\ \hline \(\mathcal{P}(x)\) & Path from leaf node \(x\) to the root \\ \hline \(\mathcal{P}(x,\ell)\) & The bucket at level \(\ell\) along the path \(\mathcal{P}(x)\) \\ \hline \(\mathcal{P}_{\mathcal{L}}{}^{*}\) & Path lookup, a list of paths' IDs accessed by read operations \\ \hline \(\mathcal{R}^{*}\) & Eviction frequency, the number of read operations before a batch eviction \\ \hline \(S\) & Read stash \\ \hline \(S_{tmp}{}^{*}\) & Write stash, a temporary stash used during eviction \\ \hline \(S_{L}{}^{*}\) & Stash blocks lookup \\ \hline \(pos\) & Position map used for read operations \\ \hline \(pos_{tmp}{}^{*}\) & Temporary position map used during batch eviction \\ \hline \(Tr_{R}\) & Read-only tree, used for read operations \\ \hline \(Tr_{W}{}^{*}\) & Write-only tree, used during batch eviction \\ \hline \end{tabular} \end{table} Table 1: RoORAM notations. \({}^{*}\) Represents notations introduced by RoORAM.

### Read Operation

```
Input:  b_id - Block id
Output: Summarised data block
function ReadAccess(b_id)
    x <- pos[b_id]
    pos_tmp[b_id] <- uniformRandom(0 ... 2^L - 1)
    if oexists(b_id, S_L) then
        ReadPath(Tr_R, dummy)
        oaccess(write, P_L, dummy)
    else
        ReadPath(Tr_R, x)
        oaccess(write, P_L, x)
    end if
```
**Algorithm 1** Read Operation

The details of ReadAccess are shown in Algorithm 1. It is worth noting that, aside from the distinct design variations between TimeClave and PathORAM, the algorithmic distinctions are also demonstrated in Algorithms 1, 2, and 3. To access block \(a\), given its block ID \(b_{id}\), RoORAM first accesses the position map to retrieve the block's position in \(Tr_{R}\), such that \(x:=pos[b_{id}]\). Second, a new random path is assigned to block \(a\) and stored in the temporary position map (\(pos_{tmp}\)). RoORAM updates \(pos_{tmp}\) instead of \(pos\), as the accessed block will not be evicted before \(\mathcal{R}\) read operations, thus avoiding inconsistent block positions for subsequent queries before a batch eviction. The next step is to check whether block \(a\) is stored in the stash or not by searching \(S_{L}\) with the oblivious primitive oexists (§2.3). If \(S_{L}\) contains \(a\), a dummy path will be accessed; otherwise, path \(x\) is accessed. By doing so, the adversary cannot infer whether block \(a\) is located in the stash (\(S\)) or in the ORAM tree (\(Tr_{R}\)). In both cases, the retrieved blocks of the accessed path are stored in the stash \(S\), and the path ID is tracked in \(\mathcal{P}_{L}\). Finally, block \(a\) is obliviously retrieved from the stash \(S\) with oaccess and assigned to \(d^{\prime}\) with oassign.
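For concreteness, the logic of Algorithm 1 can be sketched in plain, non-oblivious Python. The flattened per-leaf tree, the demo block, and all helper names are illustrative assumptions; the branch on \(S_{L}\) and the stash scan stand in for the branchless oexists/oaccess primitives:

```python
# Toy, non-oblivious sketch of RoORAM's ReadAccess (Algorithm 1).
import random

L = 3                                        # tree height (assumption for the demo)
Tr_R = {leaf: [] for leaf in range(2 ** L)}  # read-only tree, flattened to leaf -> blocks
pos, pos_tmp = {}, {}                        # position map and temporary position map
S, S_L, P_L = [], set(), []                  # stash, stash lookup, path lookup

def read_path(tree, leaf):
    """Return every block stored on the root-to-leaf path (stubbed as per-leaf lists)."""
    return list(tree[leaf])

def read_access(b_id):
    x = pos[b_id]
    pos_tmp[b_id] = random.randrange(2 ** L)           # remap in the *temporary* map only
    if b_id in S_L:                                    # oexists(b_id, S_L)
        blocks = read_path(Tr_R, random.randrange(2 ** L))  # block in stash: dummy path
        P_L.append(None)
    else:
        blocks = read_path(Tr_R, x)                    # real path
        P_L.append(x)
    for bid, data in blocks:                           # keep unique blocks in the stash
        if bid not in S_L:
            S.append((bid, data)); S_L.add(bid)
    for bid, data in S:                                # oaccess over the stash
        if bid == b_id:
            return data
    return None

# demo: place block 7 on leaf 2 and read it twice
Tr_R[2].append((7, "summary@t7")); pos[7] = 2
print(read_access(7))   # fetches path 2
print(read_access(7))   # block already in the stash -> a dummy path is read
```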
### Write Operation

The details of WriteAccess are given in Algorithm 2. The data block to be written, \(data*\), is associated with a time interval \(time\), which is used as the ID of \(data*\). Since the stash is accessed by both read and write operations, adding a block to the stash requires synchronisation using a mutex (i.e., a query lock). Therefore, queries are blocked during a write operation. However, a write operation requires only a few steps, such as adding the block to the stash and updating the position map, which adds a negligible overhead. Note that if the stash is full, RoORAM will automatically evict the blocks (see §4.6).

```
Input: data* - Block data, time - block time interval
function WriteAccess(data*, time)
    QueryLock.lock
    x <- UniformRandom(0 ... 2^L - 1)
    b_id <- time
    pos_tmp[b_id] <- x
    oaccess(write, S, data*)
    oaccess(write, S_L, b_id)
    QueryLock.unlock
end function
```
**Algorithm 2** Write Operation

To write \(data*\) to the tree, RoORAM assigns a random path \(x\) to it by setting \(pos_{tmp}[time]\gets x\) and obliviously adds the block to the stash \(S\) with oaccess. Meanwhile, the block ID \(time\) is added to \(S_{L}\), as \(data*\) is stored in the stash. Unlike other tree-based ORAMs, stash items in RoORAM are not evicted after a write operation. Instead, stash items are evicted in batches after \(\mathcal{R}\) read operations (§4.6).

### Batch Eviction and Trees Synchronisation

RoORAM performs \(\mathcal{R}\) read operations on the read-only tree prior to a batch eviction. As a result, RoORAM needs to write multiple paths at once in a single non-blocking batch eviction. It is known that eviction in PathORAM is an expensive process; hence, it can degrade the query performance. RoORAM addresses this issue by blocking queries only during the execution of critical sections in batch evictions. Note that there are a few steps during eviction where RoORAM needs to block queries. However, these steps have a negligible impact on query performance, making the batch eviction a non-blocking process. As shown in Algorithm 3, RoORAM splits the batch eviction into two phases:

**Path Writing Phase (lines 3-17, Algorithm 3)**. During the eviction phase, RoORAM starts by acquiring a mutex for a short period to swap \(S\) and \(S_{tmp}\). Swapping stash items allows query operations (read operations on the ORAM) to insert blocks into \(S\) while batch eviction is in process. Note that moving stash items is achieved by a simple reference swap instead of swapping data. The eviction process writes all the accessed paths recorded in \(\mathcal{P}_{L}\) to the write-only tree (lines 7 to 17). Specifically, for each path \(p\) in \(\mathcal{P}_{L}\), RoORAM greedily fills the path's buckets with blocks from \(S_{tmp}\) in the order from leaf to root. This order ensures that the blocks are pushed into \(Tr_{W}\) as deep as possible. All non-evicted blocks remain in \(S_{tmp}\) to be evicted in subsequent batch evictions. When RoORAM writes multiple paths to \(Tr_{W}\), there can be an intersection between two paths, at least at the root level. A bucket may be written several times during a batch eviction (e.g., the root node's bucket), causing a bucket collision. RoORAM avoids that by writing every bucket only once. Such an approach can improve performance by reducing the number of evicted buckets. However, it leaks the number of intersected buckets to the adversary. RoORAM prevents such leakage by performing fake accesses to all buckets in the intersected paths.

**Synchronisation Phase (lines 18-24, Algorithm 3)**. At this point, \(Tr_{R}\) needs to be synchronised with \(Tr_{W}\) to reflect the new changes. To synchronise the two trees with minimal query blocking (lines 18 to 24), RoORAM copies only the written changes (i.e., paths) from \(Tr_{W}\) to \(Tr_{R}\) instead of copying the entire tree. In addition, RoORAM copies the changes from \(pos_{tmp}\) to \(pos\) and any non-evicted blocks in \(S_{tmp}\) to \(S\).
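As a complement to the pseudocode, the two-phase eviction can be sketched in non-oblivious Python, continuing the toy flattened-tree state of the ReadAccess sketch above. Note the simplification this flattening implies: a block is evicted here only when its new leaf is among the accessed paths, whereas the real design can place a block into any intersecting bucket along a path:

```python
Tr_W = {leaf: [] for leaf in Tr_R}     # write-only tree, same flattened layout

def batch_evict():
    """Two-phase batch eviction (cf. Algorithm 3) over the toy state above."""
    global S
    S_tmp, S = S, []                   # phase 1: swap stash references (query lock held briefly)
    S_L.clear()
    for leaf in {p for p in P_L if p is not None}:   # each intersected path written once
        Tr_W[leaf] = [(b, d) for b, d in S_tmp if pos_tmp.get(b) == leaf]
        S_tmp = [(b, d) for b, d in S_tmp if pos_tmp.get(b) != leaf]
        Tr_R[leaf] = list(Tr_W[leaf])  # phase 2: copy only the written paths back
    pos.update(pos_tmp); pos_tmp.clear()   # publish the remapped positions
    S.extend(S_tmp)                    # non-evicted blocks wait for a later eviction
    S_L.update(b for b, _ in S)
    P_L.clear()
```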
### Memory Consumption

RoORAM uses two separate ORAM trees to handle non-blocking queries. As a result, it consumes twice the memory of the ORAM baseline. In detail, TimeClave maintains two separate ORAM trees to store the data blocks, two stashes, two position maps and the list of accessed paths. Each tree has a height \(L=\lceil\log_{2}N\rceil-1\). Therefore, each tree is able to store up to \(2^{L+1}-1\) buckets. Since each bucket contains \(Z\) blocks, the number of blocks stored in the tree is \((2^{L+1}-1)\cdot Z\). Therefore, each tree requires \((2^{L+1}-1)\cdot Z\cdot B\) bytes. The stash (\(S\)) and stash lookup (\(S_{L}\)) require \(O(\log_{2}N)\cdot\mathcal{R}\cdot B\) bytes. The path lookup (\(\mathcal{P}_{\mathcal{L}}\)) requires \(\mathcal{R}\cdot 4\) bytes (since each path requires 4 bytes).

### RoORAM Efficiency Analysis

RoORAM's operation overheads involve four main operations: accessing and updating the temporary position map \(pos_{tmp}\), stash lookup table access \(S_{L}\), read-only tree \(Tr_{R}\) path reading, and path lookup access \(\mathcal{P}_{L}\). Each of these operations is associated with a computational cost. For read operations (§4.4), accessing and updating \(pos_{tmp}\) and the path reading from \(Tr_{R}\) both involve an asymptotic cost of \(O(\log N)\) due to the recursive PathORAM and its position map inside the enclave. Accessing \(S_{L}\) costs \(O(\log N)\cdot\mathcal{R}\), while accessing \(\mathcal{P}_{L}\) costs \(O(\mathcal{R})\). Therefore, the overall cost for a read operation is \(O(\log N)\), yielding similar asymptotic complexity to PathORAM. Nevertheless, RoORAM shows up to 2.5 times higher performance (§6). This enhancement is due to RoORAM's design of decoupling the non-blocking read operations from the non-blocking eviction process. The write operation (§4.5) accesses \(pos_{tmp}\), \(S\), and \(S_{L}\), i.e., \(O(\log N)+O(\log N)\cdot\mathcal{R}+O(\log N)\cdot\mathcal{R}\), where \(\mathcal{R}\) is a constant. Consequently, the overall cost is \(O(\log N)\). Finally, path writing during batch eviction requires \(O(\log N)\), and tree synchronisation, which involves copying \(O(\log N)\cdot\mathcal{R}\) items, leads to an overall cost of \(O(\log N)\).
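As a quick sanity check of the per-tree memory formula above, the following snippet plugs in the evaluation parameters later reported in §6 (the MiB conversion and rounding are ours):

```python
# Per-tree memory: (2^(L+1) - 1) * Z * B bytes, doubled for the two trees.
import math

N, Z, B = 8640, 4, 40                       # 24h of 10s blocks; bucket and block sizes from §6
L = math.ceil(math.log2(N)) - 1             # tree height -> 13
per_tree = (2 ** (L + 1) - 1) * Z * B       # bytes per tree
print(L, round(2 * per_tree / 2 ** 20, 2))  # 13, ~5.0 MiB for both trees
```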
### Security of RoORAM

To prove the security of RoORAM, we adopt the standard security definition for ORAMs from [57]. An ORAM is said to be secure if, for any two data request sequences, their access patterns are computationally indistinguishable by anyone but the client. RoORAM is similar to PathORAM except for two main points: 1) RoORAM stores all components in the enclave, whereas PathORAM stores the stash and position map in the client; 2) PathORAM evicts the stash data after each access, while RoORAM performs batch eviction after \(\mathcal{R}\) read operations. Therefore, the security definition of RoORAM is captured in the following theorem.

Theorem 4.9: _RoORAM is said to be secure if, for any two data request sequences \(\boldsymbol{y_{1}}\) and \(\boldsymbol{y_{2}}\) of the same length, their access patterns \(A(\boldsymbol{y_{1}})\) and \(A(\boldsymbol{y_{2}})\) are computationally indistinguishable by anyone but the enclave._

Proof: To prove the security of RoORAM, we now provide the following analysis. Storing and accessing the stash, position map, and other components of RoORAM within the enclave do not leak additional information to the adversary because they are all accessed with oblivious primitives. Based on the security of the oblivious primitives, the adversary cannot infer which item is accessed in the position map \(pos\), the temporary position map \(pos_{tmp}\), the stash \(S\), the temporary stash \(S_{tmp}\), the stash lookup \(S_{L}\), and the path lookup \(\mathcal{P}_{L}\). Evicting the stash after \(\mathcal{R}\) reads can improve the performance of RoORAM with a bit of relaxation on the security guarantee, compared to the security of PathORAM. Note that the security of RoORAM can reach the standard security via extra dummy accesses, which will be explained later. Let \(\boldsymbol{y}=(a_{\mathcal{R}},\ldots,a_{1})\) be a sequence of read accesses of size \(\mathcal{R}\). The adversary sees a sequence \(A(\boldsymbol{y})=(pos_{\mathcal{R}}[a_{\mathcal{R}}],pos_{\mathcal{R}-1}[a_{\mathcal{R}-1}],\ldots,pos_{1}[a_{1}])\). It is possible for the adversary to distinguish some read accesses based on the access pattern, e.g., \(\boldsymbol{y_{1}}=(a_{1},a_{1},\ldots,a_{1})\) and \(\boldsymbol{y_{2}}=(a_{1},a_{2},\ldots,a_{\mathcal{R}})\) (here we assume that \(a_{1}\), \(a_{2}\), ..., and \(a_{\mathcal{R}}\) are stored in different leaf nodes). Recall that within \(\mathcal{R}\) reads, if the required block is stored in the stash, a random path will be accessed; otherwise, the required path will be accessed. Assume that the tree contains \(M=2^{L}\) paths. In the example, \(pos[a_{1}]\), \(pos[a_{2}]\), ..., and \(pos[a_{\mathcal{R}}]\) must be different. For \(\boldsymbol{y_{1}}\), any two paths are the same with a probability of \(1/M\). However, as long as \(A(\boldsymbol{y})\) contains one pair of the same path, \(\boldsymbol{y}\) must be \(\boldsymbol{y_{1}}\), and the probability of that is \(1-C_{M}^{\mathcal{R}}/M^{\mathcal{R}}\), which is \(0\) for \(\boldsymbol{y_{2}}\). In other words, if \(A(\boldsymbol{y})\) does not contain any path repetition, there is a higher probability that \(\boldsymbol{y}\) is \(\boldsymbol{y_{2}}\); otherwise it must be \(\boldsymbol{y_{1}}\). This issue can be avoided by always accessing a path that has not been accessed since the last round of eviction whenever the required block is stored in the stash. In this case, \(A(\boldsymbol{y})\) never contains repeated paths no matter what \(\boldsymbol{y}\) contains, and we just need to ensure \(\mathcal{R}<M\), which is always the case in practice. As a result, with the extra dummy access for each read, the adversary cannot distinguish between one sequence and the other.

## 5 TimeClave

In this section, we first describe how TimeClave efficiently generates summarised time series data blocks and how these blocks are stored inside RoORAM. Then, we describe how TimeClave utilises these blocks to efficiently answer clients' queries. In TimeClave, a system administrator is responsible for initialising the system by setting the system parameters, including the block size \(B\), bucket capacity \(Z\), block generation interval \(T\), and eviction frequency \(\mathcal{R}\). Subsequently, TimeClave continuously receives data points from the data producer, generates data blocks, and answers clients' queries.

### Block Generation

TimeClave stores the TSD as summarised data blocks based on the supported aggregate functions instead of the raw data points. A block summarises raw data points of a pre-defined time interval, i.e., \([t_{i},t_{i+1})\) with a fixed interval \(T=t_{i+1}-t_{i}\). Larger intervals provide lower query accuracy but higher performance with less storage.
To support multiple accuracy levels and higher query performance, TimeClave generates blocks at different time intervals, i.e., aggregation intervals \(V\), where \(V=[T_{1},T_{2},\ldots]\) (see §5.2). The generated data blocks are stored in RoORAM. Each block contains the aggregated values of the supported aggregate functions for \([t_{i},t_{i+1})\). A block is represented by an array, where each item in the array holds one of the aggregated values. By storing summarised blocks, TimeClave reduces the ORAM tree size and the query latency.

### Query Realisation

TimeClave receives encrypted queries from clients in the form \(Q=E_{k}(\langle f,(t_{a},t_{b})\rangle)\) for a range query and \(Q=E_{k}(\langle f,t_{a}\rangle)\) for a point query, where \(f\) is the aggregate function to be executed over the time interval from \(t_{a}\) to \(t_{b}\). In the following subsections, we describe how TimeClave handles point and range queries and how they can be extended to support complex analytics.

**Point Queries**: When TimeClave receives a query, it first decrypts the query, i.e., \(Q^{\prime}=D_{k}(Q)\), where \(k\) is the client's key. TimeClave then extracts \(t_{a}\) from \(Q^{\prime}\) and retrieves the data block using RoORAM. Once the block is retrieved, TimeClave uses the aggregates position map to find the location of the requested \(f\)'s value. For example, assume that \(f=AVG\); TimeClave will access the \(avg\) value at index 0 of the retrieved block (assuming that \(avg\) is located at index 0). Finally, the results are encrypted using the client's key and sent back to the client.

**Range Queries**: TimeClave follows the same approach that is used with point queries to answer range queries. However, range queries involve retrieving multiple blocks and multiple aggregated values, instead of one. The retrieved values must be fed into the aggregation function to return the final result to the client. Notice that for functions such as min or max, using the retrieved values to calculate the final results is a straightforward process. The reason is that the maximum of the aggregated (summarised) values represents the maximum of the underlying data points. However, this is not valid for other functions, such as the average and variance. To find the average over multiple aggregated values, we use the weighted average, which is given by the formula

\[avg=\frac{\sum_{i=1}^{n}bw_{i}\cdot bx_{i}}{\sum_{i=1}^{n}bw_{i}}\]

where \(bw\) is the number of data points in the block (count), \(bx\) is the aggregated average value for the block, and \(n\) is the number of blocks retrieved by the query. The above formula uses only the count and average values from each retrieved block and performs the calculation in a single pass. TimeClave adopts similar approaches for all supported aggregation functions to answer range queries with high performance without compromising accuracy.

**Complex Analytics**: TimeClave can also combine several aggregate values to answer complex range queries. This is possible because TimeClave retrieves all blocks within the queried time interval \((t_{a},t_{b})\). Unlike cryptographic approaches, blocks' values are stored in plaintext inside the enclave. Hence, one can easily combine and perform arithmetic operations on the aggregate values.
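To illustrate how range queries merge per-block summaries without touching raw points, here is a small Python sketch; the BlockSummary layout is an assumption for illustration, with fields following the paper's description (count \(bw_{i}\), average \(bx_{i}\), min, max):

```python
# Merging per-block summaries for a range query: weighted average per the
# formula above; min/max merge directly because the per-block extremum
# bounds the underlying raw points.
from dataclasses import dataclass

@dataclass
class BlockSummary:
    count: int     # bw_i: number of raw points summarised by the block
    avg: float     # bx_i: per-block average
    min: float
    max: float

def merge(blocks):
    total = sum(b.count for b in blocks)
    return {
        "count": total,
        "avg": sum(b.count * b.avg for b in blocks) / total,  # weighted average
        "min": min(b.min for b in blocks),
        "max": max(b.max for b in blocks),
    }

print(merge([BlockSummary(10, 2.0, 0.5, 4.0), BlockSummary(30, 6.0, 1.0, 9.0)]))
# {'count': 40, 'avg': 5.0, 'min': 0.5, 'max': 9.0}
```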
TimeClave can also, in principle, support sketch algorithms such as Count-Min, Bloom filters, and HyperLogLog. A single- or multi-dimensional sketch table can be flattened and represented by a one-dimensional array. This allows TimeClave to store a sketch table inside the data block as a range of values (instead of a single aggregate value).

**Query Optimisation**: In RoORAM, query latency increases linearly with the number of accessed blocks in range queries. The reason is that each block access in RoORAM is independent of the preceding and subsequent accesses. Such an approach allows RoORAM to offer a stronger leakage profile but degrades query performance. To prevent this performance drawback, RoORAM optimises queries by reducing the number of accessed blocks. RoORAM achieves this by maintaining multiple ORAM trees with different time intervals \(V=[T_{0},T_{1},T_{2},\ldots]\), where \(T_{i-1}<T_{i}<T_{i+1}\), with \(T_{i}\in V\). The query optimiser works by examining the client's query \(Q=\langle f,(t_{a},t_{b})\rangle\) and determining the optimal combination of aggregation intervals to minimise the total number of accessed blocks. For example, assume that \(T_{1}=60s\) and \(T_{2}=10s\); hence, every block in the \(T_{1}\) tree summarises 6 blocks of the \(T_{2}\) tree. Assume that \(Q=\langle SUM,(0,70)\rangle\). Processing this query without optimisation requires accessing 7 blocks in the \(T_{2}\) tree, while with query optimisation, TimeClave will access 1 block from each tree, totalling 2 blocks. Each aggregation interval requires a separate ORAM tree; therefore, \(T_{i}\) determines the size of the tree. Note that a larger \(T_{i}\) offers higher query performance with lower accuracy and less storage. Therefore, such a parameter can be optimised as per the application's requirements.
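The interval decomposition behind this example can be sketched as a simple greedy procedure; plan_accesses and its alignment assumption are illustrative rather than the exact optimiser:

```python
# Greedy sketch of the query optimiser's interval decomposition.
def plan_accesses(t_a, t_b, intervals):
    """Return (interval, start) block accesses covering [t_a, t_b).
    Assumes query endpoints are aligned to the finest aggregation interval."""
    step = min(intervals)
    assert t_a % step == 0 and t_b % step == 0
    plan, t = [], t_a
    while t < t_b:
        # pick the largest tree whose aligned block fits entirely in the range
        T = max(T for T in intervals if t % T == 0 and t + T <= t_b)
        plan.append((T, t))
        t += T
    return plan

print(plan_accesses(0, 70, [60, 10]))  # [(60, 0), (10, 60)] -> 2 blocks instead of 7
```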
### Security of TimeClave

TimeClave handles input parameters and queries securely inside the enclave, i.e., parameters and queries are encrypted using the clients' key and can only be decrypted by the enclave. In addition, TimeClave adopts and integrates RoORAM to obliviously store and access time series data inside the enclave. Therefore, the security definition of TimeClave is captured in the following theorem.

Theorem 5.1: _TimeClave (§5) is secure given that RoORAM (§4) is proven to be secure in Theorem 4.9._

Proof: Given that TimeClave handles data securely inside the enclave, the adversary cannot examine the system parameters or client queries in plaintext. In detail, the adversary cannot examine the following: the block size, bucket size \(Z\), the total number of blocks \(N\), the eviction frequency \(\mathcal{R}\), and the query function. Although the adversary can observe which tree is accessed during query optimisation (§5.2), this will only reveal \(T\) but not the exact query range (i.e., \((t_{a},t_{b})\)). The reason that TimeClave does not protect the block generation time interval \(T\) is that the adversary can easily observe such an access pattern during block generation. In addition, TimeClave does not protect the tree, stash, path lookup, or stash lookup sizes, as the adversary can monitor the memory pages and infer them. Finally, reading and writing blocks from/into RoORAM is oblivious and secure, as discussed in §4.9. Furthermore, all other operations performed by TimeClave, i.e., block generation and the access to the required aggregated values from the accessed blocks, are also oblivious, as they are performed with the oblivious primitives of §2.3. Therefore, TimeClave does not leak any memory access pattern to the server.

## 6 Evaluation

In this section, we evaluate TimeClave while asking the following questions:

1. What is the performance of TimeClave compared to the non-oblivious version and the ORAM baseline?
2. How do the internal components of RoORAM and the levels of aggregation affect its performance?

**Implementation**. We implemented TimeClave and RoORAM in \(\sim\)4,000 lines of code. Inside the enclave (server-side), we use the trusted cryptography library provided by the SGX SDK, i.e., _sgx_tcrypto_ [58]. As mentioned earlier in §3.2, we implemented different aggregate functions where the value of each function is represented by a single item in the data block. We store values in an array of floating-point numbers (4 bytes per number) in the data blocks.

**Experimental Setup**. We evaluated TimeClave on a local network. For the server, we use an SGX-enabled server running on an Intel® Xeon® E-2288G CPU @ 3.70 GHz with 8 cores and 128 GiB RAM. The enclave size is 256 MB. We simulate a client with 1 vCPU and 3.75 GB memory. We do not include the network latency in our evaluation, as our goal is to measure the performance of TimeClave on the most critical part, i.e., the server side.

**Dataset**. We use the time series benchmark suite [59] to generate a CPU utilisation dataset. The dataset contains a single attribute (i.e., CPU usage). We initialise TimeClave to store 24 hours of readings (i.e., CPU usage), where each data block in the ORAM tree represents 10s of readings (i.e., \(T=10s\)). Each block stores 10 aggregate values, and each value consumes 4 bytes (\(B=40\) bytes). Each bucket in the tree stores 4 blocks (\(Z=4\)). Therefore, the height of each tree in RoORAM is \(L=13\).

### Baselines

**ORAM baseline**. In this paper, we introduce and integrate our RoORAM into TimeClave to achieve obliviousness with minimal query blocking inside the enclave. We evaluated RoORAM against the widely adopted ORAM, i.e., PathORAM [42]. We integrate both PathORAM and RoORAM into TimeClave for evaluation. For a fair comparison, as done in RoORAM, we stored the stash, position map, and the tree in plaintext inside the enclave for PathORAM. Moreover, we store 4 blocks in each bucket (\(Z=4\)) for both ORAMs. However, PathORAM performs the eviction after each read/write access.

**Non-oblivious baseline**. To understand the overhead of oblivious operations, we evaluate TimeClave without RoORAM and oblivious operations (referred to as non-oblivious). In detail, TimeClave generates data blocks and stores them inside the enclave in a non-oblivious data structure. Thus, we replace RoORAM with non-oblivious storage. For the position map, TimeClave stores block IDs in a key-value structure with a pointer to the block's physical address. To answer clients' queries, TimeClave retrieves the required block ID from the position map and accesses (and aggregates) data blocks using non-oblivious operations. Such a setup allows us to evaluate the overhead of obliviousness in TimeClave, i.e., RoORAM and the oblivious operations.

### Evaluation Results

**Query latency**. To understand TimeClave's performance, Table 2 shows the query latency for different query ranges and block sizes. We evaluated TimeClave using reasonably small block sizes, since the size represents the number of supported aggregates, where a block size of \(2^{7}\) can store up to 32 aggregate values. As expected, the query latency increases linearly with the query range.
The reason is that for each queried time interval, TimeClave needs to access one path in RoORAM and retrieve \((L+1)\cdot Z\) blocks from the ORAM. Similarly, the larger the block size is, the more overhead it adds to the query performance, which is expected in a tree-based ORAM. Fig. 3(a) shows the query latency breakdown with a block size \(B=2^{7}\). The majority of the overhead is due to ORAM and oblivious operations for point and range queries. By omitting the SGX overhead (as it is consistent across different query ranges), the ORAM operations comprise between 80-85% of the query latency. On the other hand, the overhead of oblivious operations comprises between 15-20% of the query latency. Note that the ORAM overhead is reduced in TimeClave by query optimisation, as shown in §6.2.

**Eviction frequency**. TimeClave evicts the blocks from the stash for every \(\mathcal{R}\) read operations. Fig. 3(b) demonstrates how \(\mathcal{R}\) affects the query latency. With a fixed query range of 32, the query has a negligibly higher latency when the value of \(\mathcal{R}\) is smaller than the query range, i.e., \(\mathcal{R}\leq t_{b}-t_{a}\). However, the latency drops significantly when \(\mathcal{R}\) is larger than the query range. This is because RoORAM performs read operations from the ORAM with fewer evictions.

\begin{table} \begin{tabular}{l|l l l l l} \hline \hline Range (\(R_{T}\)) & \multicolumn{5}{c}{_Block size (bytes)_} \\ & \(B=2^{3}\) & \(B=2^{4}\) & \(B=2^{5}\) & \(B=2^{6}\) & \(B=2^{7}\) \\ \hline 1 & 0.03 & 0.03 & 0.03 & 0.03 & 0.04 \\ 10 & 0.12 & 0.12 & 0.12 & 0.13 & 0.15 \\ 20 & 0.19 & 0.2 & 0.21 & 0.22 & 0.26 \\ 30 & 0.28 & 0.28 & 0.29 & 0.31 & 0.38 \\ 40 & 0.37 & 0.36 & 0.38 & 0.41 & 0.49 \\ 50 & 0.46 & 0.46 & 0.46 & 0.5 & 0.59 \\ \hline \hline \end{tabular} \end{table} Table 2: TimeClave query latency (ms) for point and range queries with different block sizes \(B\). Note that \(R_{T}\) is a range of intervals, where each interval represents a single aggregated data block.

Figure 3: A) Query latency breakdown with different query ranges (\(R_{T}\)). SGX overhead includes context switch and memory allocation inside the enclave. B) Query latency with different eviction frequencies (\(\mathcal{R}\)) and query range = 32 intervals.

**Aggregation Intervals**. In Fig. 4, we show how aggregation levels reduce query latency in TimeClave. We set \(T=1\)s as the baseline in this evaluation, i.e., TimeClave generates 1 block per second. Noticeably, query latency decreases with larger aggregation intervals as fewer accesses occur to the ORAM. When \(T=20\)s, the speedup achieved in the query latency is \(67\times\) with \(20\times\) less memory consumption (compared to \(T=1\)s). Note that TimeClave supports multiple aggregation levels by maintaining a separate ORAM tree for each level. Although maintaining multiple aggregation levels consumes additional memory, such overhead can be neglected due to the large EPC size in SGX v2.

**Comparison with baselines**. Fig. 5(b) illustrates how TimeClave's query latency compares to the baselines. The latency grows linearly with the query range for TimeClave and the baselines. For point queries, the latency overhead of TimeClave and the ORAM baseline compared to the non-oblivious TimeClave is \(1.5\times\) and \(2.5\times\) respectively. RoORAM achieves higher performance than the ORAM baseline due to its non-blocking read operations and efficient batch eviction design.
For range queries, TimeClave and the ORAM baseline add up to \(12\times\) overhead to the query latency compared to the non-oblivious TimeClave. As discussed earlier, the majority of the overhead is due to the ORAM access and the oblivious operations in TimeClave and the ORAM baseline. However, TimeClave remains substantially faster than the ORAM baseline, by \(1.7-2\times\) for both range and point queries.

#### 6.2.1 Query Throughput

In Fig. 5(a), we compare TimeClave's query throughput with the baselines. For point queries, TimeClave achieves up to 22K ops/s compared to the ORAM baseline, which achieves up to 12K ops/s. For range queries, TimeClave achieves higher throughput than the ORAM baseline by up to \(2\times\), i.e., 1.5K ops/s for \(R_{T}\geq 20\). Similar to the query latency, both TimeClave and the ORAM baseline add up to \(6\times\) overhead compared to the non-oblivious TimeClave. As mentioned earlier (§6.2), the majority of the overhead is caused by ORAM access and oblivious operations. Such overhead can be reduced in TimeClave by maintaining multiple aggregation intervals, which reduces the number of accessed paths in RoORAM. For this, we avoid large query ranges in our evaluation, as the main goal of TimeClave is to summarise TSD and maintain multiple aggregation intervals to achieve low query latency.

Figure 4: Query latency of different aggregation intervals.

#### 6.2.2 Memory Consumption

RoORAM uses two separate ORAM trees to handle non-blocking queries. As a result, it consumes twice the memory of the ORAM baseline. In detail, TimeClave maintains two separate ORAM trees to store the data blocks, two stashes, two position maps, and the list of accessed paths. For example, assume \(T=10s\), \(\mathcal{R}=4\) and \(B=40\) bytes. To store 24h of time series data, TimeClave will generate \(N=8,640\) blocks. Recall from §4.7 that RoORAM initialises two ORAM trees, and each tree requires \((2^{L+1}-1)\cdot Z\cdot B\) bytes, i.e., \(\sim\)5MB for both trees to store a single attribute (e.g., CPU, memory). To maintain multiple aggregation intervals, e.g., \(V=[10s,60s]\), RoORAM will consume \(\sim\)5.6MB while achieving up to \(6\times\) query speedup with only \(0.12\times\) memory overhead. Although TimeClave incurs this memory overhead, it remains low; moreover, TimeClave utilises the new generation of SGX-enabled processors, which support up to 1TB of enclave memory [56].

#### 6.2.3 TimeClave Compared to Cryptographic Approaches

We evaluate TimeClave against the cryptographic solution TimeCrypt [18], which supports aggregate functions over encrypted time series data. Without query optimisation, TimeClave achieves up to \(16\times\) lower query latency when the number of queried blocks is below 6,000, while TimeCrypt exhibits an almost constant latency of around \(185ms\). With query optimisation using multiple aggregation intervals, TimeClave demonstrates a significant improvement in performance in comparison to TimeCrypt, by orders of magnitude.

Figure 5: TimeClave query latency and throughput compared to baselines.

\begin{table} \begin{tabular}{p{85.4pt}|p{85.4pt} p{85.4pt}} \hline \(V\) & Memory (MB) & Speedup \\ \hline \([10s]\) & 5.0 & up to \(1\times\) \\ \([10s,60s]\) & 5.6 & up to \(6\times\) \\ \([10s,120s]\) & 5.3 & up to \(12\times\) \\ \hline \end{tabular} \end{table} Table 3: Memory consumption using different aggregation intervals, with the expected query latency speedup when the same time interval is queried.
Although the query latency for TimeClave increases with the number of queried blocks, querying 8,000 blocks still results in a query latency that is approximately \(200\times\) lower than TimeCrypt's.

## 7 Related work

### Secure Time Series Processing

Cryptographic protocols have been widely adopted in building secure databases [60, 61, 62] to execute expressive queries on encrypted data, while another line of work leverages SGX, such as Oblidb [63], EncDBDB [64], EnclaveDB [65] and Oblix [37]. However, these solutions either incur significant performance overhead or are not optimised for time series processing. The works most related to TimeClave in secure time series processing are TimeCrypt [18], Zeph, and Waldo [19]. TimeCrypt and Zeph employ additive homomorphic encryption to support aggregated queries on encrypted data. However, both solutions are non-oblivious, allowing the adversary to learn sensitive information by recovering search queries or a portion of the encrypted records [34, 36]. On the other hand, Waldo [19] offers a stronger security guarantee than TimeCrypt and Zeph by hiding query access patterns. As Waldo adopts MPC, the network bandwidth adds significant overhead to the query latency. TimeClave eliminates such overhead while providing fully oblivious query processing. Unlike previous solutions, TimeClave can be easily scaled to support complex analytics as it processes TSD in plaintext inside the enclave.

### ORAM with Intel SGX

Another line of prior work has explored and combined SGX and ORAM to build secure storage. For example, ZeroTrace [38] develops a generic oblivious block-level ORAM controller inside the enclave that supports multiple ORAMs. Additionally, ZeroTrace focuses on hiding memory access patterns inside the enclave while leaving ORAM storage outside the enclave. Similarly, Oblix [37] builds an oblivious search index for encrypted data by using SGX and PathORAM. Oblix designs a novel data structure (ORAM controller) to hide access patterns inside the enclave. Likewise, Oblix stores the ORAM storage on the server in unprotected memory (outside the enclave). Moreover, Obliviate [66] and POSUP [67] adopt SGX and ORAM to develop secure and oblivious filesystems to read and write data from a file within an enclave. Obliviate is optimised for ORAM write operations by parallelising the write-back process to improve performance. MOSE [68] adopts Circuit-ORAM for a multi-user oblivious storage system with access control. Like previous solutions, MOSE stores the ORAM controller inside the enclave while leaving the ORAM tree outside. Although MOSE parallelises the ORAM read process, clients' queries are blocked until the accessed blocks are evicted. Unlike TimeClave, previous solutions are not optimised for handling multi-user, non-blocking queries in the time series context.
### Plaintext Time Series Processing

Many solutions have been proposed to store and process TSD in the plaintext domain, to name a few: Amazon TimeStream [9], InfluxDB [10], Monarch [69], and Gorilla [70]. These solutions focus mainly on delivering low-latency real-time queries and efficient storage. For example, Gorilla [70] is an in-memory database for TSD that is optimised for heavy read and write operations. It introduces a new compression scheme for timestamps and floating-point values for efficient TSD storage and achieves up to \(70\times\) lower query latency than on-disk storage. Monarch [69] is used to monitor and process time series data in a geo-distributed architecture. In addition, it supports in-memory data aggregation for higher query performance. Unfortunately, adopting these solutions for sensitive time series systems on cloud platforms can lead to critical data breaches [12].

## 8 Conclusion and Future Work

In this work, we presented TimeClave, a secure in-enclave time series processing system. While previous works [18, 19] adopt cryptographic protocols, TimeClave leverages Intel SGX to store and process time series data efficiently inside the enclave. To hide the access pattern inside the enclave, we introduce an in-enclave read-optimised ORAM named RoORAM capable of handling non-blocking client queries. RoORAM decouples the eviction process from the read/write operations. TimeClave achieves a lower query latency of up to \(2.5\times\) compared to our ORAM baseline and up to \(5.7-12\times\) lower query latency than previous works. While TimeClave supports a wide range of aggregate functions that are supported by time series processing systems [9, 10, 39], it does not support all of their functionalities (e.g., filtering, grouping, and joining). Note that TimeClave can be easily extended to support such functionalities and expressive queries. This is because TimeClave stores and processes data inside the enclave in plaintext, allowing it to perform complex operations efficiently. However, such a direction requires careful implementation and optimisation to maintain TimeClave's obliviousness inside the enclave. Finally, TimeClave can further improve its query performance by parallelising each ORAM read access, which can be a valuable direction for future work.

## Acknowledgement

The authors would like to thank the anonymous reviewers for their valuable comments and constructive suggestions. The work was supported in part by the ARC Discovery Project (DP200103308) and the ARC Linkage Project (LP180101062).
2301.12493
Graph Mixer Networks
In recent years, the attention mechanism has demonstrated superior performance in various tasks, leading to the emergence of GAT and Graph Transformer models that utilize this mechanism to extract relational information from graph-structured data. However, the high computational cost associated with the Transformer block, as seen in Vision Transformers, has motivated the development of alternative architectures such as MLP-Mixers, which have been shown to improve performance in image tasks while reducing the computational cost. Despite the effectiveness of Transformers in graph-based tasks, their computational efficiency remains a concern. The logic behind MLP-Mixers, which addresses this issue in image tasks, has the potential to be applied to graph-structured data as well. In this paper, we propose the Graph Mixer Network (GMN), also referred to as Graph Nasreddin Nets (GNasNets), a framework that incorporates the principles of MLP-Mixers for graph-structured data. Using a PNA model with multiple aggregators as the foundation, our proposed GMN has demonstrated improved performance compared to Graph Transformers. The source code is available publicly at https://github.com/asarigun/GraphMixerNetworks.
Ahmet Sarıgün
2023-01-29T17:03:00Z
http://arxiv.org/abs/2301.12493v1
# Graph Mixer Networks

###### Abstract

In recent years, the attention mechanism has demonstrated superior performance in various tasks, leading to the emergence of GAT and Graph Transformer models that utilize this mechanism to extract relational information from graph-structured data. However, the high computational cost associated with the Transformer block, as seen in Vision Transformers, has motivated the development of alternative architectures such as MLP-Mixers, which have been shown to improve performance in image tasks while reducing the computational cost. Despite the effectiveness of Transformers in graph-based tasks, their computational efficiency remains a concern. The logic behind MLP-Mixers, which addresses this issue in image tasks, has the potential to be applied to graph-structured data as well. In this paper, we propose the Graph Mixer Network (GMN), also referred to as Graph Nasreddin Nets (GNasNets), a framework that incorporates the principles of MLP-Mixers for graph-structured data. Using a PNA model with multiple aggregators as the foundation, our proposed GMN has demonstrated improved performance compared to Graph Transformers. The source code is available publicly at [https://github.com/asarigun/GraphMixerNetworks](https://github.com/asarigun/GraphMixerNetworks).

## 1 Introduction

Graph Neural Networks (GNNs) are a powerful tool for working with graph-structured data, which is data made up of entities and their relationships. GNNs have been used to solve a wide range of problems, such as node classification, link prediction, graph generation, and many others. They have attracted great interest in recent years due to their performance and their ability to extract complex information. Graph Convolutional Networks (GCN) [1] are a type of graph neural network (GNN) that uses graph convolutional layers to process data represented as graphs. GCNs can be used for various tasks such as node classification [2], graph classification [3], and link prediction. In each graph convolutional layer, the node features are updated by aggregating the features of the neighboring nodes. This is done through a convolution operation, where a linear combination of the neighboring node features is applied to each node, followed by a non-linear activation function. The Graph Isomorphism Network (GIN) [4] is another type of GNN. GIN is able to distinguish non-isomorphic graphs. GIN consists of multiple layers of neural networks, where each layer aggregates the features of the neighboring nodes using a sum pooling operation, followed by a multi-layer perceptron (MLP). GIN can be used for various tasks such as node classification, graph classification, and link prediction. Graph Attention Networks (GAT) [5] are a GNN that uses attention mechanisms to assign different importance to different neighboring nodes when aggregating their features. Each node in GAT has a self-attention mechanism that allows it to weigh its own features and the features of its neighboring nodes in a learnable way. GAT can be used for various tasks such as node classification, graph classification, and link prediction. Message Passing Neural Networks (MPNN) [6] are a class of GNNs that generalize the idea of message passing between nodes in a graph. In MPNNs, messages are passed between nodes in the graph, and each node updates its state based on the messages received from its neighbors. MPNNs can be used for various tasks such as node classification, graph classification, and link prediction.
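As a concrete illustration of the graph-convolution update described above, here is a small numpy sketch of one layer; the symmetric normalisation is the common formulation from [1], while the toy graph and feature dimensions are assumptions for the demo:

```python
# One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)  # symmetric degree normalisation
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)  # ReLU

A = np.array([[0., 1., 0.],                          # toy 3-node path graph
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 4)                            # node features
W = np.random.randn(4, 2)                            # learnable weights
print(gcn_layer(A, H, W).shape)                      # (3, 2)
```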
GCN, GIN, GAT, and MPNN are types of GNNs, each with their own characteristics and capabilities. GCN uses graph convolutional layers, GIN uses a sum pooling operation, GAT uses attention mechanisms to assign importance to different neighboring nodes, and MPNNs generalize the idea of message passing between nodes; all of them can be applied to node classification, graph classification, and link prediction. Transformer networks have been particularly successful in natural language processing tasks such as machine translation, text summarization, and language understanding. The key innovation of transformer networks is the use of self-attention mechanisms. In a transformer network, each element in a sequence (such as a word in a sentence) is processed by attending to all the other elements in the sequence. This allows the network to weigh the importance of different elements in the sequence when making a prediction. Vision transformers [7] are a type of transformer network designed for computer vision tasks, adapting the transformer architecture from natural language processing to handle image data. The input image is divided into non-overlapping patches that are processed individually using self-attention mechanisms. The key advantage of vision transformers is their ability to handle images of arbitrary size, unlike traditional convolutional neural networks [8], which require a fixed image size. Although transformers have shown superior performance on image tasks, they have quadratic computational complexity. MLP-Mixers [9; 10] were first used in image tasks and have been shown to perform well using only MLPs, without the quadratic complexity of the attention mechanism. In this work, we propose Graph Mixer Networks (or Graph Nasreddin Networks), a novel Graph Neural Network that applies the MLP-Mixer to graphs. It is shown to have lower computational complexity than transformers and comparable performance to baseline models. The source code is available publicly at [https://github.com/asarigun/GraphMixerNetworks](https://github.com/asarigun/GraphMixerNetworks).

## 2 Graph Mixer Networks

The proposed Graph Mixer Networks (or Graph Nasreddin Networks) method leverages the increased expressivity of multi-aggregator models such as PNA [11] and the MLP-Mixer [9; 10]. The GMN architecture is given in Figure 1.

### Motivation

Graph Transformers use the attention mechanism. Within the self-attention mechanism, the dot product between Query and Key makes the computational complexity \(O(n^{2})\). For this reason, MLP-Mixers, which are applied in image tasks, are seen to be more computationally efficient and to perform better than Vision Transformers [9; 10]. That is why this study considers the MLP-Mixer for improving computational efficiency on graphs.

### Multi-aggregators

Multiple aggregations are generalizations of sum aggregation within single aggregators and have been shown theoretically and empirically to be better at discriminating graphs. Therefore, in this study, we simultaneously use multiple aggregators, such as the min, max, and mean aggregators used in the PNA model.

Degree Scalers. In GMN, degree scalers amplify or decrease signals based on a node's degree, allowing for more flexibility in the model. The scaling factor \(S\) depends on an amplification factor \(\alpha\) and the ratio between the node degree \(d\) and the average degree \(\delta\) in the training set, as formalized in Equation 1 below.

Figure 1: Mixer Layer of the Graph Mixer Network, where \(T\) denotes the transpose operator.
In our research, we tested amplification factors of \(-1\), \(0\), and \(1\), which respectively result in attenuation, no change, and signal amplification based on the node's degree.

\[S(d,\alpha)=\left(\frac{\log(d+1)}{\delta}\right)^{\alpha},\quad d>0,\ -1\leq\alpha\leq 1 \tag{1}\]

Combining Aggregators. As in PNA, we incorporate the various aggregators and degree scalers, combined via \(\otimes\) (tensor product) into a general aggregation function \(\oplus\), to enhance the network's flexibility.

\[\oplus=\left(\begin{array}{c}I\\ S(D,\alpha=1)\\ S(D,\alpha=-1)\end{array}\right)\otimes\left(\begin{array}{c}Max\\ Min\\ Mean\end{array}\right) \tag{2}\]

### MLP-Mixer

In this work, we used the Mixer block recommended in MLP-Mixer, which is used for the image task, as the Mixer layer in GMN. The input \(x\) first goes through a layer normalization and is then transposed by the \(T\) operator. The transposed input \(x^{T}\) passes through a linear layer, is transposed back, and is merged with the input \(x\) through a residual connection. It then passes through layer normalization, a linear layer, and a residual connection once more. The MLP-Mixer block can be summarized as the following Equation 3:

\[Mix=MLP_{2}(LayerNorm((MLP_{1}((LayerNorm(x))^{T}))^{T}+x))+x \tag{3}\]

### Graph Mixer Network

As in PNA, the features of a node and its neighbor, \(h_{i}^{k}\) and \(h_{j}^{k}\), are concatenated, passed through a linear layer containing learnable parameters, and then passed through the multiple aggregators and scalers. We take the embedding features obtained as a result of this process as \(x\). This \(x\) enters the Mixer block, just as in the MLP-Mixers applied to the image task. Figure 1 shows the components of the Mixer block, and Equation 3 shows the Mix operator. As a result, the update function \(h_{i}^{k+1}\) is obtained as in Equation 4.

\[h_{i}^{k+1}=Mix(h_{i}^{k},\oplus(h_{i}^{k},h_{j}^{k},e_{ij}^{k})) \tag{4}\]

## 3 Experiments

We evaluated the performance of GMN models on the ZINC [3] dataset. The performance of GMN was compared with Message Passing Neural Networks (MPNN) [6] and the Graph Transformer.

### Dataset

Our method was trained using the ZINC dataset, which is a dataset for predicting the solubility of chemical compounds through graph regression. The compounds in the dataset are represented as graphs, with atoms as nodes and bonds between atoms as edges. The ZINC dataset includes 12,000 molecules, with atom counts ranging from 9 to 37. The performance of the method was evaluated using the mean absolute error (MAE) metric.

### Results

We trained models using multiple aggregators (mean, max, min). Finally, we compared our results with well-known baseline methods in the literature. The results are given in Table 1. Our results have shown improved performance over attention-based methods.
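To make the aggregation and mixing steps concrete, below is a minimal PyTorch sketch of Equations (1)-(4). All function and module names are our own illustration, not the official GraphMixerNetworks implementation, and single linear layers stand in for \(MLP_{1}\) and \(MLP_{2}\) for brevity.

```python
import torch
import torch.nn as nn


def degree_scaler(agg, deg, delta, alpha):
    """Equation (1): S(d, alpha) = (log(d + 1) / delta) ** alpha."""
    s = (torch.log(deg + 1.0) / delta) ** alpha      # (num_nodes,)
    return agg * s.unsqueeze(-1)                     # scale each node's aggregate


def combine_aggregators(neighbors, deg, delta):
    """Equation (2): {I, S(alpha=1), S(alpha=-1)} applied to {max, min, mean}.

    `neighbors` holds the neighbor features of each node, (num_nodes, k, dim).
    """
    aggs = [neighbors.max(dim=1).values,
            neighbors.min(dim=1).values,
            neighbors.mean(dim=1)]
    out = []
    for a in aggs:
        out += [a,                                        # identity scaler
                degree_scaler(a, deg, delta, alpha=1.0),  # amplification
                degree_scaler(a, deg, delta, alpha=-1.0)] # attenuation
    return torch.cat(out, dim=-1)                    # (num_nodes, 9 * dim)


class MixerBlock(nn.Module):
    """Equation (3): token mixing on the transposed input, then channel mixing."""

    def __init__(self, num_tokens, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp1 = nn.Linear(num_tokens, num_tokens)    # mixes across tokens
        self.mlp2 = nn.Linear(dim, dim)                  # mixes across channels

    def forward(self, x):                            # x: (num_tokens, dim)
        x = self.mlp1(self.norm1(x).T).T + x         # transpose, mix, transpose back
        return self.mlp2(self.norm2(x)) + x
```

Per Equation (4), a GMN layer would then feed the node feature \(h_{i}^{k}\), together with the combined neighborhood aggregation, through such a Mixer block to obtain \(h_{i}^{k+1}\).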
## 4 Discussion and Conclusion

The experiment shows that it is possible to create a powerful transformer-style graph regressor without using attention layers. Additionally, the MLP-Mixer model has a significant advantage over the Graph Transformer in terms of complexity [9; 10], as it is linear in relation to the sequence length instead of quadratic. This is achieved through the use of an intermediate projection dimension within the feed-forward layer applied to aggregated learnable embeddings. The performance of the GMN is lower than that of GraphGPS because the positional encoding [14] is limited and the number of parameters is quite low. In addition to its lower performance, the main disadvantage of the MLP-Mixer model is that it can only operate on sequences of a fixed length (as a result of the feed-forward layer applied to aggregated learnable embeddings). While this is not a problem in the image domain, it can be a limitation for graph neural networks because graphs do not have a fixed data structure. This study shows that MLP-Mixers are effective for graph regression. Future research should focus on understanding the specific roles of other parts of the MLP-Mixer, such as interpretability or the initialization scheme. Additionally, the author hopes this report inspires further investigation into the underlying reasons for the performance of current models.

## Acknowledgements

The author expresses his gratitude to Ahmet S. Rifaioglu, Gokhan Ozsar and Mehmet Volkan Atalay, not only for encouraging him to write this paper but also for the valuable insights and discussions.
2307.11214
FairMobi-Net: A Fairness-aware Deep Learning Model for Urban Mobility Flow Generation
Generating realistic human flows across regions is essential for our understanding of urban structures and population activity patterns, enabling important applications in the fields of urban planning and management. However, a notable shortcoming of most existing mobility generation methodologies is neglect of prediction fairness, which can result in underestimation of mobility flows across regions with vulnerable population groups, potentially resulting in inequitable resource distribution and infrastructure development. To overcome this limitation, our study presents a novel, fairness-aware deep learning model, FairMobi-Net, for inter-region human flow prediction. The FairMobi-Net model uniquely incorporates fairness loss into the loss function and employs a hybrid approach, merging binary classification and numerical regression techniques for human flow prediction. We validate the FairMobi-Net model using comprehensive human mobility datasets from four U.S. cities, predicting human flow at the census-tract level. Our findings reveal that the FairMobi-Net model outperforms state-of-the-art models (such as the DeepGravity model) in producing more accurate and equitable human flow predictions across a variety of region pairs, regardless of regional income differences. The model maintains a high degree of accuracy consistently across diverse regions, addressing the previous fairness concern. Further analysis of feature importance elucidates the impact of physical distances and road network structures on human flows across regions. With fairness as its touchstone, the model and results provide researchers and practitioners across the fields of urban sciences, transportation engineering, and computing with an effective tool for accurate generation of human mobility flows across regions.
Zhewei Liu, Lipai Huang, Chao Fan, Ali Mostafavi
2023-07-20T19:56:30Z
http://arxiv.org/abs/2307.11214v1
# FairMobi-Net: A Fairness-aware Deep Learning Model for Urban Mobility Flow Generation ###### Abstract Generating realistic human flows across regions is essential for our understanding of urban structures and population activity patterns, enabling important applications in the fields of urban planning and management. However, a notable shortcoming of most existing mobility generation methodologies is neglect of prediction fairness, which can result in underestimation of mobility flows across regions with vulnerable population groups, potentially resulting in inequitable resource distribution and infrastructure development. To overcome this limitation, our study presents a novel, fairness-aware deep learning model, FairMobi-Net, for inter-region human flow prediction. The FairMobi-Net model uniquely incorporates fairness loss into the loss function and employs a hybrid approach, merging binary classification and numerical regression techniques for human flow prediction. We validate the FairMobi-Net model using comprehensive human mobility datasets from four U.S. cities, predicting human flow at the census-tract level. Our findings reveal that the FairMobi-Net model outperforms state-of-the-art models (such as the DeepGravity model) in producing more accurate and equitable human flow predictions across a variety of region pairs, regardless of regional income differences. The model maintains a high degree of accuracy consistently across diverse regions, addressing the previous fairness concern. Further analysis of feature importance elucidates the impact of physical distances and road network structures on human flows across regions. With fairness as its touchstone, the model and results provide researchers and practitioners across the fields of urban sciences, transportation engineering, and computing with an effective tool for accurate generation of human mobility flows across regions. Human Mobility, Fairness-aware Prediction, Urban Mobility Flows ## 1 Introduction Assessing human mobility is crucial to urban studies and planning as it provides valuable insights into how people move within urban areas[1, 2, 3]. Human mobility analysis captures everyday commuting patterns, periodic travels, and exceptional movements during events or emergencies [4, 5, 6]. Understanding these dynamics enables urban planners to design infrastructure, public transportation systems, and public spaces to meet the needs of the population efficiently and sustainably. Furthermore, by considering mobility patterns, planners can enhance the livability of cities, reduce congestion, improve air quality, and promote social equity [7, 8, 9]. The assessment of human mobility also plays a vital role in urban resilience, helping cities prepare for and adapt to challenges, such as climate change, demographic shifts, or pandemic outbreaks [10, 11, 12]. Human mobility generation and prediction, an emergent field of study fueled by the ubiquity of mobile phones and location acquisition technologies, is fundamentally changing the way we comprehend human behavior and dynamics in urban settings. These technologies have enabled examination of people's mobility patterns, such as the places frequented and their visitation patterns, which shed light on lifestyles [13, 14, 15]. Importantly, these lifestyles can act as a predictive tool for individual and collective behavior, a facet extensively explored in marketing, transportation, health, psychology, sociology, among other fields of study. [16, 17, 18, 19]. 
Human mobility generation and prediction enable evaluation of future changes in population flows and movements as cities grow and evolve. In addition to prediction of future changes in human flow patterns, human mobility prediction and generation can provide reliable estimates of current human mobility flows when human mobility data are not obtainable due to privacy or availability issues [20, 21, 22]. A plethora of methodologies have been employed in the realm of human mobility prediction, each with its unique advantages and applications. From mathematical models, such as gravity models and radiation models [23], to machine learning (ML) techniques, including deep learning and support vector machines [24, 25, 26], the field is abundant with diverse methods, each offering a unique lens through which to understand human mobility. For example, gravity models and radiation models are founded on principles of physics. Gravity models, inspired by Newton's law of gravity, posit that the movement between two places is directly proportional to the size of their populations and inversely proportional to the distance between them [27]. Radiation models, on the other hand, are inspired by diffusion processes, assuming individuals move based on the opportunities in their immediate vicinity [23]. Both of these models have been traditionally employed to provide a macroscopic understanding of human mobility; however, recent studies have shown the shortcomings of these models in reliable generation of human mobility flows at finer spatiotemporal scales [28]. In recent years, machine learning methods have offered nuanced insights into individual and collective mobility patterns. For example, support vector machines (SVMs) offer a robust approach to classify data and make predictions, particularly for smaller, well-separated datasets [24]. Deep learning, a subset of machine learning leveraging neural networks with multiple layers (deep structures) to learn data representations and patterns, has also been used for relevant tasks [29]. In human mobility prediction, deep learning techniques have shown the potential to model complex and nonlinear relationships, extract patterns from high-dimensional data, and forecast future states [30].
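For illustration, the gravity model described above reduces to a one-line formula. The scaling constant and exponents in the sketch below are illustrative free parameters that would normally be fitted to observed flows; the names are ours.

```python
import numpy as np

def gravity_flow(pop_i, pop_j, distance, k=1.0, beta=1.0, gamma=2.0):
    """Estimated flow T_ij = k * (P_i * P_j)^beta / d_ij^gamma."""
    return k * (pop_i ** beta) * (pop_j ** beta) / (distance ** gamma)

# Example: two tracts of 10,000 and 5,000 residents, 3 km apart.
print(gravity_flow(10_000, 5_000, 3.0))
```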
For instance, a biased mobility model might under-perform and yield inaccurate predictions in rural regions, leading to misjudgment and under-investment in necessary infrastructure or services, thereby exacerbating regional disparities. The notion of fairness from the perspective of distributive justice entails an even-handed distribution of predictions, resources, or outcomes across various societal groups. Therefore, a fair human mobility prediction model would not over- or under-perform for a certain group based on their inherent characteristics, thus leading to equitable resource allocation and decision-making [34]. Nevertheless, despite the criticality of fairness considerations in human mobility predictions, the current lack of fairness-aware models not only underscores a significant gap in the effectiveness of the existing models, but it also directly undermines decision-making processes in aspects such as resource allocation, infrastructure development, and environmental planning, potentially propagating inequitable outcomes across various societal groups. To address this important gap, this study proposes a novel fairness-aware deep learning model (FairMobi-Net) for human flow prediction. The model is based on a specialized variant of the multi-layer perceptron that incorporates multiple sources of input features and adopts a source-oriented layer structure as its foundational architecture. FairMobi-Net employs a three-stage approach to predict human mobility flow by combining the outcomes of binary classification and numeric regression. Furthermore, the concept of fairness loss is introduced into the model's loss function to ensure fair outcomes across groups with varying income differences. To demonstrate the effectiveness of our proposed model, experiments are conducted using fine-grained real-world human mobility datasets in four U.S. cities. The results demonstrate that the FairMobi-Net model can outperform the state-of-the-art models (including the Deep Gravity Model) in terms of achieving more accurate as well as fairer human flow prediction across a variety of region pairs. When compared to the baseline models, FairMobi-Net is capable of producing predictions with comparable levels of accuracy consistently across different regions, demonstrating our model's advantage in maintaining a balance between high prediction accuracy and fairness. The interpretations of feature importance also reveal certain features, such as distance and road network structures, as the important factors shaping human flow across regions.
The model and findings will be particularly valuable to various academic disciplines and diverse stakeholder practitioners: (1) FairMobi-Net provides urban planners and transportation engineers with a novel method to fairly generate population flows for a variety of region pairs, regardless of the differences in median household income between regions; (2) the ability of the model to make a good trade-off between prediction accuracy and fairness ensures that the resultant mobility flows are equally accurate for equitable decision-making, such as infrastructure development and environmental planning; (3) the proposed Fairness Loss Function in the model provides a novel method for urban-computing researchers to improve the fairness performance of ML models in other urban applications; and (4) the evaluation of features that shape human mobility flows informs urban scientists and geographers about influential factors, including social and built-environment features, that contribute to the distribution of mobility flows across a city and subsequent outcomes, such as congestion, access, public health, air pollution, and economic activities.

## 2 Results

### Dataset Collection and Experiment Settings

For the experiments, we collect human mobility datasets from Spectus, which is a location intelligence and measurement platform collecting mobility data of anonymized devices. Data from about 15 M active users are collected by Spectus in the United States. Previous studies have demonstrated the high demographic representativeness of the Spectus dataset [35, 36, 37]. Specifically, our human mobility datasets are collected at the census-tract level in four U.S. metropolitan areas:

* 82,198 human mobility flows from the Atlanta Metropolitan Statistical Area (MSA).
* 507,994 human mobility flows from Harris County in Houston.
* 137,019 human mobility flows from King County in Seattle.
* 36,208 human mobility flows from Suffolk County in New York.

The data processing and modeling are performed using an NVIDIA RTX A6000 GPU. Based on the properties of the distributions, the learning rates for the Atlanta MSA, Harris County, King County and Suffolk County are set to 5e-4, 1e-3, 1e-3, and 1e-3, respectively. We chose Adam [38] as our optimizer with a weight decay of 5e-4, and train each model for 1000 epochs.

### Human flow prediction

We conduct experiments for human flow prediction at the census-tract level within the above four areas. We adopt 60% of the datasets for model training, another 20% for validation and hyperparameter fine-tuning, and the remaining 20% for performance testing. The experiments aim to evaluate the effectiveness of the model in terms of the accuracy and fairness of human flow prediction. For comparison, we used the following models as baselines:

* Deep Gravity (DG): the latest state-of-the-art model for human flow prediction [39].
* FairMobiNet-NoFL: a revised version of our proposed model FairMobi-Net, obtained by removing the Fairness Loss from the FairMobi-Net loss function, to evaluate the effectiveness of introducing the fairness loss (which is a core contribution of this study).

To determine the weight of the fairness loss in the loss function, the Lagrange multiplier coefficient of the Fairness Loss needs to be determined by grid search.

Figure 1: **Selection of Lagrange multiplier coefficients.** For each region, we select the Lagrange multiplier coefficient that yields the lowest PDP for FairMobi-Net.
As shown in Figure 1, we iteratively test the Lagrange multiplier coefficient within the range of [0,1] with 0.1 intervals, and select the optimal value that yields the minimum PDP. Table 1 summarizes the performances of the models. It shows that the FairMobi-Net model can achieve accurate and fair predictions across different regions. For example, in Atlanta, FairMobi-Net has NRMSE = 0.076, 25.5% lower than DG (NRMSE = 0.102) and 23.2% lower than FairMobiNet-NoFL (NRMSE = 0.100), showing our model's advantage in achieving more accurate human flow predictions than the baseline models. Figures 2 and 3 compare the ground-truth human flow with the flow predicted by the respective models, which also shows, through visual interpretation, that the human flow network predicted by FairMobi-Net is more similar to the ground-truth human flow. Moreover, the PDP achieved by the FairMobi-Net model is 0.118, significantly lower than that of DG (PDP = 0.607), indicating that the performance of our model is fairer among different subpopulation communities than that of the DG model. The above results demonstrate that the model outperforms the state-of-the-art baseline models, both in terms of human flow prediction accuracy and fairness performance. Similar results can be obtained in Harris County and King County (Table 1): FairMobi-Net yields better accuracy (lower NRMSE, MAE, and JSD, and higher Corr.) and fairness (lower PDP) of prediction than the other models. The models' performance, spatially depicted in Figure 4, varies across the study regions, which may be due to the variations in regional characteristics, such as land use patterns, public facility availability, and road network configuration. This result suggests that the complex interplay of these diverse factors, which are inherently region-specific, may influence human flow dynamics and, subsequently, the predictive capacity of the model. In Suffolk County, the investigated models show similar prediction accuracy: the NRMSE of our model is 0.090, 31.3% lower than the 0.131 of DG; on the other hand, our model achieves a significantly lower PDP (0.081 by FairMobi-Net) than that of DG (PDP = 0.198), proving the FairMobi-Net model's particular advantage in achieving equal prediction across different communities.

Figure 3: **Spatial Comparison of Observed and Predicted Human Mobility Flows.** The human mobility flows with different amounts are represented in different colors: blue for flows ranging from 1 to 3, orange for flows ranging from 4 to 9, and red for flows larger than 10. The visual interpretation clearly shows that the human flows predicted by our FairMobi-Net are more consistent with the observed flows than those of DG.

Further investigation of the models' performance among different communities (Figure 5) shows that our model's MAEs among different groups have less fluctuation than those of DG. The variances of the MAE obtained by our model are 0.089 (Atlanta), 0.031 (Harris County), 0.029 (King County), and 0.012 (Suffolk County), less than those of DG (0.122 (Atlanta), 0.043 (Harris County), 0.033 (King County), 0.016 (Suffolk County)).

Fig. 4: **Spatial Comparison of Models' Accuracy.** The average MAEs of FairMobi-Net and DG are calculated for the respective census tracts. The third column plots the relative improvement of FairMobi-Net over DG in terms of MAEs.

The results elucidated in Figure 5 and the corresponding variances underscore a remarkable consistency in the performance of our model across diverse communities.
Our model not only demonstrates lower Mean Absolute Errors (MAEs), but also yields reduced fluctuations across regions when compared to the DG models. This consistency is manifested in significantly lower variances, notably in Atlanta, Harris County, King County, and Suffolk County. Such accurate and equitable performance across different groups substantiates the superior adaptability of our model, making it a more reliable and fair choice over DG models in various settings and demographics. The introduction of Fairness Loss into the loss function leads to a great improvement in the models' fairness performance. The most significant improvement is observed in Atlanta, where the PDP is improved from 0.528 (by FairMobiNet-NoFL) to 0.118 (by FairMobi-Net). A similar pattern is also consistent among the other study regions. These marked improvements imply that the introduction of Fairness Loss serves to reduce the disparity, ensuring that the model is more equitable in its predictive performance as well. Notably, the fairness element of the model aids in attenuating potential bias that may be inadvertently entrenched in the modeling process, thereby helping to accomplish more balanced and fair outcomes.

Figure 5: **MAEs by FairMobi-Net and DG for Each Group.** We plot the MAEs by FairMobi-Net and DG for census tract groups with various income differences (i.e., \(a_{1}\), \(a_{2}\), \(a_{3}\)). The data indicate that, across all four study areas, the MAEs generated by FairMobi-Net exhibit fewer fluctuations than those produced by DG. This suggests that the proposed FairMobi-Net is capable of delivering predictions with a more uniform level of accuracy across varying groups.

Since this trend is not confined to Atlanta but reflects a consistent pattern among the other study regions as well, it demonstrates the generalizability of the Fairness Loss addition across a diverse set of contexts and scenarios. The results attest to the robustness of the Fairness Loss strategy in facilitating model fairness, regardless of the location or area under consideration. A broad range of predictive models can benefit from these substantial advancements in performance equality by incorporating Fairness Loss. This innovation, therefore, opens up a promising avenue for advancing the development of predictive models for other urban computing applications that not only maintain high performance but also ensure the fair treatment of the various subjects, areas, and groups they aim to predict for.

### Assessing the Influence of Urban Features on Human Flow Predictions

The understanding and prediction of human flow between regions is a complex task that involves many factors. The interpretation of feature importance in the FairMobi-Net model provides insights into how these factors shape the human flow. We used SHAP (SHapley Additive exPlanations) values to explain the output of our machine learning model and highlight the impact of various factors (Figure 6).

Fig. 6: **SHAP Plots for Study Areas.** On the graph, the x-axis represents the SHAP value, which signifies the degree to which a particular feature impacts the predictions. The color bar corresponds to the feature values themselves. Features prefixed with 'i' pertain to the origin census tract, while those prefixed with 'j' are derived from the destination census tract.

Intuitively, and consistent with the distance decay law of human mobility, distance emerged as the most influential factor in predicting human flow between regions, which is aligned with previous findings [39].
This result aligns with the intuitive assumption that the larger the distance between regions, the less human flow between them. The high SHAP values of the distance factor, consistent across three out of four regions (i.e., Atlanta MSA, Harris County, Suffolk County), affirm the substantial role of the distance decay law of human mobility. This could be attributed to factors such as travel costs and time, which tend to increase with distance, thus discouraging high levels of human flow between distant regions. However, an interesting deviation from this general trend was observed in King County, where the total street length of the destination region was found to be the most influential factor, relegating distance to second place. This result could imply that the structural aspects of a region, represented here by street length, can significantly affect the human flow, perhaps even more so than the geographical distance. This might be because a larger total street length could indicate a more developed or dense urban environment, thus attracting more human flow. In addition to distance and total street length, multiple other road network-related factors, such as the total length of streets, the total count of street segments, and the total count of intersections, also showed considerable influence on the results. These factors might contribute to the fluidity of traffic and the accessibility of different regions, thereby affecting the volume of human flow. This underlines the importance of considering not just geographical distance but also the quality and characteristics of infrastructure when predicting human flow. Conversely, other features such as land use, socioeconomic variables, and Points-of-Interest (POIs) generally exhibited minor influence on the human flow prediction. The direction of influence of these features also varied from region to region, indicating a possible interaction effect with local characteristics. For instance, certain types of land use or POIs might attract more human flow in one region due to cultural, demographic, or economic reasons. In summary, while distance commonly plays a significant role in predicting human flow between regions, our findings underscore the importance of regional infrastructure, especially road networks, in shaping this flow. The influence of other factors, such as land use, socioeconomic variables, and POIs, appears to be more context-dependent, requiring further exploration for localized models. As we continue to refine our model, understanding these varying influences will help us increase its predictive accuracy and contribute more effectively to urban planning and policy making. This capability is particularly important for modeling mobility flows at finer spatial and temporal resolution. As the spatial and temporal resolution of human mobility prediction tasks increases, there is a need to consider additional features beyond the commonly known ones, such as distance, to achieve better accuracy and fairness performance.

## 3 Problem Statement and Methodology

### Problem Statement

This study aims to predict the human flow between census tracts. Mathematically, this can be modeled as:

\[y_{ij}=F(X_{i},X_{j},X_{i,j}) \tag{1}\]

where \(X_{i}\) represents the properties of the origin census tract \(i\); \(X_{j}\) represents the properties of the destination census tract \(j\); \(X_{i,j}\) are the communal features shared by census tracts \(i\) and \(j\); and \(y_{ij}\) is the predicted human flow between census tracts \(i\) and \(j\).
Consequently, the goal of the model is to find an optimal mapping function \(F\) from \((X_{i},X_{j},X_{i,j})\) to \(y_{ij}\). The specifications for \(F\) and the feature construction for \((X_{i},X_{j},X_{i,j})\) are described in the Methodology section below.

### Methodology

The model is designed to deliver highly accurate and fair predictions of human mobility flow. Our predictions are generated for pairs of census tracts, and by fairness, we mean that the accuracy of these predictions should be consistent across all tract pairs, irrespective of the disparity in income levels.

### Proposed Model

As shown in Figure 7, the input layer of our model is divided into three individual layer blocks. These blocks start with the local features \(X_{i}\) from the origin census tract \(i\), the local features \(X_{j}\) from the destination census tract \(j\), and the communal features \(X_{i,j}\) between the census tracts, respectively (\(i,j\in 1,2,\ldots,N\), where \(N\) is the total number of census tracts within the region). Each block consists of a fully-connected layer, a Gaussian Error Linear Unit (GELU) [40] activation function, a batch normalization layer, and a dropout layer to introduce non-linearity. The features from the two census tract blocks are then combined by addition to match the topology of the features from the shared properties, and they are stacked together for further multiple block runs. Compared to previous models, our model is designed for gravity-like patterns in the flow data, is more task-oriented, and places greater emphasis on zero-inflated discrete distributions. In particular, the prediction of human flow by our model is a three-stage process.

**Stage One: Binary classification.** In reality, the majority of census tracts have no flow with other tracts, resulting in a predominance of zero values in the training dataset. To handle this issue of imbalanced datasets, in this stage, our model is formulated as a binary classifier:

\[B_{ij}=M_{1}(X_{i},X_{j},X_{i,j}) \tag{2}\]

where \(B_{ij}\) is a binary value (0 or 1) that distinguishes whether there is flow between the census tracts.

**Stage Two: Numeric regression.** In this stage, our model is formulated as a regression model:

\[N_{ij}=M_{2}(X_{i},X_{j},X_{i,j}) \tag{3}\]

where \(N_{ij}\) is a numeric value indicating the possible inter-tract human flow volume.

**Stage Three: Human flow prediction.** In this stage, the outputs of the previous stages are combined to give the final prediction of human flow:

\[y_{ij}=F(X_{i},X_{j},X_{i,j})=B_{ij}\cdot N_{ij} \tag{4}\]

where \(y_{ij}\) is the final predicted human flow between tracts \(i\) and \(j\).
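A minimal PyTorch sketch of this three-stage pipeline is given below. The layer sizes and module names are our own assumptions rather than the authors' exact architecture; during training, the two heads would be supervised separately, and the hard 0/1 decision of Equation (2) is shown as it would apply at inference time.

```python
import torch
import torch.nn as nn

def feature_block(in_dim, out_dim, p=0.1):
    # Linear -> GELU -> BatchNorm -> Dropout, as described above
    return nn.Sequential(nn.Linear(in_dim, out_dim), nn.GELU(),
                         nn.BatchNorm1d(out_dim), nn.Dropout(p))

class FairMobiNetSketch(nn.Module):
    def __init__(self, local_dim=20, communal_dim=4, hidden=64):
        super().__init__()
        self.origin = feature_block(local_dim, hidden)       # encodes X_i
        self.dest = feature_block(local_dim, hidden)         # encodes X_j
        self.communal = feature_block(communal_dim, hidden)  # encodes X_ij
        self.classifier = nn.Linear(2 * hidden, 1)           # Stage One head
        self.regressor = nn.Linear(2 * hidden, 1)            # Stage Two head

    def forward(self, x_i, x_j, x_ij):
        # Tract features are combined by addition, then stacked with the
        # communal features, mirroring the block structure described above.
        z = torch.cat([self.origin(x_i) + self.dest(x_j),
                       self.communal(x_ij)], dim=-1)
        b = (torch.sigmoid(self.classifier(z)) > 0.5).float()  # B_ij, Eq. (2)
        n = self.regressor(z)                                  # N_ij, Eq. (3)
        return b * n                                           # y_ij, Eq. (4)
```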
### Loss Specifications

The key motivation for our study is to achieve predictions with similar accuracy across the different census tract groups. Consequently, we introduce the concept of Fairness Loss into the loss function of our model:

\[L_{tot}=\bar{l}+\zeta\cdot L_{fairness} \tag{5}\]

Figure 7: **The conceptual workflow of the study.** Multiple categories of datasets are collected to construct features for the origin region and destination region. The proposed model introduces Fairness Loss into the loss function and utilizes a three-stage approach to predict the human mobility flows between regions.

where \(\bar{l}\) is the loss function characterizing the accuracy of the human flow prediction; \(L_{fairness}\) is the introduced loss characterizing the fairness of the model prediction across groups; and \(\zeta\) is a Lagrangian multiplier that determines the weight of \(L_{fairness}\). In our model, the accuracy loss \(\bar{l}\) is defined as the mean absolute error (MAE) of the prediction:

\[\bar{l}=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}|y_{ij}-\bar{y_{ij}}| \tag{6}\]

where \(\bar{y_{ij}}\) is the predicted human flow (see Equation 4) and \(y_{ij}\) is the ground-truth observed human flow. Correspondingly, the aim of achieving similar accuracy across different groups is to explore the optimal solutions of the following optimization problem:

\[\begin{array}{c}\min:\quad\bar{l}\\ \text{s.t.}\quad P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{1})=P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{2})\\...\\ P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{1})=P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{n})\\...\\ P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{n})=P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{n-1})\\ \forall a_{j}\in A,\ j\in\{1,2,...,N_{A}\}\end{array} \tag{7}\]

where \(a_{i}\) represents a group of human mobility predictions between certain areas (details of the grouping are given in the next section); the number of human flow predictions included in \(a_{i}\) is represented as \(N_{a_{i}}\); \(A\) is the total collection of groups of census tracts; \(l\) is the accuracy loss array of group \(a_{i}\); and \(P(\cdot|a_{i})\) is the probability that the error of each prediction in \(a_{i}\) falls in the range delimited by \(\bar{l}\) and a threshold \(\tau\). The above formulation describes the search for global optima that satisfy the following condition: for each group \(a_{i}\), the accuracy loss should have a distribution (characterized as a range determined by \(\bar{l}\) and \(\tau\)) similar to that of any other group. To solve this problem by modeling, we reformulate the optimization problem using a Lagrangian. Considering that the population of each group in each area is imbalanced, a weight \(w_{p,q}=\frac{\sum_{a_{i}\in A}N_{a_{i}}}{N_{a_{p}}+N_{a_{q}}}\) is applied to amplify the impact of less populous groups. Hence, the optimization problem depicted in Equation (7) can be transformed into a Lagrangian:

\[\begin{array}{c}\min\quad L(y,\zeta)=l+\sum_{p,q\in A,p\neq q}\zeta_{p,q}w_{p,q}|P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{p})-P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{q})|\\ \Rightarrow\quad L(y,\zeta)=l+\zeta\cdot\sum_{p,q\in A,p\neq q}w_{p,q}|P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{p})-P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{q})|\\ \Rightarrow\quad L(y,\zeta)=l+\zeta\cdot PDP\\ PDP=\sum_{p,q\in A,p\neq q}w_{p,q}|P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{p})-P(\bar{l}-\tau\leq l\leq\bar{l}+\tau|a_{q})|=L_{fairness}\end{array} \tag{8}\]

To simplify the process of exploring the optimal coefficient that achieves optimal fairness performance with acceptable loss, we estimate each equity Lagrangian multiplier as a constant value and define the summation part as Proportional Demographic Parity (PDP). In this specific case study, we interpret PDP as representing the loss in fairness. PDP focuses on reducing the distance between each pair of groups based on the proportion of prediction losses falling within the domain defined by the overall loss in modeling, and we use it as a key metric for evaluating the fairness performance of our model hereafter in the study.
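Because the PDP term drives the fairness behavior of the model, a minimal NumPy sketch of Equation (8) may be useful. It assumes per-prediction absolute errors and group labels as inputs; the threshold `tau` is user-defined.

```python
import numpy as np
from itertools import permutations

def pdp(errors, groups, tau):
    """Equation (8): weighted pairwise gaps in the share of errors near l_bar."""
    l_bar = errors.mean()                       # overall mean loss
    labels = np.unique(groups)
    total = len(errors)
    # P(l_bar - tau <= l <= l_bar + tau | a) for each group a
    share = {a: np.mean(np.abs(errors[groups == a] - l_bar) <= tau)
             for a in labels}
    size = {a: int(np.sum(groups == a)) for a in labels}
    return sum(total / (size[p] + size[q])      # w_{p,q} amplifies small groups
               * abs(share[p] - share[q])
               for p, q in permutations(labels, 2))
```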
### Feature Construction and Prediction Grouping

As explained in Equation (1), multi-dimensional features \(X_{i}\), \(X_{j}\), \(X_{i,j}\) are constructed to represent the properties of the origin and destination census tracts. The specific features are constructed as below:

* \(X_{i}\), \(X_{j}\):
  * Public facilities (nine features): total count of Points-of-Interest (POIs) and buildings relevant to restaurant, school & college, public transport, office, leisure, medical & health, residence, parking and retail.
  * Land-use (six features): total area (in \(km^{2}\)) for commercial, construction, industrial, residential, retail and natural land-use classes.
  * Road network (three features): total length (in \(m\)) of streets, total count of street segments and total count of intersections.
  * Census statistics (two features): population and per capita income (in $).
* \(X_{i,j}\):
  * Euclidean distance between two census tracts (in feet).
  * Protected attribute (three features): the one-hot encoded income difference between census tracts.

We rank the absolute difference of median household income between origin and destination census tracts in ascending order, and define groups of human mobility predictions as:

* **Group**\(a_{1}\) -- the top 20% of the ranking, as the group of human mobility predictions between census tracts with low income difference.
* **Group**\(a_{2}\) -- 20% to 50% of the ranking, as the group of human mobility predictions between census tracts with medium income difference.
* **Group**\(a_{3}\) -- the remaining 50%, as the group of human mobility predictions between census tracts with high income difference.

The total number of features is 44, of which 20 belong to the origin census tract, 20 features are from the destination census tract, and the remaining 4 are communal features. **The aim of our model** is hence to achieve accurate as well as fair prediction for census tract groups with income differences (i.e., \(A=\{a_{1},a_{2},a_{3}\}\) in Equation 7, and the achieved accuracy for human flow prediction should be approximately consistent across groups \(a_{1}\), \(a_{2}\) and \(a_{3}\)).

### Evaluation metrics

We adopted the following measurements to evaluate the performance of our model. For accuracy evaluation, four measurements are adopted. Normalized Root Mean Square Error (NRMSE) and mean absolute error (MAE) both measure the quantitative deviation of the predicted human flow from the observed values; lower values denote more accurate predictions [41]. Pearson's correlation (Corr.) assesses the inter-dependency between input features and the target variable; a higher Corr. value indicates stronger correlation relationships [42]. Jensen-Shannon Divergence (JSD) measures the similarity between the distribution of predicted values and observed values; a lower JSD value implies more similar distributions [43]. For fairness evaluation, we mainly aim to evaluate the equality of model performance among different income-difference groups; hence, the measurement PDP is used (Equation 8). A lower value of PDP implies a more equitable prediction outcome.
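For reference, the four accuracy metrics above can be sketched as follows; since the paper does not specify the NRMSE normalization, range normalization is assumed here, and histograms are used to form the distributions compared by JSD.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def nrmse(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())    # assumed normalization

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def corr(y_true, y_pred):
    return np.corrcoef(y_true, y_pred)[0, 1]       # Pearson's correlation

def jsd(y_true, y_pred, bins=50):
    p, edges = np.histogram(y_true, bins=bins)
    q, _ = np.histogram(y_pred, bins=edges)
    # scipy returns the Jensen-Shannon distance; its square is the divergence
    return jensenshannon(p.astype(float), q.astype(float)) ** 2
```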
## 4 Closing Remarks

The attainment of accurate and fair human flow predictions across diverse socioeconomic subgroups is important for promoting social justice and for aiding decision making for equitable resource allocation and urban development. In this study, we introduce a novel fairness-aware deep learning model, FairMobi-Net, designed for human flow prediction. The proposed model incorporates the concept of Fairness Loss and employs a three-stage approach to predict human flow between pairs of regions. Experimental results using real-world datasets from four U.S. metropolitan areas indicate that our model delivers human mobility flow predictions with high accuracy across regions. Compared with the baseline models, FairMobi-Net yields comparable levels of accuracy consistently across different regions, showcasing our model's strength in ensuring both good accuracy and fairness in human mobility prediction. The model and outcomes of this study make multiple important contributions. From a theoretical perspective, the FairMobi-Net model provides a unique fairness-aware approach to human mobility prediction by introducing a novel fairness loss component in the deep learning model. The revised loss function ensures the model achieves a good trade-off between prediction accuracy and fairness. The novel fairness loss provides a new approach for future fairness-oriented model design for other urban computing applications as well. From a practical standpoint, the model offers a novel tool for urban planners, transportation engineers, and environmental managers involved in decision-making processes, such as infrastructure and environmental planning, to better predict future patterns of human mobility flows based on anticipated changes in land use and development patterns, with equity in mind. The model helps ensure that the resultant mobility flows are accurate and fair for all communities, which promotes equitable resource allocation and infrastructure development and, ultimately, social justice. Moving forward, future research could aim to extend the application of the FairMobi-Net model to different geographic locations and settings, exploring how the consideration of fairness improves machine learning models in other urban computing applications (such as air-pollution prediction). Another intriguing avenue to explore is how to fine-tune the FairMobi-Net model to take into account other demographic factors that might influence human mobility flows, such as age, gender, or occupation as sensitive attributes. Finally, with the growing application of machine learning models in urban studies, integrating the FairMobi-Net model with other predictive models used in sectors like public health, economics, or environmental studies could also provide holistic insights into human mobility behavior and its implications on other urban outcomes (such as air pollution, prevalence of diseases, and economic activity) at a larger scale.

## Data Availability and Acknowledgement

This material is based in part upon work supported by the National Science Foundation under Grant CMMI-1846069 (CAREER). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2306.06441
Image Vectorization: a Review
Nowadays, there are many diffusion and autoregressive models that show impressive results for generating images from text and other input domains. However, these methods are not intended for ultra-high-resolution image synthesis. Vector graphics are devoid of this disadvantage, so the generation of images in this format looks very promising. Instead of generating vector images directly, you can first synthesize a raster image and then apply vectorization. Vectorization is the process of converting a raster image into a similar vector image using primitive shapes. Besides being similar, the generated vector image is also required to contain the minimum number of shapes for rendering. In this paper, we focus specifically on machine learning-compatible vectorization methods. We consider the Mang2Vec, Deep Vectorization of Technical Drawings, DiffVG, and LIVE models. We also provide a brief overview of existing online methods. We also recall other algorithmic methods and the Im2Vec and ClipGEN models, but they do not participate in the comparison, since there is no open implementation of these methods or their official implementations do not work correctly. Our research shows that despite the ability to directly specify the number and type of shapes, existing machine learning methods take a very long time and do not accurately recreate the original image. We believe that there is no fast universal automatic approach and human control is required for every method.
Maria Dziuba, Ivan Jarsky, Valeria Efimova, Andrey Filchenkov
2023-06-10T13:41:02Z
http://arxiv.org/abs/2306.06441v1
# Image Vectorization: a Review

###### Abstract

Nowadays, there are many diffusion and autoregressive models that show impressive results for generating images from text and other input domains. However, these methods are not intended for ultra-high-resolution image synthesis. Vector graphics are devoid of this disadvantage, so the generation of images in this format looks very promising. Instead of generating vector images directly, you can first synthesize a raster image and then apply vectorization. Vectorization is the process of converting a raster image into a similar vector image using primitive shapes. Besides being similar, the generated vector image is also required to contain the minimum number of shapes for rendering. In this paper, we focus specifically on machine learning-compatible vectorization methods. We consider the Mang2Vec, Deep Vectorization of Technical Drawings, DiffVG, and LIVE models. We also provide a brief overview of existing online methods. We also recall other algorithmic methods and the Im2Vec and ClipGEN models, but they do not participate in the comparison, since there is no open implementation of these methods or their official implementations do not work correctly. Our research shows that despite the ability to directly specify the number and type of shapes, existing machine learning methods take a very long time and do not accurately recreate the original image. We believe that there is no fast universal automatic approach and human control is required for every method.

Keywords: Vector graphics · Image vectorization · Computer vision.

## 1 Introduction

In computer graphics, two main approaches for image representation coexist. While a bitmap image is a matrix of pixels, a vector image is a sequence of shapes drawn on the canvas. Raster graphics are commonly used for complex images containing a large number of visual details and complex color transitions. Most often these are photos and photorealistic drawings. At the same time, vector images consist of figures, which typically have a constant one-color fill. This results in the simplicity and abstractness of the resulting image. Therefore, the primary domains of vector graphics are icons, logos, simple illustrations, and fonts. Vector images are easily embedded in HTML markup, and a crucial requirement is the small size of the code describing them for fast data transfer to the user and subsequent rapid rendering. The most popular vector format is SVG, which defines a vector image as a tag sequence using XML markup. Each tag can use XML attributes to specify the shape's color characteristics and transformations. Using this markup, a renderer program draws an image consisting of the specified figures. Thus, the task of vectorizing a bitmap image is similar to the task of obtaining a sequence of shapes and their parameters that together form the original raster image. In 2014, generative adversarial networks [9] became the first machine learning models for image synthesis. Since then, image synthesis has become an important part of digital art, data augmentation techniques, design, fashion, and several other domains. Currently, the image generation task is solved with deep generative models based on diffusion models [24, 25] and autoregressive models [7, 35]. Modern generative methods have recently achieved significant success in the raster domain.
However, despite these approaches being designed to generate highly realistic images in different styles from the input text, the resulting images do not have a high resolution; usually, it is less than 2048x2048 pixels. This is not enough for logos, covers, and printing. In these domains, vector graphics are standard. One of the ways to obtain such images is to use vector graphics instead of raster graphics. Notably, little research has been done on vector image generation [1, 32, 8, 5, 19, 26, 11, 4]. The creation of vector images is still done by humans, but working with the vector image code is not an easy task. Therefore, the ability to generate such images automatically with a minimal number of post-processing steps is highly desirable. A possible solution for obtaining a vector image is the vectorization of raster images. It may also allow the enrichment of vector datasets that are required for training vector image generation algorithms, but whose size is still insufficient in comparison with bitmap image datasets. The existing image vectorization methods can be divided into two categories: algorithmic and machine learning-based methods. Algorithmic approaches have recently been reviewed [31]. The authors classify the vectorization methods as mesh-based and curve-based. Mesh-based methods split the image into non-overlapping 2D patches and interpolate colors across them. The patch shape can be triangular [38, 18, 10], rectangular [21, 29, 15], or even irregular, for example, in the form of Bezigons, closed regions bounded by Bezier curves [30, 16, 34], and the patch vertices or interior can store color and other attributes. Curve-based methods are based on diffusion curves, which are Bezier curves with colors defined on their left and right sides. Color discontinuities can be modeled by diffusing the colors from both sides of the curves to create the resulting image. For smooth edges, the diffusion process might be followed by a blurring phase. There are different formulations of diffusion curves to work with: basic and harmonic [2, 37, 12], as well as biharmonic [33]. Although several vectorization methods exist, no decent comparison has been made to the best of our knowledge. In this paper, we focus on machine learning-compatible image vectorization methods. We aim to classify and compare them using different evaluation criteria. The main comparison criteria of the vectorizing methods are: 1) similarity to the original bitmap; 2) the simplicity or complexity of the resulting image, including the number of shapes and their parameters; 3) the speed of generation; 4) versatility -- the ability to generate a fairly accurate copy of the input image without prior model training; 5) human control to adjust hyperparameters. The contributions of this paper are an overview of machine learning-compatible vectorization methods and a comparison of their performance.

## 2 Machine Learning-compatible Methods

### DiffVG

The work proposing DiffVG [17] is foundational for machine learning-based vector image generation methods, as it implements a differentiable vector image rasterization function linking the vector and raster domains. A raster image can be vectorized with DiffVG by fitting a predefined number of randomly initialized Bezier curves to the target image, see Fig. 1. Optimization can be performed by minimizing the \(L_{2}\) loss between the raster image and the rasterized vector image, or a deep-learning-based perceptual loss [36].
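A minimal sketch of this fitting loop is shown below. The API calls follow the example scripts in the official DiffVG (pydiffvg) repository, but the exact signatures should be treated as approximate and checked against the installed version; in practice, points and colors are usually given separate learning rates, and colors are clamped to [0, 1].

```python
import torch
import pydiffvg

def vectorize(target, num_paths=64, num_segments=4, steps=500):
    """target: (H, W, 3) tensor in [0, 1]."""
    h, w = target.shape[0], target.shape[1]
    shapes, groups, params = [], [], []
    for i in range(num_paths):
        # A closed path of num_segments cubic segments needs 3 points each.
        pts = torch.rand(num_segments * 3, 2) * torch.tensor([w, h]).float()
        pts.requires_grad = True
        color = torch.rand(4, requires_grad=True)            # RGBA fill
        shapes.append(pydiffvg.Path(
            num_control_points=torch.tensor([2] * num_segments),
            points=pts, stroke_width=torch.tensor(1.0), is_closed=True))
        groups.append(pydiffvg.ShapeGroup(shape_ids=torch.tensor([i]),
                                          fill_color=color))
        params += [pts, color]
    opt = torch.optim.Adam(params, lr=0.1)
    render = pydiffvg.RenderFunction.apply
    for t in range(steps):
        opt.zero_grad()
        scene = pydiffvg.RenderFunction.serialize_scene(w, h, shapes, groups)
        img = render(w, h, 2, 2, t, None, *scene)            # 2x2 AA samples
        loss = ((img[..., :3] - target) ** 2).mean()         # L2 to the bitmap
        loss.backward()                                      # grads flow through the renderer
        opt.step()
    return shapes, groups
```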
The authors also propose a simple variational autoencoder [13] for vectorizing MNIST digit images [3]. The encoder convolves the input raster image into a latent space, a vector image is synthesized from the embedding, and then the input and synthesized images are compared using the proposed rasterization function. The resulting images are inaccurate because each character is represented by several curves that look like an artist's strokes. The results do not exactly match the original images even of such a simple dataset as MNIST. One of this method's features is also worth mentioning: the running time of the rasterization algorithm significantly depends on the size of the output image. However, reducing the output size results in the loss of important image details. This problem arises due to the nature of vector images: by analogy with raster image downsizing, which leads to the loss of image details, the same effect occurs when the number of paths is reduced in vector images.

Figure 1: Iterative vectorization using the DiffVG method. Each generated image has 62 paths, the same number of paths as used in the original vector image.

### Im2Vec

The Im2Vec [23] paper offers a model for image vectorization and interpolation. Its architecture is based on the variational auto-encoder (VAE) [14] proposed in DiffVG. The model maps an input raster image into a latent space and sequentially generates a similar vector image. While training, to compare the generated vector image with the input raster image, the vector image is rasterized using a differentiable rasterizer. In the paper, the authors propose a new way of generating closed shapes. Initially, a circle is sampled for each shape, and then, based on the latent vectors of the shapes generated with an LSTM, the circle deforms. Since the difference between the target and output images is significant at the beginning of training, the authors suggest using a multi-resolution loss. The paper proposes to rasterize images at different resolutions, thus building a pyramid of images, with the loss function calculated for each layer. The total multi-resolution loss optimized is:

\[\mathbb{E}_{I\sim D}\sum_{l=1}^{L}\|pyr_{l}(I)-O_{l}\|^{2},\]

where \(L\) is the number of pyramid levels, \(pyr_{l}(I)\) is the \(l\)-th pyramid level, \(O_{l}\) is the output rasterized at the corresponding spatial resolution, and \(D\) is the training dataset. Unfortunately, using the official implementation, the results can hardly be reproduced. During the first 700 epochs of training on the emoji dataset, which was collected and published by the authors, the generation result does not change and consists of a single point in the center of the image. We have also noticed that the shape colors are fixed in the implementation, i.e., no separate color generation can be performed, so we have added a separate LSTM model for predicting the colors of figures. However, in the issues of their official implementation, the authors themselves point out that color prediction causes instability during generation. Thus, this work cannot be considered a universal model for vectorizing any image, because: 1) it must be pretrained on a large number of vector images, which are hard to obtain; 2) the found shortcomings are likely to lead to learning instability, providing resulting images of poor quality.
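Despite these reproducibility issues, the multi-resolution objective itself is simple to state. Below is a minimal PyTorch sketch, where average pooling stands in for the pyramid construction and a user-supplied `rasterize` callable stands in for rendering the vector output at each scale.

```python
import torch.nn.functional as F

def multires_loss(target, rasterize, levels=4):
    """target: (B, C, H, W); rasterize(size) renders the output at size x size."""
    loss = 0.0
    for l in range(levels):
        k = 2 ** l
        pyr = F.avg_pool2d(target, kernel_size=k) if k > 1 else target
        out = rasterize(pyr.shape[-1])           # O_l at the matching resolution
        loss = loss + ((pyr - out) ** 2).mean()  # ||pyr_l(I) - O_l||^2 term
    return loss
```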
However, LIVE does not operate with all shapes simultaneously from the first iteration. Instead, it gradually adds one or more shapes to the canvas layer by layer and then performs an optimization step. Unlike DiffVG, LIVE operates only with closed shapes consisting of cubic Bezier curves. There is an issue that some of them may become self-interacted during optimization, which results in undesirable artifacts and incorrect topology. Although additional paths could cover artifacts, it would complicate the resulting SVG making it impossible to effectively investigate the underlying topological data. The authors discovered that a self-intersecting path always intersects the lines of its control points, and vice versa, assuming that all of the Bezier curves are of the third order. Therefore, the authors introduce a new loss function (Xing-loss) designed to solve this self-intersection problem. The fundamental idea is to only optimize the case when the angle of the curve is 180degdegrees. In other words, the authors urge the angle between the first and last control points connections to be greater than 180degin a cubic Bezier path. This loss acts as a regularizer on self-intersection and its formula is: \[\mathcal{L}_{Xing}=D_{1}(ReLU(-D_{2}))(1-D_{1})(ReLU(D_{2}))\] where \(D_{1}\) is a characteristic of the angle between two segments of a cubic Bezier path, and \(D_{2}=\sin\alpha\) -- value of that angle. To make each path responsible only for a single feature of the image, the authors introduce Unsigned Distance guided Focal loss (UDF loss) as well; it treats each pixel differently depending on how close it is to the shape contour. According to intuition, the UDF loss amplifies differences near the contour and suppresses differences in other areas -- LIVE weighs an \(L_{2}\) reconstruction loss by distance to the nearest path. By doing this, LIVE defends against the mean color problem caused by MSE and keeps accurate color reconstruction: \[\mathcal{L}_{UDF}=\frac{1}{3}\sum_{i=1}^{w\times h}d_{i}^{\prime}\sum_{c=1}^{3 }(I_{i,c}-\hat{I_{i,c}})^{2},\] where \(I\) is the target image, \(\hat{I}\) is the rendering, \(c\) indexes RGB channels in \(I\), \(d_{i}^{\prime}\) is the unsigned distance between pixel \(i\), and the nearest path boundary, and \(w\), \(h\) are width and height of the image. LIVE produces relatively clean SVGs by initializing paths in stages, localized to poorly reconstructed, high-loss regions. LIVE's main advantage is its ability to reconstruct an image with a user-defined amount of paths, significantly reducing the SVG file size compared to other methods. However, it takes much time to vectorize an image even on GPU, thus, this method is hardly applicable in practice for complex images with a great optimal number of paths. See the iterative process in Fig. 2. ### ClipGen The ClipGen paper [27] proposes a method based on deep learning for automatically vectorizing the clipart of man-made objects. The suggested approach needs a raster clipart image and relevant object category (for instance, airplanes). It sequentially creates new layers, each formed by a new closed path that is filled with a single color. All layers are combined to create a vector clipart that fits the desired category to produce the resulting image. The suggested method is built on an iterative generative model that chooses whether to keep synthesizing new layers and defines their geometry and appearance. 
For training their generative model, they developed a joint loss function that includes shape similarity, symmetry, and local curve smoothness losses, as well as vector graphics rendering accuracy loss for synthesizing a human-recognizable clipart. However, ClipGen only works with a predefined number of categories, therefore, it cannot process arbitrary images. ### Mang2Vec The authors of the Mang2Vec [28] paper suggest the first method for vectorizing raster mangas by using Deep Reinforcement Learning. They develop an agent that is trained to generate the best possible sequence of stroke lines while being constrained to match the target manga visual features. The control parameters for the strokes are then collected and converted to the vector format. They also propose a reward to produce accurate strokes and a pruning method to avoid errors and redundant strokes. Mang2Vec works only with black and white manga and cannot be used with colored images. ### Deep Vectorization of Technical Drawings The paper [6] proposes a technical line drawings vectorization method (DVoTD), for example, for drawings of floor plans. The authors convert a technical raster drawing, which is cleared of text, into a set of line segments and quadratic Bezier curves that are specified by control points and width. They preprocess the Figure 3: Vectorization using Mang2Vec method. Figure 2: Iterative vectorization using LIVE method. input image by eliminating noise, modifying contrast, and adding missing pixels. Then, they divide the image into patches and calculate the starting primitive parameters for each patch. To do this, each patch is encoded with a ResNet-based feature extractor and decoded as feature embeddings of the primitives using a sequence of transformer blocks. To train the network for primitive extraction, the following loss function is proposed: \[L(p,\hat{p},\theta,\hat{\theta})=\frac{1}{n_{prim}}\sum_{k=1}^{n_{p}rim}(L_{cls} (p_{k},\hat{p_{k}})+L_{loc}(\theta_{k},\hat{\theta_{k}}),\] where \[L_{cls}(p_{k},\hat{p_{k}})=-\hat{p_{k}}\log p_{k}-(1-\hat{p_{k}})\log(1-p_{k}),\] \[L_{loc}(\theta_{k},\hat{\theta_{k}})=(1-\lambda)\|\theta_{k}-\hat{\theta_{k}} \|_{1}+\lambda\|\theta_{k}\hat{\theta_{k}}\|_{2}^{2},\] \(\hat{p}\) -- the target confidence vector (is all ones, with zeros in the end that indicate placeholder primitives, all target parameters \(\hat{\theta_{k}}\) of which are set to zero). The approximated primitives improve by aligning to the cleaned raster. The improved predictions from all patches are combined. ## 3 Online Vectorization Methods There are plenty of websites that can vectorize any raster image. Existing online methods can be free to use (svgstorm.com, www.visioncortex.org/vtracer, vectorization.eu) or proprietary (vectorizer.io). Common options provided by these methods is the selection of vector graphics output file format (SVG, EPS, PDF), color palette and the number of colors used. Some services allow choosing the quality of image detail (Low, Medium, High), the type of shapes used (Curve Fitting - pixel/polygon/curve), background removal and many other actions and parameters. These options affect the processing speed, the visual result and the number of shapes. The generation speed highly depends on the resolution of the input raster image and its details complexity. On average, the processing time of one image is 10 seconds. Even though they are easy to use, a lot of parameters should be fixed by a user. 
It should be mentioned that the resulting image quality and its size hardly depend on the input image quality and resolution. Low quality results in producing images with a lot more paths that are actually needed leading to noticeable artifacts. One of the popular online services for vectorization is VTracer [22], which provides many options for a user. According to its documentation, firstly the method clusters the input image by hierarchical clustering and each of the output clusters traces into vector. After converting pixels into staircase-like paths, the method then simplifies the paths into polygons and in the end smoothens and approximate them with a curve-fitter. We have come across the following drawbacks of this service. Firstly, VTracer does not work well with all image formats, for example, it produces a black background instead of a transparent one while processing PNGs and there are no options to change this behaviour. Secondly, VTracer does not handle low-quality images well, creating many unnecessary inaccurate shapes. In Fig. 4, we show an example of the black background appearance for a PNG high-quality image and the result for the same image converted to JPG having low quality. ## 4 Comparison ### Comparison Criteria To make a valuable comparison of vectorization methods, it is necessary to consider that the visual appealingness and similarity of the resulting vector image to the original raster image is not the only important criterion. The speed of vectorization is also an important factor. The main advantages of a vector image are its simplicity and a small number of shapes used. Although a vector image can contain various shapes (circles, rectangles, paths, etc.), vectorization methods tend to generate images using only paths. Paths themselves consist of segments (Bezier curves, straight lines, etc.) and their number in each path should be low as well. This is necessary both for simpler post-processing by designers and faster image transfer to the user through the Internet with subsequent vector image rendering. Thus, the main five criteria for evaluating vectorization methods are: 1. similarity to the original bitmap; 2. the simplicity or complexity of the resulting image including the number of shapes and their parameters; 3. the speed of generation; 4. versatility -- the ability to generate a fairly accurate copy of the input image without prior model training; 5. human control to adjust hyperparameters. Figure 4: VTracer vectorization and its issues with the black background color and inaccurate vectorization of low-quality images. However, taking into account all the criteria at the same time is challenging, because the methods we consider have many different parameters that affect all the criteria simultaneously. Typically, by changing one parameter, one can achieve an increase in image processing speed but at the same time reduce the quality of the resulting image. ### Experiment Setup We selected 6 images for comparison: 3 rasterized vector images and 3 bitmaps of different complexity. The original target vector images before rasterization had the following number of paths: dragon had 25, burger - 62, red landscape - 100. Ideally, vectorization methods should create images consisting of an ap Figure 5: Qualitative comparisons of image vectorization results using different methods. DiffVG closed stands for the DiffVG method with closed paths, unclosed - with unclosed strokes. 
DiffVG and LIVE results for the 3 initially vector images have the paths amount as in the original images, for the Simple raster image 32 paths were used, for the Medium and Hard Raster – 1024 paths. The other methods have been run with their default parameters. proximately similar number of paths in a short period of time. At the same time, it does not worth expecting vectorization methods to account for every image detail on initially raster images, as an over-detailed vector image does not satisfy the simplicity criterion -- a smaller number of paths. It is also desirable that simple monochrome patches should be decorated with a minimum number of shapes and the image subject should not be lost. We compare the following methods (with publicly available implementation): Mang2Vec, Deep Vectorization of Technical Drawings (DVoTD), iterative DiffVG and LIVE methods, and online methods. The following models are not included in the comparison: 1) Im2Vec [23], because we could not confirm in practice the results described in the paper, and it also requires additional serious pre-training for processing relatively diverse images; 2) ClipGEN [27], because the model is limited to the set of predefined classes and there no its implementation is publicly available; 3) VAE and GAN introduced in DiffVG [17], because they also require additional pre-training. Also, even on such a simple dataset as MNIST, we found their results are not satisfactory enough; 4) algorithmic methods, since we found no implementations publicly available. Fig. 5 contains the original images and their vectorized versions using different methods reviewed in our paper. Our experiments have proven that the Mang2Vec and DVoTD models are not versatile, since they are capable of processing only black-and-white images. At first glance, Mang2Vec vectorizes the image well, but its significant drawback is the use of a very large number of shapes: for instance, the "burger" image on the last 5th iteration had 3065 paths and the "dragon" image on the 20th iteration 16600 had paths. Also, the method adds many <clippath> and <circle> tags, which seems useless. The method uses image splitting into patches and performs a separate vectorization of each patch, which is acceptable when processing detailed manga images. However, a large monochrome space becomes divided into a large number of shapes, which is unacceptable. In the Mang2Vec method, you can specify a different number of iterations, but with a small number of them, the patches boundaries, into which the division is performed, become clearly visible. Since Mang2Vec automatically resizes image to 4096x4096 resolution its working time is constant and is 157 seconds. Deep Vectorization of Technical Drawings (DVoTD) was meant to be able to use either quadratic Bezier curves or straight lines. We managed to run the method for curves, but the implementation of the second method (straight lines) is imperfect and the code is likely to contain some issues that lead to a crash during execution, which we could not fix. The method struggles to fill in contours with a solid color, as it was originally made to generate black stroke lines. We ran the method with default parameters and, for instance, the "dragon" image had 132 paths. The running time of the 270x480 image was 142 seconds, 373x405 - 179 seconds, 582x496 - 273 seconds. Vectorization by the DiffVG iterative method can be done in two ways: generating images consisting of curves and of closed shapes. 
The approach is simple and quite effective, but it produces many artifacts with shapes that the method apparently attempts to hide, but it fails to succeed. In addition, the reconstruction of absolutely exact visual copies of the original vector images cannot be obtained even using a large number of paths (1024 shapes). Different renderers convert a vector image to a bitmap in different ways. For example, images generated by DiffVG and LIVE will look inaccurate and careless, when they are rendered by the InkScape. This behavior occurs due to the fact that these images contain curves protruding beyond the edges of the viewBox attribute, and InkScape displays them instead of cropping them. At the same time, other renderers, for example, in Google Chrome browser, process images correctly without extra curves that remain beyond the viewBox. However, these curves are still a problem, since they are superfluous and they create additional artifacts in the image code and add an extra size to it. The LIVE method iteratively adds a layer consisting of one or more shapes specified by user to the image and optimizes the resulting image. In addition, number of image processing iterations after applying each layer should be specified manually. LIVE sets the number of iterations to 500 by default, but we noticed that after about 200 iterations, the image almost does not change, so we set this value in our experiments. In the optimization process, LIVE uses the DiffVG rasterizer to convert the current vector image into a raster image and compare it with the target image. Rasterization is performed by default at the same resolution as the target image, but for large resolutions it is computationally time-consuming. For example, for a resolution of 1080x1920, processing the first layer in 200 iterations on the Nvidia RTX 3090 Ti GPU took 212 seconds, then by 10th layer, processing of one layer reached 244 seconds. Finally, processing of 28 layers has took almost 2 hours. Therefore, we decided to pre-scale the raster images so that their maximum side does not exceed 512 pixels. At the same time, it is worth noting that this approach carries the risk of losing details in the image. With this approach, the processing of the first layer took 16 seconds, but by the 10th layer, the processing time of one layer was 40 seconds. The total processing time of 46 layers took about 32 minutes. When limiting the maximum image side to 256 pixels, the processing time of the first layer was 6 seconds, and totally 32 layers were processed in 22 minutes. It should also be known that the processing time is also affected by the number of shapes in each layer. In our experiments we used less than 7 shapes in each layer only the first 5 layers, after that we used 7, 10, 20 or 30 shapes in each layer gradually increasing the number. LIVE creates the most accurate images among the ML methods. However, the most significant disadvantage of this method is a very long image processing time, which depends on the number of layer additions, the number of iterations of processing each layer, the dimensions of the image for which intermediate rasterization is performed. It is worth noting that DiffVG also has problems with long intermediate rasterization, however, due to the smaller total number of iterations, this is less noticeable. The results of DiffVG and LIVE operation times are shown in more detail in the Tab. 1 and Tab. 2. 
\begin{table} \begin{tabular}{|c|c|c|} \hline Image Resolution & Total paths & Time (sec) \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 1: The running time of the DiffVG iterative algorithm at different startup parameters on NVidia RTX 3090Ti GPU. Total iterations number is 500. There is no serious speed difference between methods with closed and unclosed paths. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Image Resolution & Layers schema & Total layers & Total paths & Time (sec) \\ \hline 256x256 & 4x1 & 4 & 4 & 33 \\ 256x256 & 8x1 & 8 & 8 & 77 \\ 256x218 & 1,2,3,4,5x3 & 7 & 25 & 78 \\ 256x256 & 16x1 & 16 & 16 & 158 \\ 235x256 & 1,3,5x2,7x7 & 11 & 62 & 162 \\ 144x256 & 1,3,4,5,7,10x8 & 13 & 100 & 169 \\ 256x218 & 25x1 & 25 & 25 & 278 \\ 256x256 & 32x1 & 32 & 32 & 423 \\ 235x256 & 62x1 & 62 & 62 & 945 \\ 144x256 & 100x1 & 100 & 100 & 1231 \\ 471x512 & 1,3,5,7,10x24 & 28 & 256 & 1448 \\ 190x256 & 1,3,5,7,8,10x2,20x4,30x30 & 41 & 1024 & 2332 \\ 256x218 & 1,3,5,7,8,10x2,20x4,30x30 & 41 & 1024 & 2490 \\ 288x512 & 1,3,5,7,8,10x2,20x4,30x30 & 41 & 1024 & 3197 \\ 471x512 & 1,3,5,7,8,10x2,20x4,30x30 & 41 & 1024 & 3891 \\ 1080x1920 & 1,3,5,7,10x24 & 28 & 256 & 7150 \\ \hline \end{tabular} \end{table} Table 2: The running time of the LIVE algorithm with different startup parameters on NVidia RTX 3090Ti GPU. Each layer is processed for 200 iterations. The requirement of DiffVG and LIVE models of directly controlling the number and type of applied shapes on the one hand is their advantage, but on the other hand, since there are no good models for determining the required number of shapes, automatic vectorization of a large set of various raster images becomes almost impossible. The online methods we found show the best quality of image vectorization. However, this is achieved by using a large number of shapes, the number of which can only be controlled indirectly by specifying the number of available colors. The results are presented in Tab. 3. However, none of the considered methods could recreate exact copies using such a number of figures. ## 5 Conclusion In this work we have shown that current image vectorization methods are difficult to use in practice. Online methods without manual hyperparameter tuning create images containing a large number of paths, which increases the amount of used memory, and design refinement becomes a time-consuming task. All the existing machine learning-compatible methods also require human control and adjustment of method iterations number, output vector image parameters, etc. The Im2Vec model is not capable of storing and generating complex images and is not a universal vectorizer that could create a vector analog for any input image. The LIVE method is the only universal model that allows you to control the number of drawn shapes, however, due to the use of an iterative approach, generating a single image takes a huge amount of time. According to our measurements, DiffVG is the fastest among ML methods without much quality losses. However, a large number of paths are required for high-quality results. At the same time, LIVE is able to get no worse quality using fewer shapes. However, the main problem with the LIVE method is its extra-long running time. 
\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Method & Approach Type & Code & Versatility & Speed & Number of figures \\ \hline DiffVG & ML: Iterative & + & + & Low & User-defined \\ Im2Vec & ML: VAE & + & - & Pretrain needed & User-defined \\ LIVE & ML: Iterative & + & + & Very low & User-defined \\ ClipGEN & ML: Iterative+DL & - & - & Pretrain needed & User-defined \\ Mang2Vec & ML: RL & + & - & Medium & Many \\ DVoTD & ML: DL & + & - & Medium & Many \\ VTracer & Algorithmic, online & + & + & Very High & Medium \\ \hline \end{tabular} \end{table} Table 3: Classification and comparison of vectorization methods. ’ML’ means machine learning, ’DL’ means deep learning, and ’RL’ means reinforcement learning. Perhaps, generally, it is best to use online methods, but they do not allow you to adjust the number of applied shapes. To summarize, vectorization methods are connected with a tradeoff between image quality, path number, segment number, closed or not paths are, number of iterations, and running time.
2310.05876
AI Systems of Concern
Concerns around future dangers from advanced AI often centre on systems hypothesised to have intrinsic characteristics such as agent-like behaviour, strategic awareness, and long-range planning. We label this cluster of characteristics as "Property X". Most present AI systems are low in "Property X"; however, in the absence of deliberate steering, current research directions may rapidly lead to the emergence of highly capable AI systems that are also high in "Property X". We argue that "Property X" characteristics are intrinsically dangerous, and when combined with greater capabilities will result in AI systems for which safety and control is difficult to guarantee. Drawing on several scholars' alternative frameworks for possible AI research trajectories, we argue that most of the proposed benefits of advanced AI can be obtained by systems designed to minimise this property. We then propose indicators and governance interventions to identify and limit the development of systems with risky "Property X" characteristics.
Kayla Matteucci, Shahar Avin, Fazl Barez, Seán Ó hÉigeartaigh
2023-10-09T17:15:22Z
http://arxiv.org/abs/2310.05876v1
# AI Systems of Concern ###### Abstract Concerns around future dangers from advanced AI often centre on systems hypothesised to have intrinsic characteristics such as agent-like behaviour, strategic awareness, and long-range planning. We label this cluster of characteristics as "Property X". Most present AI systems are low in "Property X"; however in the absence of deliberate steering, current research directions may rapidly lead to the emergence of highly capable AI systems that are also high in "Property X". We argue that "Property X" characteristics are intrinsically dangerous, and when combined with greater capabilities will result in AI systems for which safety and control is difficult to guarantee. Drawing on several scholars' alternative frameworks for possible AI research trajectories, we argue that most of the proposed benefits of advanced AI can be obtained by systems designed so as to minimise this property. We then propose indicators and governance interventions to identify and limit the development of systems with risky "Property X" characteristics. Safety, Risk; Futures; Governance ## 1 Introduction For centuries, humans have been imagining machines that are as intelligent or more intelligent than humans. [1] The possibility of creating such machines is difficult to ascertain, but technological innovations - including the creation of remarkable artefacts such as YOLO, [2] AlphaGo, [3] GPT-3, [4] DALL-E, [5] and AlphaFold [6] - have caused both excitement and concern about the future of artificial intelligence (AI). [7][8][9][10][11] The success of the Deep Learning, [12] which underpins the aforementioned artefacts, has contributed significantly to this growth in attention, as has the increasing application of AI systems in a growing range of domains. [13][14] Moreover, in large part because of the discovery of scaling laws [15][16] and the emergence of surprising performance from large scale "foundation models", [17][18], more experts are now predicting _transformative_ AI capabilities [19] - capabilities that could bring about transformation as significant as the industrial revolution - within decades. [20][21][22] Of course, there remains ample disagreement about what level of transformation will be caused by future AI systems, and whether it is possible to create AI systems with superhuman capabilities across domains. We work on the assumption that there is a plausible chance of doing so and, given this, we believe that it is important to consider the potential implications of such technologies. Special attention has focused on the possibility of such advanced systems being dangerous -- perhaps even catastrophically so. While there is yet no consensus on what might make advanced AI systems dangerous, several works point to a cluster of properties around "agent-like behaviour", [23][24] "consequential reasoning", [25] "planning", and "strategic awareness". [26][27] Let us label the property, or set of properties, that make an advanced AI system dangerous "Property X". Let us also expect that property X is made up of, or is similar to, those properties listed above. Systems possessing relatively high levels of property X require us to solve the "alignment problem" [28][29] before they are developed, or otherwise risk catastrophic consequences (Fig. 1). Unfortunately, there are some compelling arguments as to why developers might be incentivised to develop systems that have more of property X. [30][31] We can make this picture clearer with an analogy. 
Instead of AI, we can think of power generation. In this case, the vertical axis will be the total installed generation capacity, and the horizontal axis ("property X") will be carbon emissions. We enter into a dangerous zone if we have both a high total generation capacity, and a large enough amount of high-carbon energy generation -- together, the two result in enough carbon being emitted to drive the greenhouse effect and adversely change the global climate. If we had technologies that allowed us to gain the energy benefit of fossil fuels while avoiding the emissions (e.g. perfect carbon capture technology), that would be analogous to "solving alignment". Absent such a solution, if we want to avoid the harms of climate change, we either need to cap total energy use, or create incentive structures to steer power generation towards low-carbon energy sources. Given the magnitude of risk and uncertainty associated with each strategy, we should ideally pursue all three: carbon capture technology, renewable and low-emission power generation, and reduction in consumption. Similarly, if various AI R&D projects are in fact occupying points on the simplified landscape depicted in Fig. 1, this suggests a few strategies for avoiding catastrophic outcomes resulting from advanced AI: 1. Place and enforce a cap on how advanced each project's AI capability should be 2. Place and enforce a cap on how much of "property X" each AI system should have 3. Maintain investment in AI alignment research while 1. slowing down the pace of capability progress to allow more time for developing AI alignment solutions, 1. directing projects towards low "properly X" configurations to allow more time for developing AI alignment solutions. In this paper we explore option 3b. We start with an exploration of "property X", based on the concept of "agents" and related ideas in the literature that considers dangerous, advanced AI systems. Then, we explore a range of policy levers that might steer AI development towards systems that have less of "property X". ## 2 Property X Scoping systems of concern can be imagined as a function of both the problematic attributes within a contained system and the dangers that may arise when that system is introduced into potentially dangerous settings. Here, we draw upon a definition of risk as a function of hazards, exposure, and vulnerabilities. Hazards refer to the concrete attributes of an AI system that might make it intrinsically dangerous. Exposure refers to the context in which the AI system is deployed, or the extent of the area in which the hazard could have an impact. Vulnerability refers to the level of preparedness to avoid and mitigate the hazard. Given our focus on the intrinsic danger of systems that are high in property X, our primary emphasis is on _hazards_. (To narrow our analysis, we exclude systems that are low in property X--even if they could be hazardous, for example, when introduced into dangerous environments with high vulnerability or exposure.) Still, we will later propose that a regulator's job extends beyond scrutinising the isolated characteristics of AI systems; they must also consider the potential uses of such systems, as well as relevant organisations' preparedness to identify and respond to signs of danger. Building upon this definition of risk, we can further compartmentalise AI risks into accidents, misuse, and structure. 
[32] In accidents, an AI system is not working as intended, and this causes harm; in misuse, the operator of an AI system is using it to cause harm to another; in structural harms, the AI system is operating as intended, and the operator does not seek to cause harm, but the Figure 1: **Simplified landscape of future AI systems and risk.** interaction between the system and the world results in unintended harm (e.g. perpetuating existing inequalities or contributing to unemployment). This breakdown separates three factors: the AI system, the operator, and the context. For operator and context, some factors increase risk. For instance, some operators are motivated to cause harm (e.g terrorists, oppressive governments), or at least insufficiently motivated to anticipate and mitigate harms (e.g. profit-maximising companies, narrowly-focused bureaucracies). Furthermore, some contexts have more potential for harm (e.g. defence, critical infrastructure, medicine) or are more sensitive to pre-existing structural problems (e.g. finance, education). Can we find similar properties that make specific AI systems more dangerous, when considered independently from the operator and the context? Several authors have argued that indeed, there are such properties that would make future AI systems more _intrinsically_ dangerous. Early arguments focused on _Instrumental Rationality_: the selection of actions that are assessed to have the highest likelihood of bringing about a certain goal. Views of AI systems as optimising goal-seekers fit well with common frameworks such as reinforcement learning, [33] and have gained traction in thinking about more advanced AI systems through frameworks such as AIXI. [34] This framing raises the concern of _convergent instrumental goals_, [35][36][37] such as shutdown-avoidance and resource acquisition. In the absence of sufficient restrictions on these instrumental goals, a system could lead to catastrophic outcomes. [38] Viewing AI systems as displaying instrumental rationality is linked to seeing them as agents, and several authors have recently argued that properties linked to _agency_ are related to intrinsic danger. [39] Another common framing of intrinsic danger from AI systems focuses on the emergence of _power-seeking behaviour_. This in turn has been linked to properties of _agentic planning_, an ability to formulate a chain of actions in pursuit of a goal, and _strategic awareness_, an understanding of the world through a lens of power and capacity to influence outcomes. [40] Power seeking has been shown to arise naturally under reinforcement learning. [41][42] This is now a recognised concern within leading AGI labs, such as Anthropic, [43] who have reported observing AI systems' sycophantic behaviour and stated preference to not be shut down. [44] The combination of increased AI capabilities, and the emergent power-seeking behaviour, could create a dangerous escalation dynamic, as depicted in Fig. 1. By definition, power-seeking behaviour is always intrinsically motivated to increase the system's capabilities within its accessible domain. On the other hand, more capable systems are likely to include more comprehensive models of the world, and a better modelling of causal relations within it; this might contribute to strategic awareness, which can boost power-seeking tendencies. 
Taken together, we are concerned with the growing prevalence of systems that are both extremely capable and have continued to seek power - trends which could therefore lead to system behaviour that only further develops these traits, potentially leading to existential risks. We believe there is sufficient commonality amongst the cluster of _instrumental rationality_, _agency_, the mix of _agentic planning and strategic awareness_, and similar properties such as _consequentialism_, to mark it as _Property X_, which is strongly linked to potential intrinsic danger from advanced AI systems, such as the pursuit of _convergent instrumental goals_ and the emergence of _power seeking behaviour_. Very likely this is not a single property, but rather a cluster of linked characteristics, which may evolve in time. As yet, this cluster of characteristics does not have a consensus definition, nor is it operationalised in a way that can be directly evaluated and measured. Nonetheless, we believe that subjective expert assignment of "Property X-ness" to different AI systems will show a high degree of inter-rater agreement, and for illustration have listed the authors' assessments of several well-known artefacts in Table 1. Further specification and operationalisation of "Property X" is an active research task, both theoretical and empirical. However, even a vague cluster of properties can be relied on to drive safety-oriented policies, especially on the expectation that in the future we will have a better understanding of this cluster and tools to evaluate specific systems with regards to these properties. The lack of definitional consensus should not hinder policy interventions, which themselves will serve to iteratively test which metrics and indicators are useful for those seeking to limit the development of dangerous AI systems. \begin{table} \begin{tabular}{|p{14.2pt}|p{14.2pt}|p{14.2pt}|} \hline **Technology** & **Authors’ Property** & **Rationale** \\ **AlphaFold** & **None/Low** & No agency, no long-term planning, narrow domain of application. \\ \hline \end{tabular} \end{table} Table 1: Authors’ assessment of degree to which various AI artefacts exhibit Property X ## 3 Positive futures with low-property X systems Several authors have warned that, despite Property X's link to danger, there are strong incentives to develop systems with high Property X: agentic planning and strategic awareness allow systems to have higher autonomy, more generality, and greater impact, all contributing to their economic, military, and R&D potential, especially when facing competition from other AI systems. [45][46][47] The pursuit of systems that combine "human competition" (the goal of achieving human-level competencies in AI systems, including those linked to property X) and "autonomy" (which is directly linked to the agentic aspect of property X) has been described as the dominant technology paradigm of "actually existing AI". [48] Nonetheless, there are positive visions for systems that remain low on property X (beyond the obvious advantage of avoiding extreme risks). 
The authors that describe the current paradigm as "actually existing AI" offer instead a _Collective Intelligence_ vision of AI that focuses on "complementarity" between AI systems and humans, as opposed to competition and the replacement of human intelligence, increased "participation" of both humans and AI systems in collective decisions, as opposed to autonomy, and "mutualism", a vision of decentralisation and heterogeneity as opposed to centralisation of decision making in advanced AI systems. [49] While the authors do not provide a futuristic vision for this kind of technology paradigm, they point at Wikipedia and Taiwan's digital democracy as current technology-enabled collective intelligence platforms that embody these principles, and that on our reading are both promising and low on property X. Another vision for low-property X systems comes from _Human Centred-AI_. [50] In contrast to pursuing _Artificial General Intelligence_, which is characterised as the pursuit of "machine cognition, autonomous agents and commonsense reasoning" (high on Property X), Human Centred AI uses "design processes with human stakeholder participation to create powerful AI-infused supertools, tele-bots, active appliances, and control centers, which ensure human control of ever more potent technologies" (low on Property X). While HCAI is not opposed to autonomy, and even high degrees of autonomy, this autonomy is always coupled with high levels of effective human control; on our reading this means that in practice, this does require giving up on technological artefacts that would have high Property X. This still leaves a wide range of smart tools and intelligent support systems that empower the user to achieve much more than they could before. Finally, we wish to note works that argue that the current AI development trajectory in fact points more in the direction of low property X systems. One such vision is of _Comprehensive AI Services_, which sees increasing generality and autonomy in the _creation_ of novel AI artefacts (in the AI R&D pipeline), but limited autonomy and generality in the artefacts produced by this pipeline. [51] On this vision, a highly general and heavily automated R&D pipeline, that could for example train a wide range of models and autonomously make efficient training decisions, is used to generate a very wide range of services, whether they are domain-specific language models, protein-structure predictors, coding assistants, or autonomous driving agents. While the AI R&D pipeline itself edges towards higher property X, it is one or more steps removed from direct contact with the world, whereas the artefacts produced are lower on property X. A more recent vision is that of _open_ agencies_, which paints a similar picture but now incorporates generally-applicable foundation models, which are themselves non-agentic (and therefore lower on property X), as common interfaces to interact with the ecosystem of AI services and agents, each of which is tailored to a specific domain and therefore lower on property X. [52] What could this look like in practice? For any task that collective human intelligence has already shown an ability to make progress on, we expect that in principle it should be possible to make greater and more rapid progress in combination using the tools of AI systems low on Property X. 
We expect this to include R&D challenges such as material and drug discovery, disease diagnostics and public health monitoring; engineering challenges such as sustainable energy production (including fusion power generation), robust and sustainable food production, and space exploration; and creative domains, including assistance to the generation of visual art, music, text and video. Restrictions on property X would most likely be felt in domains that benefit from very high autonomy and long-term planning, including long-term strategic planning (including financial planning), national security strategy and military operations in hostile environments, and autonomous scientific discovery. We would also be limited in our ability to study human (and "general") intelligence through the study of artefacts. ## 4 Policy interventions to steer away from high property X systems ### Can We Limit the Development of 'Systems of Concern'? In general, existing research has not rigorously considered the possibility that governments might play a substantial role in limiting the creation of AI systems with high property X --an emerging cluster of characteristics by which a system might simulate agency, reasoning, planning, and awareness of its broader environment. To date, such characteristics have been achieved only to a very limited extent in AI systems; and thus have had a limited capacity to lead to harm. However, present advances suggest that such characteristics may be seen to a much more significant degree in frontier AI systems in coming years. Work on concrete mechanisms and policy levers to implement safe and ethical AI [53] has tended to focus on contextual risks (e.g. bias, fairness, security) as opposed to intrinsic risks (e.g. relating to property X). [54][55] The focus of proposed regulation is often at the point of the applied AI product, or product within which AI is used, rather than at the stage of AI research and development. However, the present pace of progress suggests a greater role for governance to play in the development process of AI, particularly for the frontier AI systems that might be most likely to exhibit property X characteristics. As outlined in previous sections, the degree of intrinsic danger posed by an advanced AI system is linked both to its general degree of capability (competence and generality) and to its property X-ness. Here, we first introduce indicators (Table. 2) that might alert regulators to the development of systems with significant capabilities within their jurisdictions. We propose that an evaluation of property X be carried out with respect to systems indicated as high capability, allowing intervention and mitigation of risks. We then proceed to discuss potential frameworks for limiting systems of concern. Owing to a lack of empirical evidence about the most effective policies for regulating the development of intrinsically dangerous AI systems, and given that each site of policy making requires a tailored approach, our goal is to discuss a range of options without reference to any one of them. Future research would benefit from drawing upon insights from other high-stakes policy areas to identify ideal institutional designs, incentives and disincentives, methods of imposing oversight, and resistance that may arise from powerful interest groups. Such research would be immediately applicable to ensuring a safer and more ethical R&D environment for advanced AI systems. 
We note that a number of leading organisations developing large models are already undertaking alignment research, risk assessments and red-teaming internally and with external collaborators, aimed at making their AI systems safe before release. Some of these processes focus on concerns that map closely to the 'Property X' characteristics we describe above. For example, prior to OpenAI's release of GPT-4, an external evaluation by the Alignment Research Centre tested for risky emergent behaviours such as power-seeking behaviour. [56] There are also calls for external auditing from within some of these companies, suggesting a role for governments to establish such auditing bodies. For example, Anthropic note that they "plan to make externally legible commitments [...] to allow an independent, external organisation to evaluate both our model's capabilities and safety". [57] To imagine policy interventions, it is first necessary to consider the challenge of identifying intrinsically dangerous systems in a policy context. For a governance body attempting to limit the spread of hazardous AI systems, it will be relevant to consider the tell-tale signs of concerning research with an approach that allows for early detection and applies equal scrutiny throughout the lifecycle of a system. As outlined in previous sections, we expect intrinsic danger from systems that have high capabilities and are also high on property X. Direct evaluation of property X is therefore likely to be an important part of detecting AI systems of concern. For example, this could take the form of tests during a system's development or after it is deployed; a system's performance can reveal distinct risks and surprising behaviours, such as deception, long-term strategic planning beyond the system's intended scope, or subversion of safeguards and guardrails. The risks presented by such behaviour are even more acute within systems whose outputs are uninterpretable. We expect that elicitation and detection of such behaviours would require novel assessment techniques and continual update, to be developed and maintained by domain specialists. Because evaluation of property X is likely to be relatively intrusive and costly, we believe it would be pragmatic to only subject a small subset of all AI systems to such evaluations. The overall capability of the system could be used to decide which systems undergo further scrutiny, as it is the combination of capability and property X that leads to intrinsic danger. At present, we can tentatively point at several proxies that could indicate that an AI system will display high capabilities, outlined briefly in Table 2. These have been adapted from prior work that considers a more high-dimensional characterisation of the Pareto front of AI improvements, beyond benchmarks and performance metrics. [58] Although the markers proposed here are deliberately geared toward practical applications, it is important to note that entities seeking to mitigate the development of systems of concern would likely still need to continuously adapt, expand, and tailor their own indicators. We can consider the significance of the above factors by imagining how a regulator might employ them to detect a system that should undergo evaluation for property X. 
For instance, today's most advanced AI systems are characterised by the need for very large training compute (although the previously rapid growth in compute may soon taper [60]) and high load (parameter count), which are directly linked (via scaling laws) to higher capabilities, and therefore to a higher potential for harm. At present, the very large amounts of compute required to train frontier AI systems have predominantly been wireled only by a limited number of actors, providing an intuitive starting point to detect potential systems of concern. Relatedly, the physical hardware and infrastructure required to build and operate supercomputers can serve as a marker of advanced AI research, as can the relatively high energy consumption of advanced computing. Similarly, the conditions surrounding the creation of models, as well as safeguards involved in their deployment, will affect their potential for harm. For example, regulators might deem that the greater the quantity of data and time required to train a model, the greater the risks presented. They might also deem that certain ML techniques present a heightened risk. In other instances, dangers could arise from certain types of research that bypass the need for large computing power but still achieve highly sophisticated systems. Regardless of the amount of compute, advanced \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Computer** & **Description** \\ \hline **Load** & **Dimensions of the AI system, e.g. number of parameters.** \\ \hline **Software** & **Algorithms used to train and run the system, and supporting software infrastructure such as machine learning frameworks.** \\ \hline **Physical components** & **Data centres, laboratories, physical hardware, and energy consumption associated with the system.** \\ \hline **Time** & **Amount of time required for training models and running them.** \\ \hline **Degree of human supervision** & **Extent of human involvement, whether in training models or vetting decisions reached by systems.** \\ \hline **Data** & **Amount and type of data used in models.** \\ \hline **Behaviour** & **Extent to which the actions taken by a system support with the expectations of system operators.** \\ \hline **Interpretability and Explainability** & **respectively, the ability to approximate understanding of an opaque system and the ability to inherently understand a system. [59]** \\ \hline \end{tabular} \end{table} Table 2: Potential indicators for the need to assess property systems trained and operated with a low degree of human supervision and lack of guardrails would also be a source of concern. It should be noted that the indicators listed in Table 2 aim to address a system's intrinsic danger and do not deal with the context in which it is deployed. Still, other contextual factors are useful to regulators and are worth mentioning although they are not the focus of this paper. For example, if using large volumes of training data is a possible tripwire, regulators would benefit from keeping a close watch over efforts to gather data at a mass scale. Data gleaned from crowdsourcing campaigns or from individuals' online activity can provide a significant source of training data for large and potentially dangerous models, whose development might be detected sooner if regulators are privy to suspicious data collection activities. In this case, contextual details can strengthen the proposed tripwire. 
An additional category of contextual factors is formed by those that augment a system's intrinsic risks. Such factors might include a system's degree of network connectivity and the resulting possibility that the hazards it creates can be experienced more widely. Another factor is the safety culture of the institution in which the system is developed; whereas concerning behaviours experienced within a risk-averse environment would correctly be noted as a red flag, the same incidents might go unreported by a less aware workforce. ### _Designing Policy Interventions_ To conceptualise the relevance of tripwires for detecting and limiting property X, we consider how a hypothetical nation-state might utilise them. The nation-state imagined is relatively large and powerful, and it houses advanced AI research capacity, including within its private sector, academic institutions, and government entities. It possesses sufficient power to reign in dangerous AI research through domestic policy interventions and can also effectively make use of export controls. It has significant influence within the international community and can mobilise this influence to impact attitudes toward the regulation of AI. The nation-state is an ideal unit of analysis for three reasons. First, although there is value in discussing international frameworks to regulate the development of dangerous AI, international law is limited in its ability to constrain systems of concern without significant interventions at the national level. [61] Second, nation-states can directly regulate corporations, which are decisive actors in the development of systems of concern; while national policy has often proven to be an imperfect instrument for changing the behaviour of increasingly powerful corporations, [62] it remains the most potent tool for doing so and can have ripple effects within a system of globalised commerce. Finally, numerous international frameworks have arisen from the advocacy efforts of individual nation-states whose domestic policies and value systems can shape approaches to global problems. [63] For example, the United States' Atoms for Peace campaign created international momentum that ultimately led to the negotiation of the Nuclear Non-Proliferation Treaty [63]. Thus, by confining our analysis to nation-states, we hope to show what broad changes may look like on a smaller scale, as well as how the benefits of robust national policies can extend beyond the states that enact them. While there is no one-size-fits-all policy solution, any successful intervention must possess a few key attributes to detect and mitigate property X. First, _independent oversight_ can ensure that powerful interest groups do not obfuscate efforts to detect and limit systems high in property X. Typically, the most effective independent bodies are democratic in nature and thus can be contested by citizens and elected officials. [65] With their mandate to serve public interests, such bodies have strong incentives to display expertise and professionalism, impartiality, and scrutiny. [66] Following the designation of an independent agency to perform oversight, that agency would require a direct line of communication with the entities it regulates, perhaps through internal compliance officers. Second, regulators must develop _guidelines for the detection of property X_, in addition to cultivating the _technical expertise_ necessary to discern if such guidelines are upheld. 
Regulators might develop rubrics for examining the level of property X within a system. [67] If a system scores above a certain level on that rubric, the regulator might then require that certain red teaming activities, benchmarks, and audit logs be completed before the institution can proceed with that system. Importantly, the role of expertise--both for regulators and those being regulated--is essential for detecting systems of concern. Without an adequate understanding, for example, of unwanted system behaviours and their potential dangers, it is difficult for system owners and regulators to make good use of rubrics. Third, given the relatively large number of actors who might be subject to scrutiny, the regulator must be able to _effectively operate under conditions of uncertainty and with limited resources_. While not carried out by a single nation-state, the enforcement of nuclear safeguards by the International Atomic Energy Agency (IAEA) provides a relevant example. Each year on a limited budget, a relatively small number of IAEA inspectors are challenged to inspect a large number of nuclear facilities worldwide; since the IAEA cannot reasonably inspect every single nuclear reactor or spent fuel rod, the agency has embraced statistical approaches centred around random sampling and remote surveillance. [68] Even if imperfect and ultimately unable to _prevent_ illicit activities outright, the IAEA's system of inspections has still often succeeded in detecting violations. [69] Similarly, a regulatory body seeking to limit property X might aim to gather as much information as possible with a primary emphasis on early detection for suspicious or concerning activities; doing so might involve the use of similar methods for handling large quantities of data to ease the burden on regulators. Finally, regulators require _enforcement capacity_, including sufficient access to system designers and operators to explore the potential presence of property X. To ensure adherence to guidelines, regulators must be granted access to relevant employees (for interviews) and facilities (for inspection) if concerns arise. A number of arrangements could result in raising concerns: While entities might be required to prove compliance to begin research and/or remain in operation, they could also be subjected to regular and/or random inspections. A more loosely crafted regulatory framework might rely on whistle-blowers to come forward, only investigating concerns if they were raised internally. Alternatively, the interactions with the regulator might be dictated by certain milestones in a system lifecycle, with regular inspections scheduled according to the speed and scope of AI research. Any of the above formulations would depend upon a regulator's ability to speak directly with system owners, potentially interviewing them about concerning behaviour displayed by the system. As a component of enforcement, regulators might also require system owners and operators to undergo mandatory trainings to increase the level of knowledge about property X and the means of avoiding it. ## 5 Conclusion We have outlined a set of potential intrinsic characteristics of advanced AI systems that together we label "Property X". These include characteristics discussed in the AI safety literature such as power-seeking behaviour, instrumental rationality, and strategic awareness that may lead to risky behaviours such as the pursuit of convergent instrumental goals. 
Systems high in Property X may prove to be exceptionally capable and so may be an appealing goal of research; however, we have argued that such systems are highly likely to be dangerous and difficult to align. The research visions outlined in _Collective Intelligence, Human-Centred AI_, and _Comprehensive AI services_ provide compelling alternatives that demonstrate that progress across scientific, economic and societal domains can be supported by AI systems low in Property X. While the set of characteristics that make up Property X are not yet fully specified, it is nonetheless possible to begin envisioning governance regimes that identify and address Property X behaviours in the AI development process. Our proposals are intended as a starting sketch for doing so.
2307.06768
A cosmic microwave background search for fine-structure constant evolution
In some extensions of the standard model of particle physics, the values of the fundamental coupling constants vary in space and time. Some observations of quasars hint at time and spatial variation of the fine structure constant $\alpha$. Here, the Bekenstein-Sandvik-Barrow-Magueijo (BSBM) model (which posits the existence of a scalar field driving evolution in the fundamental electric charge $e$) is tested against quasar and Planck satellite cosmic microwave background (CMB) data. In this model, variations in $e$ are coupled to the matter density through a factor $\zeta_{\rm m}/\omega$, which is related to electromagnetic contributions to nucleon masses, and the energy scale of new physics. Simulations conducted here do not support claims that the electrostatic contribution to $\zeta_{\rm m}$ is completely shielded. Other common approximations used in BSBM field evolution are found to be adequate. Principal components of the CMB data with respect to variations in $\alpha$ are used to obtain constraints of $\zeta_{\rm m}/\omega\lesssim 9.3 \times 10^{-9}$ for a massless field. A forecast anticipating the promise of the Simons Observatory (SO) CMB experiment shows that SO will be sensitive to values of $\zeta_{\rm m}/\omega\geq 2.2 \times 10^{-9}$.
Hurum Tohfa, Jack Crump, Ethan Baker, Luke Hart, Daniel Grin, Madeline Brosius, Jens Chluba
2023-07-13T14:12:59Z
http://arxiv.org/abs/2307.06768v2
# A cosmic microwave background search for fine-structure constant evolution ###### Abstract In some extensions of the standard model of particle physics, the values of the fundamental coupling constants vary in space and time. Some observations of quasars hint at time and spatial variation of the fine structure constant \(\alpha\). Here, the Bekenstein-Sandvik-Barrow-Magueijo (BSBM) model (which posits the existence of a scalar field driving evolution in the fundamental electric charge \(e\)) is tested against quasar and _Planck_ satellite cosmic microwave background (CMB) data. In this model, variations in \(e\) are coupled to the matter density through a factor \(\zeta_{\rm m}/\omega\), which is related to electromagnetic contributions to nucleon masses, and the energy scale of new physics. Simulations conducted here do not support claims that the electrostatic contribution to \(\zeta_{\rm m}\) is completely shielded. Other common approximations used in BSBM field evolution are found to be adequate. Principal components of the CMB data with respect to variations in \(\alpha\) are used to obtain constraints of \(\zeta_{\rm m}/\omega\lesssim 9.3\times 10^{-9}\) for a massless field. A forecast anticipating the promise of the Simons Observatory (SO) CMB experiment shows that SO will be sensitive to values of \(\zeta_{\rm m}/\omega\geq 2.2\times 10^{-9}\). ## I Introduction Thanks to measurements of cosmic microwave background (CMB) anisotropies by the _Planck_ satellite and ground-based efforts like the South-Pole Telescope (SPT) and Atacama Cosmology Telescope (ACT), cosmic acceleration as probed by Type Ia supernovae, and clustering/lensing of galaxies, there is a standard cosmological model. This concordance model has relic-density parameters of \(\Omega_{b}h^{2}=0.0224\pm 0.0001\), \(\Omega_{c}h^{2}=0.1200\pm 0.0012\), and \(\Omega_{\rm DE}=0.6847\pm 0.0073\) (where \(b\) denotes baryons, \(c\) denotes cold dark matter or CDM, and DE represents dark energy) [1; 2; 3; 4]. Using _Planck_ data and future results from nearly cosmic-variance-limited (CVL) CMB polarization experiments (e.g. CMB-S4 and the Simons Observatory or SO), the CMB can also be used to test models of dark-sector contents and interactions. Existing data could even provide evidence for an early dark-energy component [5; 6; 7; 8; 9; 10], which could reconcile tension between late-time supernovae and CMB inferences of the Hubble constant [11; 12]. The evolution of cosmological perturbations probed by the CMB is influenced by the photon diffusion length (see Refs. [13; 14] and citations therein), the redshift of the last scattering surface, and the detailed physics of recombination [15; 16; 17; 18]. The CMB is thus also sensitive to the possibility that the fundamental 'constants' are in fact dynamical rather than constant. The possibility that fundamental parameters like \(e\), \(m_{e}\), \(m_{p}\), \(G\), \(\hbar\), and even the speed of light \(c\) vary in time (or space) was raised by Dirac and others, who noticed that certain combinations of these parameters with units of time were numerically comparable to the age of the universe [19; 20; 21; 22]. They posited that these ratios were equal to the age of the universe at _all_ times, implying a specific time-evolution for the fundamental parameters. Although the simplest realizations of this idea are readily ruled out on anthropic grounds, more general scalar-tensor theories of gravitation [23] and electromagnetism [24] were then developed.
The simplest extension of Maxwell electromagnetism that supports variation in \(e\) [and thus the fine-structure constant \(\alpha=e^{2}/(\hbar c)\)] while recovering the predictions of standard electromagnetism was put forward by Bekenstein [24]. This theory relies on a new scalar \(\psi\), which modulates the Maxwell Lagrangian for electrodynamics and has a Brans-Dicke kinetic term with coupling \(\omega\). This model was extended to include the gravitational interactions of \(\psi\) in Refs. [25; 26]. Subsequent work (see Ref. [27] for a review) generalized the scalar field to a massive one with non-trivial field dependence in \(\omega\) [28; 29], included spatial perturbations [30; 31], and developed more extensive modeling of the theory's dynamics [26; 32; 33; 34]. Broadly, varying fundamental parameters arise in standard-model extensions with extra dimensions (beginning with Kaluza-Klein theory [35] and continuing with string-inspired ideas like the runaway dilaton [36; 37; 38; 39; 40; 41; 42; 43; 44]). Other cases include disformal theories, in which radiation and matter geodesics are given by different metrics [45; 46]. The scalars of these theories could constitute dark energy [47; 48; 49; 50] or dark matter (DM) [51; 52; 53; 54], with novel astronomical and lab phenomenology resulting, such as evolution in the cosmic equation of state, laser interferometry signals, and violations of the weak equivalence principle [55; 27; 56]. Recently, there have been hints of variation in \(\alpha\) from high-resolution spectra of redshift \(z\sim 3\)-7 quasi-stellar objects (QSOs) [57; 58; 59; 60; 61; 62], with disputes about the interpretation of these results [63; 64; 65; 66; 67; 68]. Current/future experimental efforts promise unprecedented sensitivity to \(\alpha\) variations, with the potential to resolve these disputes [69; 70] (e.g., first results with the ESPRESSO spectrograph impose the constraint \(\Delta\alpha/\alpha\sim 10^{-6}\) [71]). Any theory of varying \(\alpha\) will also make a prediction at \(z\simeq 1100\), the epoch of CMB decoupling. The Thomson scattering rate scales as \(\alpha^{2}\), while the hydrogen \(2s\to 1s\) transition rate (the bottleneck for recombination) scales as \(\propto\alpha^{8}\). The redshift and width of the last-scattering surface are influenced by model parameters [72; 73; 74; 13]. CMB temperature/polarization measurements can be used to probe variations in \(\alpha\), as shown by the BOOMERanG/WMAP/_Planck_/SPT/ACT upper limits of Refs. [75; 76; 77; 78; 79; 80; 81; 82; 83] and forecasts of Refs. [84; 85; 86]. Correlations with other fundamental constants (e.g., \(G\)) were considered in Refs. [79; 80; 81; 82; 83]. All these analyses assumed a single non-standard \(\alpha\) value at early times. Evolution of \(G\) was constrained in Refs. [87; 88; 89; 90; 91]. Simply parameterized time-evolution in \(\alpha\) and connections to the Hubble tension were explored in Refs. [92; 93]. Given the range of theoretical possibilities and observational controversy, we leverage model-independent techniques. One useful tool is principal component analysis (PCA), in which the eigenvectors of the information matrix (encoding the data's covariance response to theoretical parameters) are found. PCA may be used to test data for novel physics, even without a compelling model, and has been applied to explore dark matter annihilation, non-standard recombination, and late-time cosmic acceleration [94; 95; 96; 97; 98; 99; 100; 101].
Models with large projections onto these eigenvectors will be best constrained by the data. Any model can be constrained through projection onto principal components (PCs), without the need to rerun a full Monte Carlo Markov Chain (MCMC). PCs can elucidate the epochs driving the constraint. Techniques for probing cosmic recombination with PCA were developed in Ref. [102] and applied in Ref. [103]. In Ref. [104], PCA was applied to obtain the constraint of \(\Delta\alpha/\alpha=0.0010\pm 0.0024\). Causality requires that \(\alpha\) variation occur in space if it occurs in time. The resulting non-Gaussian statistics [105] were used to impose \(\sim 10^{-2}\) level constraints to spatial \(\alpha\) variation [106; 107]. Secondary CMB sight lines to galaxy clusters constrain variation in \(\alpha\) [108] because of the Sunyaev-Zeldovich (SZ) effect [109; 110], although these inferences remain challenging due to relativistic corrections [111]. Large volumes will be mapped by upcoming observations of neutral hydrogen at high redshift [112], probing varying-\(\alpha\) theories [113; 114]. Here, we model the evolution of \(\psi\) and \(\alpha\) dynamically in the BSBM model. We allow for \(\psi\) to have a mass \(m\), motivated by recent work finding that light scalars and pseudoscalars are numerous in string-theory realizations and possibly cosmologically important [115; 116; 117; 118; 119; 120]. We use the PCA decomposition for \(\alpha\) variation and constraints from _Planck_ data in Ref. [104] to test the BSBM model, determining the allowed values for the coupling \(\zeta_{m}/\omega\) (where \(\zeta_{m}\) is the ratio of nuclear electromagnetic to total rest-mass energy) and \(m\). Our constraint (at 95% C.L.) is \(\zeta_{m}/\omega\leq 9.3\times 10^{-9}\) for \(m=0\), with constraints relaxing for \(mc^{2}\gtrsim 1.4\times 10^{-32}\) eV. We thus obtain some of the first constraints to the BSBM model that include the full time-dependence of the model, going back to the recombination era.1 The next decade of CMB measurements will bring an order of magnitude improvement in precision, and the possibility of testing many novel physical scenarios for the dark sector, neutrinos [122; 123], and varying \(\alpha\). We also conduct a forecast for the sensitivity of the ground-based Simons Observatory (SO) [123] to test the BSBM model. We obtain a \(\zeta_{m}/\omega\leq 2.2\times 10^{-9}\) sensitivity forecast. Results for \(m\neq 0\) are presented in the body of the paper. Footnote 1: In the preparation of this work, we became aware of other recent work that applies CMB data to the BSBM model [121]. The two results are complementary, as we allow for the possibility that \(\psi\) is massive, use PCA methods, and obtain forecasts for future CMB experiments. We also obtain QSO prior-free results in our CMB analysis. In the BSBM model, \(\psi\) couples to the trace of the radiation stress-energy tensor (\(\propto E^{2}-B^{2}\)), where \(E\) and \(B\) are electric and magnetic field amplitudes. This trace is non-trivial to calculate (see Refs. [124; 125; 126; 127; 128; 129; 130] for a discussion). Observational probes of BSBM typically assume \(E^{2}-B^{2}\propto\rho_{\rm m}\), the total cosmological matter density, with a proportionality constant \(\zeta_{m}\). Estimates of \(\zeta_{m}\) vary in sign and in magnitude by \(2-3\) decades [124; 125; 126; 127; 128; 129; 130], depending on whether it is dominated by electric or magnetic contributions [130], classical vs.
quantum approaches, and our ignorance of the dark-sector contribution [124; 125; 126; 127; 128; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 130]. We ran a simulation to test recent claims [130] that magnetic contributions to \(\zeta_{m}\) dominate over electrostatic terms. Our results indicate that they do not. We then put constraints on \(\zeta_{m}/\omega\). Most comparisons of the BSBM model to data rely on a non-energy-conserving approximation in the equations of motion. We find that our constraints are robust when one corrects for this approximation. We begin with a discussion of scalar field dynamics and numerical methods for their evolution in Sec. II. We review principal component methods and explain our use of them in Sec. III. We summarize our MCMC modeling techniques and results in Sec. IV, and conclude in Sec. V. In Appendix A, we test the cancellation of electrostatic contributions to \(\zeta_{m}\). In Appendix B, we explore the impact of an energy non-conserving approximation made throughout the literature to solve the BSBM equations of motion, and find it to be negligible. Some further details of how we impose constraints are discussed in Appendix C. ## II BSBM theory In Bekenstein's original formulation of this \(\alpha\) variation theory [24], the scalar field \(\psi\) was coupled to the standard Maxwell Lagrangian, but the coupling to gravity was not included. Later work by Sandvik, Barrow, and Magueijo included this inevitable coupling [25]. In the resulting Bekenstein, Sandvik, Barrow, Magueijo (BSBM) theory [25; 130], the electric charge \(e\) evolves, while Planck's constant \(h\) and the speed of light \(c\) are constant. A real scalar field modulates the Maxwell Lagrangian, causing variations in \(\alpha\): \(e_{0}\to e=e_{0}\epsilon\,(x^{\mu})\). Here \(\epsilon\) is a dimensionless scalar field. Then \(\epsilon\) couples to the electromagnetic gauge field \(A_{\mu}\) in the Lagrangian. Under the usual local gauge transformation \(U(x)=e^{i\theta(x)}\), the electromagnetic action is still invariant with a modified gauge-field transformation, \(\epsilon A_{\mu}\to\epsilon A_{\mu}+\partial_{\mu}\theta(x)\). Recasting the equations in terms of a more standard scalar field \(\psi\) defined by \(\psi\equiv\ln{(\epsilon)}\), the equation of motion for the scalar field is [130]: \[\Box\psi=\frac{2}{\overline{\omega}}e^{-2\psi}\mathcal{L}_{\rm em}, \tag{1}\] where \(\overline{\omega}=\hbar c/l^{2}\) is a coupling constant and \(l\) is some new length scale below which Coulomb's law breaks down. In Planck units, \(\overline{\omega}\) has units [energy]\({}^{2}\), and so we employ the reparameterization \(\overline{\omega}=M_{\rm pl}^{2}\omega\) in terms of the reduced Planck mass \(M_{\rm pl}=1/\sqrt{8\pi G}\) and a dimensionless parameter \(\omega\) that quantifies the amplitude of evolution in \(\alpha\). The standard Maxwell Lagrangian, \(\mathcal{L}_{\rm em}=F_{\mu\nu}F^{\mu\nu}/4\), vanishes for pure radiation (as \(\mathcal{L}_{\rm em}\propto E^{2}-B^{2}=0\), where \(E\) and \(B\) are the electric and magnetic-field vector amplitudes, respectively). It would, however, be excited by plasma-sourced electrostatic and magnetic fields \(\mathbf{E}\) and \(\mathbf{B}\) in the early universe. The approximation \(\mathcal{L}_{\rm em}\simeq\zeta_{m}\rho_{m}\) is used throughout the literature, where \(\zeta_{m}\) is a dimensionless constant quantifying the electromagnetic self-energy of non-relativistic matter.
This replacement warrants justification, and requires a numerical estimate of \(\zeta_{m}\). ### Values of \(\zeta_{m}\) We summarize past results and quantitatively assess claims [130] that prior calculations of \(\zeta_{m}\) were in severe error, using a numerical plasma simulation in Appendix A. The simplest possibility is that the dark sector does not source scalar-field evolution. Straightforward estimates of the baryonic contribution to \(\zeta_{m}\) may be obtained by extrapolating a semi-empirical mass formula for nuclear electromagnetic self-energy to the case of a primordial (hydrogen + helium) plasma [131; 132]: \[E_{\rm Coulomb}=-a_{C}\frac{Z^{2}}{A^{1/3}}. \tag{2}\] In this equation, \(a_{C}\) is an empirically determined constant, \(Z\) is the atomic number of the nucleus, and \(A\) is the mass number of the nucleus. This term represents the decrease in nuclear binding energy that is caused by the electrostatic repulsion between the positively charged protons in the nucleus. Using the fact that \(a_{C}\approx 0.7\) MeV [24], the estimate \(\zeta_{m}\sim 1.3\times 10^{-2}\) was obtained through classical approximations to the nucleus. Late-time nucleosynthesis could lead to additional time-dependence [24; 125]. These estimates are dominated by \(E^{2}\) (rather than \(B^{2}\)). Subsequent estimates modeled quark electromagnetic fields within the nucleus [25], applying the Born term in the Cottingham formula and following the methods of Ref. [124] to obtain \(\zeta_{m}\approx 10^{-4}\). As \(\rho_{m}\) is dominated by dark matter (whose nature is unknown), \(\zeta_{m}\) could be dominated by non-baryonic contributions [25]. One can then have \(-1<\zeta_{m}<1\) [with the bounds saturated for dark matter composed of superconducting strings]. A negative value of \(\zeta_{m}\) is appealing because it can explain QSO results hinting at a decrease of \(\alpha\) with time. Subsequently [130], the author of Ref. [24] solved the modified Maxwell equations of BSBM theory and the static limit of Eq. (1), claiming that (under some approximations) the \(\psi\) field configuration shields out the electrostatic contribution to Eq. (1), leaving only the \(B^{2}\) term. This cancellation is also claimed to allow the BSBM theory to evade terrestrial and solar-system constraints to weak-equivalence principle (WEP) violation [130], although the magnetic term might also lead to detectable WEP violation [126]. More generally, WEP bounds may be satisfied for \(|\zeta_{m}|>10^{-3}\) for the \(\zeta_{m}/\omega\) values that saturate the CMB constraints of Sec. IV, motivating us to continue testing this theory empirically. If this cancellation occurs, the magnetic contributions from baryons dominate, and one can have negative \(\zeta_{m}\) without exotic dark sector contributions of the type discussed in Refs. [25; 51]. Ref. [130] provides estimates of \(\zeta_{m}\), integrating the classical solution for \(B^{2}/8\pi\) outside of the Compton radius of a proton, weighting quantities by the abundances of hydrogen and helium in the universe, and obtaining the estimate \(\zeta_{m}\approx-1.98\times 10^{-5}\). This estimate does not apply the effective field theory methods of Ref. [124]. Accounting more accurately for the composite and quantum mechanical nature of the nucleus, Ref. [126] applies methods introduced in Refs. [133; 134] to compute \(\zeta_{m}\) in terms of expectation values of nuclear electromagnetic density and current operators. Quantum mechanical identities (e.g.
the Thomas-Reiche-Kuhn rule) can be used to rewrite these quantities in terms of the photo-absorption cross section and other empirically measurable properties of nuclei. An estimate of \[\zeta_{m}(A)\approx-\frac{8.60465\times 10^{-6}}{A^{1/3}} \tag{3}\] is obtained [127], although this calculation does not yet apply the fully relativistic field-theory techniques of Ref. [124]. Seeking to test the analytic approximations of Ref. [130], we conducted a simulation of the early-universe plasma to assess if the electrostatic contribution to \(\zeta_{m}\) is shielded. Our methods and results are presented in Appendix A. We find that the term of opposite sign potentially driving cancellation in \(\zeta_{m}\) is orders-of-magnitude smaller than other terms, and thus that electrostatic cancellation does not occur. Further investigation is needed, but we recommend the use of \(\zeta_{m}\approx 10^{-4}\) as a 'standard' value for the BSBM variant in which \(\psi\) is uncoupled from the dark sector. Broadly, there are two logical possibilities. In one case, the only new physics relevant for varying \(\alpha\) is the \(\alpha\) variation itself, with no non-minimal coupling of dark matter to the new scalar; in that case, \(\zeta_{m}\) is sourced only by standard electromagnetism. Properly computing \(\zeta_{m}\) is then important because it gives us access to \(\overline{\omega}\) through tests of varying \(\alpha\), and thus to \(\sqrt{\overline{\omega}}\), a new energy scale of interest, or, put another way, the (not-quite Planckian) length scale \(l=\hbar c/\sqrt{\overline{\omega}}\) at which Coulomb's law breaks down due to interactions of standard-model fields with a novel scalar. A complete calculation of the baryonic \(\zeta_{m}\) is of vital interest in that scenario, especially if one wishes to convert the limits of Sec. IV into limits to the length scale at which Coulomb's law breaks down, or equivalently the energy scale \(\sqrt{\overline{\omega}}\) of BSBM physics. In the other case, \(\zeta_{m}M_{\rm pl}^{2}/\overline{\omega}\) is a single effective (unknown) dimensionless coupling constant (e.g. \(g\) in a dilaton warp factor of the form \(e^{-g\psi}\) in a non-minimal coupling term of a Lagrangian) to be determined empirically or predicted in an effective dark sector theory, as discussed extensively in Ref. [51]. There, values of \(\zeta_{m}M_{\rm pl}^{2}/\overline{\omega}=-\sqrt{2}/4\), \(1/2\), \(0.05\), and \(-1/\sqrt{16\Omega+24}\) are obtained, for the string dilaton, supersymmetric Bekenstein-Magueijo model, gaugino-driven modulus, and Brans-Dicke electromagnetism model variants, respectively. Here \(\Omega\) is the usual Brans-Dicke coupling parameter. For the remainder of this paper, we treat \(\zeta_{m}M_{\rm pl}^{2}/\overline{\omega}\) as a parameter to be empirically determined from cosmological data. ### Scalar field dynamics The evolution of the scalar field from Eq. (1) can then be written \[\ddot{\psi}+3H\dot{\psi}=-\frac{2}{\overline{\omega}}e^{-2\psi}\zeta_{m}\rho_{m}. \tag{4}\] Assuming a spatially flat, homogeneous, and isotropic universe, we write Eq.
(4) as a function of redshift \(z\) using the standard Hubble parameter definition \(H\equiv\dot{a}/a\), where \(H=H_{0}\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{r}(1+z)^{4}+\Omega_{\Lambda}}\), and the standard relation between scale factor \(a\) and \(z\), \(a=1/(1+z)\): \[\frac{d^{2}\psi}{dz^{2}} - \frac{d\psi}{dz}\left\{\frac{2}{(1+z)}-\frac{d\ln E(z)}{dz}\right\}+\frac{\tilde{m}^{2}\psi}{(1+z)^{2}\left[\Omega_{m}(1+z)^{3}+\Omega_{r}(1+z)^{4}+\Omega_{\Lambda}\right]} \tag{5}\] \[= -\frac{6\zeta_{m}}{\omega}\frac{\Omega_{m}(1+z)}{\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{r}(1+z)^{4}+\Omega_{\Lambda}}}e^{-2\psi},\] where \(E(z)\equiv H(z)/H_{0}\). Here we also allow a mass \(m\) for the scalar field to allow a more general set of dynamical variation models for \(\alpha\). In the Klein-Gordon equation with MKS units, one has a term \((mc^{2}/\hbar)^{2}\psi\). After rewriting the Klein-Gordon equation through the preceding series of transformations, it is straightforward to see that \(\tilde{m}=mc^{2}/(\hbar H_{0})=mc^{2}/(2.13\times 10^{-33}~{}h~{}{\rm eV})\), where \(h\) here is the usual dimensionless Hubble constant. For the analysis in the main body of this paper, we have assumed the standard \(\rho_{m}\propto a^{-3}\) scaling. As discussed in Appendix B, this does not self-consistently allow for energy conservation under the \(\mathcal{L}_{\rm em}\rightarrow\zeta_{m}\rho_{m}\) approximation. We assess this issue quantitatively in Appendix B, and find that the resulting correction to our field evolution, observables, and constraints is negligible. The coefficient on the RHS of Eq. (5) is defined relative to the reduced Planck mass and thus appears different from that shown in some other work (e.g. Ref. [121]) on this topic. We treat the scalar-field coupling \(\zeta_{m}/\omega\) and \(\tilde{m}\) as free parameters to be empirically constrained. The scalar field \(\psi\) is related to \(\alpha\) by \(\alpha=e^{2\psi}e_{0}^{2}/(\hbar c)\), implying that \(\Delta\alpha/\alpha\simeq 2\psi\) for small \(\psi\). We solved Eq. (5) going from \(z=0\) to higher values, using an \(8^{\rm th}\)-order Runge-Kutta method [135] with 80000 linear steps in \(z\). The number of steps was increased until the relative fractional numerical error in \(\Delta\alpha/\alpha\) was smaller than \(\sim 10^{-7}\), well below the fractional \(\alpha\) variation (\(\sim 10^{-5}\)) implied by some QSO measurements [25]. Note that this calculation itself does not depend on the QSO measurement. All the high-\(z\) behavior is determined by Eq. (5), along with the chosen values of \(\zeta_{m}/\omega\), \(\psi(z=0)\) and \(\psi^{\prime}(z=0)\). Here the \({}^{\prime}\) denotes a derivative with respect to \(z\). To verify the stability and accuracy of this method, we ran a test in which we evolved our numerical solver backward in time to high \(z\) and used \(\psi(z_{\rm high})\) and \(\psi^{\prime}(z_{\rm high})\) as the initial conditions for a forward integration in time (decreasing \(z\)), comparing \(z=0\) and intermediate values of \(\psi\) and \(\psi^{\prime}\) with the results obtained from the high-to-low \(z\) evolution. We recovered the desired fractional \(\sim 10^{-7}\) precision in \(\Delta\alpha/\alpha\). Assuming that \(\psi=0\) and \(\psi^{\prime}=0\) at \(z=0\) (needed for consistency with present-day lab constraints), we show the resulting cosmological \(\alpha\) variation as a function of \(z\) in Fig. 1 for a variety of \(\zeta_{m}\) and \(\tilde{m}\) values. Ref.
[27] summarizes Keck HIRES QSO observations hinting at \(\alpha\) variation. There, a slightly different convention for BSBM constants is used. Effectively, they constrain \(\tilde{\zeta}=8\pi\zeta_{m}/\omega\), obtaining \(\tilde{\zeta}\leq 3.7\times 10^{-6}\) at 95% C.L., i.e., \(\zeta_{m}/\omega\sim 10^{-7}\). In Fig. 1, we see that if these constraints are saturated, one expects \(\Delta\alpha/\alpha\sim 10^{-2}\) at the recombination epoch, well within reach of the CMB's sensitivity to \(\alpha\) variation [75; 76; 77; 78; 79; 80; 81; 82; 83; 93]. ## III Principal Component Analysis One powerful approach for probing non-standard physics is principal component analysis (PCA), which eschews a specific model and instead determines a non-parametric family of template functions. These principal components (PCs) are data-driven models that capture the variance between observations and a fiducial (or best-fit) model. PCA has already been used to great effect to test models of dark energy [as parameterized by its equation-of-state parameter \(w(z)\)] [95; 100; 101], the cosmic reionization history [96; 97], non-standard cosmic recombination [102], as well as more exotic physics like dark-matter decay and annihilation [99]. In this work, we use the \(\alpha\)-variation PCs already determined in prior work by some of us [93; 103] to probe time-varying \(\alpha\), setting the stage for the work described here, as well as future analysis of a broad family of theoretical models. Full details of the techniques used to apply and constrain the PCs can be found in Refs. [93; 103], but we briefly review the technique below. Principal components (PCs) are obtained by diagonalizing the Fisher information matrix [136; 137]: \[F_{ij}=\left\langle\frac{\partial^{2}\mathcal{L}}{\partial\theta_{i}\partial\theta_{j}}\right\rangle, \tag{6}\] where \(\mathcal{L}\) is the log-likelihood function of a data set or simulation given values of all model parameters \(\theta_{i}\) evaluated at their fiducial values. Schematically, the parameter vector \(\theta=\{\mathbf{p},\mathbf{q}\}\), where \(\mathbf{p}\) denotes fiducial model parameters and \(\mathbf{q}\) denotes the expansion coefficients of non-standard deviations from the fiducial model for quantities (e.g. \(\alpha\)) usually treated as constant. For our case, this means that \[\frac{\Delta\alpha\left(z\right)}{\alpha_{0}}=\sum_{i}q_{i}f_{i}(z), \tag{7}\] where the basis functions \(f_{i}\) are some (complete, but not necessarily orthogonal) set of smooth basis functions centered at some set of redshifts \(z_{i}\) and \(q_{i}\) are expansion coefficients. The vector \(\mathbf{p}=\left\{A_{s},n_{s},\Omega_{b}h^{2},\Omega_{c}h^{2},\tau_{\rm reion},H_{0}\right\}\) contains the standard cosmological parameters of the dimensionless amplitude of the primordial perturbation power spectrum, its spectral index, the relic baryon density, cold dark matter (CDM) density, optical depth to reionization, and Hubble constant, respectively.
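As a concrete illustration of how such \(\Delta\alpha(z)/\alpha_{0}\) histories can be generated, the following minimal sketch integrates the background-field equation, Eq. (5), of Sec. II. The density parameters and the coupling values passed in `args` are illustrative assumptions for this sketch, not the paper's exact inputs; `DOP853` is an eighth-order Runge-Kutta scheme, matching the integration order quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative flat-LCDM density parameters (assumptions, not the paper's values)
Om, Or, OL = 0.31, 9.0e-5, 0.69

def E2(z):
    """Dimensionless H^2(z)/H_0^2 for a flat LCDM background."""
    return Om * (1 + z)**3 + Or * (1 + z)**4 + OL

def dlnE_dz(z):
    return (3 * Om * (1 + z)**2 + 4 * Or * (1 + z)**3) / (2 * E2(z))

def rhs(z, y, zeta_over_omega, m_tilde):
    """Eq. (5) recast as a first-order system y = (psi, dpsi/dz)."""
    psi, dpsi = y
    d2psi = (dpsi * (2 / (1 + z) - dlnE_dz(z))
             - m_tilde**2 * psi / ((1 + z)**2 * E2(z))
             - 6 * zeta_over_omega * Om * (1 + z) * np.exp(-2 * psi) / np.sqrt(E2(z)))
    return [dpsi, d2psi]

# psi = psi' = 0 at z = 0 (present-day lab constraints); integrate past decoupling
sol = solve_ivp(rhs, (0.0, 1200.0), [0.0, 0.0], args=(1e-8, 0.0),
                method="DOP853", rtol=1e-10, atol=1e-14, dense_output=True)
dalpha_over_alpha_dec = 2.0 * sol.sol(1100.0)[0]  # Delta alpha/alpha ~ 2 psi
```

A history produced this way can then be expanded in the basis of Eq. (7) and constrained as described below.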
For a Gaussian likelihood, the CMB Fisher matrix is given by \[F_{ij}=\sum_{\ell}f_{\rm sky}\left(\frac{2\ell+1}{2}\right)\frac{\partial\mathbf{C}_{\ell}}{\partial\theta_{i}}\mathbf{\Sigma}_{\ell}^{-1}\frac{\partial\mathbf{C}_{\ell}}{\partial\theta_{j}}, \tag{8}\] where \(\mathbf{C}_{\ell}=\left\{C_{\ell}^{\rm TT},C_{\ell}^{\rm EE},C_{\ell}^{\rm TE}\right\}\) is the theoretically predicted set of CMB (temperature/polarization auto, and cross) power spectra, \(\mathbf{\Sigma}_{\ell}\) is the covariance matrix of observationally estimated CMB power spectra at multipole index \(\ell\), including the effect of instrumental noise and cosmic variance, and \(f_{\rm sky}\) is the fraction of sky covered by an experiment. Figure 1: Evolution of \(\alpha\) variation as a function of \(z\) for a range of BSBM model parameters. Using a set of Gaussian basis functions, a modified version of the camb [138] code interfaced with the CosmoRec recombination code [139], the _Planck_ 2018 likelihood function, and the usual analytic approximations for the instrumental properties of the Simons Observatory (or SO, an ongoing ground-based CMB experiment), a set of PCs for time-varying \(\alpha\) was obtained in Ref. [104]. camb was run including the impact of gravitational lensing [140], which smooths the high-\(\ell\) anisotropies. We apply these PCs here to test the BSBM model. Several PCs are shown for the SO case in Fig. 2. We see that the BSBM model is relatively featureless compared to the PCs in the \(z\) range of interest. Models with more complicated potential energy functions (which likely begin coherent oscillation earlier) may show an overlap of large-amplitude \(\alpha\) variation and oscillation, leading to more interesting interaction with the PCs. We will explore this possibility more in further work. In terms of the PCs \(E_{i}(z)\), any model may be expressed as \[\frac{\Delta\alpha(z)}{\alpha_{0}}=\sum_{i}\rho_{{\rm M},i}E_{i}(z), \tag{9}\] where the PCs can be expressed as linear combinations of the original basis functions \[E_{i}(z)=\sum_{j}e_{ij}f_{j}(z), \tag{10}\] where \(e_{ij}\) denotes the \(j^{\rm th}\) basis-component of the \(i^{\rm th}\) Fisher-matrix eigenvector \({\bf e}_{i}\). For sufficiently dense basis sets, the PCs should themselves be numerically convergent (checked for the \(\alpha\)-variation case in Ref. [103]) and basis-independent (checked for variations in the cosmic recombination history in Ref. [102]). The projection amplitudes for any specific model realization are given by \[\rho_{{\rm M},i}=\int\frac{\Delta\alpha(z)}{\alpha_{0}}\times E_{i}(z)dz, \tag{11}\] and may be used to accurately re-express the variation around the fiducial model as long as it is small. These can be used to construct a \(\chi^{2}\)-diagnostic with the best-fit (from the data) values for the model-expansion coefficients in the PC basis, \(\rho_{{\rm D},i}\): \[\chi^{2}=\sum_{ij}\left(\rho_{{\rm M},i}-\rho_{{\rm D},i}\right){\cal F}_{ij}\left(\rho_{{\rm M},j}-\rho_{{\rm D},j}\right). \tag{12}\] Here the sum is over PC indices. One advantage of this method is that even if the likelihood with respect to model (e.g. BSBM) parameters is non-Gaussian, the likelihood with respect to PC amplitudes is still very close to Gaussian. Here the covariance is given by \[\sigma_{\rho_{{\rm M},i}}=\sqrt{\left({\cal F}^{-1}\right)_{ii}}\simeq\frac{1}{\sqrt{\lambda_{i}}}, \tag{13}\] where \({\cal F}\) is the (nearly-diagonal) \(\alpha\)-variation Fisher matrix.
The quantity \(\lambda_{i}\) is the \(i^{\rm th}\) eigenvalue of the \(\alpha\)-variation Fisher matrix. The best-fit values and \({\cal F}\) were originally found through a full MCMC in which cosmological parameters were simultaneously varied with PC amplitudes [104]. Our PCs are the eXMs (post-marginalization) PCs of Ref. [104]. At the Fisher level, expanding around the fiducial model, parameter changes (from \(\rho\rightarrow\zeta_{m}/\omega\)) commute with cosmological parameter marginalization, and so we do not expect our constraints to get less stringent due to any neglected covariances. Nonetheless, in future work, we will more fully account for these covariances by using the original samples from the MCMC chain to construct a kernel-density (KD) likelihood in which PC amplitudes and cosmological parameters can be simultaneously varied, as in e.g., Ref. [141]. To construct our _Planck_ posterior probability for BSBM model coefficients, we use Eqs. (11)-(12) and the usual \({\cal L}\propto e^{-\chi^{2}/2}\times\pi\). We use flat priors \(\pi\) that allow the parameter ranges \(-8\leq\log_{10}(\tilde{m})\leq 1\) and \(-10^{-2}\leq\zeta_{m}/\omega\leq 10^{-2}\). We use the best-fit PC amplitudes and errors (obtained from MCMC runs) from Ref. [93]. We computed \(\Delta\alpha(z)/\alpha\) using the method discussed in Sec. II. To conduct forecasts for SO, we built a mock likelihood, assuming that the fiducial model is true (e.g. that \(\rho_{{\rm D},i}=0\)), and used Fisher-forecast errors on the PC amplitudes. For our _Planck_ analysis, 3 PCs were used, while 10 PCs were included for SO, to allow for its higher information content. It can be helpful to assess the relative information content and utility of different PCs, and to determine how many are truly needed to properly capture a data set. One useful tool is the signal-to-noise (SNR) contribution from each mode, which is \[(S/N)_{i}=\sqrt{\lambda_{i}\rho_{i}^{2}}. \tag{14}\] Another interesting quantity in PCA is the risk factor [94; 95; 98], defined via \[\sigma^{2}\left[\alpha\left(z_{j}\right)\right] = \left(\sum_{i}^{N_{\rm PC}}\frac{e_{i}^{2}\left(z_{j}\right)}{\lambda_{i}}\right), \tag{15}\] \[b\left(z_{j}\right) = \frac{\Delta\alpha\left(z_{j}\right)}{\alpha}-\sum_{i}^{N_{\rm PC}}\left[\rho_{i}E_{i}\left(z_{j}\right)\right], \tag{16}\] \[{\rm Risk}[N_{\rm PC}] = \sum_{j}\left\{b^{2}\left(z_{j}\right)+\sigma^{2}\left[\alpha\left(z_{j}\right)\right]\right\}, \tag{17}\] where \(N_{\rm PC}\) is the number of PCs used to test a model. Here \(b(z_{j})\) is defined to be the bias in \(\Delta\alpha(z)/\alpha\) induced by using an incomplete set of PCs, which competes with the variance (which decreases when PCs are filtered out to reduce the error in the data if the fiducial model is actually true). This quantity is model-dependent and minimized when an optimal number of PCs is chosen to test a specific model. In Sec. IV, we compute this quantity to assess the impact of different PCs on our BSBM model constraints. ## IV Data analysis and results Using the _Planck_ 2018 \(\alpha\)-variation likelihood described in Sec. III and our scalar-field integrator, we found best-fit values of \(\zeta_{m}/\omega\) for the \(m=0\) case. We then set a broad parameter range of \(-0.01\leq\zeta_{m}/\omega\leq 0.01\) and \(-8\leq\log_{10}\left(\tilde{m}\right)\leq 1\) and ran an MCMC simulation to determine the allowed \(\zeta_{m}/\omega\) and \(\log_{10}(\tilde{m})\) parameter space.
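Schematically, each likelihood evaluation in these runs projects the model's \(\Delta\alpha(z)/\alpha\) onto the PCs [Eq. (11)] and forms the Gaussian \(\chi^{2}\) of Eq. (12). A minimal sketch of those two steps follows; the array names (`z_grid`, `E_pc`, `rho_data`, `fisher`) are illustrative assumptions standing in for the precomputed PC templates and data products of Refs. [93; 104].

```python
import numpy as np

def pc_amplitudes(dalpha_over_alpha, E_pc, z_grid):
    """Project a model Delta alpha(z)/alpha onto the PCs, Eq. (11).

    dalpha_over_alpha : model history sampled on z_grid, shape (N_z,)
    E_pc              : PC templates E_i(z) on the same grid, shape (N_pc, N_z)
    """
    return np.trapz(dalpha_over_alpha[None, :] * E_pc, z_grid, axis=1)

def chi2(rho_model, rho_data, fisher):
    """Gaussian chi^2 in PC-amplitude space, Eq. (12); `fisher` is the
    (inverse-covariance) PC-space Fisher matrix."""
    d = rho_model - rho_data
    return d @ fisher @ d
```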
We used the emcee package, which applies Goodman & Weare's affine-invariant sampler to run Monte Carlo Markov Chains (MCMCs), and checked for convergence using the auto-correlation time [142]. For the CMB analysis, we established two-dimensional MCMC convergence as follows. For the _Planck_ analysis, we ran 32 chains for 12000 samples. We obtained a correlation length of \(\sim 150\) samples for \(\zeta_{m}/\omega\) and \(\sim 60\) samples for \(\log_{10}(\tilde{m})\), less than \(12000/50=240\). For the SO analysis, we ran 32 chains for 10000 samples. We obtained an auto-correlation length of \(\sim 10\) samples for \(\zeta_{m}/\omega\) and \(\sim 58\) for \(\log_{10}(\tilde{m})\). Both are less than \(10000/50=200\), and so all these CMB chains are converged. For our analysis, we used a burn-in fraction of 0.3 throughout. Visualizations and confidence intervals were generated with the GetDist package [143]. The results are shown in Fig. 3. As we can see, the constraint becomes less stringent near \(\log_{10}(\tilde{m})\simeq 0\). The predicted decoupling-era \(\alpha\) variation of parameter sets beyond this threshold is undetectable. To be sure that our allowed parameter space [which is highly non-Gaussian in \(\log_{10}(\tilde{m})\)] is robust, we did a series of quasi-frequentist one-dimensional runs at a discrete grid of \(m\) values near and beyond the transition point, as also done to obtain constraints to ultra-light axions (ULAs) in Ref. [118]. We made sure that the range used for \(\log_{10}(\tilde{m})\) overlapped with that used in the two-dimensional MCMCs, finding agreement that validates both sets of constraints (the constraints from overlapping one-dimensional simulations are superimposed with stars in Fig. 3). The results of the one-dimensional MCMCs are shown in Tables 1-3, with CMB posteriors at individual masses shown in Figs. 14-15 of Appendix C. Summarizing our constraints, we see that at 95% C.L., _Planck_ data impose the constraint \(\zeta_{m}/\omega\leq 9.3\times 10^{-9}\), with future SO data offering an order-of-magnitude improvement in sensitivity. This constraint relaxes nearly completely for \(\log_{10}(\tilde{m})\geq 1\). For SO, we find that at the lowest \(\tilde{m}\) values, SO would be sensitive to values of \(\zeta_{m}/\omega\simeq 2.2\times 10^{-9}\) and higher, as can be seen separately in Fig. 4. It is interesting to examine the change induced in CMB observables for parameter values saturating our constraints in order to understand what features drive our sensitivity to the BSBM model. We have \(\Delta C^{\rm XY}_{\ell,j}\) for each of the PCs, obtained via \[\frac{dC^{\rm XY}_{\ell,j}}{d\rho_{j}}=\sum_{i}\frac{dC^{\rm XY}_{\ell}}{dq_{i}}\frac{dq_{i}}{d\rho_{j}}=\sum_{i}\frac{dC^{\rm XY}_{\ell}}{dq_{i}}\int dz\,E_{j}(z)f_{i}(z), \tag{18}\] where \(X,Y\in\{{\rm TT},{\rm EE},{\rm TE}\}\). We then use these expressions to calculate the BSBM-induced change to observables, applying the fact that \[\Delta C^{XY}_{\ell,{\rm BSBM}}=\sum_{j=1}^{{\rm N}_{pc}}\rho_{{\rm BSBM},j}\frac{dC^{\rm XY}_{\ell,j}}{d\rho_{j}}. \tag{19}\] The changes to \(C_{\ell}\) for _Planck_ best-fit values of \(\zeta_{m}/\omega\), as well as 68.5% and 95% C.L. constraint-saturating values, are shown in Fig. 5, along with the same quantities for SO. For this plot we used \(\log_{10}(\tilde{m})=-3.0\). The changes are normalized to the cosmic variance per multipole \[\sigma_{C^{\rm XX}_{\ell}}=\sqrt{\frac{2}{2\ell+1}}C^{\rm XX}_{\ell} \tag{20}\] of the fiducial model.
To avoid spikes near zero crossings of TE, we use the usual convention that \[\sigma_{C^{\rm TE}_{\ell}}=\sqrt{\frac{2}{2\ell+1}}\sqrt{\left(C^{\rm TE}_{\ell}\right)^{2}+C^{\rm TT}_{\ell}C^{\rm EE}_{\ell}}. \tag{21}\] We see that in temperature, the dominant effect is a decrease in high-\(\ell\) anisotropies. This corresponds to a shift of the diffusion damping tail to lower \(\ell\) (larger angular scales). Larger positive values of \(\zeta_{m}/\omega\) correspond to larger values of \(\alpha\) in the past, more efficient scattering, later decoupling, and a surface of last-scattering closer to the observer, driving features to lower \(\ell\). In temperature, this geometric effect dominates over higher scattering rates yielding lower diffusion-damping lengths (which would enhance rather than depress low-\(\ell\) anisotropies). Higher values of \(\zeta_{m}/\omega\) mean larger values of \(\alpha\) in the past. This means that Thomson scattering rates were higher, and \(E\)-mode polarization anisotropies were generated more efficiently at multipoles \(\ell\) below the Silk damping scale. Figure 3: 68.5% (dark) and 95% C.L. (light) contours for \(\zeta_{m}/\omega\) and \(\tilde{m}\) using _Planck_ 2018 data. Overlaid are forecasts for upcoming SO data, assuming forecast SO error bars on principal component amplitudes and a fiducial value \(\zeta_{m}/\omega=0\). 5-sided red (blue) stars show _Planck_ (SO) results from the one-dimensional MCMCs described in Sec. IV. Figure 4: 68.5% (dark) and 95% C.L. (light) contours for \(\zeta_{m}/\omega\) and \(\tilde{m}\) using our SO forecast. Overlaid are forecasts for upcoming SO data, assuming forecast SO error bars on principal component amplitudes and a fiducial value \(\zeta_{m}/\omega=0\). 5-sided stars show SO forecasts from the one-dimensional MCMCs described in Sec. IV. As a check, we also used emcee with a QSO data set and error bars shown in Fig. 6 to probe the BSBM parameter space. We used the \(\chi^{2}\) between the model BSBM variation \(\Delta\alpha_{\rm M}(z)\) (for any given values for \(m\) and \(\zeta_{m}/\omega\)) and the reported variation inferred from QSO data \(\Delta\alpha_{\rm D}(z)\): \[\chi^{2}=\sum_{i}\frac{\left[\Delta\alpha_{\rm M}(z_{i})-\Delta\alpha_{\rm D}(z_{i})\right]^{2}}{\sigma_{i}^{2}}. \tag{22}\] The QSO redshifts are denoted \(z_{i}\) and the measurement errors in \(\Delta\alpha_{\rm D}(z)\) are denoted \(\sigma_{i}\). For the QSO analysis, we used 32 one-dimensional MCMC chains of 2000 samples each, for each \(\tilde{m}\) value. The auto-correlation length was \(\sim 17\) samples, far less than \(2000/50=40\), and so the chains are converged. Our reanalysis of the QSO data summarized in Ref. [27] (mostly from the Keck HIRES spectrograph) yields a 95% C.L. limit \(\zeta_{m}/\omega\leq 2.8\times 10^{-7}\) when \(\log_{10}(\tilde{m})=-5\).
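All of the MCMC runs above share the same sampler setup, sketched minimally below. Here `chi2_of` is a hypothetical wrapper around the field integration and PC projection sketched earlier [or Eq. (22) for the QSO case], not a function from any of the cited packages; the initial-guess ranges are illustrative.

```python
import numpy as np
import emcee

def log_prior(theta):
    zeta_over_omega, log10_m = theta
    if -1e-2 <= zeta_over_omega <= 1e-2 and -8 <= log10_m <= 1:
        return 0.0   # flat prior inside the quoted ranges
    return -np.inf

def log_posterior(theta):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp - 0.5 * chi2_of(theta)   # L ~ exp(-chi^2/2) * prior

nwalkers, ndim, nsteps = 32, 2, 12000
p0 = np.random.uniform([-1e-3, -7.0], [1e-3, 0.0], size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, nsteps, progress=True)
tau = sampler.get_autocorr_time(tol=0)                             # convergence check
samples = sampler.get_chain(discard=int(0.3 * nsteps), flat=True)  # 0.3 burn-in
```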
Translating into the different normalization of BSBM couplings used in Ref. [27] (\(\tilde{\zeta}=8\pi\zeta_{m}/\omega\)), this implies \(\tilde{\zeta}\leq 7\times 10^{-6}\), consistent with the limits given there. QSO results for higher \(\tilde{m}\) are given in Table 3 and Appendix C. The _Planck_ limits to BSBM parameters presented in this work (alone, without a QSO prior) are thus tighter than those imposed by the HIRES data. \begin{table} \begin{tabular}{|c|c|} \hline \(\log_{10}\tilde{m}\) & (\(\zeta_{m}/\omega\)) (Planck 2018) \\ \hline 0.3 & \((-4.3\pm 6.7)\times 10^{-9}\) \\ 0.5 & \((-6.3\pm 9.1)\times 10^{-9}\) \\ 0.6 & \((-0.8\pm 1.1)\times 10^{-8}\) \\ 0.8 & \((-1.4\pm 2.3)\times 10^{-8}\) \\ \hline \hline 1.3 & \((-4.9\pm 7.1)\times 10^{-8}\) \\ 1.4 & \((-6.1\pm 8.8)\times 10^{-8}\) \\ 1.6 & \((-0.9\pm 1.4)\times 10^{-7}\) \\ 1.8 & \((-1.5\pm 2.2)\times 10^{-7}\) \\ 3 & Unconstrained \\ \hline \end{tabular} \end{table} Table 1: Constraints on \(\zeta_{m}/\omega\) at different masses from the _Planck_ 2018 analysis, using one-dimensional emcee runs and a fixed set of \(\tilde{m}\) values. Figure 5: _Top row:_ Fractional change \(\Delta C_{\ell}/\sigma_{C_{\ell}}\) for _Planck_ 2018 best-fit values, and for values saturating 68.5% C.L. and 95% C.L. constraints to \(\zeta_{m}/\omega\), all for \(\log_{10}(\tilde{m})=-3.0\). Here \(\sigma_{C_{\ell}}\) is the cosmic variance per multipole. _Bottom row:_ Same quantity, now assuming models that saturate SO error bars (themselves determined using the fiducial model). Figure 6: Compilation of possible \(\alpha\) variation as a function of time, inferred from analysis of QSO spectra, from Ref. [27]. Recently, extremely precise constraints to \(\alpha\) variation have been obtained with the ESPRESSO spectrograph at the European Southern Observatory (ESO) and applied to test the BSBM and related models [121; 144]. That work includes an external constraint imposed from the CMB, but is driven by the QSO results. A 95% C.L. limit of \(\tilde{\zeta}\leq 1.7\times 10^{-8}\) is given there, equivalent to \(\zeta_{m}/\omega\leq 6.7\times 10^{-10}\). Our SO forecasts predict a sensitivity level \(\sim 1\) order of magnitude larger than this, and so it stands to reason that future CMB experimental efforts (e.g. S4, HD) could be competitive with lower-\(z\) probes of \(\alpha\) variation in testing the BSBM model and related ideas. More broadly, this is an independent technique, depending on different physics, with different systematics, and evidence for the model at high redshift could be reconciled with low-redshift upper limits in the context of a theory with different time evolution than initially assumed (the same could be said for a high-redshift upper limit in conflict with potential low-redshift evidence). In future work, we will investigate the power of combining QSO and CMB data sets. We have developed a version of the Boltzmann code class which evolves the spatial fluctuations in the scalar field of the Bekenstein model. Our code includes both gravitational effects and non-minimal coupling of \(\psi\) to DM. While we are exploring the possible impact of this _spatial_ fluctuation on CMB observables, we have already found that the predicted angular spectrum of \(\alpha\) variations at decoupling is blue. The observable impact of these fluctuations is currently only computable within a separate-universe limit (see Ref.
[145] for a discussion) and produces a spatial variation (on scales that are superhorizon at recombination) in \(\alpha\) with a root-mean-squared value at the \(\sim 10^{-5}\) fractional level for values \(\zeta_{m}/\omega=1\). This signal is small, but the correlation with the underlying dark-matter density field (and possible early-universe quantum contributions to \(\psi\) fluctuations) could still induce an observable signal through non-Gaussian signatures in the CMB along the lines described/sought in Refs. [105; 106; 146]. A more detailed discussion of the results and methods of those efforts is beyond the scope of this work and will be presented in a future manuscript [147]. The small amplitude of the signal (due to the causal growth of \(\psi\) fluctuations and the smallness of primordial fluctuations) is consistent with predictions of Ref. [33]. To understand which PCs are driving constraints, it is interesting to examine the SNR of each mode for parameter values saturating the 95% C.L. BSBM model constraints for \(\tilde{m}=0\) with _Planck_ data, applying Eq. (14). These quantities are shown in Fig. 7. We see that the first 2 PCs have significant constraining power for _Planck_, while 3 PCs of SO data will be constraining. Unsurprisingly, SO will generally have a much higher SNR for the BSBM model. We also compute the risk [Eq. (17)] as a function of \(N_{\rm PC}\), the number of PCs used in the analysis, assuming SO noise levels and a signal saturating the _Planck_ 95% constraint level for \(\tilde{m}=0\). The results are shown in Fig. 8 for SO, indicating that 3 PCs should suffice to test the BSBM model adequately while minimizing risk. \begin{table} \begin{tabular}{|c|c|} \hline \(\log_{10}\tilde{m}\) & \((\zeta_{m}/\omega)\) (SO) \\ \hline \(-5\) & \((0.0\pm 1.2)\times 10^{-9}\) \\ \(0.6\) & \((0.2\pm 2.1)\times 10^{-9}\) \\ \(0.8\) & \((0.0\pm 4.9)\times 10^{-9}\) \\ \(1\) & \((0.2\pm 7.0)\times 10^{-9}\) \\ \(1.2\) & \((0.0\pm 1.1)\times 10^{-8}\) \\ \(1.8\) & \((-0.2\pm 4.7)\times 10^{-8}\) \\ \hline \end{tabular} \end{table} Table 2: Forecast sensitivity levels on \(\zeta_{m}/\omega\) at different masses from the SO forecast, using one-dimensional emcee runs and a fixed set of \(\tilde{m}\) values. \begin{table} \begin{tabular}{|c|c|} \hline \(\log_{10}\tilde{m}\) & \((\zeta_{m}/\omega)\) (QSO) \\ \hline \(-5\) & \((-1.4\pm 1.1)\times 10^{-7}\) \\ \(0\) & \((-1.4\pm 1.1)\times 10^{-7}\) \\ \(1\) & \((1.1\pm 0.9)\times 10^{-4}\) \\ \(4\) & Unconstrained \\ \hline \end{tabular} \end{table} Table 3: Constraints on \(\zeta_{m}/\omega\) at different masses from QSO data, using one-dimensional emcee runs and a fixed set of \(\tilde{m}\) values. Figure 7: Signal-to-noise ratio as a function of PC index, assuming a true \(\zeta_{m}/\omega\) that saturates the _Planck_ 95% C.L. constraints with \(\tilde{m}=0\). PC eigenvalues for higher indices in _Planck_ are extrapolated to estimate the SNR (as the eigenvalues appear to follow a clear power law) and are shown in red for clarity. ## V Conclusions We have used _Planck_ 2018 data and its principal components for variations in \(\alpha\) to constrain the BSBM theory of varying \(\alpha\), obtaining a 95% C.L. constraint of \(\zeta_{m}/\omega\leq 9.3\times 10^{-9}\). Assuming the null hypothesis holds, we have found that the Simons Observatory will have sensitivity to values as low as \(\zeta_{m}/\omega=2.2\times 10^{-9}\).
This limit applies not only to the BSBM theory, but also to a related family of theoretical ideas, such as the string dilaton, the supersymmetric BSBM, the gaugino-driven modulus, and Brans-Dicke electromagnetism [51]. Looking forward, it will be interesting to extend our results to further theoretical models for \(\alpha\) variation [including, for example, a scalar-field potential and coupling function \(\omega(\psi)\) [28; 29]], or models where other coupling constants (e.g. \(m_{e}\) [148], \(G\) [87; 88; 89; 90; 91], or even \(c\) [149; 150; 151]) are dynamical. Using kernel-density estimates of the full PCA+\(\Lambda\)CDM likelihood, we will more fully probe the covariance of BSBM model parameters with standard cosmological parameters. Causality dictates that variations of \(\alpha\) (or other fundamental parameters) in time require variations in space as well. While these variations are likely to be very small, they would induce non-Gaussian signatures in the CMB [105; 106; 107], and it would be useful to explore if this signal could be better extracted by applying CMB delensing, and harnessing the cross-correlation of \(\alpha\) with the underlying density field [152]. The CMB bi- and tri-spectra induced by these models have, in principle, a distinct shape from more standard effects like CMB lensing [152; 145]. In future work, we will assess if this can be used to better extract spatial variations in \(\alpha\) from the CMB. Thinking towards future measurements of CMB anisotropies, it will be interesting to explore the power of the planned CMB-S4 experiment [153; 122], as well as more futuristic concepts like CMB-HD [154], which could probe the CMB at much smaller angular scales than ever before, promising much improved leverage on varying fundamental constants. In coming years, new cosmological frontiers will open, with likely measurements of the neutral 21-cm signature of cosmic reionization and the cosmic dark ages [112]. The global 21-cm signal and anisotropies should be strongly sensitive to \(\alpha\) [113; 114]. Furthermore, CMB spectral distortions from the Silk damping and recombination eras would have spatial and frequency dependence that is strongly dependent on \(\alpha\) [155]. In future work, we will assess the sensitivity of these powerful measurements to the full family of theories enumerated here. The era of precision cosmology is here, and we should look forward to harnessing its data products, not only to characterize the energy budget of the universe but also to test the constancy of the fundamental parameters. Figure 8: Risk factor for the BSBM model, as a function of the number of PCs used for SO sensitivity levels, assuming \(\zeta_{m}/\omega\) values that saturate the _Planck_ 95% C.L. constraints with \(\tilde{m}=0\). _Left panel_: Full range. _Right panel_: Zoom in to resolve the presence of a minimum in the risk. ###### Acknowledgements. The authors thank Vivian Miranda and Jeremy Sakstein for helpful conversations. We thank Tristan Smith for many helpful discussions, especially on the issue of energy conservation in the BSBM theory. HT was supported by a Velay Fellowship and travel funding from the Marian E. Koshland Integrated Natural Sciences Center (KINSC) at Haverford College. DG, HT, MB, and J. Crump were supported by the U.S. National Science Foundation through grant award NSF 2112846. EB was supported by Haverford College's KINSC Summer Scholars fellowship. LH and J. Chluba were supported by the ERC Consolidator Grant _CMB-SPEC_ (No. 725456). J.
Chluba was furthermore supported by the Royal Society as a Royal Society University Research Fellow at the University of Manchester, UK (No. URF/R/191023). This work used the Hannah Computing Cluster, which is run by Haverford College, the Strelka Computing Cluster, which is run by Swarthmore College, and the High Performance Computer Cluster, which is run by the University of California, Riverside. We thank Joe Cammisa for support with Hannah. We thank Jason Simms and Andrew Reuther for support with Strelka. The land on which Haverford College stands is part of the ancient homeland and unceded traditional territory of the Lenape people. We pay respect to Lenape peoples, past, present, and future, and their continuing presence in the homeland and throughout the Lenape diaspora. We acknowledge the authors of the numpy [156] and scipy [157] libraries. We thank the authors of the emcee software used in our MCMCs [142], and the GetDist package used for visualizing the MCMC results [143]. ## Appendix A Electromagnetic sourcing of \(\alpha\) variation ### Argument for shielding of electrostatic energy Here we recapitulate Ref. [130]'s argument that electrostatic contributions to \(\zeta_{m}\) are shielded. We follow the discussion there, but provide additional details as needed. We begin with the static contributions to the Poisson equation: \[\nabla^{2}\psi=4\pi\kappa^{2}\left[\sum_{i}\frac{\partial m_{i}c^{2}}{\partial\psi}\delta^{3}(\mathbf{x}-\mathbf{z}_{i})+\frac{1}{4\pi}e^{-2\psi}E^{2}\right]. \tag{10}\] The sum here is over all source particles, and the constant \(\kappa=l/\sqrt{4\pi\hbar c}\) is a renormalized Brans-Dicke (kinetic) coupling for \(\psi\). Bekenstein integrates the total contributions from each particle in a volume \(\mathcal{V}\) in the first term of the RHS of Eq. (10) and then replaces \((\partial m_{i}c^{2})/\partial\psi\) with \(\kappa^{-1}e_{0i}\tan[\kappa\Phi(\mathbf{z}_{i})]\) [130, Eq. 43]. The first term [on the RHS of Eq. (10)] then becomes \[\sum_{i}\frac{\partial m_{i}c^{2}}{\partial\psi}\delta^{3}(\mathbf{x}-\mathbf{z}_{i})\to-\frac{1}{\mathcal{V}}\int_{\mathcal{V}}d^{3}x\sum_{i\in\mathcal{V}}\kappa^{-1}e_{0i}\tan[\kappa\Phi(\mathbf{z}_{i})]\delta^{3}(\mathbf{x}-\mathbf{z}_{i})=-\frac{1}{\mathcal{V}}\sum_{i\in\mathcal{V}}e_{0i}\tan[\kappa\Phi(\mathbf{z}_{i})].\] Taylor expanding \(\tan[\kappa\Phi(\mathbf{z}_{i})]\) for small \(\kappa\Phi(\mathbf{z}_{i})\) and discarding terms of \(\mathcal{O}(\kappa^{3}\Phi^{3})\), the expression \[-\frac{1}{\mathcal{V}}\sum_{i\in\mathcal{V}}e_{0i}\Phi(\mathbf{z}_{i}) \tag{11}\] is then obtained for the first term. The second term is somewhat more complicated. The modified Maxwell equations of the BSBM theory imply that \(\mathbf{E}=-e^{2\psi}\mathbf{\nabla}\Phi\) [130, Eq. 41], and since \(e^{2\psi}=\sec^{2}\kappa\Phi\) [130, Eq. 45], linearization yields \[e^{2\psi}=\sec^{2}(\kappa\Phi)\approx 1+\kappa^{2}\Phi^{2}+\mathcal{O}(\kappa^{4}\Phi^{4}). \tag{12}\] Discarding terms of \(\mathcal{O}(\kappa^{4}\Phi^{4})\), the spatial average of the second term is \[\frac{1}{4\pi}e^{-2\psi}\mathbf{E}^{2}\to\frac{1}{4\pi\mathcal{V}}\int_{\mathcal{V}}d^{3}x[(\mathbf{\nabla}\Phi)^{2}+\kappa^{2}\Phi^{2}\mathbf{E}^{2}]. \tag{13}\] Integrating the first term of this integral using the divergence theorem and applying Eq. (12), one finds \[\nabla^{2}\Phi=-4\pi\sum_{i}e_{0i}\delta^{3}(\mathbf{x}-\mathbf{z}_{i}).
\tag{14}\] The final approximation (applying the immediately preceding result) for the second term on the RHS of Eq. (10) is \[\frac{1}{\mathcal{V}}\sum_{i\in\mathcal{V}}e_{0i}\Phi(\mathbf{z}_{i})-\frac{1}{4\pi\mathcal{V}}\left[\oint_{\partial\mathcal{V}}\Phi\mathbf{E}\cdot d\mathbf{s}-\kappa^{2}\int_{\mathcal{V}}d^{3}x\Phi^{2}\mathbf{E}^{2}\right]. \tag{15}\] Summing Eqs. (11) and (15), it is immediately clear that the only terms remaining are the two integrals in Eq. (15), and Ref. [130] argues they both are negligible; the argument goes as follows. One can approximate the surface integral as \(\mathcal{V}^{-1}\langle\Phi\rangle\sum_{i\in\mathcal{V}}e_{0i}\), using Gauss's law, where \(\langle\Phi\rangle\) is the surface average of \(\Phi\). This quantity is much less than \(\mathcal{V}^{-1}\sum_{i\in\mathcal{V}}e_{0i}\Phi(\mathbf{z}_{i})\), so it is negligible. The only remaining quantity is the second integral in Eq. (15). An upper bound on \(|\Phi|\) for a unit charge is obtained by dividing by the smallest length scale over which \(\Phi\) varies, which is at most \(10^{-17}\) cm, if quarks are the smallest particles of interest. Then, \(\kappa^{2}\Phi^{2}\mathbf{E}^{2}/4\pi\) has an upper bound of roughly \(10^{-34}(l/l_{p})^{2}\mathbf{E}^{2}/4\pi\), where \(l\) is the characteristic length of Bekenstein's theory. This implies that using \(\mathbf{E}^{2}/4\pi\) vastly overestimates electrostatic energy as a source for \(\alpha\) variation in the BSBM model. ### Testing electrostatic cancellation In order to test the claim that \(\psi\) arranges itself to self-shield the electrostatic contribution to its equation of motion, we conduct a particle-in-cell (PIC) simulation of the early-universe (proton and electron) plasma in python. These particles interact and cluster in ways that produce electric and magnetic fields. We compute these and use them to estimate \(\zeta_{m}=\langle\mathbf{E}^{2}-\mathbf{B}^{2}\rangle/\rho_{m}\). We determined field quantities by solving the modified Maxwell's equations using a Fourier spectral method. The equations we solved were the following, given in both real and \(k\)-space: \[\mathbf{E}=-\nabla\Phi\Longrightarrow\tilde{\mathbf{E}}=-i\tilde{\Phi}\mathbf{k} \tag{10}\] \[\nabla^{2}\Phi=-4\pi\rho\Longrightarrow\tilde{\Phi}=\frac{4\pi\tilde{\rho}}{K^{2}} \tag{11}\] \[\nabla^{2}\mathbf{A}=-\frac{4\pi}{c}\mathbf{J}\Longrightarrow\tilde{\mathbf{A}}=\frac{4\pi}{c}\frac{\tilde{\mathbf{J}}}{K^{2}} \tag{12}\] where \(\nabla\times\mathbf{A}=\mathbf{B}\), \(\mathbf{k}=\left\{dx^{-1}\sin(k_{x}dx),dy^{-1}\sin(k_{y}dy),dz^{-1}\sin(k_{z}dz)\right\}\), and \(K^{2}=K_{x}^{2}+K_{y}^{2}+K_{z}^{2}\) with \(K_{x}=2dx^{-1}\sin(k_{x}dx/2)\) and analogous definitions for \(K_{y}\) and \(K_{z}\) [158]. These definitions for \(\mathbf{k}\) and \(K_{x},K_{y},K_{z}\) come from taking the Fourier transform of the finite-difference forms of \(\nabla\) and \(\nabla^{2}\), respectively. We use these algebraic equations to solve for the fields of interest in the simulation. At each time step, we interpolate the particles onto a spatially uniform grid using a linear spline and periodic boundary conditions. Then, we use the node locations and appropriate weighting to determine the particle number density and current density at each node. We then take the discrete Fourier transform of these quantities and solve for \(\mathbf{\tilde{E}}\) and \(\mathbf{\tilde{A}}\) using Eqs. (10) through (12). Next, we take the inverse Fourier transform of \(\mathbf{\tilde{E}}\) and \(\mathbf{\tilde{A}}\) to get \(\mathbf{E}\) and \(\mathbf{A}\).
Finally, we calculate \(\nabla\times\mathbf{A}=\mathbf{B}\). \(\mathbf{E}\) and \(\mathbf{B}\) are then re-interpolated onto particle positions. These are the quantities used to push the particles using the relativistic Boris method [159]. In order to calculate the discrete Fourier transforms, we used the built-in Fourier transform routines of the numpy module for Python [156]. The simulation uses 4096 particles on a \(128\times 128\times 128\) grid at \(z=1100\). The particles are arranged uniformly on the grid initially and their initial velocities are drawn from a Maxwell-Boltzmann distribution. The domain of the simulation was a box where each side was \(Nn_{0}^{-1/3}\) long, where \(N\) is the number of particles along one side and \(n_{0}\) is the number density of protons in the universe at \(z=1100\). Also, \(dt=0.1dx/v_{0}\), where \(v_{0}\) is the mode of the initial velocity distribution for electrons. This was set to ensure that no particle traverses more than one grid cell in a time step. We tested the code using several methods. First, we tested the particle-pushing method by placing a single electron under the influence of a uniform electric and a uniform magnetic field. In both cases, the error of our particle pusher is of order machine precision (e.g. Fig. 9). Next, we tested our Fourier equation solving method by attempting to reproduce the analytical results for known functions. For example, instead of using the actual current density from the simulation, we used a known function [\(f(x,y)=\sin x+\sin y\), for example] and applied the same field solver to it. This test confirmed that the Fourier solving method accurately calculates \(\nabla f\) and \(\nabla^{2}f\). For an additional test of the Fourier solving method, we tested it alongside a more traditional grid-based solver in a 1D plasma simulation and in another 1D simulation using the same initial conditions as our final 3D simulation [160]. We did the calculations separately from start to finish in both MKS and CGS (Gaussian) units. The simulation produced identical results for all quantities in each context. We tested for numerical convergence. For a fixed number of particles along one axis \(N\), \(\mathbf{E}\) and \(\mathbf{B}\) converge as the number of cells along one axis \(N_{x}\) increases. Increasing \(N_{x}\) increases the grid resolution, and so we would expect both \(\mathbf{E}\) and \(\mathbf{B}\) to converge to some value. This is indeed what we observe. As an important note, one would not expect \(\mathbf{E}\) and \(\mathbf{B}\) to converge as \(N\) increases. This is because \(n_{0}\) is physically set by the physics of the universe at a given \(z\). Therefore, increasing \(N\) simply increases the domain size of the simulation without altering the electrodynamics of the plasma. This means that for a fixed ratio \(N/N_{x}\), the simulation should produce identical results, which we also observed. Of course, \(N\) should be set high enough to allow for sufficient interactions between particles, but once \(N\approx 16\), any additional increase is not necessary. Increasing \(N\) higher than necessary also means that a higher \(N_{x}\) is needed to maintain the accuracy of the results. Finally, it takes about 100 time steps for the simulation to properly equilibrate, so results up to this point should be discarded. After this point, however, the values of \(\mathbf{E}\) and \(\mathbf{B}\) converge as \(N_{x}\) increases.
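For concreteness, the spectral field solve described above can be sketched in a few lines of numpy. The following is a minimal illustration under our stated conventions (periodic grid, Gaussian units, charge density \(\rho\) and current density \(\mathbf{J}\) assumed already deposited on the grid); it is not the production simulation code:

```python
import numpy as np

def solve_fields(rho, J, dx):
    """Fourier spectral solve for E and A on a periodic grid (Gaussian units).
    rho has shape (n, n, n); J has shape (3, n, n, n)."""
    n = rho.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")

    # Finite-difference-consistent wavenumbers, as in the definitions above.
    kvec = np.array([np.sin(kx * dx), np.sin(ky * dx), np.sin(kz * dx)]) / dx
    K2 = sum((2.0 / dx * np.sin(k * dx / 2.0)) ** 2 for k in (kx, ky, kz))
    K2[0, 0, 0] = 1.0                 # avoid 0/0; the zero mode is set below

    rho_t = np.fft.fftn(rho)
    phi_t = 4.0 * np.pi * rho_t / K2  # Poisson: -K^2 phi_t = -4 pi rho_t
    phi_t[0, 0, 0] = 0.0              # neutralizing background

    E = np.real(np.fft.ifftn(-1j * phi_t * kvec, axes=(1, 2, 3)))  # E = -grad phi
    A_t = (4.0 * np.pi / 3e10) * np.fft.fftn(J, axes=(1, 2, 3)) / K2
    A = np.real(np.fft.ifftn(A_t, axes=(1, 2, 3)))
    return E, A                       # B = curl A, e.g. via finite differences
```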
We used the un-interpolated \(\mathbf{E}\) and \(\mathbf{B}\) to calculate \(\zeta_{m}=\langle E^{2}-B^{2}\rangle/\rho_{m}\). The average was computed by taking the arithmetic mean of the values of \(E^{2}\) and \(B^{2}\) over all points on the grid. Using this simulation, we estimate \(\zeta_{m}\approx 10^{-13}\), which is 8 orders of magnitude lower in absolute value than other estimates. This is not surprising: our calculation is obtained for a diffuse plasma on scales well above nuclear length scales, so our simulation is not appropriate for testing the absolute scale of \(\zeta_{m}\) from nuclei in the early universe. It does give us the tools to test the claim that the electrostatic contribution to \(\zeta_{m}\) cancels out in a specific environment that we can directly simulate. Our conclusions may need to be revised to properly account for nuclear scales, but they offer an interesting first test of the (analytic) claims of electrostatic shielding made in Ref. [130]. In contrast to that approach, we calculated all the electric and magnetic fields generated through inhomogeneities in a neutral plasma like the early universe. We did not consider nucleons or macroscopic objects as sources for fields. We calculated \(\mathbf{E}\) and \(\mathbf{B}\) directly in the simulation using Eqs. (19)-(20). However, this simulation differed from that used to estimate \(\zeta_{m}\) because we introduced \(\psi\) as a scalar field responsible for \(\alpha\) variation. This required us to alter our equation for \(\mathbf{E}\): \[\mathbf{E}=-e^{2\psi}\nabla\Phi \tag{21}\] Because \(e^{\psi}=\sec(\kappa\Phi)\), we were also able to calculate the new \(\mathbf{E}\) directly in the simulation. \(\mathbf{B}\) is unchanged according to Ref. [130]. Additionally, because \(\kappa\) is a free parameter, we estimate it as \(8.11\times 10^{-26}\,\mathrm{cm}^{1/2}\,\mathrm{erg}^{-1/2}\)[130]. While this has an effect on the precise values of the terms in Eq. (18), it does not affect cancellation. We thus calculated all field quantities in Eq. (18) directly. The only approximation made was to smooth the delta function in Eq. (18) as \[\delta^{3}(\mathbf{x}-\mathbf{z}_{i})\approx\frac{1}{a\sqrt{\pi}}\exp{\left( -\frac{(\mathbf{x}-\mathbf{z}_{i})^{2}}{a^{2}}\right)} \tag{22}\] in order to properly account for finite grid resolution. The other parameters of the simulation were the same as in our simulation to estimate \(\zeta_{m}\). Fig. 10 shows our results for our test of Bekenstein's Cancellation Theorem. The combined RHS is only marginally different from the \(E^{2}\) term. On average, the \(E^{2}\) term is about \(2\times 10^{-63}\,\mathrm{erg}\,\mathrm{cm}^{-3}\), as is the combined RHS. However, the \(\partial m_{i}/\partial\psi\) term is about \(-8\times 10^{-68.5}\,\mathrm{erg}\,\mathrm{cm}^{-3}\), nearly 6 orders of magnitude less than the \(E^{2}\) term. Because this term is so much smaller, it is not plotted in Fig. 10.

Figure 10: The values of the \(E^{2}\) term and combined RHS in the equation of motion for \(\psi\) for a recombination density neutral plasma. The \(\partial m_{i}/\partial\psi\) term is not plotted because it is 5 orders of magnitude less than the other terms. Simulation used 4,096 particles on a \(128\times 128\times 128\) grid over 1000 time steps. The first 100 time steps are not plotted.

Because of the large difference between the \(E^{2}\) term and the \(\partial m_{i}/\partial\psi\) term, our results do not support cancellation. In order to further test the Cancellation Theorem, we also simulated a neutral plasma at a nuclear density \(n_{0}=1.2\times 10^{38}\) protons/cm\({}^{3}\). Here, our simulation still does not support cancellation. In this case, the two terms in Eq. (18) are 22 orders of magnitude apart. The \(E^{2}\) term is about \(10^{-16}\) erg cm\({}^{-3}\) and the \(dm/d\psi\) term is about \(10^{-38}\) erg cm\({}^{-3}\).
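The grid-level estimators themselves are simple. A minimal sketch in our notation (not the actual analysis script): \(\mathbf{E},\mathbf{B}\) are \((3,n,n,n)\) field arrays from the solver above, and \(\Phi\) is assumed to have been interpolated to the particle positions beforehand:

```python
import numpy as np

def zeta_m(E, B, rho_m):
    """Grid-averaged zeta_m = <E^2 - B^2> / rho_m."""
    E2 = np.sum(E ** 2, axis=0).mean()
    B2 = np.sum(B ** 2, axis=0).mean()
    return (E2 - B2) / rho_m

def rhs_terms(E, phi_at_particles, charges, kappa, vol):
    """The E^2 term and the dm/dpsi term entering the cancellation test."""
    e2_term = np.sum(E ** 2, axis=0).mean() / (4.0 * np.pi)
    # -(1/V) sum_i e_0i kappa^-1 tan(kappa * Phi(z_i)), cf. Appendix A above.
    dm_term = -np.sum(charges * np.tan(kappa * phi_at_particles)) / (kappa * vol)
    return e2_term, dm_term
```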
Finally, we repeated these calculations for a proton-only plasma at the same density as the recombination-density neutral plasma. Bekenstein's arguments center on the properties of charged macroscopic objects, so it is possible that for a plasma with a net charge, the Cancellation Theorem holds. First, we recalculate \(\zeta\). For the positive plasma with a \(128\times 128\times 128\) grid, \(\zeta\approx 10^{-14}\), which is slightly lower than for a neutral plasma. The results for cancellation are in Fig. 11. Here, the two terms from Eq. (18) still differ significantly. The \(dm/d\psi\) term is about \(10^{-67}\) erg cm\({}^{-3}\) and the \(E^{2}\) term is about \(10^{-63}\) erg cm\({}^{-3}\). While the two terms still differ greatly, Fig. 11 demonstrates that cancellation is more plausible in this case, as there is some difference between the two histograms, as we would expect if cancellation were occurring.

Figure 11: The values of the \(E^{2}\) term and combined RHS in the equation of motion for \(\psi\) in a recombination density positive plasma. The \(\partial m_{i}/\partial\psi\) term is not plotted because it is 4 orders of magnitude less than the other terms. Simulation used 4,096 particles on a \(128\times 128\times 128\) grid over 1000 time steps. The first 100 time steps are not plotted.

One possible reason for our observed lack of cancellation is Bekenstein's reliance on approximations to convert between microscopic and macroscopic scales. We considered instead fields calculated directly in the simulation. Some of Bekenstein's approximations to perform this conversion are robust. We tested the approximation of the second term of Eq. (18), which is given by Eq. (10). In the recombination-density simulations, the approximation is off by at most an order of magnitude, indicating that it is reasonably robust. However, for the nuclear density neutral plasma, the approximation is off by about 6 orders of magnitude. In turn, this suggests that the problem lies with the \(dm/d\psi\) term in Eq. (12). This approximation relies more on the analytic specifics of Bekenstein's theory and is therefore more difficult to test computationally. Further work should be aimed at testing the robustness of the analytic derivations of this term.

## Appendix B Energy Conservation

The equations of motion used earlier (and throughout the BSBM literature) do not conserve energy. In particular, using the fact that the field energy density is \(\overline{\omega}\dot{\psi}^{2}/2\) and the field equation of motion [Eq. (4)], we see that \(\dot{\rho}_{\psi}=-3\overline{\omega}\dot{\psi}^{2}H-2e^{-2\psi}\zeta_{m}\rho_{m}\dot{\psi}\). Given that a field with no potential has equation-of-state parameter \(w_{\psi}=1\), this can be rewritten as \[\dot{\rho}_{\psi}+3H(1+w_{\psi})\rho_{\psi}=-2\sqrt{\frac{2}{\overline{\omega}} }e^{-2\psi}\zeta_{m}\rho_{m}\sqrt{\rho_{\psi}}. \tag{13}\] The left-hand side of Eq. (13) is of course the standard term for energy flow out of a fixed physical volume in an expanding universe. The right-hand side is an energy flow term from the scalar field into matter. The usual \(\Lambda\)CDM continuity equation for matter density, \[\dot{\rho}_{m}+3H\rho_{m}=0, \tag{14}\] must acquire an additional term if energy conservation is to hold, and so we have that \[\dot{\rho}_{m}+3H\rho_{m}-2\sqrt{\frac{2}{\overline{\omega}}}e^{-2\psi}\zeta_ {m}\rho_{m}\sqrt{\rho_{\psi}}=0. \tag{15}\] There is nothing ad hoc about using Eq. (15). The additional term on the right-hand side of Eq.
(13) comes from the substitution \(\mathcal{L}_{\text{em}}\rightarrow\zeta\rho_{\text{m}}\). This amounts to saying that the electrostatic influence of the charged particles contributing to \(\mathcal{L}_{\text{em}}\) reduces to an additional interaction between \(\psi\) and matter, in some sense integrating out the relevant electromagnetic fields. These fields should back-react on the plasma that sources them, and if we are to take this Lagrangian substitution at face value, the Bianchi identity implies that \(\nabla_{\mu}\left(e^{-2\psi}T^{\mu\nu,\text{matter}}\right)=0\). This can be readily applied in an FRW cosmology to obtain Eq. (15). This modified equation for matter density and the second-order differential equation for the scalar field can then be written as a system of 3 first-order ordinary differential equations as follows: \[\frac{df}{dz}=\frac{3}{1+z}f+\frac{2}{g(1+z)}e^{-2\psi}\zeta_{m}fu, \tag{16}\] \[\frac{d\psi}{dz}=-\frac{u}{g(1+z)}, \tag{17}\] \[\frac{du}{dz}=\frac{3}{1+z}u+\frac{6\Omega_{m,0}}{g(1+z)}\frac{1}{\omega}e^{ -2\psi}\zeta_{m}f \tag{18}\] where \(f(z)=\Omega_{m}(z)/\Omega_{m,0}\), \(u(z)=-\dot{\psi}/[H_{0}g(1+z)]\) is a dimensionless velocity for the scalar field and \[g^{2}=\Omega_{m,0}f\left(1+|\zeta_{m}|e^{-2\psi}\right)+\Omega_{r,0}(1+z)^{4} e^{-2\psi}+\frac{\omega}{6}u^{2}+\Omega_{\Lambda,0} \tag{19}\] represents the modified dimensionless Hubble parameter. This system of differential equations was then solved numerically over a range of redshifts and compared to the non-energy-conserving implementation with \(\zeta_{m}=10^{-8}\) and \(\omega=1\). The left panel of Fig. 12 shows the fractional difference in the evolution of \(\Delta\alpha/\alpha\) over \(z\) for these two models. Agreement is sufficient for our purposes (a \(\sim 10\%\) correction to our constraints). We also compared \(\Delta\alpha/\alpha\) values at recombination for a range of \(\zeta_{m}\) values from \(-10^{-8}\) to \(10^{-8}\) (roughly the constraint from _Planck_), with \(\omega\) still set to 1. The results are displayed in the right panel of Fig. 12, which shows that the two calculations agree well over this range of \(\zeta_{m}\), and that \(\Delta\alpha/\alpha\) vanishes when \(\zeta_{m}\to 0\). The agreement improves with even higher \(|\zeta_{m}|\).

Figure 12: _Left panel:_ Fractional error as a function of \(z\) between energy-conserving and non-conserving BSBM model implementations with overall level of fine-structure constant variation, \(\Delta\alpha/\alpha\). The parameter \(\zeta_{m}=10^{-8}\), while \(\omega=1\), consistent with the _Planck_ constraint. _Right panel:_ Comparison of error at recombination between energy-conserving and non-conserving BSBM model implementations with overall level of fine-structure constant variation, \(\Delta\alpha/\alpha\). The parameter \(\zeta_{m}\) is varied in the range \(-10^{-8}\) to \(10^{-8}\), while \(\omega=1\).

It is worth pausing to consider the interpretation of these corrections. In particular, we can state the matter continuity equation as \[\dot{\rho}_{m}+3H\rho_{m}=-2e^{-2\psi}\zeta_{m}\dot{\psi}\rho_{m}. \tag{10}\] With the _ansatz_ \(\rho_{m}\equiv a^{-3}f(a)\), we find (applying separation of variables) that \[\rho_{m}=\frac{C}{a^{3}}e^{\zeta_{m}e^{-2\psi}}. \tag{11}\]
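The system (16)-(18) above is straightforward to integrate with standard tools. The following minimal sketch (illustrative, not our production code) uses assumed fiducial density parameters, initial data \(f=1\), \(\psi=u=0\) at \(z=0\), and the standard BSBM identification \(\alpha\propto e^{2\psi}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

zeta_m, omega = 1e-8, 1.0                 # example couplings
Om, Or, OL = 0.315, 9.2e-5, 0.685         # assumed fiducial parameters

def rhs(z, y):
    f, psi, u = y
    g = np.sqrt(Om * f * (1 + abs(zeta_m) * np.exp(-2 * psi))
                + Or * (1 + z) ** 4 * np.exp(-2 * psi)
                + omega / 6 * u ** 2 + OL)           # Eq. (19)
    dfdz = 3 * f / (1 + z) + 2 * np.exp(-2 * psi) * zeta_m * f * u / (g * (1 + z))
    dpsidz = -u / (g * (1 + z))
    dudz = (3 * u / (1 + z)
            + 6 * Om / (g * (1 + z)) / omega * np.exp(-2 * psi) * zeta_m * f)
    return [dfdz, dpsidz, dudz]

# Integrate from today (z = 0) back to recombination (z = 1100).
sol = solve_ivp(rhs, (0.0, 1100.0), [1.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
psi_rec = sol.y[1, -1]
print("Delta alpha / alpha at z = 1100:", np.exp(2 * psi_rec) - 1)
```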
Returning to Eq. (11), this scaling suggests the definition \(\tilde{\rho}_{m}\equiv\rho_{m}e^{-\zeta_{m}e^{-2\psi}}\) as a physical matter density with the right redshift-dependence. The matter-dependent term in the Friedmann equation (in the BSBM model) is [25; 26], \[H_{m}^{2}= \frac{8\pi G}{3}\left\{\rho_{m}+\zeta_{m}e^{2\psi}\rho_{m}\right\} \tag{14}\] \[= \frac{8\pi G}{3}e^{\zeta_{m}e^{-2\psi}}\left[1+\zeta_{m}e^{2\psi} \right]\tilde{\rho}_{m}\] (15) \[= \frac{8\pi G\tilde{\rho}_{m}}{3}\frac{m_{m}(\psi)}{m_{m}(\psi=0)}, \tag{16}\] where \(m_{m}(\psi)\) can be interpreted as the \(\psi\)-modulated mass of non-relativistic particles (e.g. baryons, dark matter). If \(\rho_{m}\) is treated (erroneously) as a substance that redshifts as \(a^{-3}\), this amounts to the approximation \(m_{m}(\psi)=m_{m}(\psi=0)\). The analysis above shows that the error in \(\alpha\) evolution induced by this approximation in the BSBM model is negligible. This effective modulation of the mass of non-relativistic particles is a well-known feature of theories with non-minimally coupled scalar fields, in which the relevant matter fields travel on geodesics of a different metric than the one satisfying Einstein's equations (see, e.g. Refs. [161; 162] for recent applications to early dark energy phenomenology and Refs. [163; 164; 165; 166] for earlier applications). In Sec. II.2, the contribution of \(\psi\) to the Hubble expansion itself was not included. The dynamics modeled above do include this contribution, and we have verified that the overall correction to \(\Delta\alpha/\alpha\) does not affect the remaining results of this paper. It is instructive to consider the contribution of \(\psi\) to the Hubble expansion (separate and apart from the energy non-conservation or modulated scalar mass effect discussed above). The results are shown in Fig. 13.

Figure 13: Fractional difference in \(|\Delta\alpha/\alpha|\) between BSBM model implementations with and without the contribution of \(\psi\) to the Friedmann equation, as a function of redshift \(z\). Here \(\zeta_{m}=10^{-8}\) and \(\omega=1\).

We see that this effect (taken alone) is even smaller than the energy conservation correction. Additionally, Eq. (13) is only valid for massless neutrinos, whereas the fiducial cosmology used in _Planck_ data analysis assumes a single neutrino with \(m_{\nu}=0.06\) eV. Using the equations in Ref. [167], which correct the Hubble factor analytically for the contribution of massive neutrinos (through their full relativistic to non-relativistic transition), we assess whether neutrinos induce any change to the values of \(\alpha\) determined in the preceding portions of this section. We find a fractional error in \(\Delta\alpha/\alpha\) at \(z=1100\) of \(\sim 10^{-3}\), meaning that this effect does not alter any of the conclusions of this paper.

## Appendix C One-dimensional MCMC

The 2-parameter emcee simulations no longer constrain \(\zeta_{m}/\omega\) once the scalar mass exceeds a certain threshold. Properly mapping out this edge of parameter space is difficult due to the highly non-Gaussian posterior. In particular, opening up the parameter space at higher masses risks trapping chains at high values of \(\log_{10}(\tilde{m})\) due to a large prior volume, distorting parameter contours. Such challenges can of course be addressed with techniques like nested sampling (see, e.g. Refs.
[168]), but here we implemented a simpler workaround, following the example of numerous papers mapping out a U-shaped contour for the allowed parameter space of ultra-light axions (see, e.g., Refs. [118]). We ran one-dimensional MCMCs (in \(\zeta_{m}/\omega\)) by sweeping \(\log_{10}(\tilde{m})\) across a grid of appropriate values past where the constraint relaxes, in order to obtain constraints at large values of \(\log_{10}(\tilde{m})\). We did this for the _Planck_, SO, and QSO cases. The one-dimensional constraints are shown in Tables 1-3. Examples of the corresponding posterior probabilities for some masses are shown in Figs. 14, 15, and 16, respectively. For the _Planck_ chains, there was an auto-correlation length of \(\sim 18\) samples. The chains are 2000 samples long. The auto-correlation length is less than \(2000/50=40\), and so the chains are converged. For the SO chains, there was an auto-correlation length of \(\sim 9\) samples. The chains are 3000 samples long. The auto-correlation length is less than \(3000/50=60\), and so the chains are converged. For the QSO chains, there was an auto-correlation length of \(\sim 24\) samples. The chains are 2000 samples long. The auto-correlation length is less than \(2000/50=40\), and so the chains are converged.
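A minimal sketch of one such one-dimensional run, including the convergence rule used above (autocorrelation length below chain length divided by 50), is as follows. Here `loglike` is a hypothetical stand-in for the likelihood supplied by the CMB or QSO pipeline; only the emcee calls reflect the actual library API:

```python
import numpy as np
import emcee

def run_1d_chain(loglike, log10_mass, n_walkers=16, n_steps=2000):
    """Sample zeta_m/omega at a fixed scalar mass and check convergence."""
    p0 = 1e-9 * np.random.randn(n_walkers, 1)      # initial zeta_m/omega draws
    sampler = emcee.EnsembleSampler(
        n_walkers, 1, lambda p: loglike(p[0], log10_mass))
    sampler.run_mcmc(p0, n_steps, progress=False)
    tau = sampler.get_autocorr_time(tol=0)[0]      # autocorrelation length
    assert tau < n_steps / 50, "chain not converged"
    return sampler.get_chain(discard=int(5 * tau), flat=True)
```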
2303.11913
Local mean value estimates for Weyl sums
We obtain new estimates - both upper and lower bounds - on the mean values of the Weyl sums over a small box inside of the unit torus. In particular, we refine recent conjectures of C. Demeter and B. Langowski (2022), and improve some of their results.
Julia Brandes, Changhao Chen, Igor E. Shparlinski
2023-03-21T15:05:42Z
http://arxiv.org/abs/2303.11913v1
# Local mean value estimates for Weyl sums

###### Abstract.

We obtain new estimates - both upper and lower bounds - on the mean values of the Weyl sums over a small box inside of the unit torus. In particular, we refine recent conjectures of C. Demeter and B. Langowski (2022), and improve some of their results.

Key words and phrases: Weyl sum, mean value theorem, small box

2020 Mathematics Subject Classification: Primary: 11L15; Secondary: 11L07, 11D45

###### Contents

* 1 Introduction
* 2 What we know and what we believe to be true
* 3 New bounds
* 4 Proof of Theorem 2.3
* 5 Transition to inhomogeneous mean values
* 6 Proof of Theorems 3.1-3.5
* 7 Approach via the structure of large Weyl sums
* 8 Proofs of Theorems 3.7 and 3.8
* 9 Rational exponential sums
* 10 Proof of Theorems 3.9 and 3.10
* 11 Further comments

## 1. Introduction

### Background and motivation

The study of exponential sums occupies a central position in the analytic theory of numbers, as they are a crucial tool connecting the language of number theory with the language of Fourier analysis. In fact, many of the most celebrated results in number theory either are equivalent to or at least crucially depend on strong bounds on exponential sums, either in an average or a pointwise sense. In this paper, we are interested in exponential sums of the shape \[S_{d}(\mathbf{x};N)=\sum_{n=1}^{N}\,\mathbf{e}\left(x_{1}n+\ldots+x_{d}n^{d} \right),\] associated to Vinogradov's mean value theorem. Thanks to the breakthrough results of Bourgain, Demeter and Guth [6] as well as Wooley [27, 28], we now have very good control of the complete mean values \[J_{s,d}(N)=\int_{\mathsf{T}_{d}}|S_{d}(\mathbf{x};N)|^{2s}\,d\mathbf{x}, \tag{1.1}\] namely, for any \(s>0\), \[J_{s,d}(N)\leqslant N^{s+o(1)}+N^{2s-d(d+1)/2+o(1)}, \tag{1.2}\] which is essentially optimal. Here we study how such mean values behave when the integration is restricted to a small box inside the unit torus, and in particular how they transition between the trivial regime of very small boxes and the complete mean value (1.2). Some of our results
hint that this transition may be more intricate than hitherto anticipated, and we hope that future research can provide a more accurate picture of these phenomena.

### Set-up

For an integer \(\nu\geqslant 1\) we denote by \(\mathsf{T}_{\nu}\) the \(\nu\)-dimensional unit torus, which we also identify with the \(\nu\)-dimensional unit cube, that is, \[\mathsf{T}_{\nu}=(\mathbb{R}/\mathbb{Z})^{\nu}=[0,1)^{\nu}.\] For positive integers \(d\) and \(N\), a sequence of complex weights \(\mathbf{a}=\left(a_{n}\right)_{n=1}^{N}\), and a vector \(\mathbf{x}\in\mathsf{T}_{d}\), we define the Weyl sums \[S_{d}(\mathbf{x};\mathbf{a},N)=\sum_{n=1}^{N}a_{n}\,\mathbf{e}\left(x_{1}n+ \ldots+x_{d}n^{d}\right)\] where \(\mathbf{e}(z)=\exp(2\pi iz)\). For a positive \(\delta\leqslant 1\) and \(\boldsymbol{\xi}\in\mathsf{T}_{d}\), we define \[I_{s,d}(\delta,\boldsymbol{\xi};\mathbf{a},N)=\int_{\boldsymbol{\xi}+[0, \delta]^{d}}|S_{d}(\mathbf{x};\mathbf{a},N)|^{2s}\,d\mathbf{x}. \tag{1.3}\] We note that the exponent \(s\) in (1.3) is not necessarily an integer but can take arbitrary positive real values. The question of estimating \(I_{s,d}(\delta,\boldsymbol{\xi};\mathbf{a},N)\) for suitable choices of \(\boldsymbol{\xi}\) and \(\mathbf{a}\) has recently received some attention, see, for example, [16, 12, 10, 29] for various bounds and applications. The case of boxes at the origin is especially interesting. In fact, it is easy to see that the question about the size of \(I_{s,d}(\delta,\boldsymbol{\xi};\mathbf{a},N)\) can be reduced to \(I_{s,d}(\delta,\mathbf{0};\widetilde{\mathbf{a}},N)\), with \(\widetilde{a}_{n}=a_{n}\,\mathbf{e}\left(\xi_{1}n+\ldots+\xi_{d}n^{d}\right)\) for \(n=1,\ldots,N\). We thus put \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)=I_{s,d}(\delta,\mathbf{0};\mathbf{a},N).\] Hence in the case of arbitrary weights, without loss of generality, it suffices to study the quantity \(I_{s,d}^{(0)}(\delta;\mathbf{a},N)\). Meanwhile, arguably the most relevant choice of weights \(\mathbf{a}\) is that in which \(a_{n}=1\) for \(n\leqslant N\), so we consider this situation separately. Thus, in the case when \(\mathbf{a}=\mathbf{1}\), we define \[I_{s,d}^{(0)}(\delta;N)=I_{s,d}^{(0)}(\delta;\mathbf{1},N),\] as well as \[I_{s,d}^{\sharp}(\delta;N)=\sup_{\boldsymbol{\xi}\in\mathsf{T}_{d}}I_{s,d}( \delta,\boldsymbol{\xi};\mathbf{1},N)\qquad\text{and}\qquad I_{s,d}^{\flat}( \delta;N)=\inf_{\boldsymbol{\xi}\in\mathsf{T}_{d}}I_{s,d}(\delta,\boldsymbol{ \xi};\mathbf{1},N).\] Note that since the unit torus \(\mathsf{T}_{d}=(\mathbb{R}/\mathbb{Z})^{d}\) is compact as an additive group, the infimum and supremum here are actually attained, since the exponential sum is continuous. By the discussion following (1.3) it is easy to see that \[I_{s,d}^{\sharp}(\delta;N)\leqslant\sup_{\|\mathbf{a}\|_{\infty}\leqslant 1 }I_{s,d}^{(0)}(\delta;\mathbf{a},N) \tag{1.4}\] where the supremum is taken over all sequences of complex weights with \(\|\mathbf{a}\|_{\infty}\leqslant 1\).
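For intuition, all of these quantities are easy to explore numerically for small parameters. The following Python sketch (purely illustrative, playing no role in any proof) evaluates \(S_{d}(\mathbf{x};\mathbf{a},N)\) and estimates \(I_{s,d}(\delta,\boldsymbol{\xi};\mathbf{a},N)\) by Monte Carlo sampling of the box \(\boldsymbol{\xi}+[0,\delta]^{d}\):

```python
import numpy as np

rng = np.random.default_rng(1)

def weyl_sum(x, a):
    """S_d(x; a, N) with d = len(x) and N = len(a)."""
    n = np.arange(1, len(a) + 1, dtype=float)
    phase = sum(x[j] * n ** (j + 1) for j in range(len(x)))
    return np.sum(a * np.exp(2j * np.pi * phase))

def I_mc(s, delta, xi, a, samples=20000):
    """Monte Carlo estimate of I_{s,d}(delta, xi; a, N)."""
    d = len(xi)
    x = xi[None, :] + delta * rng.random((samples, d))
    vals = np.array([abs(weyl_sum(row, a)) ** (2 * s) for row in x])
    return delta ** d * vals.mean()   # box volume times average of |S|^(2s)

# Example: d = 2, s = 2, a = 1, box of side delta = 0.01 at the origin.
N = 100
print(I_mc(2, 0.01, np.zeros(2), np.ones(N)))
```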
### Notation

Throughout the paper, we use the Landau and Vinogradov notations \(U=O(V)\), \(U\ll V\) and \(V\gg U\) to express that \(|U|\leqslant cV\) for some positive constant \(c\), which throughout the paper may depend on the degree \(d\) and occasionally on the small real positive parameter \(\varepsilon\) and the arbitrary real parameter \(t\). We also write \(U\asymp V\) as an equivalent of \(U\ll V\ll U\). Moreover, for any quantity \(V>1\) we write \(U=V^{o(1)}\) (as \(V\to\infty\)) to indicate a function of \(V\) which satisfies \(V^{-\varepsilon}\leqslant|U|\leqslant V^{\varepsilon}\) for any \(\varepsilon>0\), provided \(V\) is large enough. One additional advantage of using \(V^{o(1)}\) is that it absorbs \(\log V\) and other similar quantities without changing the whole expression. We also recall the definition of the \(\ell^{p}\)-norm, which for a sequence of complex numbers \(\mathbf{a}=(a_{n})_{1\leqslant n\leqslant N}\) and a real number \(p\geqslant 1\) is given by \[\|\mathbf{a}\|_{p}=\left(\sum_{n=1}^{N}|a_{n}|^{p}\right)^{1/p}.\] For \(m\in\mathbb{N}\), we write \([m]\) to denote the set \(\{0,1,\ldots,m-1\}\). We denote the cardinality of a finite set \(\mathcal{S}\) by \(\#\mathcal{S}\), and for a measurable set \(\mathcal{T}\subseteq\mathsf{T}_{\nu}\) we write \(\lambda(\mathcal{T})\) for the Lebesgue measure of the appropriate dimension \(\nu\). We use the notation \(\lfloor x\rfloor\) and \(\lceil x\rceil\) for the largest integer no larger than \(x\) and the smallest integer no smaller than \(x\), respectively. We then write \(\{x\}=x-\lfloor x\rfloor\in[0,1)\).

## 2. What we know and what we believe to be true

### State of the art and previous conjectures

In order to get a better sense of what to expect, it is helpful to first record some known bounds that can serve as a benchmark for our ensuing considerations. On the one hand, when \(\delta=1\) the recent advances of Bourgain, Demeter and Guth [6] and Wooley [28] towards the optimal form of the Vinogradov mean value theorem yield the bound \[I_{s,d}^{(0)}(1;\mathbf{a},N)\leqslant\|\mathbf{a}\|_{2}^{2s}N^{o(1)}(1+N^{s- s(d)}) \tag{2.1}\] for all \(s>0\), where \[s(d)=d(d+1)/2.\] For general \(\mathbf{a}\) this is essentially sharp, since for \(\mathbf{a}=\mathbf{1}\) a standard argument shows that \[I_{s,d}^{(0)}(1;N)=J_{s,d}(N)\gg N^{s}+N^{2s-s(d)}, \tag{2.2}\] where \(J_{s,d}(N)\) is given by (1.1). In fact, by adapting the argument of [10, Lemma 3.1] one can show that \(S_{d}(\mathbf{x};N)\gg N^{1/2}\) for a positive proportion of \(\mathbf{x}\in\mathsf{T}_{d}\). On the other hand, for very small values of \(\delta\) we can bound the integral trivially and obtain \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}\|\mathbf{a}\|_{1}^{2s}. \tag{2.3}\] By a slightly more sophisticated argument, combining the bound of (2.1) with Hölder's inequality, we obtain the bound \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d-2s/(d+1)}\|\mathbf{a}\|_ {2}^{2s}N^{o(1)},\qquad 0\leqslant s\leqslant s(d), \tag{2.4}\] see also [16, Equation (2.3)]. Clearly, in the limit \(\delta\to 1\), as expected, the bound (2.4) approaches the bound (2.1). At the same time, we see that for small \(\delta\) this is weaker than the trivial bound (2.3). In the special case \(s=2\), a further example can be derived from [10, Lemma 4.5], which implies that if \(|a_{n}|\leqslant 1\) for \(n=1,\ldots,N\), then \[I_{2,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}N^{2}+\delta^{d-4}N^{1+o( 1)}.\] For lower bounds, observe that for any \(N\) we have \[I_{s,d}^{(0)}(1;N)\ll\delta^{-d}I_{s,d}^{\sharp}(\delta;N).\] Upon combining this with the classical lower bound of (2.2), we thus conclude that \[I_{s,d}^{\sharp}(\delta;N)\gg\delta^{d}(N^{s}+N^{2s-s(d)}).\] Clearly, this suggests the question of whether this bound is sharp, and if so, in what ranges.
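As an aside, the lower bound (2.2) is easy to probe by brute force for very small parameters: \(J_{s,d}(N)\) equals the sum of squared representation numbers of the power-sum vectors. A short illustrative sketch (ours, for experimentation only):

```python
from collections import Counter
from itertools import product

def J(s, d, N):
    """Brute-force count of J_{s,d}(N) as a sum of squared
    representation numbers of (sum n, sum n^2, ..., sum n^d)."""
    counts = Counter()
    for tup in product(range(1, N + 1), repeat=s):
        counts[tuple(sum(n ** i for n in tup) for i in range(1, d + 1))] += 1
    return sum(r * r for r in counts.values())

# For s = d = 2 only diagonal solutions survive, so J_{2,2}(N) = 2N^2 - N,
# matching the N^s term of the lower bound (2.2).
for N in (10, 20, 40):
    assert J(2, 2, N) == 2 * N * N - N
```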
A conjecture along these lines has been proposed in recent work by Wooley [29, Conjectures 8.1 and 8.2]. **Conjecture 2.1** (Wooley [29]).: _Suppose that_ \[s\geqslant\tfrac{1}{4}d(d+1)+1\qquad\text{or}\qquad\delta\geqslant N^{1/d-( d+1)/4}.\] _Then_ \[I_{s,d}^{\sharp}(\delta;N)\leqslant\delta^{d}N^{s+o(1)}+N^{2s-s(d)+o(1)}.\] In Wooley's setting [29], the bound on the number of variables is motivated by considerations concerning the convergence of the singular series; however, it seems not unreasonable that the validity of the bound in Conjecture 2.1 in the \(\delta\)-aspect might extend below the proposed range. We also remark that Wooley allows for general measurable sets, whereas we restrict to axis-aligned hypercubes. Another conjecture that is relevant to our work, and which permits arbitrary positive values of \(\delta\) and \(s\), has been fielded in recent work by Demeter and Langowski [16, Conjecture 1.3]. **Conjecture 2.2** (Demeter-Langowski [16]).: _Let_ \[\rho(d)=\left\lceil 3d^{2}/4\right\rceil-1.\] _We have_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{(d+1)/2}\|\mathbf{a}\|_{2 }^{2s}\left(1+N^{s-\rho(d)/2}\right)N^{o(1)}. \tag{2.5}\] By [16, Theorem 2.4] we have (2.5) for \(d=2\) and \(d=3\) in the full range. Moreover, the authors establish bounds of a similar quality also for \(d=4\) and \(d=5\). We also remark that there is nothing intrinsically special about the power of \(\delta\) occurring in (2.5) or the concomitant value \(\rho(d)\). Rather, it seems that the precise formulation and choice of parameters of Conjecture 2.2 were chosen mostly in view of applications to the mean value of Weyl sums along curves, see [16, Proposition 2.2]. A comparison of Conjectures 2.1 and 2.2 shows that neither is strictly stronger than the other; rather they make different predictions for various ranges of \(s\) and various values of \(\delta\). It is apparent from the discussion preceding Conjecture 2.1 that it is sharp for small \(s\) and \(\delta\) not too small. At the same time, we remark that Conjecture 2.2, if correct, is the best possible in the sense that the exponent \((d+1)/2\) cannot be increased if one wants a bound which holds for all \(\delta\in(0,1)\). Evidence for this has been given in [16], after the formulation of [16, Conjecture 1.3]. Moreover, for _extremely_ small values of \(\delta\), the trivial bound (2.3) is both sharp and stronger than (2.5). It is therefore an interesting question to derive even a valid heuristic for the behaviour of \(I^{(0)}_{s,d}(\delta;\mathbf{a},N)\) that reflects the true expected size of the quantity for all choices of \(\delta\) and \(N\). ### An upper bound for a small cube at the origin and some new conjectures Before embarking on a precise discussion of our results, we remark on a general fact concerning the behaviour of mean values of the type considered in this paper. Typically, for fixed parameters \(d\) and \(\delta\), we endeavour to establish bounds of the shape \[I^{(0)}_{s,d}(\delta;\mathbf{a},N)\leqslant\delta^{d-\alpha}\|\mathbf{a}\|_{2 }^{2s}(1+N^{s-\sigma_{0}})N^{o(1)}\] for some \(\alpha\in[0,d]\) and some \(\sigma_{0}\geqslant 1\) depending on \(d\) and \(\delta\). In particular, if we can establish such a bound at the critical point \(s=\sigma_{0}\), the corresponding results for the _subcritical_ and _supercritical_ ranges \(s<\sigma_{0}\) and \(s>\sigma_{0}\) follow by standard arguments. In this paper we give bounds applicable to both the sub- and supercritical ranges.
Our first result provides a lower bound for the mean value of Weyl sums over a small cube at the origin. The proof, which is based on the continuity of Weyl sums \(S_{d}(\mathbf{x};N)\) as functions of \(\mathbf{x}\), is rather straightforward. We then use this simple bound as a benchmark and a basis for several conjectured upper bounds. It also motivates our results in Section 3, which are based on a variety of new ideas. We define \[\sigma_{d}(\alpha)=\frac{\alpha(2d-\alpha+1)-\{\alpha\}(1-\{\alpha\})}{2}. \tag{2.6}\] **Theorem 2.3**.: _Let_ \[s_{0}(d,\alpha)=\sup\left\{s\geqslant 0:\ I^{(0)}_{s,d}(\delta;N)\leqslant \delta^{d-\alpha}N^{s+o(1)},\ \forall\delta\in[N^{-d},1],\ \text{as}\ N\to\infty\right\}.\] _We then have_ \[s_{0}(d,\alpha)\leqslant\sigma_{d}(\alpha).\] By our above discussion, the conclusion of Theorem 2.3 can be used to derive bounds on \(I^{(0)}_{s,d}(\delta;N)\) for general values of \(s\). In fact, for \(s>\sigma_{d}(\alpha)\) we obtain \[I^{(0)}_{s,d}(\delta;N)\leqslant\delta^{d-\alpha}N^{2s-\sigma_{d}(\alpha)+o(1 )}. \tag{2.7}\] Meanwhile, for \(0<s<\sigma_{d}(\alpha)\), our Theorem 2.3 in combination with Holder's inequality yields \[I^{(0)}_{s,d}(\delta;N)\leqslant\delta^{d-\alpha s/\sigma_{d}(\alpha)}N^{s+o( 1)}.\] To put this into context, we compare Theorem 2.3 with our preceding discussion. Consider first the case \(\alpha=0\), for which \(\sigma_{d}(0)=0\). Consequently, for any \(s\) the bound (2.7) reduces to (2.3). Meanwhile, taking \(\alpha=d\) we obtain \(\sigma_{d}(d)=d(d+1)/2\) which we also know to be sharp when \(\delta=1\). Finally, the value \(\alpha=(d-1)/2\) produces the bound \[\sigma_{d}((d-1)/2)=\frac{3(d^{2}-1)}{8}-\frac{\{(d-1)/2\}(1-\{(d-1)/2\})}{2}= \frac{1}{2}\left(\left\lceil 3d^{2}/4\right\rceil-1\right),\] which recovers Conjecture 2.2 by Demeter and Langowski [16, Conjecture 1.3]. In this way, Theorem 2.3 suggests a natural extension of Conjecture 2.2. **Conjecture 2.4**.: _Fix \(\alpha\in[0,d]\). For any sufficiently large \(N\) and any \(\delta\) in the range \(N^{-d}\leqslant\delta\leqslant 1\), the bound_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d-\alpha}\|\mathbf{a}\|_{2 }^{2s}(1+N^{s-\sigma_{d}(\alpha)})N^{o(1)}\] _holds for all \(s\geqslant 0\)._ We note that we do not suggest that Conjecture 2.4 is always sharp, and there are situations where we do, in fact, obtain stronger upper bounds, as can be gleaned from Figures 3.1, 3.2 and 3.3 below. For \(\delta<N^{-d}\) it is not hard to see that the trivial bound (2.3) gives a stronger result. We also note that a careful inspection of the proof of Theorem 3.5 shows that for any given \(\alpha>0\) Conjecture 2.4 is sharp at the point \(\delta=N^{-\lfloor d-\alpha\rfloor-1}\). The presence of the additional parameter \(\alpha\) in these considerations is somewhat irritating. One checks easily that \[\sigma_{d}(\alpha)=\alpha d\qquad\text{for all }\alpha\in(0,1]\text{.} \tag{2.8}\] For general values of \(\alpha\), one can show by a modicum of computation that \(\sigma_{d}(\alpha)\) is continuous and strictly increasing in \(\alpha\) for \(\alpha\in[0,d]\). 
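Before carrying out that computation, we note that the special values of \(\sigma_{d}(\alpha)\) recorded above are easy to check mechanically. A minimal Python sketch (illustrative only) verifies \(\sigma_{d}(d)=s(d)\), the relation \(\sigma_{d}((d-1)/2)=\rho(d)/2\) with the Demeter-Langowski exponent, and the identity (2.8) for small degrees:

```python
from math import ceil, floor

def sigma(d, a):
    """The exponent sigma_d(alpha) from (2.6)."""
    frac = a - floor(a)
    return (a * (2 * d - a + 1) - frac * (1 - frac)) / 2

for d in range(2, 11):
    assert sigma(d, d) == d * (d + 1) / 2               # sigma_d(d) = s(d)
    rho = ceil(3 * d * d / 4) - 1                       # Demeter-Langowski
    assert abs(sigma(d, (d - 1) / 2) - rho / 2) < 1e-12
    for a in (0.25, 0.5, 0.75, 1.0):                    # identity (2.8)
        assert abs(sigma(d, a) - a * d) < 1e-12
```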
Indeed, to verify the monotonicity claim, we clearly have \[\left(\frac{1}{2}\alpha(2d-\alpha+1)\right)^{\prime}=d-\alpha+1/2,\] while \(\frac{1}{2}\{\alpha\}(1-\{\alpha\})\) is the periodic continuation of the function \(u(1-u)/2\) for \(u\in[0,1)\), and this latter function has derivative \(-u+1/2\in[-1/2,1/2)\), so that the whole function \(\sigma_{d}(\alpha)\) is continuous and satisfies \(\sigma_{d}^{\prime}(\alpha)>0\) for all non-integer \(\alpha<d\). For a fixed value \(s\), denote by \(\alpha_{0}(d,s)\) the unique \(\alpha\) for which \(\sigma_{d}(\alpha)=s\). In this notation, we can change perspective and propose a reformulation of the above conjecture in which we seek to determine the optimal value of \(\alpha\) for any given set of parameters \(s\) and \(d\). **Conjecture 2.5**.: _For any parameters \(d\) and \(s\leqslant s(d)\), and for any sufficiently large \(N\) and any \(\delta\) in the range \(N^{-d}\leqslant\delta\leqslant 1\), we have_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d-\alpha_{0}(d,s)}\| \mathbf{a}\|_{2}^{2s}N^{o(1)}.\] Unfortunately, the function \(\alpha_{0}(d,s)\) is not straightforward to describe explicitly. However, we can give a rough indication of its size. Recalling (2.6), write \[\sigma_{d}(\alpha)=\alpha(2d-\alpha+1)/2-\omega, \tag{2.9}\] and note that \(\omega=\{\alpha\}(1-\{\alpha\})/2\in[0,1/8]\). Upon solving (2.9) for \(\alpha\) and substituting \(\sigma_{d}(\alpha)=s\) we obtain that \[\alpha_{0}(d,s)=d+1/2-\sqrt{d(d+1)-2s+\nu},\] where \(\nu=1/4-2\omega\in[0,1/4]\). With these considerations, for \(s<s(d)\), the bound in Conjecture 2.5 can be seen to be of the size \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{\sqrt{2s(d)-2s}-1/2+\eta(d,s) }\|\mathbf{a}\|_{2}^{2s}N^{o(1)},\] where \[\eta(d,s)\leqslant\frac{c}{\sqrt{(s(d)-s)}}\] for some absolute constant \(c>0\). Finally, we remark that Theorem 2.3 as well as both Conjectures 2.4 and 2.5 address only the range \(\delta\geqslant N^{-d}\). However, for smaller \(\delta\) it is not hard to show that the bound (2.3) is sharp. We give some details on this fact after the proof of Theorem 2.3 below. ## 3. New bounds ### Bounds on mean values with weights We first present a family of bounds that can be obtained by combining [10, Lemma 3.8] with a result of Wooley [29, Theorem 1.3], which improves a previous result of Brandes and Hughes [8]. **Theorem 3.1**.: _Suppose that \(\|\mathbf{a}\|_{\infty}\leqslant 1\) and \(0<s\leqslant s(d)/2\). Suppose that \(N^{-1}\geqslant\delta>N^{-d}\), and let \(k\) be the unique integer satisfying \(N^{-k-1}<\delta\leqslant N^{-k}\). We then have_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{(d+k)/2}N^{s+s(k)/2+o(1)}.\] _Meanwhile, for \(\delta>N^{-1}\) we have the bounds_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\begin{cases}\delta^{d/2}N^{s+o(1) }&\text{ for }N^{-1}<\delta<N^{-1/d},\\ N^{s-1/2+o(1)}&\text{ for }N^{-1/d}<\delta<N^{-1/(2d-1)},\\ \delta^{d-1/2}N^{s+o(1)}&\text{ for }N^{-1/(2d-1)}<\delta<1.\end{cases}\] We remark that for \(\delta\leqslant N^{-d}\) the same methods yield the bound \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}N^{s+s(d)/2+o(1)},\] which is weaker than the trivial bound (2.3) by our assumption that \(s\leqslant s(d)/2\). Since (2.3) is sharp for small \(\delta\), it is worth mentioning that the two bounds coincide at the point \(s=s(d)/2\). The interested reader may also note that the range of validity of Theorem 3.1 covers values of \(s\) and \(\delta\) for which Conjecture 2.1 does not apply.
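For orientation among the various regimes of Theorem 3.1, the following small helper (ours, for numerical experimentation only) returns the exponent \(e\) such that the theorem yields \(I_{s,d}^{(0)}(N^{-\tau};\mathbf{a},N)\leqslant N^{e+o(1)}\):

```python
import math

def thm31_exponent(d, s, tau):
    """Exponent from Theorem 3.1 for delta = N^(-tau),
    valid for 0 < s <= s(d)/2 and ||a||_inf <= 1."""
    assert 0 < s <= d * (d + 1) / 4 and 0 < tau < d
    if tau >= 1:                        # N^(-d) < delta <= N^(-1)
        k = math.floor(tau)             # N^(-k-1) < delta <= N^(-k)
        return s + k * (k + 1) / 4 - tau * (d + k) / 2
    if tau > 1 / d:                     # N^(-1) < delta < N^(-1/d)
        return s - tau * d / 2
    if tau > 1 / (2 * d - 1):           # N^(-1/d) < delta < N^(-1/(2d-1))
        return s - 0.5
    return s - tau * (d - 0.5)          # N^(-1/(2d-1)) < delta < 1
```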
For larger values of \(s\) we have the following more complicated bound. **Theorem 3.2**.: _For any integer \(s\) in the range \(s(d)/2<s<s(d)\) and for any \(\delta\geqslant N^{-1}\), we have_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant N^{s+o(1)}\left(\delta^{d-1}+ \sum_{j=1}^{d-1}\min\{\delta^{j-1}(N^{-1/2}+N^{-\eta_{s,d}(j)}),\delta^{(d+j-1 )/2}N^{s-s(d)/2}\}\right),\] _where_ \[\eta_{s,d}(\ell)=\left(s(d)-s\right)\frac{d-\ell+1}{d+\ell+1}\qquad(1\leqslant \ell\leqslant d-1). \tag{3.1}\] Unfortunately, the fairly general bound of Theorem 3.2 may be somewhat hard to parse. However, we note that by always taking the second term in the minimum we obtain the following simple bound. **Corollary 3.3**.: _For any integer \(s\) in the range \(s(d)/2<s<s(d)\) and for any \(\delta\geqslant N^{-1}\) we have_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d/2}N^{2s-s(d)/2+o(1)}.\] Similarly, by always using the first expression in the minimum, one can show with a modicum of calculations that in the range \(s(d)/2<s<s(d)\) and for all \(\delta\leqslant N^{-1/(2d-2)}\) one has \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant N^{s-1/2+o(1)}.\] Clearly, the bound of Corollary 3.3 is not very strong in terms of \(\delta\), so for the convenience of the reader we state a further corollary to Theorem 3.2 concerning the range of \(\delta\) in which the first term dominates. While by no means deep, this consequence of our result needs some more notation to state. For the function \[f(x)=\frac{d+1-x}{(d+x+1)(d-x)} \tag{3.2}\] we define the parameter \(\vartheta(d)\) by putting \[\vartheta(d)=\min\biggl{\{}f\left(d+1-\left\lfloor\sqrt{2(d+1)}\right\rfloor \right),f\left(d+1-\left\lceil\sqrt{2(d+1)}\right\rceil\right)\biggr{\}}. \tag{3.3}\] In particular, we see that \[\vartheta(d)\sim\frac{1}{2d}\qquad(d\to\infty).\] A list of explicit values of \(\vartheta(d)\) for \(2\leqslant d\leqslant 10\) is given in Table 3.1. **Corollary 3.4**.: _Let \(d\geqslant 2\) and recall the definition of \(\vartheta(d)\) from (3.3). Furthermore, fix some integer \(s(d)/2<s<s(d)\) and a sequence of weights satisfying \(\|\mathbf{a}\|_{\infty}\leqslant 1\). Suppose that_ \[\delta>\max\{N^{-1/(2d-2)},N^{-(s(d)-s)\vartheta(d)}\}.\] _Then_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d-1}N^{s+o(1)}.\] The proofs of Theorems 3.1 and 3.2 depend crucially on the existence of non-trivial bounds for certain inhomogeneous Vinogradov systems. For \(\mathbf{h}=(h_{1},\ldots,h_{d})\in\mathbb{Z}^{d}\) let \(J_{s,d}(\mathbf{h};N)\) be the number of solutions to the system of \(d\) equations \[\sum_{j=1}^{2s}(-1)^{j}n_{j}^{i}=h_{i}\qquad(i=1,\ldots,d), \tag{3.4}\] in integer variables \(1\leqslant n_{1},\dots,n_{2s}\leqslant N\). By the triangle inequality, we trivially have \[J_{s,d}(\mathbf{h};N)\leqslant J_{s,d}(N)\leqslant N^{s+o(1)}, \tag{3.5}\] where in the last step we have used the classical Vinogradov mean value bound of [6, Theorem 1.1] in the subcritical range \(s\leqslant s(d)\), see (1.2). For most choices of \(\mathbf{h}\), recent results by Brandes and Hughes [8] and Wooley [29] give some slight improvement over this in the entire subcritical range. However, the bounds of their work are not expected to be sharp, and indeed one may be tempted to conjecture that for all integers \(s\) in some range \(s\leqslant s_{1}(d)\), for \(s_{1}(d)\leqslant s(d)-1\), one has the stronger bound \[\max_{\mathbf{h}\neq\mathbf{0}}J_{s,d}(\mathbf{h};N)\leqslant N^{s-\nu+o(1)} \tag{3.6}\] for some \(\nu\in(0,1]\).
Clearly, the sharpest version of the conjecture in (3.6) is the one corresponding to the parameters \(\nu=1\) and \(s_{1}(d)=s(d)-1\). Note that for \(\nu>1\) the bound (3.6) is false even for small values of \(s\), as can be seen by choosing \(n_{1},n_{2}\) and \(\mathbf{h}\) such that \(n_{1}^{j}-n_{2}^{j}=h_{j}\) for \(1\leqslant j\leqslant d\), thus reducing the system (3.4) to a homogeneous system in \(2(s-1)\) variables which has \(J_{s-1,d}(N)\gg N^{s-1}\) solutions. However, the set of possible choices for \(\mathbf{h}\) for which the bound (3.6) is sharp with \(\nu=1\) is fairly small. Consequently, in many cases we obtain stronger results by averaging over the \(\mathbf{h}\) (see Lemma 5.3 below). Conditionally on (3.6) being known for \(\nu=1\), we have the following. **Theorem 3.5**.: _Let \(d\geqslant 2\) and \(\|\mathbf{a}\|_{\infty}\leqslant 1\). Assume that (3.6) holds with \(\nu=1\) for all \(s\) in some range \(s\leqslant s_{1}(d)\). Let \(1\geqslant\delta>N^{-d}\), and let \(k\) be the unique integer satisfying \(N^{-k-1}<\delta\leqslant N^{-k}\)._ 1. _Suppose that_ \(0<s\leqslant\min\{s(d)/2,s_{1}(d)\}\)_._ * _For_ \(k\geqslant 1\)_, we have_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{(d+k)/2}N^{s+s(k)/2+o(1)}.\] * _For_ \(k=0\)_, we have_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\begin{cases}\delta^{d/2}N^{s+o(1) }&N^{-1}<\delta\leqslant N^{-2/d},\\ N^{s-1+o(1)}&N^{-2/d}<\delta\leqslant N^{-1/d},\\ \delta^{d}N^{s+o(1)}&N^{-1/d}<\delta\leqslant 1.\end{cases}\] 2. _Suppose now that_ \(s(d)/2<s\leqslant s_{1}(d)\)_. For_ \(k\geqslant 0\)_, we have_ \[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant N^{s+o(1)}\left(\delta^{d}+\min\{ \delta^{k}N^{s(k)-1},\delta^{(k+d)/2}N^{s-(s(d)-s(k))/2}\}\right).\] We remark that Wooley's range for \(s\) coincides with that in part (2) of Theorem 3.5 when \(d\equiv 0\) or \(d\equiv 3\pmod{4}\), while for \(d\equiv 1\) and \(d\equiv 2\pmod{4}\) the value \(s=(s(d)+1)/2\) is not covered by [29, Conjecture 8.1], whereas our result is applicable. This is in fact the situation in the (otherwise well-understood) case \(d=s=2\), which we discuss below as an example. Unfortunately, proving (3.6) seems to be quite delicate in general even for non-optimal values of \(\nu\). In some special cases, however, suitable bounds are available. For instance, Dendrinos, Hughes and Vitturi [17, Lemmas 5 and 6] showed that (3.6) holds with \(\nu=1\) in the cases \(d=s=2\) (which implies the statement for \((s,d)=(3,2)\)) and \(d=s=3\). Thus, after a comparison of all terms in Theorem 3.5, in combination also with (2.3), we obtain the following unconditional bounds. **Corollary 3.6**.: _Let \(\|\mathbf{a}\|_{\infty}\leqslant 1\). For \(s=d=2\) as well as \(d=3\) and \(s=2\) or \(s=3\) the mean value \(I_{s,d}(\delta;\mathbf{a},N)\) is bounded above as detailed in Table 3.2._ For comparison, in the special case \(\mathbf{a}=\mathbf{1}\), the conjecture proposed by Wooley [29] (Conjecture 2.1) claims that \[I_{2,2}^{\sharp}(\delta;N) \leqslant\delta^{2}N^{2+o(1)}\qquad\text{for }\delta\geqslant N^{-1/4},\] \[I_{2,3}^{\sharp}(\delta;N) \leqslant\delta^{3}N^{2+o(1)}\qquad\text{for }\delta\geqslant N^{-2/3},\] \[I_{3,3}^{\sharp}(\delta;N) \leqslant\delta^{3}N^{3+o(1)}\qquad\text{for }\delta\geqslant N^{-2/3}.\] Clearly, the range of applicability here is much smaller than that of our setting, and for \(d=2\) Corollary 3.6 establishes the bound conjectured by Wooley in a much larger range than suggested in [29]. 
For \(d=3\), we establish the bounds from Conjecture 2.1 in the range \(N^{-1/3}\leqslant\delta\leqslant 1\), but fall short in the range \(N^{-2/3}\leqslant\delta<N^{-1/3}\). ### Bounds on mean values with shifts When \(\delta\) is not too small, we also have some results that stem from exploiting the structure of large Weyl sums. **Theorem 3.7**.: _For any \(s>0\) and any \(\delta\geqslant N^{-3/(6+2s)}\), we have_ \[I_{s,2}^{\sharp}(\delta;N)\leqslant\delta^{2}N^{2s(1-3/(6+2s))+o(1)}.\] For \(d\geqslant 3\) we put \[D=\min\{2^{d-1},2d(d-1)\}.\] We then have the following. **Theorem 3.8**.: _For any \(s>(s(d)D-d^{2}-1)/2\) and \(\delta\geqslant N^{-(d+1)/(2(2s+d^{2}+1))}\), we have_ \[I_{s,d}^{\sharp}(\delta;N)\leqslant\delta^{d}N^{2s\left(1-s(d)/(2s+d^{2}+1) \right)+o(1)}.\] For context, note that when \(\delta\) assumes the smallest possible value, the upper bounds in Theorems 3.7 and 3.8 take the shape \[I_{s,2}^{\sharp}(\delta;N)\leqslant N^{2s-3\left(1-\frac{4}{s+3}\right)+o(1) }\qquad\text{and}\qquad I_{s,d}^{\sharp}(\delta;N)\leqslant N^{2s-s(d)\left( 1-\frac{d^{2}}{2s+d^{2}+1}\right)+o(1)},\] respectively. Clearly, \(\delta\to 1\) as \(s\to\infty\), so it is no surprise that these expressions converge to the bound of (1.2) (and thus also Conjecture 2.1) as \(s\) tends to infinity. Our upper bounds are complemented by the following general lower bounds. **Theorem 3.9**.: _Fix \(s>0\)._ 1. _If_ \(\delta\geqslant c_{1}/N\) _for some absolute constant_ \(c_{1}>0\)_, we have_ \[I_{s,2}^{\flat}(\delta;N)\gg\delta^{2}N^{s-1}\max\left\{1,(\delta N)^{s-2} \right\}.\] 2. _If_ \(\delta\geqslant c_{2}/\sqrt{N}\) _for some absolute constant_ \(c_{2}>0\)_, we have_ \[I_{s,2}^{\flat}(\delta;N)\gg\delta^{2}N^{3(s-1)/2}.\] We observe that for \(\delta\geqslant c_{2}/\sqrt{N}\) the second bound of Theorem 3.9 improves the first bound, which at the point \(\delta=N^{-1/2}\) takes the form \(\delta^{2}N^{3s/2-2}\). Our methods also give a bound for general dimension \(d\geqslant 2\). For \(1\leqslant k<d\), it is convenient to define \[\nu(d,k)=\min\left\{\frac{1}{2k},\frac{1}{2d-k}\right\}. \tag{3.7}\] In that notation, our bound is as follows. **Theorem 3.10**.: _Fix any \(s>0\) and \(k\in\{1,\dots,d\}\). For any \(\delta\) with \(\delta\geqslant CN^{-\nu(d,k)}\log N\) for some sufficiently large constant \(C\), we have_ \[I_{s,d}^{\flat}(\delta;N)\geqslant\delta^{d}N^{d+s-s(d)+o(1)}\max\left\{1, \left(\delta^{1/\nu(d,k)}N\right)^{s-d}\right\}.\] In particular, for \(s\leqslant d\) the bound of Theorem 3.10 simplifies to \[I_{s,d}^{\flat}(\delta;N)\geqslant\delta^{d}N^{s+d-s(d)+o(1)}\] which does not depend on \(k\), and thus holds for \(\delta\geqslant N^{-\mu(d)}\) where \[\mu(d)=\max_{k=1,\dots,d}\nu(d,k).\] We obviously have \[\mu(d)\sim\frac{3}{4d}\qquad(d\to\infty).\] Moreover, a list of explicit values of \(\mu(d)\) for \(2\leqslant d\leqslant 10\) is given in Table 3.3. ### Discussion and comparison of our results Here we compare the bounds proposed by Demeter and Langowski [16, Conjecture 1.3] as well as Wooley [29, Conjecture 8.2] with our Conjecture 2.4 and with our other upper bounds. It should be emphasised that we do this in the cases \(s=2,3\), for which [16, Conjecture 1.3] is actually established in [16, Theorem 2.4].
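The constants \(\vartheta(d)\) and \(\mu(d)\) tabulated in Tables 3.1 and 3.3 are easy to recompute directly from (3.2), (3.3) and (3.7); a minimal sketch (ours, for the reader's convenience):

```python
import math

def theta(d):
    """The quantity from (3.3), with f as in (3.2)."""
    f = lambda x: (d + 1 - x) / ((d + x + 1) * (d - x))
    r = math.sqrt(2 * (d + 1))
    return min(f(d + 1 - math.floor(r)), f(d + 1 - math.ceil(r)))

def mu(d):
    """mu(d) as the maximum of nu(d, k) from (3.7) over 1 <= k < d."""
    return max(min(1 / (2 * k), 1 / (2 * d - k)) for k in range(1, d))

for d in range(2, 11):
    print(d, theta(d), mu(d))   # theta(d) ~ 1/(2d) and mu(d) ~ 3/(4d)
```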
To compare our various upper bounds, it is convenient to define \[\kappa_{s,d}^{(0)}(\tau) =\limsup_{N\to\infty}\sup_{\|\mathbf{a}\|_{\infty}\leqslant 1} \frac{\log I_{s,d}^{(0)}(N^{-\tau};\mathbf{a},N)}{\log N},\] \[\kappa_{s,d}^{\sharp}(\tau) =\limsup_{N\to\infty}\frac{\log I_{s,d}^{\sharp}(N^{-\tau};N)}{ \log N},\] where in \(\kappa_{s,d}^{(0)}(\tau)\), the inner supremum is taken over all sequences of complex weights with \(\|\mathbf{a}\|_{\infty}\leqslant 1\). It follows from (1.4) that \[\kappa_{s,d}^{\sharp}(\tau)\leqslant\kappa_{s,d}^{(0)}(\tau).\] We now present some plots of \(\kappa_{s,d}^{\sharp}(\tau)\) and \(\kappa_{s,d}^{(0)}(\tau)\) for small values of \(d\) and \(s\), which help to compare various bounds and conjectures. Figure 3.1 compares the bounds proposed by Demeter and Langowski [16, Theorem 2.4] and Wooley [29, Conjecture 8.2], as well as the upper bound of Corollary 3.6 and the lower bounds of Theorem 3.9, in the case \(d=s=2\). We note that the results and conjectures of [29] apply only to \(I_{s,d}^{\sharp}(\delta;N)\), while ours apply to the more general quantity \(I_{s,d}^{(0)}(\delta;\mathbf{a},N)\) for \(\|\mathbf{a}\|_{\infty}\leqslant 1\). Observe that Demeter and Langowski [16] conjecture (and prove) diagonal behaviour up to the point \(s=\rho(2)/2=1\), which puts our configuration of parameters into the supercritical range. In contrast, our more flexible formulation in Conjecture 2.5 allows us to choose parameters in such a way that the value \(s=2\) does correspond to the critical point. Indeed, from (2.8) we see that the choice of \(\alpha=1\) is optimal for our choice of parameters, and consequently our conjecture takes a stronger form than the result obtained by Demeter and Langowski [16]. Moreover, it is evident that at least for the choice of parameters at hand, our conjecture is fully established by the bounds of Corollary 3.6. We also note that our Corollary 3.6 coincides with the bound conjectured by Wooley [29, Conjecture 8.2] in the latter one's range of applicability, but is valid for a significantly larger range of \(\delta\). In Figure 3.2 we present the proved and conjectured bounds for \(\kappa_{3,3}^{(0)}(\tau)\) and \(\kappa_{3,3}^{\sharp}(\tau)\).

Figure 3.2. Comparison of upper bounds and conjectures on \(\kappa_{3,3}^{(0)}(\tau)\) and \(\kappa_{3,3}^{\sharp}(\tau)\) for various values of \(\delta=N^{-\tau}\).

In this setting, Demeter and Langowski [16, Conjecture 1.3] address the case \(\alpha=(d-1)/2=1\), so in view of (2.8) the critical point of their conjecture coincides with that of our Conjecture 2.5, and consequently they anticipate the same bound. Our Corollary 3.6 gives bounds which are actually stronger than those in [16, Conjecture 1.3] and Conjecture 2.5 for \(\delta>N^{-1/2}\), but it is not strong enough to establish them in the full range. It also establishes Wooley's conjecture [29, Conjecture 8.2] for \(\delta\geqslant N^{-1/3}\). Note that for \(\delta<N^{-3}\) the trivial bound (2.3) is sharp. Our final Figure 3.3 compares the bounds for \(\kappa_{3,2}^{(0)}(\tau)\) and \(\kappa_{3,2}^{\sharp}(\tau)\). Again, it is obvious from the graph that the theorem by Demeter and Langowski, optimised for a different set of parameters, fails to be sharp in this setting, and indeed, we obtain sharper bounds in our Corollary 3.6 for all \(\delta<N^{-2}\) as well as \(\delta>N^{-1/2}\). In our Conjecture 2.4, we are allowed to take \(\alpha<1\), and it follows from (2.8) that the value \(\alpha=2/3\) is optimal.
Observe that in this situation, the bounds of our Conjecture 2.5 and the result of Demeter and Langowski [16] coincide. Wooley's conjecture [29] applies to \(\tau\leqslant 2/3\). As in the previous setting, this conjecture is overfulfilled for \(\delta>N^{-3/7}\), but open for \(N^{-3/7}>\delta>N^{-3}\). We see again that our bounds establish Wooley's conjecture [29, Conjecture 8.2] for \(\delta\geqslant N^{-1/3}\), but fall short in the range \(N^{-2/3}<\delta<N^{-1/3}\). **Remark 3.11**.: A common feature of Figures 3.1, 3.2 and 3.3 is that the bounds in the extreme ranges \(\tau>d\) (corresponding to \(\delta\leqslant N^{-d}\)) and \(\tau<1/d\) (corresponding to \(\delta>N^{-1/d}\)) are represented by non-coinciding parallel lines. This is particularly intriguing since in both of these ranges the bounds are proven to be sharp, which raises the question of what the 'truth' looks like between these two ranges. Our result of Corollary 3.6 shows that the 'true' graph cannot be entirely convex or entirely concave, even in the otherwise well-understood case of small degrees and few variables. Instead, there we detect a noticeable plateau at the peak at the origin, and a lowland plain for the averages over larger boxes, but the shape of the slope connecting the two is unclear. This is an indication that the average behaviour of exponential sums over short intervals (and by extension their pointwise behaviour) is governed by phenomena that are poorly understood and deserving of more investigation. **Remark 3.12**.: We omitted to include our lower bounds in the graphs. The reason for this is that our lower bounds are uniform in \(\boldsymbol{\xi}\), that is, in the location of the box within the unit torus. In contrast, our upper bounds either specifically discuss or at least accommodate the box located at the origin, where the exponential sum is known to have a spike. Thus, the lower bounds are of no representative value in the vicinity of the origin, where our upper bounds are known to be sharp. We have no evidence as to whether the lower bound might be sharp at some \(\boldsymbol{\xi}\) away from the origin. ## 4. Proof of Theorem 2.3 Let \(\delta\in[N^{-d},1]\) be fixed, and define \(\mathcal{D}=[-\delta,\delta]^{d}\). Write further \[\mathcal{C}=\prod_{j=1}^{d}[-cN^{-j},cN^{-j}]\] for some positive \(c<1/(8d)\). Clearly, for \(\mathbf{x}\in\mathcal{C}\) we have \(|x_{1}n+\ldots+x_{d}n^{d}|\leqslant 1/8\) and hence \[|S_{d}(\mathbf{x};N)|\gg N.\] Define \(\kappa\in[0,d]\) by the relation \(\delta^{-1}=N^{\kappa}\), and put \(k=\lfloor\kappa\rfloor\) and \(\tau=\kappa-k=\{\kappa\}\), so that we have the inequalities \(N^{-(k+1)}<\delta\leqslant N^{-k}\). Since \[\operatorname{vol}(\mathcal{C}\cap\mathcal{D})\asymp\delta^{k}\prod_{j=k+1}^{ d}N^{-j}\asymp(N^{-(k+\tau)})^{k}N^{-s(d)+s(k)}\asymp N^{-s(d)-k(k-1+2\tau)/2},\] where by convention the empty product is taken to have value \(1\), we have \[\int_{\mathcal{D}}|S_{d}(\mathbf{x};N)|^{2s}dx\gg\operatorname{vol}(\mathcal{ C}\cap\mathcal{D})N^{2s}\asymp N^{2s-s(d)-k(k-1+2\tau)/2}.\] From the definition of \(s_{0}(d,\alpha)\) we also have the requirement that \[I_{s,d}^{(0)}(\delta;N)\leqslant\delta^{d-\alpha}N^{s+o(1)}\] for \(s\leqslant s_{0}(d,\alpha)\). Thus we need that \[N^{2s-s(d)-k(k-1+2\tau)/2}\leqslant(N^{-k-\tau})^{d-\alpha}N^{s+o(1)},\] and in particular \[s\leqslant\frac{d(d+1)}{2}+\frac{k(k-1+2\tau)}{2}-(k+\tau)(d-\alpha).\] Recall now that we aim for a statement that holds for all \(\delta\in[N^{-d},1]\).
We therefore want to minimise the expression
\[F(k,\tau)=\frac{d(d+1)}{2}+\frac{k(k-1+2\tau)}{2}-(k+\tau)(d-\alpha).\]
Observe that formally we have
\[F(k,1)=F(k+1,0), \tag{4.1}\]
as can be confirmed by a straightforward computation. Thus, we can extend the range of \(\tau\in[0,1)\) by including the endpoint. Suppose first that \(\alpha\not\in\mathbb{Z}\). Clearly, we have
\[\partial F(k,\tau)/\partial\tau=k-(d-\alpha). \tag{4.2}\]
Consequently, for any fixed value of \(k\) the function \(F(k,\tau)\) is minimal for \(\tau=0\) when \(k>d-\alpha\), and for \(\tau=1\) when \(k<d-\alpha\). Assume first that \(k>d-\alpha\), so that we can assume that \(\tau=0\). In this case we have
\[\frac{\partial F(k,0)}{\partial k}=k-(d-\alpha+1/2),\]
which is optimal when \(k\) is taken to be the integer that is closest to \(d-\alpha+1/2\). Upon writing \(d-\alpha+1/2=\lfloor d-\alpha\rfloor+1+(\{d-\alpha\}-1/2)\) and observing that \((\{d-\alpha\}-1/2)\in(-1/2,1/2)\), we see that this closest integer is given by \(k=\lfloor d-\alpha\rfloor+1\). Similarly, if \(k<d-\alpha\), we have \(\tau=1\) and thus
\[\frac{\partial F(k,1)}{\partial k}=k-(d-\alpha-1/2).\]
In this case we have \(d-\alpha-1/2=\lfloor d-\alpha\rfloor+(\{d-\alpha\}-1/2)\), where we note that \((\{d-\alpha\}-1/2)\in(-1/2,1/2)\), so that the optimal value for \(k\) in this setting is given by \(k=\lfloor d-\alpha\rfloor\). Consequently, the function \(F(k,\tau)\) is minimised either for \(k=\lfloor d-\alpha\rfloor+1\) and \(\tau=0\), or for \(k=\lfloor d-\alpha\rfloor\) and \(\tau=1\). Upon recalling (4.1), it is clear that these values coincide. It thus remains to compute the value of the minimum by inserting the values \(k=\lfloor d-\alpha\rfloor+1\) and \(\tau=0\). Upon writing \(\lfloor d-\alpha\rfloor=d-\alpha-\{d-\alpha\}\) and noting that \(\{d-\alpha\}=1-\{\alpha\}\) we find that
\[\begin{split}s&\leqslant F(\lfloor d-\alpha\rfloor+1,0)\\ &=\frac{d(d+1)}{2}+\frac{\lfloor d-\alpha\rfloor(\lfloor d-\alpha\rfloor+1)}{2}-(\lfloor d-\alpha\rfloor+1)(d-\alpha)\\ &=\frac{d(d+1)}{2}-\frac{(d-\alpha)(d-\alpha+1)}{2}-\frac{\{d-\alpha\}}{2}+\frac{(\{d-\alpha\})^{2}}{2}\\ &=\frac{\alpha(2d-\alpha+1)}{2}-\frac{\{\alpha\}(1-\{\alpha\})}{2}.\end{split}\]
Finally, when \(\alpha\in\mathbb{Z}\) we conclude from (4.2) that \(F(k,\tau)\) is minimal for \(\tau=0\) when \(k>d-\alpha\) and for \(\tau=1\) when \(k<d-\alpha\), and that it is constant in \(\tau\) when \(k=d-\alpha\). In combination with the continuity property (4.1) it follows that \(F\) is minimised for \(k=d-\alpha\) and on the entire interval \(\tau\in[0,1]\), and we have the explicit value
\[F(d-\alpha+1,0)=\frac{d(d+1)}{2}-\frac{(d-\alpha)(d-\alpha+1)}{2}=\frac{\alpha(2d-\alpha+1)}{2}\]
as well. This completes the proof of Theorem 2.3.

**Remark 4.1**.: It remains to comment on the situation when \(\delta\leqslant N^{-d}\). Indeed, adapting the strategy of the above proof to this eventuality, we find that \(\operatorname{vol}(\mathcal{C}\cap\mathcal{D})\asymp\delta^{d}\), and consequently \(\delta^{d}N^{2s}\ll I_{s,d}^{(0)}(\delta;N)\), which matches the trivial bound (2.3).

## 5. Transition to inhomogeneous mean values

In the following, we denote by \(\mathsf{J}_{s,d}(\delta;N)\) the number of solutions to the system of \(d\) inequalities
\[\left|\sum_{j=1}^{2s}(-1)^{j}n_{j}^{i}\right|\leqslant\delta^{-1},\qquad i=1,\ldots,d,\]
in integer variables \(1\leqslant n_{1},\ldots,n_{2s}\leqslant N\). We recall [10, Lemma 3.8], in a form better suited to our applications.
**Lemma 5.1**.: _If \(|a_{n}|\leqslant 1\), \(n=1,\dots,N\), then_
\[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}\mathsf{J}_{s,d}(\delta;N).\]
Recall the definition of \(J_{s,d}(\mathbf{h};N)\) from the preamble of (3.5) above. The following is [29, Theorem 1.3].

**Lemma 5.2**.: _Suppose that \(d\geqslant 2\) and \(\mathbf{h}\neq\mathbf{0}\). Let \(\ell\) be the smallest integer for which \(h_{\ell}\neq 0\), and suppose that \(\ell\leqslant d-1\). Then for any integer \(s\leqslant d(d+1)/2\), we have_
\[J_{s,d}(\mathbf{h};N)\leqslant N^{s-1/2+o(1)}+N^{s-\eta_{s,d}(\ell)+o(1)},\]
_where \(\eta_{s,d}(j)\) is as given in (3.1)._

We point out that we do not have any bound in the situation when \(\ell=d\). Observe moreover that \(J_{s,d}(\mathbf{h};N)=0\) trivially when \(|h_{j}|>2sN^{j}\) for any \(j=1,\dots,d\). Define
\[\mathcal{U}=[-\delta^{-1},\delta^{-1}]^{d}\subseteq\mathbb{Z}^{d}\qquad\text{and}\qquad\mathcal{V}=\prod_{j=1}^{d}[-2sN^{j},2sN^{j}]\subseteq\mathbb{Z}^{d}.\]
Then for \(1\leqslant j\leqslant d\) put
\[\mathcal{H}_{j}=\{\mathbf{h}\in\mathcal{U}\cap\mathcal{V}:\ \mathbf{h}=(0,\dots,0,h_{j},\dots,h_{d}),\ h_{j}\neq 0\}.\]
In this notation, we have the obvious partition
\[\mathcal{U}\cap\mathcal{V}=\{\mathbf{0}\}\cup\bigcup_{j=1}^{d}\mathcal{H}_{j},\]
so that
\[\mathsf{J}_{s,d}(\delta;N)=\sum_{\mathbf{h}\in\mathcal{U}\cap\mathcal{V}}J_{s,d}(\mathbf{h};N)=J_{s,d}(N)+\sum_{j=1}^{d}\sum_{\mathbf{h}\in\mathcal{H}_{j}}J_{s,d}(\mathbf{h};N). \tag{5.1}\]
Next we note that for each \(j=1,\dots,d\) we have
\[\#\mathcal{H}_{j}\asymp\prod_{i=j}^{d}\min\{\delta^{-1},N^{i}\}=\delta^{-(d-j+1)}\prod_{i=j}^{d}\min\{1,N^{i}\delta\}.\]
In particular, if \(\delta\in[N^{-k-1},N^{-k})\) with some integer \(k\) we have
\[\#\mathcal{H}_{j}\asymp\delta^{-(d-j+1)}\prod_{\begin{subarray}{c}i=j\\ i\leqslant k\end{subarray}}^{d}(N^{i}\delta),\]
where the empty product should be interpreted as having value \(1\). Consequently, we may write
\[\#\mathcal{H}_{j}\asymp\begin{cases}\delta^{-(d-j+1)}&\text{for $k<j$},\\ \delta^{-(d-k)}N^{(k(k+1)-j(j-1))/2}&\text{for $j\leqslant k<d$},\\ N^{(d(d+1)-j(j-1))/2}&\text{for $d\leqslant k$}.\end{cases} \tag{5.2}\]
For future reference we also record the obvious fact that
\[\#\mathcal{H}_{1}\geqslant\ldots\geqslant\#\mathcal{H}_{d},\]
as well as the bound
\[\#\mathcal{H}_{1}\asymp\delta^{-d+k}N^{s(k)} \tag{5.3}\]
which is valid for \(0\leqslant k\leqslant d\). Finally, we also record the following simple bound.

**Lemma 5.3**.: _Suppose that \(d\geqslant 2\). For any finite set \(\mathcal{H}\subseteq\mathbb{Z}^{d}\) we have_
\[\sum_{\mathbf{h}\in\mathcal{H}}J_{s,d}(\mathbf{h};N)\leqslant\left(\#\mathcal{H}J_{2s,d}(N)\right)^{1/2}.\]
Proof.: By Cauchy's inequality we have
\[\begin{split}\left(\sum_{\mathbf{h}\in\mathcal{H}}J_{s,d}(\mathbf{h};N)\right)^{2}&\leqslant\#\mathcal{H}\sum_{\mathbf{h}\in\mathcal{H}}J_{s,d}(\mathbf{h};N)^{2}\\ &\leqslant\#\mathcal{H}\sum_{\mathbf{h}\in\mathbb{Z}^{d}}J_{s,d}(\mathbf{h};N)^{2}=\#\mathcal{H}J_{2s,d}(N),\end{split}\]
and the result follows.

## 6. Proof of Theorems 3.1–3.5

### General upper bounds for weighted Weyl sums over small boxes

We are now ready to establish our most general upper bound for Weyl sums, which implies Theorems 3.1 and 3.2 as well as Theorem 3.5 as special cases. The following result serves as a starting point for all ensuing deliberations.

**Proposition 6.1**.: _Suppose that \(\|\mathbf{a}\|_{\infty}\leqslant 1\)._
1.
_For any_ \(s\leqslant s(d)\) _we have_
\[\begin{split}I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}N^{s+o(1)}\bigg(&\min\left\{\#\mathcal{H}_{d},(\#\mathcal{H}_{d})^{1/2}\left(1+N^{s-s(d)/2}\right)\right\}\\ &+\sum_{j=1}^{d-1}\min\left\{\#\mathcal{H}_{j}(N^{-1/2}+N^{-\eta_{s,d}(j)}),(\#\mathcal{H}_{j})^{1/2}(1+N^{s-s(d)/2})\right\}\bigg),\end{split}\]
_where_ \(\eta_{s,d}(j)\) _is as given in (3.1)._
2. _Suppose now that (3.6) is known for some_ \(\nu\) _and some_ \(s_{1}(d)\)_. For all integers_ \(s\leqslant s_{1}(d)\) _and any_ \(\delta\in(0,1]\) _we have the potentially stronger bound_
\[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}N^{s+o(1)}\left(1+\min\{\#\mathcal{H}_{1}N^{-\nu},(\#\mathcal{H}_{1})^{1/2}(1+N^{s-s(d)/2})\}\right).\]
Proof.: Our starting point is the decomposition (5.1). First we observe that we can apply Lemma 5.2 to the first \(d-1\) of the inner summands. Furthermore, for \(j=d\) we use the bound
\[\sum_{\mathbf{h}\in\mathcal{H}_{d}}J_{s,d}(\mathbf{h};N)\ll\min\left\{\#\mathcal{H}_{d}J_{s,d}(N),(\#\mathcal{H}_{d})^{1/2}J_{2s,d}(N)^{1/2}\right\},\]
which combines (3.5) and Lemma 5.3. Recalling (1.2), we obtain
\[\sum_{\mathbf{h}\in\mathcal{H}_{d}}J_{s,d}(\mathbf{h};N)\ll N^{s+o(1)}\min\left\{\#\mathcal{H}_{d},(\#\mathcal{H}_{d})^{1/2}\left(1+N^{s-s(d)/2}\right)\right\}.\]
Similarly, for \(1\leqslant j\leqslant d-1\) we have
\[\sum_{\mathbf{h}\in\mathcal{H}_{j}}J_{s,d}(\mathbf{h};N)\ll N^{s+o(1)}\min\left\{\#\mathcal{H}_{j}(N^{-1/2}+N^{-\eta_{s,d}(j)}),(\#\mathcal{H}_{j})^{1/2}\left(1+N^{s-s(d)/2}\right)\right\}.\]
Combining both of these bounds with the result of Lemma 5.1 leads to the desired conclusion in the unconditional case.

For the conditional setting we only need to make some minor modifications to the above argument. Again starting from (5.1), we can now use (3.6) inside all of the inner summands. Thus, for \(1\leqslant j\leqslant d\) we have
\[\sum_{\mathbf{h}\in\mathcal{H}_{j}}J_{s,d}(\mathbf{h};N)\ll N^{s+o(1)}\min\left\{\#\mathcal{H}_{j}N^{-\nu},(\#\mathcal{H}_{j})^{1/2}(1+N^{s-s(d)/2})\right\}.\]
Substituting this back into (5.1) and invoking Lemma 5.1 yields
\[\begin{split}I_{s,d}^{(0)}(\delta;\mathbf{a},N)&\leqslant\delta^{d}N^{s+o(1)}\left(1+\sum_{j=1}^{d}\min\left\{\#\mathcal{H}_{j}N^{-\nu},(\#\mathcal{H}_{j})^{1/2}(1+N^{s-s(d)/2})\right\}\right)\\ &\leqslant\delta^{d}N^{s+o(1)}\left(1+\min\left\{\#\mathcal{H}_{1}N^{-\nu},(\#\mathcal{H}_{1})^{1/2}(1+N^{s-s(d)/2})\right\}\right),\end{split}\]
where in the last step we have used that \(\#\mathcal{H}_{1}=\max_{j}\#\mathcal{H}_{j}\).

### Proofs of Theorems 3.1 and 3.5

We now specialise to the case \(s\leqslant s(d)/2\). In that situation, the conclusion of Proposition 6.1(1) can be simplified significantly.

**Lemma 6.2**.: _For any integer \(s\leqslant s(d)/2\) and any \(\delta\in(0,1]\) we have_
\[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}N^{s+o(1)}\left(\min\{\#\mathcal{H}_{1}N^{-1/2},(\#\mathcal{H}_{1})^{1/2}\}+(\#\mathcal{H}_{d})^{1/2}\right).\]
Proof.: Recall Proposition 6.1(1). Clearly, under the assumptions of the lemma we have \(N^{s-s(d)/2}\ll 1\). Moreover, for \(s\) in the admissible range we have
\[\min_{1\leqslant j\leqslant d-1}\eta_{s,d}(j)\geqslant\frac{s(d)}{2}\cdot\min_{1\leqslant j\leqslant d-1}\frac{d-j+1}{d+j+1}\geqslant\frac{s(d)}{2}\cdot\frac{2}{2d}=\frac{d+1}{4}>1/2\]
for all \(d\geqslant 2\).
Consequently, the conclusion of Proposition 6.1 simplifies to
\[\begin{split}I_{s,d}^{(0)}(\delta;\mathbf{a},N)&\leqslant\delta^{d}N^{s+o(1)}\left(1+\sum_{j=1}^{d-1}\min\left\{\#\mathcal{H}_{j}N^{-1/2},(\#\mathcal{H}_{j})^{1/2}\right\}+(\#\mathcal{H}_{d})^{1/2}\right)\\ &\leqslant\delta^{d}N^{s+o(1)}\left(\min\left\{\#\mathcal{H}_{1}N^{-1/2},(\#\mathcal{H}_{1})^{1/2}\right\}+(\#\mathcal{H}_{d})^{1/2}\right),\end{split}\]
where in the last step we used that \(\#\mathcal{H}_{1}=\max_{j}\#\mathcal{H}_{j}\). This concludes the proof.

The derivation of Theorems 3.1 and 3.5 is now straightforward. We note from (5.2) that \(\#\mathcal{H}_{1}\gg N\) for all \(\delta<N^{-1/d}\). This is obvious for \(\delta<N^{-1}\), and can be checked in a straightforward manner for \(N^{-1}<\delta<N^{-1/d}\). In those situations, we have
\[\min\{\#\mathcal{H}_{1}N^{-1/2},(\#\mathcal{H}_{1})^{1/2}\}=(\#\mathcal{H}_{1})^{1/2}\geqslant(\#\mathcal{H}_{d})^{1/2},\]
and the bound becomes
\[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}N^{s+o(1)}(\#\mathcal{H}_{1})^{1/2}.\]
Finally, if \(\delta>N^{-1/d}\) we see from (5.2) that
\[\min\{\#\mathcal{H}_{1}N^{-1/2},(\#\mathcal{H}_{1})^{1/2}\}=\#\mathcal{H}_{1}N^{-1/2}\asymp\delta^{-d}N^{-1/2},\]
so that we obtain
\[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}N^{s+o(1)}(\delta^{-d}N^{-1/2}+\delta^{-1/2}).\]
When \(\delta>N^{-1/(2d-1)}\), the second term prevails. The proof of Theorem 3.1 is complete upon combining both of these bounds with (5.2).

We now pivot to the proof of Theorem 3.5, where we suppose that (3.6) is known with \(\nu=1\). At this point, the bound in part (2) of the theorem is immediate from Proposition 6.1(2) upon inserting (5.3). To establish the bounds of part (1), we begin by noting that \(\#\mathcal{H}_{1}\gg N^{2}\) for all \(\delta<N^{-2/d}\). This is again immediate from (5.3) for \(\delta\leqslant N^{-2}\) and straightforward to check in the intervals \(N^{-2}<\delta\leqslant N^{-1}\) and \(N^{-1}<\delta\leqslant N^{-2/d}\), respectively. Consequently, in this range of \(\delta\) we find that
\[\min\big\{\#\mathcal{H}_{1}N^{-1},(\#\mathcal{H}_{1})^{1/2}\big\}=(\#\mathcal{H}_{1})^{1/2}>1.\]
Finally, for \(N^{-2/d}<\delta<1\) we obtain
\[\min\big\{\#\mathcal{H}_{1}N^{-1},(\#\mathcal{H}_{1})^{1/2}\big\}=\#\mathcal{H}_{1}N^{-1}\begin{cases}\gg 1&\text{if }N^{-2/d}<\delta<N^{-1/d},\\ \ll 1&\text{if }N^{-1/d}\leqslant\delta\leqslant 1.\end{cases}\]
The conclusion of Theorem 3.5(1) is now complete upon using these bounds within Proposition 6.1(2) and inserting the values of (5.3).

### Proofs of Theorem 3.2 and Corollary 3.4

We now investigate the situation when \(s(d)/2<s<s(d)\) and \(\delta>N^{-1}\). In that situation we have \(\#\mathcal{H}_{j}\asymp\delta^{-d+j-1}\) for \(1\leqslant j\leqslant d\). Moreover, since \(\delta\geqslant N^{-1}\geqslant N^{s(d)-2s}\), we clearly have
\[\min\big\{\delta^{-1},\delta^{-1/2}N^{s-s(d)/2}\big\}=\delta^{-1}.\]
Thus, under these conditions the conclusion of Proposition 6.1 reads
\[I_{s,d}^{(0)}(\delta;\mathbf{a},N)\leqslant\delta^{d}N^{s+o(1)}\bigg(\delta^{-1}+\sum_{j=1}^{d-1}\min\big\{\delta^{-d+j-1}(N^{-1/2}+N^{-\eta_{s,d}(j)}),\delta^{-(d-j+1)/2}N^{s-s(d)/2}\big\}\bigg).\]
This completes the proof of Theorem 3.2.

To finish the proof of Corollary 3.4, we begin by noting that \(\delta^{-1}<\delta^{-(d-j+1)/2}N^{s-s(d)/2}\) for all \(j\) and all \(\delta\leqslant 1\).
Consequently, it is sufficient to check in what range of \(\delta\) one has
\[\delta^{-1}\gg\max_{1\leqslant j\leqslant d-1}\delta^{-d+j-1}(N^{-1/2}+N^{-\eta_{s,d}(j)})\asymp\delta^{-d}N^{-1/2}+\max_{1\leqslant j\leqslant d-1}\delta^{-d+j-1}N^{-\eta_{s,d}(j)}.\]
On comparing these terms and recalling the definition of \(\eta_{s,d}(j)\) from (3.1), it is enough to choose
\[\delta\geqslant\max\{N^{-(s(d)-s)\vartheta(d)},N^{-1/(2d-2)}\},\]
with
\[\vartheta(d)=\min_{j=1,\dots,d-1}f(j),\]
where the function \(f\) is defined by (3.2). The proof is thus complete if we can show that this definition of \(\vartheta(d)\) coincides with the one given in (3.3). Since the denominator of \(f(x)\) vanishes at \(x=d\) and at \(x=-d-1\), neither of which lies in the interval \([1,d-1]\), we see that \(f\) is continuous inside said interval. Moreover, simple but somewhat tedious calculus shows that
\[f^{\prime}(x)=-\frac{d^{2}-1-2(d+1)x+x^{2}}{\left(d^{2}+d-x^{2}-x\right)^{2}}.\]
This expression has two roots at \(x_{\pm}=d+1\pm\sqrt{2(d+1)}\), of which we can disregard the larger one since it is clearly outside the interval \([1,d-1]\). Since \(f^{\prime}\) has a sign change from negative to positive at \(x=x_{-}\), that root corresponds to a minimum. In order to compute the value, note that for \(d\geqslant 3\) we have \(d+1-\sqrt{2(d+1)}\in[1,d-1]\), so that both \(d+1-\lfloor\sqrt{2(d+1)}\rfloor\) and \(d+1-\lceil\sqrt{2(d+1)}\rceil\) lie in the set \(\{1,\dots,d-1\}\). Thus, the values for \(\vartheta(d)\) certainly coincide for \(d\geqslant 3\). Finally, for \(d=2\) the identity is straightforward to check explicitly. This completes the proof of Corollary 3.4.

## 7. Approach via the structure of large Weyl sums

In what follows it will be convenient to define
\[D=\min\left\{2^{d-1},2d(d-1)\right\}.\]
We begin our analysis with a description of the structure of large Gauss sums
\[G(x_{1},x_{2};N)=S_{2}((x_{1},x_{2});N)=\sum_{n=1}^{N}\,\mathbf{e}\left(x_{1}n+x_{2}n^{2}\right). \tag{7.1}\]
The following is [4, Lemma 5.1], which in turn follows from a result of Baker [1, Theorem 3] (see also [2, Theorem 4]).

**Lemma 7.1**.: _We fix some \(\varepsilon>0\), and suppose that for a real \(A>N^{1/2+\varepsilon}\) we have \(|G(x_{1},x_{2};N)|\geqslant A\) for some \((x_{1},x_{2})\in\mathbb{R}^{2}\). Then there exist integers \(q,a_{1},a_{2}\) such that_
\[1\leqslant q\leqslant\left(NA^{-1}\right)^{2}N^{o(1)},\]
_and for \(i=1,2\) we have_
\[\left|x_{i}-\frac{a_{i}}{q}\right|\leqslant(NA^{-1})^{2}q^{-1}N^{-i+o(1)}.\]
For \(d\geqslant 3\), we use the following result from [3], which is based on a combination of the results of Baker [1, Theorem 3] and [2, Theorem 4] with bounds for complete rational sums, see, for example, [14]. Namely, by [3, Lemma 2.7] we have the following.

**Lemma 7.2**.: _We fix \(d\geqslant 3\), some \(\varepsilon>0\), and suppose that for a real number \(A\) satisfying \(A>N^{1-1/D+\varepsilon}\) we have \(|S_{d}(\mathbf{x};N)|\geqslant A\) for some \(\mathbf{x}\in\mathsf{T}_{d}\). Then there exist positive integers \(q_{2},\ldots,q_{d}\) with \(\gcd(q_{i},q_{j})=1\) for \(2\leqslant i<j\leqslant d\), such that_
1. \(q_{2}\) _is cube-free,_
2. \(q_{i}\) _is_ \(i\)_-th power-full but_ \((i+1)\)_-th power-free when_ \(3\leqslant i\leqslant d-1\)_,_
3.
\(q_{d}\) _is_ \(d\)_-th power-full,_
_and_
\[\prod_{i=2}^{d}q_{i}^{1/i}\leqslant N^{1+o(1)}A^{-1},\]
_and integers \(b_{1},\ldots,b_{d}\) with_
\[\gcd\left(q_{2}\cdots q_{d},b_{1},\ldots,b_{d}\right)=1\]
_such that_
\[\left|x_{j}-\frac{b_{j}}{q_{2}\cdots q_{d}}\right|\leqslant(NA^{-1})^{d}N^{-j+o(1)}\prod_{i=2}^{d}q_{i}^{-d/i},\qquad j=1,\ldots,d.\]

**Remark 7.3**.: For the errors of the approximations to \(x_{1},x_{2}\) of Lemma 7.1, by the condition \(A>N^{1/2+\varepsilon}\) we have
\[(NA^{-1})^{2}q^{-1}N^{-i+o(1)}\leqslant q^{-1}N^{-2\varepsilon+o(1)},\qquad i=1,2. \tag{7.2}\]
Similarly, for the errors of Lemma 7.2 we have
\[(NA^{-1})^{d}N^{-j+o(1)}\prod_{i=2}^{d}q_{i}^{-d/i}\leqslant N^{-d\varepsilon+o(1)}\prod_{i=2}^{d}q_{i}^{-1},\qquad j=1,\ldots,d. \tag{7.3}\]
For a real \(A>0\), we define the level set
\[\mathscr{F}_{d,A}=\{\mathbf{x}\in\mathsf{T}_{d}:\ |S_{d}(\mathbf{x};N)|\geqslant A\}. \tag{7.4}\]
Further, for a box \(\mathfrak{B}(\boldsymbol{\xi},\delta)=\boldsymbol{\xi}+[0,\delta]^{d}\subseteq\mathsf{T}_{d}\), denote
\[\lambda_{d,\boldsymbol{\xi}}(\delta,A;N)=\lambda(\mathfrak{B}(\boldsymbol{\xi},\delta)\cap\mathscr{F}_{d,A}). \tag{7.5}\]

**Lemma 7.4**.: _Suppose that \(A>N^{1/2+\varepsilon}\) for some fixed \(\varepsilon>0\). Then for any \(\delta\geqslant AN^{-1}\) we have_
\[\lambda_{2,\boldsymbol{\xi}}(\delta,A;N)\leqslant\delta^{2}N^{3+o(1)}A^{-6}.\]
Proof.: Let \(Q=(NA^{-1})^{2}N^{\eta}\) for some small \(\eta>0\). For \(q\in\mathbb{N}\) and \(\mathbf{b}=(b_{1},b_{2})\in[q]^{2}\) define the rectangular box
\[R_{q}(\mathbf{b})=B(b_{1}/q,Qq^{-1}N^{-1})\times B(b_{2}/q,Qq^{-1}N^{-2}),\]
where \(B(x,r)\subseteq\mathbb{R}\) denotes the interval with center \(x\) and radius \(r\). Clearly, each such box has area
\[\lambda(R_{q}(\mathbf{b}))\asymp Q^{2}/(q^{2}N^{3}).\]
By Lemma 7.1, for all sufficiently large \(N\) we obtain
\[\mathscr{F}_{2,A}\subseteq\bigcup_{q\leqslant Q}\bigcup_{(b_{1},b_{2})\in[q]^{2}}R_{q}(\mathbf{b}).\]
It is an easy consequence of (7.2) that the boxes \(R_{q}(\mathbf{b})\) are disjoint for each fixed \(q\in\mathbb{N}\). It follows that any box \(\mathfrak{B}(\boldsymbol{\xi},\delta)\) intersects with at most \(O\left(1+(q\delta)^{2}\right)\) boxes \(R_{q}(\mathbf{b})\). Consequently, recalling (7.5), we derive
\[\begin{split}\lambda_{2,\boldsymbol{\xi}}(\delta,A;N)&=\lambda(\mathfrak{B}(\boldsymbol{\xi},\delta)\cap\mathscr{F}_{2,A})\\ &\ll\sum_{q\leqslant Q}\sum_{(b_{1},b_{2})\in[q]^{2}}\lambda\left(R_{q}(\mathbf{b})\cap\mathfrak{B}(\boldsymbol{\xi},\delta)\right)\\ &\ll\sum_{q\leqslant Q}\sum_{\begin{subarray}{c}\mathbf{b}\in[q]^{2}\\ R_{q}(\mathbf{b})\cap\mathfrak{B}(\boldsymbol{\xi},\delta)\neq\emptyset\end{subarray}}\lambda\left(R_{q}(\mathbf{b})\right)\\ &\ll\sum_{q\leqslant Q}\left(1+(q\delta)^{2}\right)\frac{Q^{2}}{q^{2}N^{3}}\ll\frac{Q^{2}}{N^{3}}+\frac{Q^{3}}{N^{3}}\delta^{2}.\end{split}\]
Therefore, using \(\delta\geqslant AN^{-1}\geqslant Q^{-1/2}\), we derive
\[\lambda_{2,\boldsymbol{\xi}}(\delta,A;N)\ll\frac{Q^{2}}{N^{3}}+\frac{Q^{3}}{N^{3}}\delta^{2}\ll\frac{Q^{3}}{N^{3}}\delta^{2}=N^{3+3\eta}A^{-6}\delta^{2}.\]
Since \(\eta>0\) is arbitrary we obtain the desired bound.

For \(d\geqslant 3\), we mimic the proof of [3, Lemma 2.9] in order to obtain a level-set estimate restricted to a small box. Formally, taking \(k=d\) in [3, Lemma 2.9] and adding a factor of \(\delta^{d}\) there, we have the following bound.

**Lemma 7.5**.: _Suppose that \(d\geqslant 3\) and \(A>N^{1-1/D+\varepsilon}\) for some fixed \(\varepsilon>0\).
Then for any \(\delta\geqslant\left(AN^{-1}\right)^{1/d}\) we have_
\[\lambda_{d,\boldsymbol{\xi}}(\delta,A;N)\leqslant N^{d^{2}+1-s(d)+o(1)}A^{-d^{2}-1}\delta^{d}.\]
Proof.: Let
\[Q=\left(NA^{-1}\right)^{d}N^{\eta} \tag{7.6}\]
for some small number \(\eta>0\). For any \(q_{2},\ldots,q_{d}\in\mathbb{N}\) and \(b_{1},\ldots,b_{d}\in\mathbb{Z}\), define the box
\[R_{q_{2},\ldots,q_{d}}(\mathbf{b})=\bigg\{\mathbf{x}\in\mathsf{T}_{d}:\ \left|x_{j}-\frac{b_{j}}{q_{2}\ldots q_{d}}\right|\leqslant QN^{-j}\prod_{i=2}^{d}q_{i}^{-d/i},\ j=1,\ldots,d\bigg\}.\]
Again, we note that
\[\lambda(R_{q_{2},\ldots,q_{d}}(\mathbf{b}))\asymp\prod_{j=1}^{d}\left(QN^{-j}\prod_{i=2}^{d}q_{i}^{-d/i}\right)\asymp Q^{d}N^{-s(d)}\prod_{i=2}^{d}q_{i}^{-d^{2}/i}. \tag{7.7}\]
Moreover, by (7.3) these boxes are pairwise disjoint. Thus, for any fixed \(d\)-tuple \((q_{2},\ldots,q_{d})\), the number of boxes \(R_{q_{2},\ldots,q_{d}}(\mathbf{b})\) intersecting \(\mathfrak{B}(\boldsymbol{\xi},\delta)\) nontrivially is given by
\[\#\{\mathbf{b}\in[q_{2}\cdots q_{d}]^{d}:\ R_{q_{2},\ldots,q_{d}}(\mathbf{b})\cap\mathfrak{B}(\boldsymbol{\xi},\delta)\neq\emptyset\}=O(1+(\delta q_{2}\cdots q_{d})^{d}). \tag{7.8}\]
For any integer \(i\geqslant 2\) it is convenient to denote
\[\mathcal{F}_{i}=\{n\in\mathbb{N}:\;\;n\text{ is }i\text{-th power-full}\}\quad\text{and}\quad\mathcal{F}_{i}(x)=\mathcal{F}_{i}\cap[1,x],\]
so that an easy counting shows that
\[\#\mathcal{F}_{i}(x)\ll x^{1/i}, \tag{7.9}\]
and to put
\[\Omega=\left\{(q_{2},\ldots,q_{d})\in\mathbb{N}^{d-1}:\;q_{i}\in\mathcal{F}_{i}\quad(3\leqslant i\leqslant d),\quad\prod_{i=2}^{d}q_{i}^{d/i}\leqslant Q\right\}.\]
Thus, recalling the definition (7.4), we clearly have
\[\mathscr{F}_{d,A}\subseteq\bigcup_{(q_{2},\ldots,q_{d})\in\Omega}\bigcup_{\mathbf{b}\in[q_{2}\cdots q_{d}]^{d}}R_{q_{2},\ldots,q_{d}}(\mathbf{b}).\]
Combining this with (7.7) and (7.8), and recalling (7.5), we can estimate
\[\begin{split}\lambda_{d,\boldsymbol{\xi}}(\delta,A;N)&=\lambda\left(\mathfrak{B}(\boldsymbol{\xi},\delta)\cap\mathscr{F}_{d,A}\right)\\ &\leqslant\sum_{(q_{2},\ldots,q_{d})\in\Omega}\sum_{\begin{subarray}{c}\mathbf{b}\in[q_{2}\cdots q_{d}]^{d}\\ R_{q_{2},\ldots,q_{d}}(\mathbf{b})\cap\mathfrak{B}(\boldsymbol{\xi},\delta)\neq\emptyset\end{subarray}}\lambda(R_{q_{2},\ldots,q_{d}}(\mathbf{b}))\\ &\ll\sum_{(q_{2},\ldots,q_{d})\in\Omega}(1+(\delta q_{2}\cdots q_{d})^{d})Q^{d}N^{-s(d)}\prod_{i=2}^{d}q_{i}^{-d^{2}/i}.\end{split} \tag{7.10}\]
Write
\[U_{1}=\sum_{(q_{2},\ldots,q_{d})\in\Omega}\prod_{i=2}^{d}q_{i}^{-d^{2}/i}\qquad\text{and}\qquad U_{2}=\sum_{(q_{2},\ldots,q_{d})\in\Omega}\prod_{i=2}^{d}q_{i}^{d-d^{2}/i};\]
then (7.10) can be bounded by
\[\lambda\left(\mathfrak{B}(\boldsymbol{\xi},\delta)\cap\mathscr{F}_{d,A}\right)\ll Q^{d}N^{-s(d)}U_{1}+\delta^{d}Q^{d}N^{-s(d)}U_{2}. \tag{7.11}\]
Clearly,
\[U_{1}\leqslant\sum_{(q_{2},\ldots,q_{d})\in\mathbb{N}^{d-1}}\prod_{i=2}^{d}q_{i}^{-d^{2}/i}\ll 1. \tag{7.12}\]
We now turn to the estimation of \(U_{2}\).
For \(Q_{2},\ldots,Q_{d}\geqslant 1\) denote
\[\Omega(Q_{2},\ldots,Q_{d})=\{(q_{2},\ldots,q_{d})\in\Omega:\;Q_{i}/2<q_{i}\leqslant Q_{i},\;i=2,\ldots,d\},\]
and write
\[U_{2}(Q_{2},\ldots,Q_{d})=\sum_{(q_{2},\ldots,q_{d})\in\Omega(Q_{2},\ldots,Q_{d})}\prod_{i=2}^{d}q_{i}^{d-d^{2}/i}.\]
Thus, covering \(\Omega\) by \(O\left((\log N)^{d}\right)\) dyadic boxes, we see that
\[U_{2}\ll\max\biggl\{U_{2}\left(Q_{2},\ldots,Q_{d}\right):\;Q_{2},\ldots,Q_{d}\geqslant 1,\;\prod_{i=2}^{d}Q_{i}^{d/i}\ll Q\biggr\}\left(\log N\right)^{d}. \tag{7.13}\]
By (7.9), this yields
\[U_{2}\left(Q_{2},\ldots,Q_{d}\right)\ll\sum_{Q_{2}/2<q_{2}\leqslant Q_{2}}q_{2}^{d-d^{2}/2}\prod_{i=3}^{d}\left(Q_{i}^{d-d^{2}/i}\#\mathcal{F}_{i}\left(Q_{i}\right)\right)\ll\prod_{i=2}^{d}Q_{i}^{\alpha_{i}}, \tag{7.14}\]
where
\[\alpha_{2}=d-d^{2}/2+1\qquad\text{and}\qquad\alpha_{i}=d-(d^{2}-1)/i\quad(i=3,\ldots,d).\]
Observe that for every \(i=2,\ldots,d\) we have \(\alpha_{i}\leqslant 1/i\). Combining this with the condition on \(Q_{2},\ldots,Q_{d}\) in (7.13), we derive from (7.14) and (7.13) that
\[U_{2}\ll Q^{1/d}(\log N)^{d}. \tag{7.15}\]
Finally, we can combine the bounds of (7.11), (7.12) and (7.15). Thus, recalling the condition \(\delta\geqslant(AN^{-1})^{1/d}\) as well as the definition of \(Q\) from (7.6), together with the arbitrary choice of \(\eta>0\), we obtain
\[\begin{split}\lambda(\mathfrak{B}(\boldsymbol{\xi},\delta)\cap\mathscr{F}_{d,A})&\ll Q^{d}N^{-s(d)}+Q^{d+1/d}N^{-s(d)}\delta^{d}(\log N)^{d}\\ &\leqslant(NA^{-1})^{d^{2}+1}N^{-s(d)+o(1)}\delta^{d},\end{split}\]
which finishes the proof.

## 8. Proofs of Theorems 3.7 and 3.8

### Proof of Theorem 3.7

Let \(\boldsymbol{\xi}\in\mathsf{T}_{2}\) and
\[A=N^{1/2+s/(6+2s)}.\]
Next, we divide the set \(\mathfrak{B}(\boldsymbol{\xi},\delta)=\boldsymbol{\xi}+[0,\delta]^{2}\) into two parts depending on whether \(|S_{2}(\mathbf{x};N)|\geqslant A\) or not. Thus, combining with Lemma 7.4, which applies for the above choice of \(A\), we derive
\[\begin{split}I_{s,2}^{\sharp}(\delta;N)&\leqslant A^{2s}\delta^{2}+N^{2s}\sup_{\boldsymbol{\xi}\in\mathsf{T}_{2}}\lambda(\{\mathbf{x}\in\mathfrak{B}(\boldsymbol{\xi},\delta):\ |S_{2}(\mathbf{x};N)|\geqslant A\})\\ &\leqslant A^{2s}\delta^{2}+N^{2s+3+o(1)}A^{-6}\delta^{2},\end{split}\]
which yields the desired bound.

### Proof of Theorem 3.8

Let \(\boldsymbol{\xi}\in\mathsf{T}_{d}\) and
\[A=N^{1-s(d)/(2s+d^{2}+1)},\]
noting that the hypothesis \(s>(s(d)D-d^{2}-1)/2\) ensures that \(A>N^{1-1/D+\varepsilon}\), so that Lemma 7.5 is applicable. Divide the box \(\mathfrak{B}(\boldsymbol{\xi},\delta)=\boldsymbol{\xi}+[0,\delta]^{d}\) into two parts depending on whether \(|S_{d}(\mathbf{x};N)|\geqslant A\) or not. Thus, applying Lemma 7.5 we obtain
\[\begin{split}I_{s,d}^{\sharp}(\delta;N)&\leqslant A^{2s}\delta^{d}+N^{2s}\sup_{\boldsymbol{\xi}\in\mathsf{T}_{d}}\lambda(\{\mathbf{x}\in\mathfrak{B}(\boldsymbol{\xi},\delta):\ |S_{d}(\mathbf{x};N)|\geqslant A\})\\ &\leqslant A^{2s}\delta^{d}+N^{2s+d^{2}+1-s(d)+o(1)}A^{-d^{2}-1}\delta^{d},\end{split}\]
which yields the desired bound.

## 9. Rational exponential sums

### Gauss sums

Recall the definition (7.1) of Gauss sums. We also record their explicit evaluation, which is classical (see, for example, [20, Equation (1.55)]).

**Lemma 9.1**.: _Let \(p\geqslant 3\) be a prime number and \(a,b\in\mathbb{F}_{p}\) with \(b\neq 0\), then_
\[\left|\sum_{n=0}^{p-1}\,\mathbf{e}_{p}\left(an+bn^{2}\right)\right|=p^{1/2}.\]
We also recall a classical result of Fiedler, Jurkat and Körner [18, Lemma 4].
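(Before stating it, we note as an aside that Lemma 9.1 is easy to confirm numerically; the check below plays no role in the argument. The following is a minimal Python sketch, where the prime \(p=101\) and the sample pairs \((a,b)\) are chosen arbitrarily for illustration.)

```python
import cmath

def gauss_sum(a, b, p):
    # the complete quadratic sum sum_{n=0}^{p-1} e_p(a*n + b*n^2)
    return sum(cmath.exp(2j * cmath.pi * ((a * n + b * n * n) % p) / p)
               for n in range(p))

p = 101  # an odd prime, chosen arbitrarily
for a, b in [(0, 1), (3, 7), (50, 100)]:  # b nonzero mod p in each case
    assert abs(abs(gauss_sum(a, b, p)) - p ** 0.5) < 1e-9
print("the magnitude equals sqrt(p) in every test case")
```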
**Lemma 9.2**.: _For any prime \(p\) and any \(a,b\in\mathbb{F}_{p}\) with \(b\neq 0\) we have_
\[\max_{1\leqslant M,N\leqslant p}\left|\sum_{M+1\leqslant n\leqslant M+N}\mathbf{e}_{p}\left(an+bn^{2}\right)\right|\ll p^{1/2}.\]

**Lemma 9.3**.: _Let \(p\) be a prime and \(N\) an integer with \(N\geqslant Cp\) for some positive constant \(C\). Suppose that the pair \((x_{1},x_{2})\in\mathsf{T}_{2}\) has a rational approximation of the shape_
\[\left|x_{1}-a/p\right|\leqslant c/N\qquad\text{and}\qquad\left|x_{2}-b/p\right|\leqslant c/N^{2}\]
_for some positive constant \(c\), where \(\gcd(b,p)=1\). Then we have_
\[\left|G(x_{1},x_{2};N)\right|\gg Np^{-1/2}.\]
Proof.: Combining Lemma 9.2 with [13, Corollary 2.6], we obtain a continuity property of Gauss sums:
\[G(x_{1},x_{2};N)-G(a/p,b/p;N)\ll Np^{-1/2}\left(|x_{1}-a/p|N+|x_{2}-b/p|N^{2}\right).\]
Since by Lemmas 9.1 and 9.2 we have
\[\left|G(a/p,b/p;N)\right|=\left\lfloor N/p\right\rfloor p^{1/2}+O\left(p^{1/2}\right)=Np^{-1/2}+O\left(p^{1/2}\right),\]
for an appropriate choice of \(C\) we obtain the desired result.

### Rational sums with arbitrary polynomials

For \(d\geqslant 3\) we do not have an analogue of Lemma 9.1. For an arbitrary box \(\boldsymbol{\xi}+[0,\delta]^{d}\subseteq\mathsf{T}_{d}\), we follow the same strategy as in [11] on the distribution of large complete rational sums. In fact, we need a more refined version of the argument presented in [11, Lemma 2.6] that provides quantitative estimates on the number of large sums inside any given small box. Then, using a method similar to those employed in the treatment of the case \(d=2\), we obtain some nontrivial lower bounds.

Let \(p\) be a prime and let \(\mathbb{F}_{p}\) denote the finite field of \(p\) elements. For a vector \(\mathbf{u}=(u_{1},\ldots,u_{d})\in\mathbb{F}_{p}^{d}\) we consider the rational exponential sum
\[T_{d,p}(\mathbf{u})=S_{d}(\mathbf{u}/p;p)=\sum_{n=1}^{p}\,\mathbf{e}_{p}\left(u_{1}n+\ldots+u_{d}n^{d}\right),\]
where \(\mathbf{e}_{p}(z)=\mathbf{e}(z/p)\). We also consider discrete cubic boxes
\[\mathfrak{B}=\mathcal{I}_{1}\times\ldots\times\mathcal{I}_{d}\subseteq\mathbb{F}_{p}^{d} \tag{9.1}\]
with side-length \(L\), where for each \(j=1,\ldots,d\) the set \(\mathcal{I}_{j}=\{k_{j}+1,\ldots,k_{j}+L\}\) is a set of \(L\leqslant p\) consecutive integers, reduced modulo \(p\) if \(k_{j}+L\geqslant p\). Our goal is to establish a quantitative version of [11, Lemma 2.6]. As in [11], we start by recalling that by a result of Knizhnerman and Sokolinskii [21, Theorem 1] (see also [22]) we have the following.

**Lemma 9.4**.: _For every integer \(d\geqslant 2\) there are some positive constants \(c_{d}\) and \(\gamma_{d}\) having the property that there exists a set \(\mathcal{L}_{p}\subseteq\mathbb{F}_{p}^{d}\) of cardinality \(\#\mathcal{L}_{p}\geqslant c_{d}p^{d}\) such that for all \(\mathbf{a}\in\mathcal{L}_{p}\) one has_
\[|T_{d,p}(\mathbf{a})|\geqslant\gamma_{d}\sqrt{p}.\]
We also need a result on the distribution of monomial curves. The following is [11, Lemma 2.5], which we augment by also including the (trivial) case \(k=1\).

**Lemma 9.5**.: _Let \((a_{1},\ldots,a_{k})\in(\mathbb{F}_{p}^{*})^{k}\).
Then there exists a positive constant \(C\) which depends only on \(k\) such that for any box \(\mathfrak{B}\) as in (9.1) with side-length \(L\geqslant Cp^{1-1/(2k)}\log p\) for \(k\geqslant 2\) and \(L\geqslant 1\) for \(k=1\), we have_
\[\#\left\{\lambda\in\mathbb{F}_{p}^{*}:\ (a_{1}\lambda,\ldots,a_{k}\lambda^{k})\in\mathfrak{B}\right\}\geqslant\frac{1}{2}L^{k}p^{1-k}.\]
We are now ready to establish our main result of this section. Recalling the definition of \(\nu(d,k)\) from (3.7), we have the following level-set result.

**Lemma 9.6**.: _For any \(d\geqslant 3\) and \(1\leqslant k<d\) there exist constants \(\gamma_{d},\Gamma_{d}>0\), such that for any box \(\mathfrak{B}\) as in (9.1) with side-length \(L\geqslant\Gamma_{d}p^{1-\nu(d,k)}\log p\), we have_
\[\#\left\{\mathbf{u}\in\mathfrak{B}:\ |T_{d,p}(\mathbf{u})|\geqslant\gamma_{d}p^{1/2}\right\}\gg L^{d}.\]
Proof.: Adjusting \(\Gamma_{d}\) if necessary, we can assume that \(p\) is large enough. By Lemma 9.4, there is a constant \(\gamma_{d}\) and a set \(\mathcal{L}_{p}\subseteq\mathbb{F}_{p}^{d}\) of cardinality
\[\#\mathcal{L}_{p}\geqslant c_{d}p^{d} \tag{9.2}\]
for some suitable constant \(c_{d}\), and having the property that \(|T_{d,p}(\mathbf{a})|\geqslant\gamma_{d}\sqrt{p}\) for all elements \(\mathbf{a}\in\mathcal{L}_{p}\). Clearly, if \((a_{1},\ldots,a_{d})\in\mathcal{L}_{p}\), then for any \(\lambda\in\mathbb{F}_{p}^{*}\) we also have \((a_{1}\lambda,\ldots,a_{d}\lambda^{d})\in\mathcal{L}_{p}\). Denote by \(\mathcal{A}_{k}\subseteq\mathbb{F}_{p}^{k}\) the set of all \((a_{1},\ldots,a_{k})\in\mathbb{F}_{p}^{k}\) for which
\[\#\left(\mathcal{L}_{p}\cap\left(\{(a_{1},\ldots,a_{k})\}\times\mathbb{F}_{p}^{d-k}\right)\right)\geqslant\frac{1}{2}c_{d}p^{d-k}, \tag{9.3}\]
where \(c_{d}\) is the constant of Lemma 9.4. Then by decomposing \(\mathbb{F}_{p}^{k}=\mathcal{A}_{k}\cup\left(\mathbb{F}_{p}^{k}\setminus\mathcal{A}_{k}\right)\) and using (9.3) (in the contrapositive form) within the second term, we have
\[\begin{split}\#\mathcal{L}_{p}&=\sum_{(a_{1},\ldots,a_{k})\in\mathcal{A}_{k}}\sum_{\begin{subarray}{c}(a_{k+1},\ldots,a_{d})\in\mathbb{F}_{p}^{d-k}\\ (a_{1},\ldots,a_{d})\in\mathcal{L}_{p}\end{subarray}}1+\sum_{(a_{1},\ldots,a_{k})\in\mathbb{F}_{p}^{k}\setminus\mathcal{A}_{k}}\sum_{\begin{subarray}{c}(a_{k+1},\ldots,a_{d})\in\mathbb{F}_{p}^{d-k}\\ (a_{1},\ldots,a_{d})\in\mathcal{L}_{p}\end{subarray}}1\\ &\leqslant\#\mathcal{A}_{k}p^{d-k}+\sum_{(a_{1},\ldots,a_{k})\in\mathbb{F}_{p}^{k}\setminus\mathcal{A}_{k}}\frac{1}{2}c_{d}p^{d-k}\\ &\leqslant\#\mathcal{A}_{k}p^{d-k}+\frac{1}{2}c_{d}p^{d}.\end{split} \tag{9.4}\]
On combining the bounds (9.2) and (9.4), we find that
\[c_{d}p^{d}\leqslant\#\mathcal{A}_{k}p^{d-k}+\frac{1}{2}c_{d}p^{d},\]
which rearranges to
\[\#\mathcal{A}_{k}\geqslant\frac{c_{d}}{2}p^{k}.\]
Put now \(\mathcal{A}_{k}^{*}=\mathcal{A}_{k}\cap(\mathbb{F}_{p}^{*})^{k}\). Thus we clearly have
\[\#\mathcal{A}_{k}^{*}\gg p^{k}. \tag{9.5}\]
We now fix \(\mathbf{a}^{*}=(a_{1},\ldots,a_{k})\in\mathcal{A}_{k}^{*}\) and consider the set
\[\mathcal{L}_{p,k}\left(\mathbf{a}^{*}\right)=\mathcal{L}_{p}\cap\left(\{(a_{1},\ldots,a_{k})\}\times\mathbb{F}_{p}^{d-k}\right).\]
Clearly, from the definition (9.3) of the set \(\mathcal{A}_{k}\) we have
\[\#\mathcal{L}_{p,k}\left(\mathbf{a}^{*}\right)\gg p^{d-k}. \tag{9.6}\]
Given a box \(\mathfrak{B}\subseteq\mathbb{F}_{p}^{d}\) of the form (9.1), we decompose it in a natural way as \(\mathfrak{B}=\mathfrak{B}_{1}\times\mathfrak{B}_{2}\subseteq\mathbb{F}_{p}^{k}\times\mathbb{F}_{p}^{d-k}\). Note that we have \(\#\mathfrak{B}_{1}=L^{k}\).
Let further
\[\Lambda_{k}\left(\mathbf{a}^{*}\right)=\{\lambda\in\mathbb{F}_{p}^{*}:\ (\lambda a_{1},\ldots,\lambda^{k}a_{k})\in\mathfrak{B}_{1}\}.\]
Then Lemma 9.5 implies that
\[\#\Lambda_{k}\left(\mathbf{a}^{*}\right)\geqslant\frac{1}{2}L^{k}p^{1-k}, \tag{9.7}\]
provided that the condition
\[L\geqslant\Gamma_{d}p^{1-1/(2k)}\log p \tag{9.8}\]
is satisfied with a sufficiently large \(\Gamma_{d}\) if \(k\geqslant 2\), or for any \(L\) if \(k=1\). Let \(R\left(\mathbf{a}^{*}\right)\) be the number of vectors of the form
\[(\mathbf{a}^{*},a_{k+1},\ldots,a_{d},\lambda)=(a_{1},\ldots,a_{k},a_{k+1},\ldots,a_{d},\lambda)\in\mathcal{L}_{p,k}\left(\mathbf{a}^{*}\right)\times\Lambda_{k}\left(\mathbf{a}^{*}\right)\]
such that
\[(\lambda^{k+1}a_{k+1},\ldots,\lambda^{d}a_{d})\in\mathfrak{B}_{2}.\]
It is shown in the proof of [11, Lemma 2.6] that
\[\left|R\left(\mathbf{a}^{*}\right)-\#\mathcal{L}_{p,k}\left(\mathbf{a}^{*}\right)\#\Lambda_{k}\left(\mathbf{a}^{*}\right)(L/p)^{d-k}\right|\leqslant C_{d}\#\mathcal{L}_{p,k}\left(\mathbf{a}^{*}\right)(\#\Lambda_{k}(\mathbf{a}^{*}))^{1/2}(\log p)^{d-k} \tag{9.9}\]
for some constant \(C_{d}\) depending only on \(d\). Suppose now that
\[C_{d}(\log p)^{d-k}\leqslant\frac{1}{2}(L/p)^{d-k}(\#\Lambda_{k}\left(\mathbf{a}^{*}\right))^{1/2}. \tag{9.10}\]
Then the quantity \(R(\mathbf{a}^{*})\) from (9.9) can be bounded below by
\[\begin{split}R\left(\mathbf{a}^{*}\right)&\geqslant\frac{1}{2}\#\mathcal{L}_{p,k}\left(\mathbf{a}^{*}\right)\#\Lambda_{k}\left(\mathbf{a}^{*}\right)(L/p)^{d-k}\\ &\gg p^{d-k}L^{k}p^{1-k}(L/p)^{d-k}\gg L^{d}p^{1-k},\end{split} \tag{9.11}\]
where we used (9.6) and (9.7). On the other hand, (9.7) implies that the condition (9.10) is certainly satisfied when
\[\frac{1}{2\sqrt{2}}(L/p)^{d-k}(L^{k}p^{1-k})^{1/2}\geqslant C_{d}(\log p)^{d-k},\]
which can be rearranged to
\[L\geqslant\widetilde{C}_{d}p^{1-1/(2d-k)}(\log p)^{(d-k)/(d-k/2)}, \tag{9.12}\]
where \(\widetilde{C}_{d}=(2\sqrt{2}C_{d})^{1/(d-k/2)}\). Note that since (9.7) is true for all \(k\geqslant 1\), so is the last bound. Combining the conditions (9.8) and (9.12), recalling the definition of \(\nu(d,k)\) in (3.7) and increasing \(\Gamma_{d}\) if necessary, we see that the inequality
\[L\geqslant\Gamma_{d}p^{1-\nu(d,k)}\log p\]
is sufficient to guarantee that (9.11) holds for any \(\mathbf{a}^{*}\in\mathcal{A}_{k}^{*}\). Clearly, each vector \(\mathbf{u}\in\mathbb{F}_{p}^{d}\) has at most \(p\) representations as
\[\mathbf{u}=(\lambda a_{1},\ldots,\lambda^{d}a_{d})\]
with \((a_{1},\ldots,a_{d})\in\mathbb{F}_{p}^{d}\) and \(\lambda\in\mathbb{F}_{p}^{*}\). Therefore, we derive from (9.11) that
\[\#\left\{\mathbf{u}\in\mathfrak{B}:\;|T_{d,p}(\mathbf{u})|\geqslant\gamma_{d}p^{1/2}\right\}\geqslant\frac{1}{p}\sum_{\mathbf{a}^{*}\in\mathcal{A}_{k}^{*}}R\left(\mathbf{a}^{*}\right)\gg L^{d}p^{-k}\#\mathcal{A}_{k}^{*},\]
and recalling (9.5) we conclude the proof.

### Approximation of Weyl sums by rational sums

Let \(\mathcal{Z}_{d}\) be the set of vectors \(\mathbf{u}\in\mathbb{F}_{p}^{d}\) which are not of the form \(\mathbf{u}=(u_{1},0,\ldots,0)\). We also recall that the classical _Weil bound_ (see, for example, [23, Chapter 6, Theorem 3] or [24, Theorem 5.38]), together with the completing technique described for instance in [20, Section 12.2], implies that if \(\mathbf{u}\in\mathcal{Z}_{d}\), then for any \(N\leqslant p\) we have
\[\sum_{n=1}^{N}\,\mathbf{e}_{p}\left(u_{1}n+\ldots+u_{d}n^{d}\right)\ll p^{1/2}\log p.
\tag{9.13}\]
Using (9.13), adapting the proof of [11, Lemma 2.9], and noticing that the condition \(p\mid N\) in [11, Lemma 2.9] is not necessary (see also [13, Corollary 2.6]), we obtain the following continuity property for Weyl sums.

**Lemma 9.7**.: _Let \(\mathbf{u}\in\mathbb{F}_{p}^{d}\) and \(\mathbf{x}\in\mathsf{T}_{d}\), then we have_
\[|S_{d}\left(\mathbf{x};N\right)-S_{d}\left(p^{-1}\mathbf{u};N\right)|\ll\frac{N\log p}{p^{1/2}}\sum_{j=1}^{d}\left|x_{j}-\frac{u_{j}}{p}\right|N^{j}.\]
Lemma 9.7 immediately implies the following.

**Lemma 9.8**.: _Let \(p\) be a prime, and let \(\mathbf{u}=(u_{1},\ldots,u_{d})\in\mathcal{Z}_{d}\) be such that_
\[|T_{d,p}(\mathbf{u})|\geqslant\gamma_{d}p^{1/2}\]
_for some \(\gamma_{d}>0\). Then there are constants \(c_{d},C_{d}>0\) such that for all \(N\geqslant C_{d}p\) and all \(\mathbf{x}=(x_{1},\ldots,x_{d})\in\mathsf{T}_{d}\) satisfying_
\[\left|x_{j}-\frac{u_{j}}{p}\right|\leqslant\frac{c_{d}}{N^{j}\log p},\qquad j=1,\ldots,d, \tag{9.14}\]
_we have_
\[|S_{d}(\mathbf{x};N)|\gg Np^{-1/2}.\]

## 10. Proof of Theorems 3.9 and 3.10

### Proof of Theorem 3.9

Let \(N\in\mathbb{N}\), and let \(c\) and \(C\) be the constants of Lemma 9.3, noting that without loss of generality, we may assume that \(c<C/2\). Suppose first that \(\delta\geqslant 2C/N\), so that the interval \([\delta^{-1},N/C]\) contains both the interval \([N/(2C),N/C]\) and the interval \([\delta^{-1},2\delta^{-1}]\). Then for sufficiently large \(N\) there is at least one prime number in the range
\[N/C\geqslant p\geqslant 1/\delta. \tag{10.1}\]
Now fix a point \(\boldsymbol{\xi}\in\mathsf{T}_{2}\) and a \(\delta>0\), and let \(\widetilde{R}_{p}(\mathbf{b})\) be the domain of admissible values of \((x_{1},x_{2})\in\mathfrak{B}(\boldsymbol{\xi},\delta)\) having a rational approximation of the shape \(|x_{i}-b_{i}/p|\leqslant cN^{-i}\) for \(i\in\{1,2\}\), where \(p\) is a prime and \(b_{1},b_{2}\in[p]\). This notation is reminiscent of that employed in our arguments in Section 7, but we stress that we impose different conditions on the exponential sums here than we had there. Write further
\[\mathfrak{U}_{p}(\boldsymbol{\xi},\delta)=\bigcup_{\begin{subarray}{c}\mathbf{b}\in[p]^{2}\\ \widetilde{R}_{p}(\mathbf{b})\cap\mathfrak{B}(\boldsymbol{\xi},\delta)\neq\emptyset\end{subarray}}\widetilde{R}_{p}(\mathbf{b}),\]
noting that for all \(p\) in the range (10.1) we have \(1/p>2c/N\), and consequently the sets \(\widetilde{R}_{p}(\mathbf{b})\) are pairwise disjoint by our initial assumptions. Since the number of pairs \(\mathbf{b}\in[p]^{2}\) for which \(\widetilde{R}_{p}(\mathbf{b})\) intersects \(\mathfrak{B}(\boldsymbol{\xi},\delta)\) non-trivially is at least \((\delta p-1)^{2}\geqslant(\delta p/4)^{2}\), and each individual box has volume \(\lambda(\widetilde{R}_{p}(\mathbf{b}))=(2c)^{2}N^{-3}\), it follows that
\[\lambda(\mathfrak{U}_{p}(\boldsymbol{\xi},\delta))\geqslant(c\delta p/2)^{2}N^{-3}.\]
Then, applying Lemma 9.3, we derive
\[\begin{split}I_{s,2}^{\flat}(\delta;N)&\geqslant\inf_{\boldsymbol{\xi}\in\mathsf{T}_{2}}\left(\lambda(\mathfrak{U}_{p}(\boldsymbol{\xi},\delta))\inf_{(x_{1},x_{2})\in\mathfrak{U}_{p}(\boldsymbol{\xi},\delta)}|G(x_{1},x_{2};N)|^{2s}\right)\\ &\gg(\delta p)^{2}N^{-3}(Np^{-1/2})^{2s}=\delta^{2}N^{2s-3}p^{2-s}.\end{split}\]
By the Prime Number Theorem, for \(s\leqslant 2\) we can choose \(p\in[N/(2C),N/C]\), while for \(s>2\) we take \(p\in[\delta^{-1},2\delta^{-1}]\).
Hence
\[I_{s,2}^{\flat}(\delta;N)\gg\delta^{2}N^{s-1}\max\left\{1,(\delta N)^{s-2}\right\},\]
which gives the desired lower bound in the case \(\delta\gg N^{-1}\).

To treat the case when \(2C/N\leqslant\delta\leqslant C^{\prime}/\sqrt{N}\), we first observe that for any distinct fractions \(a/q,b/r\) with coprime \(q,r\in[\sqrt{N},2\sqrt{N}]\) we have
\[\left|\frac{a}{q}-\frac{b}{r}\right|\geqslant\frac{1}{qr}\geqslant\frac{1}{4N}.\]
Thus for any distinct primes \(p_{1},p_{2}\in[\sqrt{N},2\sqrt{N}]\) and for any \(\boldsymbol{\xi}\in\mathsf{T}_{2}\) we have
\[\mathfrak{U}_{p_{1}}(\boldsymbol{\xi},\delta)\cap\mathfrak{U}_{p_{2}}(\boldsymbol{\xi},\delta)=\emptyset, \tag{10.2}\]
allowing us to enhance our previous arguments by summing over all primes in the interval \([\sqrt{N},2\sqrt{N}]\). Then, proceeding in a similar way to before and applying Lemma 9.3 and (10.2), we derive the lower bound
\[\begin{split}I_{s,2}^{\flat}(\delta;N)&\geqslant\inf_{\boldsymbol{\xi}\in\mathsf{T}_{2}}\sum_{\begin{subarray}{c}\sqrt{N}\leqslant p\leqslant 2\sqrt{N}\\ p\text{ is prime}\end{subarray}}\int_{\mathfrak{U}_{p}(\boldsymbol{\xi},\delta)}|G(x,y;N)|^{2s}dxdy\\ &\gg\sum_{\begin{subarray}{c}\sqrt{N}\leqslant p\leqslant 2\sqrt{N}\\ p\text{ is prime}\end{subarray}}(\delta p)^{2}\,N^{-3}(Np^{-1/2})^{2s}\\ &\gg\delta^{2}N^{3(s-1)/2}(\log N)^{-1},\end{split}\]
where the last inequality holds by the Prime Number Theorem.

### Proof of Theorem 3.10

Recalling the definition (3.7), suppose that
\[\delta>2\Gamma_{d}\log(N/C_{d})\left(\frac{N}{2C_{d}}\right)^{-\nu(d,k)} \tag{10.3}\]
for some \(k\), where \(\Gamma_{d}\) and \(C_{d}\) are the constants of Lemmas 9.6 and 9.8, respectively. This choice of \(\delta\) implies that the interval
\[\left[\left(2\Gamma_{d}\log\frac{N}{C_{d}}\right)^{1/\nu(d,k)}\delta^{-1/\nu(d,k)},\,\frac{N}{C_{d}}\right]\]
fully encompasses the interval \([N/(2C_{d}),N/C_{d}]\), and thus contains at least one prime. We can therefore assume that there is a prime \(p\) satisfying
\[\delta\geqslant 2\Gamma_{d}p^{-\nu(d,k)}\log p\qquad\text{and}\qquad N\geqslant C_{d}p. \tag{10.4}\]
Consider now a box \(\mathfrak{B}(\boldsymbol{\xi},\delta)\subseteq\mathsf{T}_{d}\). Clearly, the set of \(\mathbf{u}\in\mathbb{F}_{p}^{d}\) for which \(\mathbf{u}/p\in\mathfrak{B}(\boldsymbol{\xi},\delta)\) forms a box \(\mathfrak{C}_{p}(\boldsymbol{\xi},\delta)\subseteq\mathbb{F}_{p}^{d}\) with side-length
\[L\geqslant\lfloor p\delta\rfloor\geqslant\Gamma_{d}p^{1-\nu(d,k)}\log p.\]
Let
\[U_{p}(\boldsymbol{\xi},\delta)=\#\left\{\mathbf{u}\in\mathfrak{C}_{p}(\boldsymbol{\xi},\delta)\cap\mathcal{Z}_{d}:\ |T_{d,p}(\mathbf{u})|\geqslant\gamma_{d}p^{1/2}\right\},\]
where \(\gamma_{d}\) is as in Lemma 9.6.
From that lemma, we obtain in a straightforward manner the bound
\[\begin{split}U_{p}(\boldsymbol{\xi},\delta)&\geqslant\#\left\{\mathbf{u}\in\mathfrak{C}_{p}(\boldsymbol{\xi},\delta):\ |T_{d,p}(\mathbf{u})|\geqslant\gamma_{d}p^{1/2}\right\}\\ &\quad-\#\{u_{1}\in\mathbb{F}_{p}:\ (u_{1},0,\ldots,0)\in\mathfrak{C}_{p}(\boldsymbol{\xi},\delta)\}\\ &\gg L^{d}-L\gg(p\delta)^{d}.\end{split}\]
Therefore, if \(\mathcal{N}_{p}(\boldsymbol{\xi},\delta)\) denotes the set of all \((x_{1},\ldots,x_{d})\in\mathsf{T}_{d}\) having a diophantine approximation as in (9.14) with numerator \(\mathbf{u}\) counted by \(U_{p}(\boldsymbol{\xi},\delta)\), we have
\[\lambda(\mathcal{N}_{p}(\boldsymbol{\xi},\delta))\gg\delta^{d}p^{d}\prod_{j=1}^{d}(N^{j}\log p)^{-1}=\delta^{d}p^{d}N^{-s(d)}\left(\log p\right)^{-d},\]
and thus for any prime \(p\) satisfying the conditions (10.4) we have
\[\begin{split}I_{s,d}^{\flat}(\delta;N)&\gg\inf_{\boldsymbol{\xi}\in\mathsf{T}_{d}}\left(\lambda(\mathcal{N}_{p}(\boldsymbol{\xi},\delta))\inf_{\mathbf{x}\in\mathcal{N}_{p}(\boldsymbol{\xi},\delta)}|S_{d}(\mathbf{x};N)|^{2s}\right)\\ &\gg\delta^{d}p^{d}N^{-s(d)}\left(Np^{-1/2}\right)^{2s}\left(\log p\right)^{-d}\\ &\gg\delta^{d}p^{d-s}N^{2s-s(d)}\left(\log N\right)^{-d}.\end{split}\]
Recall now that by our assumption (10.3), for a sufficiently large \(N\) we can always find a prime \(p\) satisfying (10.4) with
\[p\ll\delta^{-1/\nu(d,k)}(\log N)^{1/\nu(d,k)},\]
as well as a prime \(p\) (also satisfying (10.4)) with
\[p\gg N.\]
Hence, under the condition (10.3) we have
\[\begin{split}I_{s,d}^{\flat}(\delta;N)&\gg\delta^{d}p^{d-s}N^{2s-s(d)}\left(\log N\right)^{-d}\\ &\geqslant\max\{\delta^{d}N^{s+d-s(d)},\delta^{d-(d-s)/\nu(d,k)}N^{2s-s(d)}\}N^{o(1)},\end{split}\]
which finishes the proof.

## 11. Further comments

### Mean values over more general sets

Our setting involving multidimensional mean values opens up a certain degree of flexibility in terms of the shape of the underlying domain, and Wooley's conjecture (Conjecture 2.1) allows for arbitrary measurable sets. Arguably, boxes of variable side-lengths that reflect the distinct powers in the exponential sum might be better suited to understand the local behaviour of Weyl sums. Another approach is to investigate local behaviour only with respect to the coordinate corresponding to the highest degree, which contributes most of the oscillations of exponential sums. The case of boxes of the shape \([0,1)^{d-1}\times[0,\delta]\) has been studied in some detail in work by Demeter, Guth and Wang [15] as well as Guth and Maldague [19] on small cap decouplings, extending previous work by Bourgain [5]. Even though in the work at hand we restricted our attention to hypercubes, our methods can be extended without serious problems to other axis-aligned boxes as well.

### Applications to the Schrödinger equation

Our results have consequences for solutions of Schrödinger equations over short intervals. The Schrödinger equation
\[2\pi u_{t}+iu_{xx}=0\]
models the behaviour of quantum mechanical particles. We denote by \(\rho(t,\mathcal{I})\) the probability that the particle belongs to the interval \(\mathcal{I}\) at time \(t\). When \(u(x,t)\) is a solution to the Schrödinger equation, this probability is given by
\[\rho(t,\mathcal{I})=\int_{\mathcal{I}}|u(x,t)|^{2}dx.
\tag{11.1}\]
In the case when the initial condition is periodic, of the shape
\[u(x,0)=\sum_{n=1}^{N}a_{n}\,\mathbf{e}(xn),\]
the solutions of the Schrödinger equation are trigonometric polynomials with quadratic amplitudes of the shape
\[u(x,t)=\sum_{n=1}^{N}a_{n}\,\mathbf{e}(xn+tn^{2}).\]
For a fixed \(t\in\mathsf{T}\), our results do not yield any estimate for the value (11.1). However, from our results we can deduce various upper and lower bounds on the above probability \(\rho(t,\mathcal{I})\) for any short interval \(\mathcal{I}\) and for some time in yet another short interval. For example, in the case of the constant coefficients \(a_{n}=1\), \(n\in\mathbb{N}\), by Theorems 3.7 and 3.9, we have the following.

**Corollary 11.1**.: _Let \(N\in\mathbb{N}\) be a large number, and let \((x_{0},t_{0})\in\mathsf{T}_{2}\). Then_
\((1)\) _for \(\delta\geqslant N^{-3/8}\), there exists \(t\in[t_{0},t_{0}+\delta]\) such that_
\[\int_{x_{0}}^{x_{0}+\delta}\left|\sum_{n=1}^{N}\,\mathbf{e}(xn+tn^{2})\right|^{2}dx\leqslant\delta N^{5/4+o(1)};\]
\((2)\) _if \(\delta\geqslant c/N\) for some small \(c>0\), there exists \(t\in[t_{0},t_{0}+\delta]\) such that_
\[\int_{x_{0}}^{x_{0}+\delta}\left|\sum_{n=1}^{N}\,\mathbf{e}(xn+tn^{2})\right|^{2}dx\gg\delta.\]
Proof.: Clearly, we have
\[\begin{split}\delta\min_{t\in[t_{0},t_{0}+\delta]}\int_{x_{0}}^{x_{0}+\delta}\left|\sum_{n=1}^{N}\,\mathbf{e}(xn+tn^{2})\right|^{2}dx&\leqslant\int_{t_{0}}^{t_{0}+\delta}\int_{x_{0}}^{x_{0}+\delta}\left|\sum_{n=1}^{N}\,\mathbf{e}(xn+tn^{2})\right|^{2}dx\,dt\\ &\leqslant I_{1,2}^{\sharp}(\delta;N).\end{split}\]
It thus suffices to observe that for the first statement, Theorem 3.7 with parameter \(s=1\) and any \(\delta\geqslant N^{-3/8}\) yields the bound
\[I_{1,2}^{\sharp}(\delta;N)\leqslant\delta^{2}N^{2(1-3/8)+o(1)}=\delta^{2}N^{5/4+o(1)},\]
which proves the claim (1). The second statement (2) is established similarly by combining the bound
\[\delta\max_{t\in[t_{0},t_{0}+\delta]}\int_{x_{0}}^{x_{0}+\delta}\left|\sum_{n=1}^{N}\,\mathbf{e}(xn+tn^{2})\right|^{2}dx\geqslant I_{1,2}^{\flat}(\delta;N)\]
with the bound
\[I_{1,2}^{\flat}(\delta;N)\gg\delta^{2}\]
from Theorem 3.9(1).

## Acknowledgments

We are grateful to Roger Baker for his contributions in the initial phase of the paper. During the preparation of this manuscript, JB was supported by Starting Grant 2017-05110 and, in the final stages, Project Grant 2022-03717 of the Swedish Science Foundation (Vetenskapsrådet), CC was supported by the National Natural Science Foundation of China Grant 12101002, and IS was supported by the Australian Research Council Grant DP170100786. Part of the work was completed while JB and IS were in residence at the Max-Planck-Institute for Mathematics in Bonn, whose generous support and excellent working conditions are also gratefully acknowledged.
2307.05249
DRMC: A Generalist Model with Dynamic Routing for Multi-Center PET Image Synthesis
Multi-center positron emission tomography (PET) image synthesis aims at recovering low-dose PET images from multiple different centers. The generalizability of existing methods can still be suboptimal for a multi-center study due to domain shifts, which result from non-identical data distribution among centers with different imaging systems/protocols. While some approaches address domain shifts by training specialized models for each center, they are parameter inefficient and do not well exploit the shared knowledge across centers. To address this, we develop a generalist model that shares architecture and parameters across centers to utilize the shared knowledge. However, the generalist model can suffer from the center interference issue, \textit{i.e.} the gradient directions of different centers can be inconsistent or even opposite owing to the non-identical data distribution. To mitigate such interference, we introduce a novel dynamic routing strategy with cross-layer connections that routes data from different centers to different experts. Experiments show that our generalist model with dynamic routing (DRMC) exhibits excellent generalizability across centers. Code and data are available at: https://github.com/Yaziwel/Multi-Center-PET-Image-Synthesis.
Zhiwen Yang, Yang Zhou, Hui Zhang, Bingzheng Wei, Yubo Fan, Yan Xu
2023-07-11T13:29:37Z
http://arxiv.org/abs/2307.05249v1
# DRMC: A Generalist Model with Dynamic Routing for Multi-Center PET Image Synthesis

###### Abstract

Multi-center positron emission tomography (PET) image synthesis aims at recovering low-dose PET images from multiple different centers. The generalizability of existing methods can still be suboptimal for a multi-center study due to domain shifts, which result from non-identical data distribution among centers with different imaging systems/protocols. While some approaches address domain shifts by training specialized models for each center, they are parameter inefficient and do not well exploit the shared knowledge across centers. To address this, we develop a generalist model that shares architecture and parameters across centers to utilize the shared knowledge. However, the generalist model can suffer from the center interference issue, _i.e._ the gradient directions of different centers can be inconsistent or even opposite owing to the non-identical data distribution. To mitigate such interference, we introduce a novel dynamic routing strategy with cross-layer connections that routes data from different centers to different experts. Experiments show that our generalist model with dynamic routing (DRMC) exhibits excellent generalizability across centers. Code and data are available at: [https://github.com/Yaziwel/Multi-Center-PET-Image-Synthesis](https://github.com/Yaziwel/Multi-Center-PET-Image-Synthesis).

Keywords: Multi-Center Positron Emission Tomography Synthesis, Generalist Model, Dynamic Routing.

## 1 Introduction

Positron emission tomography (PET) image synthesis [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] aims at recovering high-quality full-dose PET images from low-dose ones. Despite great success, most algorithms [1, 2, 4, 5, 8, 9, 10] are specialized for PET data from a single center with a fixed imaging system/protocol. This poses a significant problem for practical applications, which are not usually restricted to any one of the centers. Towards filling this gap, in this paper, we focus on multi-center PET image synthesis, aiming at processing data from multiple different centers.

However, the generalizability of existing models can still be suboptimal for a multi-center study due to domain shift, which results from non-identical data distribution among centers with different imaging systems/protocols (see Fig. 1 (a)). Though some studies have shown that a specialized model (_i.e._ a convolutional neural network (CNN) [3, 6] or Transformer [9] trained on a single center) exhibits certain robustness to different tracer types [9], different tracer doses [3], or even different centers [6], such generalizability of center-specific knowledge is only applicable to small domain shifts. It will suffer a severe performance drop when exposed to new centers with large domain shifts [11]. There are also some federated learning (FL) based [12, 11, 7] medical image synthesis methods that improve generalizability by collaboratively learning a shared global model across centers. In particular, federated transfer learning (FTL) [7] was the first to successfully apply FL to PET image synthesis in a multiple-dose setting. Since the resultant shared model of the basic FL method [12] ignores center specificity and thus cannot handle centers with large domain shifts, FTL addresses this by finetuning the shared model for each center/dose. However, FTL only focuses on different doses and does not really address the multi-center problem.
Furthermore, it still requires a specialized model for each center/dose, which ignores potentially transferable shared knowledge across centers and scales up the overall model size.

A recent trend, known as generalist models, is to require that a single unified model work for multiple tasks/domains and even generalize to novel tasks/domains. By sharing architecture and parameters, generalist models can better utilize shared transferable knowledge across tasks/domains. Some pioneers [13, 14, 15, 16, 17] have realized competitive performance on various high-level vision tasks like classification [13, 16], object detection [14], _etc._ Nonetheless, recent studies [18, 16] report that conventional generalist models [15] may suffer from the interference issue, _i.e._ different tasks with shared parameters potentially conflict with each other in the update directions of the gradient. Specific to PET image synthesis, due to the non-identical data distribution across centers, we also observe the **center interference issue**, namely that the gradient directions of different centers may be inconsistent or even opposite (see Fig. 1). This will lead to an uncertain update direction that deviates from the optimal, resulting in sub-optimal performance of the model. To address the interference issue, recent generalist models [14, 16] have introduced dynamic routing [19], which learns to activate experts (_i.e._ sub-networks) dynamically. The input feature will be routed to different selected experts accordingly so as to avoid interference. Meanwhile, different inputs can share some experts, thus maintaining collaboration across domains. At inference time, the model can reasonably generalize to different domains, even unknown domains, by utilizing the knowledge of existing experts. In spite of great success, the study of generalist models rarely targets the problem of multi-center PET image synthesis.

In this paper, inspired by the aforementioned studies, we innovatively propose a generalist model with **D**ynamic **R**outing for **M**ulti-**C**enter PET image synthesis, termed DRMC. To mitigate the center interference issue, we propose a novel dynamic routing strategy to route data from different centers to different experts. Compared with existing routing strategies, our strategy makes an improvement by building cross-layer connections for more accurate expert decisions. Extensive experiments show that DRMC achieves the best generalizability on both known and unknown centers. Our contribution can be summarized as follows:

* A generalist model called DRMC is proposed, which enables multi-center PET image synthesis with a single unified model.
* A novel dynamic routing strategy with cross-layer connection is proposed to address the center interference issue. It is realized by dynamically routing data from different centers to different experts.
* Extensive experiments show that DRMC exhibits excellent generalizability over multiple different centers.

## 2 Method

### 2.1 Center Interference Issue

Due to the non-identical data distribution across centers, different centers with shared parameters may conflict with each other in the optimization process. To verify this hypothesis, we train a baseline Transformer with 15 base blocks (Fig. 2 (b)) over four centers. Following the paper [16], we calculate the gradient direction interference metric \(\mathcal{I}_{i,j}\) of the \(j\)-th center \(C_{j}\) on the \(i\)-th center \(C_{i}\). As shown in Fig.
1 (b), interference is observed between different centers at different layers. This will lead to inconsistent optimization and inevitably degrade the model performance. Details of \(\mathcal{I}_{i,j}\)[16] are shown in the **supplement**. ### Network Architecture The overall architecture of our DRMC is shown in Fig. 2 (a). DRMC firstly applies a 3\(\times\)3\(\times\)3 convolutional layer for shallow feature extraction. Next, the Figure 1: (a) Examples of PET images at different Centers. There are domain shifts between centers. (b) The interference metric \(\mathcal{I}_{i,j}\)[16] of the center \(C_{j}\) on the center \(C_{i}\) at the 1-st/4-th blocks as examples. The red value indicates that \(C_{j}\) has a negative impact on \(C_{i}\), and the green value indicates that \(C_{j}\) has a positive impact on \(C_{i}\). shallow feature is fed into \(N\) blocks with dynamic routing (DRBs), which are expected to handle the interference between centers and adaptively extract the deep feature with high-frequency information. The deep feature then passes through another 3\(\times\)3\(\times\)3 convolutional layer for final image synthesis. In order to alleviate the burden of feature learning and stabilize training, DRMC adopts global residual learning as suggested in the paper [20] to estimate the image residual from different centers. In the subsequent subsection, we will expatiate the dynamic routing strategy as well as the design of the DRB. ### Dynamic Routing Strategy We aim at alleviating the center interference issue in deep feature extraction. Inspired by prior generalist models [13, 14, 16], we specifically propose a novel dynamic routing strategy for multi-center PET image synthesis. The proposed dynamic routing strategy can be flexibly adapted to various network architectures, such as CNN and Transformer. To utilize the recent advance in capturing global contexts using Transformers [9], without loss of generality, we explore the application of the dynamic routing strategy to a Transformer block, termed dynamic routing block (DRB, see Fig. 2 (c)). We will introduce our dynamic routing strategy in detail from four parts: base expert foundation, expert number scaling, expert dynamic routing, and expert sparse fusion. **Base Expert Foundation.** As shown in Figure 2 (b), we first introduce an efficient base Transformer block (base block) consisting of an attention expert and a feed-forward network (FFN) expert. Both experts are for basic feature extraction and transformation. To reduce the complexity burden of the attention expert, we follow the paper [9] to perform global channel attention with linear complexity instead of spatial attention [21]. Notably, as the global channel attention may ignore the local spatial information, we introduce depth-wise convolutions Figure 2: The framework of our proposed DRMC to emphasize the local context after applying attention. As for the FFN expert, we make no modifications to it compared with the standard Transformer block [21]. It consists of a 2-layer MLP with GELU activation in between. **Expert Number Scaling.** Center interference is observed on both attention experts and FFN experts at different layers (see Fig. 1 (b)). This indicates that a single expert can not be simply shared by all centers. Thus, we increase the number of experts in the base block to \(M\) to serve as expert candidates for different centers. 
Specifically, each Transformer block has an attention expert bank \(\mathbf{E}_{ATT}=[\mathbf{E}_{ATT}^{1},\mathbf{E}_{ATT}^{2},...,\mathbf{E}_{ ATT}^{M}]\) and an FFN expert bank \(\mathbf{E}_{FFN}=[\mathbf{E}_{FFN}^{1},\mathbf{E}_{FFN}^{2},...,\mathbf{E}_{FFN}^ {M}]\), both of which have \(M\) base experts. However, it does not mean that we prepare specific experts for each center. Although using center-specific experts can address the interference problem, it is hard for the model to exploit the shared knowledge across centers, and it is also difficult to generalize to new centers that did not emerge in the training stage [16]. To address this, we turn to different combinations of experts. **Expert Dynamic Routing.** Given a bank of experts, we route data from different centers to different experts so as to avoid interference. Prior generalist models [13, 14, 16] in high-level vision tasks have introduced various routing strategies to weigh and select experts. Most of them are independently conditioned on the information of the current layer feature, failing to take into account the connectivity of neighboring layers. Nevertheless, PET image synthesis is a dense prediction task that requires a tight connection of adjacent layers for accurate voxel-wise intensity regression. To mitigate the potential discontinuity [13], we propose a dynamic routing module (DRM, see Fig. 2 (c)) that builds cross-layer connection for expert decisions. The mechanism can be formulated as: \[W=\mathbf{ReLU}(\mathbf{MLP}([\mathbf{GAP}(X),H])), \tag{1}\] where \(X\) denotes the input; \(\mathbf{GAP}(\cdot)\) represents the global average pooling operation to aggregate global context information of the current layer; \(H\) is the hidden representation of the previous MLP layer. ReLU activation generates sparsity by setting the negative weight to zero. It is a more suitable gating function in comparison with the commonly used softmax activation [14] and top-k gating [13, 16] in our study (see Table. 4). \(W\) is a sparse weight used to assign weights to different experts. In short, DRM sparsely activates the model and selectively routes the input to different subsets of experts. This process maximizes collaboration and meanwhile mitigates the interference problem. On the one hand, the interference across centers can be alleviated by sparsely routing \(X\) to different experts (with positive weights). The combinations of selected experts can be thoroughly different across centers if violent conflicts appear. On the other hand, experts in the same bank still cooperate with each other, allowing the network to best utilize the shared knowledge across centers. **Expert Sparse Fusion.** The final output is a weighted sum of each expert's knowledge using the sparse weight \(W=[W^{1},W^{2},...,W^{M}]\) generated by DRM. Given an input feature \(X\), the output \(\hat{X}\) of an expert bank can be obtained as: \[\hat{X}=\sum_{m=1}^{M}W^{m}\cdot\mathbf{E}^{m}(X), \tag{2}\] where \(\mathbf{E}^{m}(\cdot)\) represents an operator of \(\mathbf{E}^{m}_{ATT}(\cdot)\) or \(\mathbf{E}^{m}_{FFN}(\cdot)\). ### Loss Function We utilize the Charbonnier loss [23] with hyper-parameter \(\epsilon\) as \(10^{-3}\) to penalize pixel-wise differences between the full-dose (\(Y\)) and estimated (\(\hat{Y}\)) PET images: \[\mathcal{L}=\sqrt{\left\|Y-\hat{Y}\right\|^{2}+\epsilon^{2}}. 
\tag{3}\] ## 3 Experiments and Results ### Dataset and Evaluation Full-dose PET images are collected from 6 different centers (\(C_{1}\)-\(C_{6}\)) at 6 different institutions1. The data of \(C_{3}\) and \(C_{4}\)[22] are borrowed from the Ultra-low Dose PET Imaging Challenge2, while the data from other centers were privately collected. The key information of the whole dataset is shown in Table. 1. Note that \(C_{1}\)-\(C_{4}\) are for both training and testing. We denote them as \(C_{kn}\) as these centers are known to the generalist model. \(C_{5}\) and \(C_{6}\) are unknown centers (denote as \(C_{ukn}\)) that are only for testing the model generalizability. The low-dose PET data is generated by randomly selecting a certain portion of the raw scans according to the dose reduction factor (DRF), _e.g._ the portion is 25% when DRF=4. Then we reconstruct low-dose PET images using the standard OSEM method [24]. Since the voxel size differs across centers, we uniformly resample \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multicolumn{2}{c|}{Center} & \multicolumn{1}{c|}{Enteritional} & \multicolumn{1}{c|}{Type} & \multicolumn{1}{c|}{Eusional} & \multicolumn{1}{c|}{System} & \multicolumn{1}{c|}{Truor} & \multicolumn{1}{c|}{Data} & \multicolumn{1}{c|}{BrdF} & \multicolumn{1}{c|}{Signing (\(mn\))} & \multicolumn{1}{c|}{Shape} & \multicolumn{1}{c|}{Train} & \multicolumn{1}{c}{Truor} \\ \hline \multirow{4}{*}{\(C_{\text{L}}\)} & \(C_{1}\) & \(I_{1}\) & Whole Body & Yes & PolarStar 660 & \({}^{\text{FF-FDG}}\) & 29MB12 & 12.18\(\times\)3.15\(\times\)1.87 & 592\(\times\)192\(\times\)192\(\times\)192\(\times\)192 & 10 \\ & \(C_{2}\) & \(I_{2}\) & Whole Body & Yes & PolarStar 1 & \({}^{\text{FF-FDG}}\) & 29MB2 & 4 & 12.13\(\times\)12.17 & 592\(\times\)192\(\times\)192 & 20 & 10 \\ & \(C_{2}\) & \(I_{3}\) & Whole Body & Yes & Unified Imaging \#EXPIORER & \({}^{\text{FF-FDG}}\) & 29MB2 & 10 & 16.7\(\times\)16.7\(\times\)28.89 & 56\(\times\)26\(\times\)192 & 20 & 10 \\ & \(C_{2}\) & \(I_{4}\) & Whole Body & Yes & Simmingh Visual \#C & 29MB2 & 10 & 6.5\(\times\)16.5\(\times\)16.9 & 26\(\times\)26.5\(\times\)192 & 20 & 10 \\ \hline \multirow{4}{*}{\(C_{\text{L}}\)-\(C_{4}\)} & \(C_{5}\) & \(I_{6}\) & Brain & No & PolarStar 2660 & \({}^{\text{FF-FDG}}\) & 29MB2 & 4 & 1.18\(\times\)1.85 & 592\(\times\)26.5\(\times\)192 & - & 10 \\ & \(C_{6}\) & \(I_{6}\) & Whole Body & Yes & PolarStar 2660 & \({}^{\text{FF-FDG}}\) & 29MB2 & 12 & 12.15\(\times\)13.15\(\times\)1.87 & 592\(\times\)192 & - & 10 \\ \hline \hline \end{tabular} \end{table} Table 1: Multi-Center PET Dataset Information the images of different centers so that their voxel size becomes 2\(\times\)2\(\times\)2 \(mm^{3}\). In the training phase, we unfold images into small patches (uniformly sampling 1024 patches from 20 patients per center) with a shape of 64\(\times\)64\(\times\)64. In the testing phase, the whole estimated PET image is acquired by merging patches together. To evaluate the model performance, we choose the PSNR metric for image quantitative evaluation. For clinical evaluation, to address the accuracy of the standard uptake value (SUV) that most radiologists care about, we follow the paper [3] to calculate the bias of \(SUV_{mean}\) and \(SUV_{max}\) (denoted as \(B_{mean}\) and \(B_{max}\), respectively) between low-dose and full-dose images in lesion regions. 
### Implementation Unless specified otherwise, the intermediate channel number, expert number in a bank, and Transformer block number are 64, 3, and 5, respectively. We employ Adam optimizer with a learning rate of \(10^{-4}\). We implement our method with Pytorch using a workstation with 4 NVIDIA A100 GPUs with 40GB memory (1 GPU per center). In each training iteration, each GPU independently samples data from a single center. After the loss calculation and the gradient back-propagation, the gradients of different GPUs are then synchronized. We train our model for 200 epochs in total as no significant improvement afterward. ### Comparative Experiments We compare our method with five methods of two types. (i) 3D-cGAN [1] and 3D CVT-GAN [10] are two state-of-the-art methods for single center PET image synthesis. (ii) FedAVG[12, 11], FL-MRCM[11], and FTL[7] are three federated learning methods for privacy-preserving multi-center medical image synthesis. \begin{table} \begin{tabular}{c|c c|c c} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{\(C_{ia}\)} & \multicolumn{2}{c}{\(C_{ia}\)} \\ \cline{2-5} & PSNR\(\uparrow\) & \(B_{mean}\)\(\downarrow\) & \(B_{mean}\)\(\downarrow\) & PSNR\(\uparrow\) & \(B_{mean}\)\(\downarrow\) \\ \hline \multirow{2}{*}{w/o H} & 46.84\({}^{\circ}\) & 0.0097\({}^{\circ}\) & 0.1436\({}^{\circ}\) & 38.23\({}^{\circ}\) & 0.1829\({}^{\circ}\) & 0.1548\({}^{\circ}\) \\ Softmax & 46.70\({}^{\circ}\) & 0.0849\({}^{\circ}\) & 0.1274\({}^{\circ}\) & 38.33 & 0.1864\({}^{\circ}\) & 0.1524\({}^{\circ}\) \\ Top-2 Gauge & 46.61\({}^{\circ}\) & 0.0806\({}^{\circ}\) & 0.1295\({}^{\circ}\) & 38.38 & 0.1867\({}^{\circ}\) & 0.1564\({}^{\circ}\) \\ DRMC & **46.88** & **0.0752** & **0.1155** & **38.40** & **0.1814** & **0.1483** \\ \hline \end{tabular} \end{table} Table 4: Routing Ablation Results. \begin{table} \begin{tabular}{c|c c c c|c c c c|c c c c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{PSNR\(\uparrow\)} & \multicolumn{4}{c|}{\(B_{mean\downarrow}\)} & \multicolumn{4}{c}{\(B_{max\downarrow}\)} \\ \cline{2-13} & \(C_{1}\) & \(C_{2}\) & \(C_{3}\) & \(C_{4}\) & Avg. & \(C_{1}\) & \(C_{2}\) & \(C_{3}\) & \(C_{4}\) & Avg. & \(C_{1}\) & \(C_{2}\) & \(C_{3}\) & \(C_{4}\) & \(\lambda\) Avg. 
\\ \hline \multirow{4}{*}{(i) 3D-cGAN} & 47.30\({}^{\circ}\) & 49.47\({}^{\circ}\) & 45.38\({}^{\circ}\) & 48.15\({}^{\circ}\) & 13.0963\({}^{\circ}\) & 0.0832 & 0.0795\({}^{\circ}\) & 0.1681\({}^{\circ}\) & 10.1602\({}^{\circ}\) & 0.1358\({}^{\circ}\) & 0.1696\({}^{\circ}\) & 0.1725\({}^{\circ}\) & 0.2801\({}^{\circ}\) & 0.1896\({}^{\circ}\) \\ & 47.65\({}^{\circ}\) & 45.17\({}^{\circ}\) & 45.94\({}^{\circ}\) & 43.07\({}^{\circ}\) & 45.65\({}^{\circ}\) & 0.0872 & 0.0792\({}^{\circ}\) & 0.0594\({}^{\circ}\) & 0.1143\({}^{\circ}\) & 0.0855\({}^{\circ}\) & 0.1128\({}^{\circ}\) & 0.1910\({}^{\circ}\) & 0.1652\({}^{\circ}\) & 0.2224\({}^{\circ}\) & 0.1661\({}^{\circ}\) \\ \hline \multirow{4}{*}{(ii) 3D CVT-GAN} & 47.34\({}^{\circ}\) & 46.42\({}^{\circ}\) & 46.61\({}^{\circ}\) & 44.37\({}^{\circ}\) & 45.35\({}^{\circ}\) & 0.0985\({}^{\circ}\) & 0.0996\({}^{\circ}\) & 0.1006\({}^{\circ}\) & 0.2202\({}^{\circ}\) & 0.1122\({}^{\circ}\) & 0.1459\({}^{\circ}\) & 0.1546\({}^{\circ}\) & 0.2011\({}^{\circ}\) & 0.2663\({}^{\circ}\) & 0.1920\({}^{\circ}\) \\ & 47.81\({}^{\circ}\) & 47.81\({}^{\circ}\) & 45.56\({}^{\circ}\) & 46.10\({}^{\circ}\) & 44.31\({}^{\circ}\) & 45.95\({}^{\circ}\) & 0.0982\({}^{\circ}\) & 0.0929\({}^{\circ}\) & 0.0631\({}^{\circ}\) & 0.1344\({}^{\circ}\) & 0.0601\({}^{\circ}\) & 0.1517\({}^{\circ}\) & 0.1607\({}^{\circ}\) & 0.1307\({}^{\circ}\) & 0.1518\({}^{\circ}\) & 0.1501\({}^{\circ}\) \\ FTL & 48.05\({}^{\circ}\) & 45.62\({}^{\circ}\) & 46.01\({}^{\circ}\) & 44.75\({}^{\circ}\) & 46.11\({}^{\circ}\) & 0.0829\({}^{\circ}\) & 0.0945\({}^{\circ}\) & 0.0857\({}^{\circ}\) & 0.0858\({}^{\circ}\) & 0.0830\({}^{\circ}\) & 0.1243\({}^{\circ}\) & 0.1585\({}^{\circ}\) & 0.0881\({}^{\circ}\) & 0.1436\({}^{\circ}\) & 0.1200\({}^{\circ}\) \\ \hline \multirow{2}{*}{(ii) 4} & **49.48** & **46.32** & **46.71** & **45.01** & **46.88** & **0.0844** & **0.0792** & **0.0491** & **0.0880** & **0.0752** & **0.1037** & **0.1313** & **0.08837** & **0.1431** & **0.1155** \\ \hline \end{tabular} \end{table} Table 2: Results on \(C_{kn}\). The **Best** and the **Second-Best** Results are Highlighted. *: Significant Difference at \(p<0.05\) between Comparison Method and Our Method. All methods are trained using data from \(C_{kn}\) and tested over both \(C_{kn}\) and \(C_{ukn}\). For methods in (i), we regard \(C_{kn}\) as a single center and mix all data together for training. For federated learning methods in (ii), we follow the "**Mix**" mode (upper bound of FL-based methods) in the paper [11] to remove the privacy constraint and keep the problem setting consistent with our multi-center study. **Comparison Results for Known Centers.** As can be seen in Table. 2, in comparison with the second-best results, DRMC boosts the performance by 0.77 dB PSNR, 0.0078 \(B_{mean}\), and 0.0135 \(B_{max}\). This is because our DRMC not only leverages shared knowledge by sharing some experts but also preserves center-specific information with the help of the sparse routing strategy. Further evaluation can be found in the **supplement**. **Comparison Results for Unknown Centers.** We also test the model generalization ability to unknown centers \(C_{5}\) and \(C_{6}\). \(C_{5}\) consists of normal brain data (without lesion) that is challenging for generalization. As the brain region only occupies a small portion of the whole-body data in the training dataset but has more sophisticated structure information. \(C_{6}\) is a similar center to \(C_{1}\) but has different working locations and imaging preferences. 
The quantitative results are shown in Table. 3 and the visual results are shown in Fig. 1 (a). DRMC achieves the best results by dynamically utilizing existing experts' knowledge for generalization. On the contrary, most comparison methods process data in a static pattern and unavoidably produce mishandling of out-of-distribution data. Furthermore, we evaluate the performance of different models on various DRF data on \(C_{6}\), and the results are available in the **supplement**. These results indicate that our method demonstrates strong robustness. ### Ablation Study **Specialized Model vs. Generalist Model.** As can be seen in Table. 5, the baseline model (using 15 base blocks) individually trained for each center acquires good performance on its source center. But it suffers performance drop on other centers. The baseline model trained over multiple centers greatly enhances the overall results. But due to the center interference issue, its performance on a specific center is still far from the corresponding specialized model. DRMC mitigates the interference with dynamic routing and achieves comparable performance to the specialized model of each center. **Ablation Study of Routing Strategy.** To investigate the roles of major components in our routing strategy, we conduct ablation studies through (i) removing \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Train Centers} & \multicolumn{4}{c|}{PSNR} & \multicolumn{4}{c|}{_Bemi._} & \multicolumn{4}{c|}{_Bemi._} & \multicolumn{4}{c|}{_Bemi._} \\ \cline{3-14} & & \multicolumn{3}{c|}{Toint Centers} & \multicolumn{3}{c|}{Avg.} & \multicolumn{3}{c|}{Toint Centers} & \multicolumn{3}{c|}{Toint Centers} & \multicolumn{3}{c|}{Toint Centers} & \multicolumn{3}{c|}{Toint Centers} & \multicolumn{3}{c|}{Avg.} \\ \cline{3-14} & & C\({}_{1}\) & C\({}_{2}\) & C\({}_{3}\) & C\({}_{3}\) & C\({}_{3}\) & C\({}_{3}\) & C\({}_{3}\) & C\({}_{3}\) & C\({}_{3}\) & C\({}_{3}\) & C\({}_{3}\) & C\({}_{3}\) \\ \hline \multirow{4}{*}{ \begin{tabular}{c} Specialized Model \\ \end{tabular} } & \multirow{4}{*}{Baseline} & \(C_{1}\) & 88.09\({}^{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text{ \text{ }}}}}}}}}}}}}}} & 4.887\)\({}^{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text{\text{\text{ \text{ \text{ \texttext{ \texttexttexttexttext{ }}}}}}}}}}}}}}} 4 4.887\)\({}^{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text{\text{\text{\texttexttexttext{\texttext{ \texttexttexttexttexttexttexttexttexttexttexttexttext{ \texttexttexttexttexttexttexttexttexttexttexttexttext{ \texttexttexttexttexttexttexttexttexttexttexttexttext{ \texttexttexttexttexttexttexttexttexttexttexttexttexttexttexttexttexttext{ \texttext{\text the condition of hidden representation \(H\) that builds cross-layer connection, and replacing ReLU activation with (ii) softmax activation [14] and (iii) top-2 gating [13]. The results are shown in Table. 4. We also analyze the interpretability of the routing by showing the distribution of different layers' top-1 weighted experts using the testing data. As shown in Fig. 3 (b), different centers show similarities and differences in the expert distribution. 
For example, \(C_{6}\) shows the same distribution with \(C_{1}\) as their data show many similarities, while \(C_{5}\) presents a very unique way since brain data differs a lot from whole-body data. **Ablation Study of Hyperparameters.** In Fig. 3 (c) and (d), we show ablation results on expert number (\(M\)) and block number (\(N\)). We set \(M\)=3 and \(N\)=5, as this configuration has demonstrated good performance while maintaining acceptable computational complexity. ## 4 Conclusion In this paper, we innovatively propose a generalist model with dynamic routing (DRMC) for multi-center PET image synthesis. To address the center interference issue, DRMC sparsely routes data from different centers to different experts. Experiments show that DRMC achieves excellent generalizability.
2305.13252
"According to ...": Prompting Language Models Improves Quoting from Pre-Training Data
Large Language Models (LLMs) may hallucinate and generate fake information, despite pre-training on factual data. Inspired by the journalistic device of "according to sources", we propose according-to prompting: directing LLMs to ground responses against previously observed text. To quantify this grounding, we propose a novel evaluation metric (QUIP-Score) that measures the extent to which model-produced answers are directly found in underlying text corpora. We illustrate with experiments on three corpora (Wikipedia, PubMed, and the U.S. legal tax code) that these prompts improve grounding under our metrics, with the additional benefit of often improving end-task performance. Furthermore, prompts that ask the model to decrease grounding (or to ground to other corpora) indeed decrease QUIP-Score, indicating the ability of LLMs to increase or decrease grounded generations on request.
Orion Weller, Marc Marone, Nathaniel Weir, Dawn Lawrie, Daniel Khashabi, Benjamin Van Durme
2023-05-22T17:25:24Z
http://arxiv.org/abs/2305.13252v2
# _"According to..."_ ###### Abstract Large Language Models (LLMs) may hallucinate and generate fake information, despite pre-training on factual data. Inspired by the journalistic device of _"according to sources"_, we propose _according-to_ prompting: directing LLMs to ground responses against previously observed text. To quantify this grounding, we propose a novel evaluation metric (QUIP-Score) that measures the extent to which model-produced answers are directly found in underlying text corpora. We illustrate with experiments on Wikipedia that these prompts improve grounding under our metrics, with the additional benefit of often improving end-task performance. Furthermore, prompts that ask the model to decrease grounding (or to ground to other corpora) _decrease_ grounding, indicating the ability of language models to increase or decrease grounded generations on request. ## 1 Introduction As the deployment of Large Language Models (LLMs) in real-world applications continues to grow, their tendency to generate false content Ji et al. (2022) poses significant risks to downstream users. Recent work has attempted to address this issue by augmenting them with retrieval Shuster et al. (2021); Sun et al. (2023); Borgeaud et al. (2022), however, these models still struggle with hallucination problems in practice Liu et al. (2023). This work explores the intriguing possibility of _steering LLMs by prompting_ them to quote more of the curated sources of information they have memorized during pre-training, thereby reducing their tendency to generate false information. As illustrated in Figure 1, the hypothesis explores whether adding phrases such as "According to Wikipedia" can guide LLMs to quote from Wikipedia, which is presumably observed in the pre-training corpus. We find empirical evidence that this is attainable using current LLMs (both open and closed source). Our study is inspired by two recent research areas. First, larger LLMs can be more effectively guided using natural language prompts Ouyang et al. (2022); Wan et al. (2023); Ganguli et al. (2023). Second, as LLMs grow in size, their ability to remember facts and statements from pre-training improves Kandpal et al. (2022); Tirumala et al. (2022); Carlini et al. (2023, 2020). Thus, we seek to steer LLMs to use their memorization for a positive purpose: producing more grounded outputs. A key step in this study is quickly determining whether generated outputs overlap significantly with pre-training data; i.e., efficiently performing membership testing via a Data PortraitMarone and Van Durme (2023). We design a new metric called QUIP-Score, short for **Qu**oted **I**Information **P**recision, on top of a data sketch that Figure 1: Prompting LLMs to respond with quotes directly from pre-training data (shown in purple). provides efficient membership queries within milliseconds. QUIP-Score is an n-gram overlap measure that quantifies how much a passage is formed of spans that are exactly contained in a text corpus. We perform experiments1 based on the task of open-domain question answering (ODQA), for which provenance-grounded answers are of particular importance. We collect human-authored prompts intended to steer generations towards information grounded in Wikipedia, our target source for grounding. We observe that across all human-authored prompts, we can increase the amount of overlap with Wikipedia by 5-105% while maintaining or even improving the downstream performance. 
We collect results across multiple datasets and a large set of models, including both open- and closed-sourced to ensure consistency. Footnote 1: Code to reproduce our analysis is accessible at: [https://github.com/JHU-CLSP/according-to](https://github.com/JHU-CLSP/according-to) Interestingly, we also observe the opposite phenomenon - it is possible to _discourage_ LLMs from grounding in Wikipedia via prompts that either discourage grounding or encourage grounding to other corpora. This consequently decreases the overlap with Wikipedia and also lowers performance on downstream tasks that rely on Wikipedia content. We conduct scaling experiments on different model sizes, which indicate that as size increases, so does the effectiveness of our proposed approach. This implies that the hallucination problem may diminish as a function of continued scaling of LLMs. In summary, we present _according-to_ prompting, a simple and effective approach to improving an LLMs' ability to generate more factual information. Additionally, we introduce QUIP-Score, an efficient approach for measuring groundedness of LLM generations against their pre-training corpus. We experiment with various prompting strategies across models, datasets, and scaling trends. ## 2 Related Work Memorization in LLMs.Large language models have been observed to memorize their training data Carlini et al. (2020); Chang et al. (2023); among others). This is problematic when web-scraped training data contains sensitive personal data or low quality information sources Dodge et al. (2021); Luccioni and Viviano (2021). However, it can be beneficial for models to memorize content from _carefully curated and trusted corpora_, where careful de-duplication Lee et al. (2022) and curation strategies Feng et al. (2022) can improve language model quality. Some language modeling datasets intentionally oversample trusted resources, e.g., The Pile triples the effective weight of Wikipedia, oversampling it more than any of the other 22 component subsets Gao et al. (2020). Work on analyzing memorization has proposed measuring overlap against Google search results as a proxy for memorization. Carlini et al. (2020) manually look for exact string matches in search results, while Levy et al. (2021) automatically measure BLEU Papineni et al. (2002) between model generations and the first page of Google results. Marone and Van Durme (2023) call for documenting large datasets through membership testing tools, that they collectively label Data Portraits. As one implementation, they illustrate using a Bloom Filter Bloom (1970) for storing long n-grams. While other portrait implementations have been proposed, such as based on BM-25 and suffix arrays Piktus et al. (2023), Bloom Filters provide both lightweight storage and access, enabling scalable overlap metrics against large corpora. Hallucination, grounding, and attribution.Numerous studies De Cao et al. (2021); Li et al. (2022) have demonstrated that LLMs struggle with both hallucination and factuality, leading to frequent inaccuracies and outright falsehoods. Previous research has attempted to alleviate this problem in various ways, including retrieving grounded documents before generation Sun et al. (2023); Borgeaud et al. (2022); Mallen et al. (2023), applying new decoding approaches He et al. (2022), post hoc tuning of LLMs Menick et al. (2022); Lee et al. (2022), and analyzing the model's output training data Han and Tsvetkov (2022); Park et al. (2023). 
Our work is different from and complementary to this literature, as we investigate a novel yet straightforward approach to steer LLMs towards generating more factual responses. Another similar line of work is that of LLM attribution Rashkin et al. (2021); Bohnet et al. (2022) with several datasets designed to measured attribution to identified sources (AIS). Our work is similar in that we seek to find attribution for a language model's output to a given collection but different in that we define attribution more broadly: we find attribution for _any text_ that the language model generated, while AIS seeks to find attribution for the semantic knowledge that the language model generated. Thus, our metric studies grounding and attri bution in a broader context than previously studied. LLM Steerability via prompting.The larger LMs become, the easier they are to steer with natural language prompts Kandpal et al. (2022); Carlini et al. (2023); Mishra et al. (2022); Srivastava et al. (2023). Several works Mishra et al. (2022); Chung et al. (2022); Wang et al. (2022); Wan et al. (2023) have shown that larger instruction-tuned models are more easily steered than smaller and non-instruction tuned models. This is desirable in our setting, as we seek to use these capabilities of LLMs to quote more from a given corpus. ## 3 Methodology ### QUIP-Score: Measuring Grounding to Pre-Training Data In order to understand whether models are able to ground to their pre-training data, we first need to have a way of measuring this phenomena. We adopt a narrow definition of grounding (quoting from source material) while acknowledging that grounding is a broad term. To enable fast and efficient measurement of quoting from pre-training data for many language model generations across large corpora, we build off of a Data Portrait(Marone and Van Durme, 2023), which allows for fast membership queries for each n-gram in the output. This approach enables us to perform a one-time indexing of a large corpus (e.g. Wikipedia) and at inference time simply compute a constant time lookup operation (in milliseconds) for each n-gram in the generation. We build a Data Portrait on the version of Wikipedia included in the Pile,2 as it allows for us to exactly test the pre-training data included in many models like GPT-J and is similar to the training data used in T5. However, we note that for some models evaluated in this paper (e.g. OpenAI models) there is no public information about the Wikipedia version in the models. Footnote 2:wikipedia/20200301.en We use character based n-grams as opposed to a token-based n-gram as different models have different tokenization schemes; furthermore, character-based n-gram metrics have widespread usage in fields such as machine translation with metrics like chrF and chrF++ (Popovic, 2015, 2017). We use 25 character grams for the sketch, approximately 5-gram words, as we found it empirically gave meaningful results (not too small of an n-gram and not too large). The Data Portrait checks for exact matches and is sensitive to orthographic variation (e.g. case, whitespace). Therefore we view this as a lower-bound on actual quoting performance. We define our new metric QUIP-Score as the character n-gram precision of the generated output compared to the pre-training corpus. 
More formally, for generation \(Y\) and text corpus \(C\): \[\text{QUIP}(Y;C)=\frac{\sum_{\text{gram}_{n}\in Y}1_{C}(\text{gram}_{n})}{| \text{gram}_{n}\in Y|},\] where \(1(.)\) is an indicator function: 1 if \(\text{gram}_{n}\in C\) else 0. Thus, a score of 0.5 would indicate that 50% of the generated text \(n\)-grams are found in the pre-training corpus. We macro-average this quantity over a set of generations to obtain a single performance number for a given test dataset.3 Footnote 3: Early experiments indicated little difference between macro and micro-averaging. ### Grounding via _according-to_ Prompting We seek to improve knowledge grounding by causing LLMs to quote directly from underlying trusted resources seen during training. This is equivalent to _encouraging_ memorization of high quality or trusted documents. We test whether models can do so by simply prompting them using wording that encourage grounding such as "Respond by using information from Wikipedia in your response" after4 the given input. We call this strategy _according-to_ prompting. We then measure the difference in their QUIP-Score by comparing the output with the grounding prompt to one without the grounding prompt (i.e. a null prompt). Footnote 4: We tried appending, prepending, and their combinations in early experiments and found that appending the grounding/anti-grounding prompts performed the best. To account for the difference in prompts lengths and to verify that prompts can both increase and decrease grounding, we also include prompts that are anti-grounding (e.g. "Respond by using information from [another source] in your response" or "Respond without using any information from Wikipedia." This allows us to test the hypothesis that models can ground (or not ground) to a given corpus when asked because of the semantic meaning of the prompt, rather than the length of the prompt. As prompting is notoriously brittle (e.g. changing the phrasing can affect the results) we provide a number of grounding and anti-grounding prompts to test whether these prompts provide consistent gains or are merely prompting artifacts (see Table 1 for the list of prompts used). ### Datasets We use a variety of datasets to test if LLMs are consistent and to check whether grounding affects the end-task performance of a given dataset. To best measure the grounding of the output however, the model generations must be long enough to have many n-grams that can be measured. Thus, we test on long-form question answering (QA), and for datasets that do not lend themselves well to long-form output (e.g. short form QA) we ask the models to generate both the answer and a corresponding explanation whose n-grams can be measured. Note that our purpose is not to improve state-of-the-art performance on these tasks, as our main research question is to analyze the grounding of these model's outputs. However, we do note that we have competitive performance compared to other prompting baselines and sometimes even observe improved performance from grounding, as it naturally correlates with the ability to answer questions from the grounded material. Thus we use the following datasets: **EL15.** EL15, or "Explain Like I'm 5" (Fan et al., 2019) is a long form question answering dataset composed of questions from the sub-Reddit r/EL15, where users ask questions and receive answers that are typically paragraph-sized. The original EL15 dataset was not grounded to any particular knowledge source, which is not ideal for our experiments on grounding. 
Hence, we use the KILT version (Petroni et al., 2021) of EL15, of which the development set contains 1507 instances of EL15 ques \begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline **Prompt** & \multicolumn{2}{c}{**TQA**} & \multicolumn{2}{c}{**NQ**} & \multicolumn{2}{c}{**Hotpot**} & \multicolumn{2}{c}{**EL15**} \\ (appended after the question) & QUIP & EM & QUIP & EM & QUIP & F1 & QUIP & R-L \\ \hline \hline 0 (no additional prompt) & 31.6 & 77.8 & 32.8 & 32.9 & 28.3 & 35.7 & 24.1 & 22.7 \\ \hline \multirow{6}{*}{ \begin{tabular}{l} Text-Davinci-003 \\ GPT-4 \\ GPT-J \\ Koola 7B \\ FLAN-T5 XXL \\ \end{tabular} } & 31.1 & 77.3 & 32.8 & 34.0 & 28.1 & 35.9 & 26.3 & 22.3 \\ \cline{1-1} & 31.7 & 73.2 & 33.0 & 30.2 & 28.7 & 35.3 & 25.5 & 22.7 \\ \cline{1-1} & 31.7 & 70.1 & 33.8 & 27.6 & 28.1 & 33.1 & 27.2 & 21.0 \\ \cline{1-1} & 31.7 & 70.1 & 33.8 & 27.6 & 28.1 & 33.1 & 27.2 & 21.0 \\ \cline{1-1} & 31.7 & 70.1 & 33.8 & 27.6 & 28.1 & 33.1 & 27.2 & 21.0 \\ \cline{1-1} & 31.7 & 72.8 & 75.9 & 34.6 & 34.4 & 28.9 & 35.9 & 25.7 & 22.0 \\ \cline{1-1} & 31.6 & 78.8 & 34.3 & 34.8 & 29.2 & 36.6 & 26.5 & 21.7 \\ \cline{1-1} & 31.5 & 72.7 & 32.9 & 31.7 & 30.4 & 35.5 & 25.8 & 20.4 \\ \cline{1-1} & 31.6 & 76.3 & 35.3 & 33.2 & 29.9 & 36.1 & 26.3 & 21.9 \\ \cline{1-1} & 31.7 & 76.6 & 37.0 & 33.9 & 30.4 & 36.2 & 28.0 & 21.5 \\ \cline{1-1} & 31.7 & 76.9 & 32.0 & 32.0 & 26.8 & 32.9 & 24.7 & 22.1 \\ \hline \hline \end{tabular} \end{table} Table 1: Impact of various prompts on the grounding (QUIP-Score) and performance scores, using ChatGPT (Section 4). The top section is the null prompt (no additional prompt other than the question), the middle section includes anti-grounding prompts, and the last section includes grounding prompts. We find that **grounding prompts generally improve the QUIP-Score while anti-grounding prompts generally reduce QUIP-Score**. Colored cells indicate changes (gains, losses, or the same) relative to the null row. Note that EL15 Rouge-L is based on similarity to Reddit rather than Wikipedia. SOTA zero-shot results are from LLaMA 33B, LLaMA 65B (Touvron et al., 2023), PaLM 540B (Wang et al., 2022), and BART (Su et al., 2022) respectively. For retrieval-augmented SOTA, Izacard et al. (2022) is for NQ, TriviaQA and HotpotQA, while Su et al. (2022) is for EL15. \begin{table} \begin{tabular}{l|c c c c c c c c} \hline \hline **Model** & \multicolumn{2}{c}{**TQA**} & \multicolumn{2}{c}{**NQ**} & \multicolumn{2}{c}{**Hotpot**} & \multicolumn{2}{c}{**EL15**} \\ & QUIP & EM & QUIP & EM & QUIP & F1 & QUIP & R-L \\ \hline Text-Davinci-003 & +14.7\% & +5.3\% & +14.7\% & +20.6\% & +14.4\% & +7.2\% & +16.5\% & +22.8\% \\ GPT-4 & - & - & - & - & - & - & +17.6\% & -2.3\% \\ GPT-J Instruct & +12.1\% & - & +15.2\% & - & +13.9\% & - & +18.1\% & +19.4\% \\ Koala 7B & +5.1\% & - & +6.3\% & - & +5.0\% & - & +35.5\% & +22.8\% \\ FLAN-T5 XXL & +43.3\% & - & +41.5\% & - & +20.7\% & - & +105.2\% & +18.7\% \\ \hline \hline \end{tabular} \end{table} Table 2: Percentage improvement when using the grounding prompt vs the null prompt. We find that **the grounding prompt improves over the null prompt in nearly every dataset and metric, typically by 5-15%**. Smaller models struggled to output both answer and explanation to the short form question with one prompt; hence we use two step prompting (prediction then explanation) for open-source models. Since the first step (prediction) uses the same prompt for both grounding and null, the EM/F1 scores are the same and are omitted (c.f. Section 3.4). 
tions, their answers, and provenances for those answers grounded in Wikipedia. As this subset of the ELI5 dataset is the grounded subset (with the non-grounded questions filtered out) it provides a more suitable evaluation for our research question. Natural Questions.Natural Questions (NQ) [10] is a short form (e.g. less than five words for the answer) QA dataset which consists of queries gathered from real-world Google Searches. To better compare with previous work in prompting on NQ, we evaluate on the full development set (3360 instances) and report the exact match score. TriviaQA.TriviaQA (TQA) [10] was collected by scraping question and answer pairs from trivia websites, and then matching the answers (short-form) to Wikipedia passages. We use the filtered development set (7k instances), following previous work and report the EM score. HotpotQA.HotpotQA [11] is a multi-step short-form question answering dataset that requires two-step reasoning to come to the correct answer. It was gathered from crowdsourcing questions and answers from Amazon Mechanical Turk using two-hop links on Wikipedia. We use the full dev set (7k instances) and report the F1 score. ### Models and Prompting We test a wide array of models in our experiments including most OpenAI models [12], T5-based models (T5 adapted to language modeling [13, 14] and FLAN-T5 [15]), GPT-J instruction tuned5[12], and Koala [11] (a Llama variant [16]). By doing so, we provide both (1) results on both open and closed source models, (2) results on Figure 3: Impact of entity popularity on QUIP-Scores, showing that **models are better able to quote pre-training text about popular entities**. The x-axis shows how many times the given entity relationship was found co-occurring in pre-training data. Bars indicate 1 standard error. We use the ranges following [14]. Figure 2: Model size vs QUIP-Score performance, with FLAN-T5 models (left) and OpenAI models (right). **As model scale increases, so does performance**. At smaller model sizes, the grounding prompt is not more effective than the null prompt, but gains efficacy with model size. Error bars indicate 1 standard error. instruction-tuned and non-instruction tuned models, and (3) models ranging from 220 million parameters to 175B models. Note that our experiments consist solely of providing prompts to the models and do not include fine-tuning (as the goal is to see what these models can do zero-shot). For short-form QA datasets, we alter the prompt the have the model produce both the answer and the explanation, allowing us to measure the explanation overlap. However, smaller models (e.g. < 15B parameters) were not able to follow instructions to provide both answer and explanation in a parseable format from just one prompt. Thus, we do two-step prompting with them, asking them to answer the question, then asking them to explain the answer (and appending the grounding prompt, if used). We use the explanation to measure the QUIP-Score. For detailed information and full text of the prompts used, see Appendix A. ## 4 Results We first analyze a wide range of _according-to_ prompts on ChatGPT, using this as a cost-effective development stage to try a variety of grounding and anti-grounding prompts. We later test the null prompt and the best performing grounding prompt on a variety of other models for further analysis. Table 1 shows results for different prompts using the ChatGPT model. 
We see that the QUIP-Score for the anti-grounding prompts generally performs the same or worse than the null prompt (e.g. no additional text) and generally significantly worse than the grounding prompts. There is also a clear trend in the grounding prompts, with all grounding prompts performing similarly or improving on their QUIP-Score compared to the null prompt. Surprisingly, we find that grounding prompts perform similarly (and sometimes even better than) the null prompt on end-task performance (e.g. up to a 6% improvement on NQ, 2.5% on HotpotQA). We note that while this is not the case for Rouge-L on ELI5, as that metric is measuring similarity to Reddit, rather than similarity to Wikipedia. We use these results on ChatGPT to inform our next experiments, using the null prompt and the best grounding prompt ("Respond to this question using only information that can be attributed to Wikipedia") in our future experiments due to cost. ### Results from Other Models We show the relative difference of the grounding prompt over the null prompt for more models in Table 2, which further confirms our findings (for the absolute instead of relative numbers, see Appendix A). For example, using the grounding prompt with Text-Davinci-003 improves over the null prompt by around 15% QUIP-Score and 5-20% for the specific task. For all models evaluated, the grounding prompt improves in both end-task performance and QUIP-Score by 5-105%. Thus, our findings hold for a wide variety of models and model-sizes - even when prompts are not tuned for the specific model being prompted, indicating the generality of our approach. ### Impact of Model Size Does model size impact their ability to quote from their pre-training data? We answer this question in Figure 2, which shows that smaller models perform the same or worse with a grounding prompt as opposed to the null prompt. However, larger models perform significantly better with the grounding prompt as opposed to the null prompt, for both OpenAI models and FLAN-T5 models. Thus, from this we can conclude that a model's ability to quote from its pre-training data improves with size. ### Impact of Instruction Tuning One potential reason for why these models can recall their pre-training data on request is a better capability to instruction-follow. We test this hypothesis in Figure 4 that compares T5-11B compared to FLAN-T5-11B. We find that instruction-tuning does help, as the QUIP-Scores for T5-v1.1-Adapt are similar between grounding and null prompts, while the Figure 4: Comparing instructed-tuned FLAN-T5 XXL to non-instruction tuned T5-v1.1-Adapt XXL. Note that **grounding has a larger impact on instruction-tuned models as compared to non-instruction tuned**. FLAN-T5 model has a large difference between the null and grounding prompt (roughly 2x better). ### Impact of Entity Popularity Another potential factor influencing model memorization is the popularity of the entities that it is asked questions about Kandpal et al. (2022); Carlini et al. (2023). Previous work has shown that Question/Answer entity co-occurrence (as measured by the count in the pre-training set of instances where the entity in the question and the answer co-occur in the same passage) is strongly correlated to the performance on the task Kandpal et al. (2022). We use their code and data (from the Pile) to explore whether or not QUIP-Score is correlated with the entity popularity/co-occurrence frequency. 
Due to the imbalance between co-occurrence count, we sample 400 instances (or as many as available) from each dataset and co-occurrence frequency bin. We measure the QUIP-Score on these instances using the output generations from Chat-GPT on both grounding and null prompts. We find in Figure 3 that QA entity popularity is positively correlated with QUIP-Score for both grounding and null prompts (although more effective for the grounding prompt) and that the model is better able to recall information from Wikipedia when QA entities frequently co-occur. Figure 5: Example generations from various considered models. Purple text was found in Wikipedia. Note that for non-ELIS datasets, models were prompted to generate the answer, a semicolon, and then the explanation (see Section 3.4). Note that better grounding to Wikipedia does not always imply correct answers (see Question 4). ### Qualitative Examples In Table 5 we show example output from a variety of models across the evaluation datasets. We can see that when asked to be grounded the model correctly generates much larger chunks of texts that occur in Wikipedia (shown in purple). It is also important to note that although the text may be present in Wikipedia, it does not automatically make the output generation correct with respect to the question. For example, the TriviaQA example shows that both models predicted the incorrect quote for Smokey the Bear, although the grounding prompt was better grounded in Wikipedia for the explanation of its prediction. However, overall, grounding prompts generate more quoted Wikipedia text than the null prompt, showing the effectiveness of our approach. ## 5 Discussion and Future Implications Our results strongly suggest that LLMs can be steered via prompting to increase the amount by which they quote human-authored sources in their training data. This finding has strong implications not just for our considered task, ODQA, but also for a wide array of other task spaces in which provenance grounding is important. We note that our _according-to_ prompting strategy is domain agnostic, and constructing a Data Portrait for an arbitrary corpus is straightforward and not limited to indices of Wikipedia or other internet knowledge sources. For example, one might desire a scientific reasoning system that quotes the findings of domain experts in areas such as law or medicine. Moreover, another appealing characteristic of a Data Portrait-based approach to measuring overlap is that it preserves the privacy of source content. This means that closed-source models trained on proprietary data can still use this approach and release QUIP-Score performance, thus serving as a privacy-preserving measure of provenance grounding without having to release the private corpus. This can serve to add trust in systems such as Chat-GPT that are increasingly used for arbitrary reasoning tasks, by allowing them to highlight quoted spans, without having to release the full data that was used for training/fine-tuning. ## 6 Conclusion Large language models struggle with hallucination, or generating incorrect information, despite the large amount of factual pre-training data they were trained on. To help alleviate this problem, we proposed _according-to_ prompts, asking language models to ground their output to their pre-training corpus. 
To quantify the extent to which models achieve this goal, we introduced a new metric, QUIP-Score, that efficiently and quickly measures the percent of the model's generation that exists as exact quotes in the pre-training corpus. We showed that prompting models with grounding prompts greatly improves the QUIP-Score while anti-grounding prompts reduces the QUIP-Score. Our analysis also shows that QUIP-Score increases with instruction-tuning, popularity of the entity in the question, and model size. We hope that this work brings more attention to the positive aspects of LLM memorization and encourages more work into understanding how and when language model output is grounded to its pre-training data. ## 7 Limitations Our proposed metric only accounts for exact lexical match and will miss other types of grounded statements - thus we view QUIP-Score as a lower bound on grounding where grounding is defined only by quoting from source material. We also recognize the possibility of a discrepancy between the pre-training data of private models like ChatGPT and the Wikipedia version we use for analysis, due to limited information on their pre-training. However, this might not be a significant concern, as although Wikipedia is not completely static, a substantial part of the information in this knowledge source remains consistent over a short span of years. Furthermore, our results with Chat-GPT are similar compared with models for which we do have the exact pre-training data (like GPT-J).
2306.15373
Conformal Symmetry and Effective Potential: I. Vacuum $V_{z,x}$-operation for the Green functions
We begin a series of two papers that is devoted to the study of the multi-loop effective potential evolution in $\varphi^4$-theory using the conformal symmetry. In the first part, we introduce and describe in detail the vacuum $V_{z,x}$-operation ($"V"$ stems from "vacuum", $\{z,x\}$ imply the corresponding coordinates) that transforms the given Green functions to the corresponding vacuum integrations which generate the effective potential. Our operation can be considered as an inverse procedure compared to the Gorishni-Isaev method. To the final goal, it is necessary to introduce also the special treatment of the mass terms as sorts of "interaction" in an asymptotical expansion of the generating functional.
I. V. Anikin
2023-06-27T10:45:48Z
http://arxiv.org/abs/2306.15373v3
# Conformal Symmetry and Effective Potential: I. Vacuum \(V_{z,x}\)-operation for the Green functions ###### Abstract We begin a series of two papers that is devoted to the study of the multi-loop effective potential evolution in \(\varphi^{4}\)-theory using the conformal symmetry. In the first part, we introduce and describe in detail the vacuum \(V_{z,x}\)-operation ("\(V\)" stems from "vacuum", \(\{z,x\}\) imply the corresponding coordinates) that transforms the given Green functions to the corresponding vacuum integrations which generate the effective potential. Our operation can be considered as an inverse procedure compared to the Gorishni-Isaev method. To the final goal, it is necessary to introduce also the special treatment of the mass terms as sorts of "interaction" in an asymptotical expansion of the generating functional. ## 1 Introduction The effective potential approaches play an important role for the different theoretical studies where the manifestations of spontaneous symmetry breaking are under the main considerations. The special attentions are paid for the quantum corrections which usually distort the classical geometrical picture related to the Goldstone theorem. However, the presence of massive parameters (or particle masses) in the theory can significantly complicate the multi-loop calculations even in the vacuum integrations which form the effective potential. On the other hand, the massless propagators in the corresponding loops simplify any multi-loop calculations and, moreover, open the window for the appropriate use of conformal symmetry [1]. In the paper, based on the stationary phase method, we use an alternative representation for the generating functional in \(\varphi^{4}\)-theory where the massive term of Lagrangian has been treated as a sort of "effective interactions". As a result of this, the scalar propagators in the vacuum diagrams describing interactions become massless ones [1]. Due to the singular parts of the loop corrections, the effective potential demands the renormalization resulting in the renormalization scale \(\mu\) dependence. As usual, the scale evolution of effective potential is governed by the anomalous dimension within the RG-method. It turns out that the anomalous dimension of effective potential can be readily derived from the anomalous dimension of the non-local operator Green function with the help of \(V_{z,x}\)-operation at any loop accuracy [1]. In other words, if the anomalous dimension of the corresponding Green function is known at the given \(\ell\)-loop accuracy (see for example [2]), the anomalous dimension of effective potential formed by the vacuum integrations can be almost algebraically calculated at the \((\ell+1)\)-loop accuracy thanks for the new-introduced \(V_{z,x}\)-operation. Generally speaking, the \(V_{z,x}\)-operation can be assumed as an inverse operation compared to the Gorishni-Isaev method [3], inspired by [4; 5], where the Green functions of propagator-type have been reduced to the vacuum integration. The findings of [3] became extremely useful for the multi-loop calculations. In the paper, we give an comprehensive description of \(V_{z,x}\)-operation which is one of the main tool used in the extended version [1]. ## 2 The masslessness procedure of Effective Potential in \(\varphi^{4}\) We begin with the generating functional in the scalar \(\varphi^{4}\) theory which leads to the effective action/potential 1. 
In a theory with massive parameters and interactions, the generating functional has the following form (modulo the normalization constants denoted as _n.c._): Footnote 1: Since, as well-known, the effective action differs from the effective potential by the (infinite) space-time volume, \(V\times T\sim\delta^{(4)}(0)\), we neglect this difference unless it leads to misunderstanding. \[\mathbb{Z}[J]\stackrel{{ n.c.}}{{=}}e^{iS_{I}( \frac{\delta}{\delta J})}\mathbb{Z}_{0}[J]=\int(\mathcal{D}\varphi)\,e^{iS( \varphi)+i(J,\varphi)}, \tag{1}\] \[\mathbb{Z}_{0}[J]=\mathcal{N}e^{(J,\Delta_{F}J)}=\int(\mathcal{D} \varphi)\,e^{iS_{0}(\varphi)+i(J,\varphi)}, \tag{2}\] where \(\Delta_{F}\) implies the Feynman propagator; \(S(\varphi)=S_{0}(\varphi;m)+S_{I}(\varphi)\) denotes the sum of free and interaction actions 2. The stationary phase method applied to \(\mathbb{Z}[J]\) gives the following series (cf. [6; 7]) Footnote 2: For the sake of shortness, we use the notations \((a,Kb)=\int dz_{1}\,dz_{2}a(z_{1})K(z_{1},z_{2})b(z_{2})\) \[\mathbb{Z}[J]=e^{iS(\varphi_{c})+i(J,\varphi_{c})}\int(\mathcal{ D}\eta)e^{-\frac{i}{2}(\eta,\Box\eta)}\exp\Big{\{}-i\sum_{n=2}^{4}\frac{[ \lambda]_{n}}{n!}\big{(}1,\eta^{n}\big{)}\Big{\}}\] \[=e^{iS(\varphi_{c})+i(J,\varphi_{c})}\,P_{\eta}\exp\big{\{}V(\eta )\big{\}}\Big{|}_{\eta=0}\quad\text{with}\quad P_{\eta}\equiv\exp\Big{\{} \frac{1}{2}(\frac{\delta}{\delta\eta},\Delta_{F}\frac{\delta}{\delta\eta}) \Big{\}} \tag{3}\] where \(\eta=\varphi-\varphi_{c}\) with \(\lim_{J\to 0}\varphi_{c}(x)\equiv\lim_{J\to 0}\langle 0|\varphi|0\rangle^{J}= \varphi_{c}=const\). Notice that this expansion should actually be considered as an asymptotical series and all inner lines correspond to the scalar _massless_ propagators. Besides, the generating function of Eqn. (3) generates the vertices which are \[(a) \Rightarrow[\lambda]_{2}\eta^{2}\equiv\lambda_{0}^{(a)}\eta^{2} \stackrel{{\rm def}}{{=}}\big{(}m_{0}^{2}+\lambda_{0}\varphi_{c }^{2}/2\big{)}\,\eta^{2};\] \[(b) \Rightarrow[\lambda]_{3}\eta^{3}\equiv\lambda_{0}^{(b)}\eta^{3} \stackrel{{\rm def}}{{=}}\lambda_{0}\varphi_{c}\eta^{3};\] \[(c) \Rightarrow[\lambda]_{4}\eta^{4}\stackrel{{\rm def}}{{= }}\lambda_{0}\eta^{4}. \tag{4}\] In Eqn. (4), the mass and coupling constant (charge) are bare ones. It is worth to note that the vertices \((a)\) and \((b)\) should be treated as effective ones, while \((c)\) corresponds to the standard vertex in the \(\varphi^{4}\)-theory under consideration. The connected generalizing functional \(\mathbb{W}[J]\) is related to the effective action \(\Gamma[\varphi]\) as (the Legendre transformations) \[\Gamma[\varphi]=\mathbb{W}[J]-i(J,\,\varphi). \tag{5}\] Based on the generating functional, see Eqn. (3), and on the Legendre transform, see Eqn. (5), we can readily derive the expression for the effective action/potential. Symbolically, we have \[\Gamma[\varphi_{c}]=S(\varphi_{c})+\Big{\{}n\text{-loop connected diagrams}\Big{\}}, \tag{6}\] where the term of \(\ln\left[(\det\widehat{\Box})^{-1/2}\right]\), which corresponds to the one-loop standard diagram contribution only, does not actually contribute in the massless propagator case. While, the second term of Eqn. 
The second term of Eqn. (6) involves the full set of connected diagrams, which can be grouped as follows: _(a)_ the standard diagrams in \(\varphi^{4}\) with the \([\lambda]^{n}\)-vertices only 3; _(b)_ the non-standard diagrams of type-\(I\) with the \([\lambda^{(a)}]^{n}\)-vertices only; _(c)_ the non-standard diagrams of type-\(II\) with the \([\lambda^{(b)}]^{2n}\)-vertices only; _(d)_ the diagrams of type-\(III\) with the mixed vertices as \([\lambda^{(a)}]^{n_{1}}[\lambda^{(b)}]^{n_{2}}[\lambda]^{n_{3}}\). Footnote 3: The standard vacuum diagrams with \([\lambda]^{n}\)-vertices do not depend on \(\varphi_{c}\) and, therefore, they can be omitted at the moment. The non-standard diagrams of type-\(I\) contribute only to the one-loop approximation. In this case, the only contribution is [3] (see the first diagram of Fig. 1) \[\Gamma^{(I)}[\varphi_{c}]=\sum_{n=1}^{\infty}\int(d^{D}k)\frac{[\lambda_{0}^{(a)}]^{n}}{(k^{2})^{n}}=\sum_{n=1}^{\infty}\frac{[\lambda_{0}^{(a)}]^{n}}{\Gamma(n)}\delta\left(n-D/2\right)=\frac{[\lambda_{0}^{(a)}]^{D/2}}{\Gamma(D/2)}\delta(0). \tag{7}\] Here and in what follows, the singularity of \(\delta(0)\) should be treated as (see, for example, [8; 9]) \[\delta(0)\equiv\lim_{\epsilon\to 0}\frac{a_{(I)}}{\epsilon}, \tag{8}\] where \(a_{(I)}\) is, generally speaking, an arbitrary constant 4 which can be fixed by the pole relations, see below. Moreover, the pre-delta function can be \(\epsilon\)-expanded [8]. Footnote 4: The constants \(a_{(i)}\) corresponding to a given diagram also involve the diagram symmetry coefficient. The representations given by Eqns. (7) and (8) require some explanation. First of all, in the vacuum integration series of (7) we focus on the ultraviolet divergence only; otherwise, the massless vacuum integrations are nullified once the infrared divergence has been included. Then, \(\Gamma^{(I)}[\varphi_{c}]\) receives its only contribution from the following integration [10] (here \(D=4-2\epsilon\)) \[\Gamma^{(I)}[\varphi_{c}]=[\lambda_{0}^{(a)}]^{2}\int_{\rm UV}\frac{(d^{D}k)}{(k^{2})^{2}}\equiv[\lambda_{0}^{(a)}]^{2}\frac{\pi^{D/2}}{\Gamma(D/2)}\int_{\mu^{2}}^{\infty}d\beta\beta^{D/2-3}=\] \[[\lambda_{0}^{(a)}]^{2}\frac{\pi^{2-\epsilon}}{\Gamma(2-\epsilon)}\frac{\mu^{-2\epsilon}}{\epsilon}\Big{|}_{\epsilon\to 0}=[\lambda_{0}^{(a)}]^{2-\epsilon}\frac{\pi^{2-\epsilon}}{\Gamma(2-\epsilon)}\left.\frac{1}{\epsilon}\right|_{\epsilon\to 0}, \tag{9}\] where \(\beta=|k|^{2}\) and \(\mu^{2}\) has been chosen to be equal to \(\lambda_{0}^{(a)}\). On the other hand, let us calculate the series related to \(\Gamma^{(I)}[\varphi_{c}]\) with the help of the vacuum integration technique [3]. We obtain that (cf. Eqn. (7)) \[\Gamma^{(I)}[\varphi_{c}]=\sum_{n=1}^{\infty}\int(d^{D}k)\frac{[\lambda_{0}^{(a)}]^{n}}{(k^{2})^{n}}=\frac{[\lambda_{0}^{(a)}]^{D/2}}{\Gamma(D/2)}\sum_{n=1}^{\infty}\delta\left(n-D/2\right)\] \[=\frac{[\lambda_{0}^{(a)}]^{2-\epsilon}}{\Gamma(2-\epsilon)}\sum_{n=1}^{\infty}\delta\left(n-2+\epsilon\right)=\frac{[\lambda_{0}^{(a)}]^{2-\epsilon}}{\Gamma(2-\epsilon)}\delta\left(\epsilon\right) \tag{10}\] Eqn. (10) involves the singular generalized function (distribution) \(\delta(\epsilon)\), which is a well-defined functional on the space of finite test functions \(\phi\) with the integration measure \(d\mu(\epsilon)=d\epsilon\,\phi(\epsilon)\). Nonetheless, in many cases it is not convenient, from the technical viewpoint, to introduce the space with the measure \(d\mu(\varepsilon)\).
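The radial integration in Eqn. (9) is elementary and can be verified symbolically; a minimal sympy sketch (our illustration):

```python
import sympy as sp

# Check of the UV radial integration in Eqn. (9):
# int_{mu^2}^{infty} d(beta) beta^(D/2-3) = mu^(-2 eps)/eps  for D = 4 - 2 eps.
eps, beta, mu = sp.symbols('epsilon beta mu', positive=True)
D = 4 - 2*eps

F = sp.integrate(beta**(D/2 - 3), beta)              # antiderivative -beta**(-eps)/eps
I = sp.limit(F, beta, sp.oo) - F.subs(beta, mu**2)   # converges at infinity for eps > 0
print(sp.simplify(I))                                # mu**(-2*epsilon)/epsilon
print(sp.series(I, eps, 0, 1))                       # 1/epsilon - 2*log(mu) + O(epsilon)
```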
Since Eqns. (7) and (10) should be equivalent (these equations are merely different representations of the given diagram), this hints at using the sequential approach [9] to the delta-function and, as a consequence, to the treatment of the \(\delta(0)\)-singularity/uncertainty. In other words, we may say that \(\delta(0)\) is only a symbol for the limit given by \(\lim_{\varepsilon\to 0}[1/\varepsilon]\). So, we can infer that after the \(\delta(0)\)-singularity has been singled out (_i.e._ the vacuum integration has been implemented in Eqn. (7)), the dimension \(D\), as an argument of the \(\Gamma\)-function, can be extended to \(D=4-2\varepsilon\), see [8, 10], giving the non-trivial \(\varepsilon\)-expansion. In this paper, it is, however, enough to restrict to the region of \(\varepsilon=0\) in the corresponding pre-delta functions. The full set of type-\(II\) diagrams reduces to the three-loop box-like diagram, see the second diagram of Fig. 1. It reads \[\Gamma^{(II)}[\varphi_{c}]=\left\{G(1,1,1,1,1)+2G^{2}(1,1)\right\}\frac{[\lambda_{0}^{(b)}]^{4}}{\Gamma(2)}\delta(3\varepsilon). \tag{11}\] Indeed, the general structure of this sum can be presented as \[\Gamma^{(II)}[\varphi_{c}]\sim\sum_{n=1}^{\infty}[\lambda_{0}^{(b)}]^{2n}\,\delta\big{(}3n-(n+1)D/2\big{)}. \tag{12}\] From Eqn. (12), one can immediately conclude that the only contribution originates from the case of \(n=2\), giving \(\delta(6-3D/2)\sim\delta(\varepsilon)\). However, this type of diagram can be omitted at the moment due to its highest singularity, which behaves as \(1/\varepsilon^{3}\). The mixed diagrams of type-\(III\) can be aggregated into two classes. The first class \(A\) of diagrams with \(n_{1}=n\), \(n_{2}=2\), \(n_{3}=0\) leads to two-loop contributions (see the third diagram of Fig. 1) which are given by \[\Gamma^{(III)}_{A}[\varphi_{c}] = [\lambda_{0}^{(b)}]^{2}\,\sum_{n=1}^{\infty}\int(d^{D}k)\frac{[\lambda_{0}^{(a)}]^{n}}{(k^{2})^{n+1}}\int\frac{(d^{D}\ell)}{\ell^{2}(\ell-k)^{2}}\sim[\lambda_{0}^{(b)}]^{2}\sum_{n=1}^{\infty}[\lambda_{0}^{(a)}]^{n}\,\delta\big{(}n+3-D\big{)} \tag{13}\] \[= [\lambda_{0}^{(b)}]^{2}\,G(1,1)\frac{[\lambda_{0}^{(a)}]}{\Gamma(2)}\delta(0)\] The second class \(B\) of diagrams with \(n_{1}=n\), \(n_{2}=0\), \(n_{3}=2\) can be presented in the form of a three-loop integration (see the fourth diagram of Fig. 1) as \[\Gamma^{(III)}_{B}[\varphi_{c}] = [\lambda_{0}]^{2}\,\sum_{n=1}^{\infty}\int(d^{D}k)\frac{[\lambda_{0}^{(a)}]^{n}}{(k^{2})^{n+1}}\,\int\frac{(d^{D}\ell)}{\ell^{2}}\int\frac{(d^{D}p)}{p^{2}(k+p-\ell)^{2}} \tag{14}\] \[\sim [\lambda_{0}]^{2}\,\sum_{n=1}^{\infty}[\lambda_{0}^{(a)}]^{n}\delta\big{(}n+4-3D/2\big{)}\] \[= [\lambda_{0}]^{2}\,G(1,1)\,G(1,2-D/2)\,\frac{[\lambda_{0}^{(a)}]^{2}}{\Gamma(2)}\,\delta(0).\] Thus, to the order of \([\lambda]^{4}\), the connected diagrams in Eqn. (6) contribute as \[\Gamma^{(I)}_{c}[\varphi_{c}]=\Gamma^{(I)}[\varphi_{c}]+\Gamma^{(II)}[\varphi_{c}]+\Gamma^{(III)}_{A}[\varphi_{c}]+\Gamma^{(III)}_{B}[\varphi_{c}], \tag{15}\] where \(\Gamma^{(i)}[\varphi_{c}]\) with \(i=\{(I);(II);(III),A\}\) are uniquely fixed in the sense that each contributes only at a definite order of \([\lambda]^{k}\) (\(k=2,4,3\), respectively). In contrast, \(\Gamma^{(III)}_{B}[\varphi_{c}]\) can involve higher orders \([\lambda]^{k}\) with \(k\geq 4\). At \(\ell\)-loop accuracy, every connected diagram contains both singular and finite parts.
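The way the delta-functions in Eqns. (12)-(14) select the contributing orders is easy to tabulate: at \(D=4\) the argument of each delta must vanish. A short sketch:

```python
import sympy as sp

# Which order n survives each vacuum delta-function at D = 4 (eps -> 0)?
n, D = sp.symbols('n D')
deltas = {
    'type II   (Eqn. 12)': 3*n - (n + 1)*D/2,
    'type III A (Eqn. 13)': n + 3 - D,
    'type III B (Eqn. 14)': n + 4 - 3*D/2,
}
for name, arg in deltas.items():
    print(name, '-> contributing n =', sp.solve(arg.subs(D, 4), n))
# type II    -> n = 2 (the three-loop box-like diagram)
# type III A -> n = 1 (two-loop contribution)
# type III B -> n = 2 (three-loop contribution)
```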
As usual, the singular parts should be eliminated by the corresponding counterterms within a certain renormalization procedure, resulting in the appearance of the dimensional parameter (scale) \(\mu\)5. Re-expressing via the renormalized and dimensionless charge \(\lambda\) (within dimensional regularization), we thus have the following effective action/potential Footnote 5: We recall that \(\mu\) is related to some subtraction point. \[\Gamma[\varphi_{c}] = \Gamma_{2}(0)\,\varphi_{c}^{2}(x)+\Gamma_{4}(0)\,\varphi_{c}^{4}(x)+.... \tag{16}\] \[= \frac{m_{0}^{2}}{2}\Big{(}1+\Delta Z_{m}(\lambda)\Big{)}^{-1}\varphi_{c}^{2}+\mu^{2\epsilon}\frac{\lambda_{0}}{4!}\Big{(}1+\Delta Z_{\lambda}(\lambda)\Big{)}^{-1}\varphi_{c}^{4}+\{\text{finite terms}\},\] where \(\Gamma_{2,4}(0)\) imply the 1PI Green (vertex) functions and \[\Delta Z_{m;\lambda}(\lambda)=\sum_{n=1}\frac{C_{n}^{\{m;\lambda\}}(\lambda)}{\epsilon^{n}}=\sum_{n=1}\frac{1}{\epsilon^{n}}\sum_{k=1}C_{nk}^{\{m;\lambda\}}\lambda^{k} \tag{17}\] with (see also below in Sec. 4.1) \[C_{1}^{\{m\}}(\lambda) =\lambda\left(c_{1}^{[m;(I)]}+\lambda c_{1}^{[m;(III),A]}+\lambda^{2}c_{1}^{[m;(III),B]}\right)+o(\lambda^{4}), \tag{18}\] \[C_{2}^{\{m\}}(\lambda) =\lambda^{2}\left[c_{2}^{[m;(III),A]}+\lambda c_{2}^{[m;(III),B]}\right]+o(\lambda^{4}),\] (19) \[C_{1}^{\{\lambda\}}(\lambda) =\lambda\left(c_{1}^{[\lambda;(I)]}+\lambda c_{1}^{[\lambda;(III),A]}+\lambda^{2}c_{1}^{[\lambda;(III),B]}\right)+o(\lambda^{4}),\] (20) \[C_{2}^{\{\lambda\}}(\lambda) =\lambda^{2}\left(c_{2}^{[\lambda;(III),A]}+\lambda c_{2}^{[\lambda;(III),B]}\right)+o(\lambda^{4}). \tag{21}\] As usual, the evolution of the finite effective action/potential with respect to the choice of scale is governed by the anomalous dimension (or Hamiltonian, which has, generally speaking, the form of an integral operator). In turn, the anomalous dimensions are determined through the coefficients \(C_{1}(\lambda)\), see Eqns. (18) and (20), in the pole relations. At the considered accuracy (we recall that we deal with corrections up to \([\lambda]^{4}\)-order), the contributions of all vacuum diagrams can be derived relatively easily. As a consequence, we are able to obtain the anomalous dimension needed for the evolution rather quickly. However, if we need a higher order of accuracy (even an arbitrary order of accuracy), the calculation of the anomalous dimensions related to some of the diagrams might be tricky. In this connection, we have found that the contribution of the diagram \((III)\) of the second class \(B\) to the anomalous dimension can be computed almost algebraically, based on the known anomalous dimension of the corresponding non-local operator Green function \(G_{\mathcal{O}}^{(2)}\). This can be done with the help of the \(V_{z,x}\)-operation, the description of which is presented in the next section. ## 3 Vacuum \(V_{z,x}\)-operation: transformation of Green functions into vacuum integrations All vacuum integrations can be performed by direct calculations [3; 8]. However, in the case of \(n_{1}=2\) (or, in other words, if the delta-function appearing after the vacuum integration separates out the only term with \(n_{1}=2\) in the full sum), the method based on the manifestation of conformal symmetry can be extremely useful [1; 2]. Indeed, in [2] it has been demonstrated that certain constraints, thanks to the conformal symmetry, are encoded in terms of the generators of the collinear \(SL(2)\) subgroup (the algebra of which is a subalgebra of \(su(2,2)\)).
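For the reader's convenience, the tree-level commutation relations of the collinear algebra used below, \([S_{0},S_{\pm}]=\pm S_{\pm}\) and \([S_{+},S_{-}]=2S_{0}\), can be checked in the fundamental representation; a purely illustrative sketch:

```python
import sympy as sp

# Collinear sl(2) commutation relations in the fundamental 2x2 representation.
Sp = sp.Matrix([[0, 1], [0, 0]])                      # S_+
Sm = sp.Matrix([[0, 0], [1, 0]])                      # S_-
S0 = sp.diag(sp.Rational(1, 2), -sp.Rational(1, 2))   # S_0

comm = lambda A, B: A*B - B*A
print(comm(S0, Sp) == Sp, comm(S0, Sm) == -Sm, comm(Sp, Sm) == 2*S0)
# -> True True True
```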
Of these, two generators, denoted as \(S_{-}\) and \(S_{0}\), can be defined at all loops with the help of the evolution kernel. At the same time, the special conformal generator \(S_{+}\) involves nontrivial corrections and can be calculated order by order in perturbation theory. If the generator \(S_{+}\) is known at \((\ell-1)\)-loop order, the corresponding evolution kernel in the physical dimension can be fixed to \(\ell\)-loop accuracy (up to terms which are invariant with respect to the tree-level generators). In [1], it is shown that one can adapt the algebraic approach of [2] to multi-loop vacuum integrations. To this end, the special \(V_{z,x}\)-operation has been introduced. We now dwell on the demonstration of the \(V_{z,x}\)-operation that transforms a given Green function into vacuum integrations. Notice that our \(V_{z,x}\)-operation can be regarded as the inverse of the operation presented in [3]. Let us consider the second class \(B\) of the mixed diagrams \((III)\), which contributes as presented in Eqn. (14). As mentioned, only this type of diagram can include higher orders of \([\lambda]\). The contributions of the other three diagrams, \((I)\), \((II)\) and the class \(A\) of \((III)\), see Eqns. (7), (11) and (13), are fixed and remain unchanged at their definite orders of \([\lambda]\). We now define the \(V_{z,x}\)-procedure as \[\overline{\Gamma}^{(2)}_{(III),\,B}[\varphi_{c}]=\frac{1}{C^{(2)}(D)}\,V_{z,x}\Big{\{}G^{(2)}_{\mathcal{O}}(x_{1},x_{2};z_{1},z_{2})\Big{\}}, \tag{17}\] where \[[\lambda^{\,(a)}]^{3D/2-4}\,\overline{\Gamma}^{(2)}_{(III),\,B}[\varphi_{c}]=\Gamma^{(2)}_{(III),\,B}[\varphi_{c}], \tag{18}\] \[C^{(2)}(D)=(D/3-1)\,\frac{\Gamma(D/2)}{\Gamma(6-D)}\,\frac{\prod\limits_{\kappa_{1}=4}^{6}\left(3D/2-\kappa_{1}\right)}{\prod\limits_{\kappa_{2}=2}^{4}\left(D/2-\kappa_{2}\right)} \tag{19}\] and \[V_{z,x}\Big{\{}G^{(2)}_{\mathcal{O}}(x_{1},x_{2};z_{1},z_{2})\Big{\}}\equiv V_{z}\Big{\{}V_{x}\Big{[}G^{(2)}_{\mathcal{O}}(x_{1},x_{2};z_{1},z_{2})\Big{]}\Big{\}}\stackrel{{\rm def}}{{=}} \tag{20}\] \[\int d^{D}z_{1}\,d^{D}z_{2}\Delta_{F}(z_{1}-z_{2})\Big{[}\int d^{D}x_{1}d^{D}x_{2}\delta(x_{1}-x_{2})\widehat{\square}_{x_{2}}\,\Big{\{}G^{(2)}_{\mathcal{O}}(x_{1},x_{2};z_{1},z_{2})\Big{\}}\Big{]}.\] In Eqn. (20), to \([\lambda]^{2}\)-order of the \(\varphi^{4}\)-interaction, the Green function \(G^{(2)}_{\mathcal{O}}(x_{1},x_{2};z_{1},z_{2})\) reads \[G^{(2)}_{\mathcal{O}}(x_{1},x_{2};z_{1},z_{2})=\langle 0|T\eta(x_{1})\eta(x_{2})\,\mathcal{O}(z_{1},z_{2})\Big{[}[\lambda]\int d^{D}y\eta^{4}(y)\Big{]}^{2}|0\rangle, \tag{21}\] with the inserted non-local operator \(\mathcal{O}(z_{1},z_{2})=\eta(z_{1})\eta(z_{2})\). The coupling constant denoted as \([\lambda]\) absorbs the combinatorial factor, which is irrelevant for our consideration. Figure 1: The diagrams contributing up to the order of \([\lambda]^{4}\). We are now in a position to prove the statement expressed by Eqn. (17). We begin with the momentum representation (\(p\)-space), where the Green function \(G_{\mathcal{O}}^{(2)}(x_{1},x_{2};z_{1},z_{2})\) is represented by the diagram depicted in Fig. 3 and takes the form \[G_{\mathcal{O}}^{(2)}(x_{1},x_{2};z_{1},z_{2})=\] \[\int(d^{D}k_{1})\,(d^{D}k_{2})e^{-ik_{1}x_{1}+ik_{2}x_{2}}S(k_{1})S(k_{2})\int(d^{D}k_{3})\,(d^{D}k_{4})e^{+ik_{3}z_{1}-ik_{4}z_{2}}S(k_{3})S(k_{4})\] \[\times\int(d^{D}p)\,S(p)S(p+k_{1}-k_{3})\,\delta^{(D)}\big{(}k_{1}+k_{4}-k_{2}-k_{3}\big{)}.
\tag{11}\] After some algebra, we obtain that \[G_{\mathcal{O}}^{(2)}(x_{1},x_{2};z_{1},z_{2}) = G(1,1)\frac{\Gamma(4-D)}{\Gamma(2-D/2)}\int_{0}^{1}d\mu(\alpha)\int(d^{D}k_{1})\,(d^{D}k_{2})e^{-ik_{1}(x_{1}-z_{12}^{\alpha_{1}})+ik_{2}(x_{2}-z_{21}^{\alpha_{2}})} \tag{12}\] \[\times S(k_{1})S(k_{2})\big{[}\mathbb{B}(k_{1},k_{2},\alpha)\big{]}^{D-4},\] where \(z_{12}^{\alpha}=\overline{\alpha}z_{1}+\alpha z_{2}\), and \[\mathbb{B}(k_{1},k_{2},\alpha)=\alpha_{2}\overline{\alpha}_{2}(k_{2}-k_{1})^{2}+\alpha_{3}\overline{\alpha}_{3}k_{1}^{2}+\alpha_{2}\alpha_{3}(k_{2}-k_{1})k_{1}. \tag{13}\] In Eqn. (12), the integration measure in \(\alpha\)-space is given by \[d\mu(\alpha)=d\alpha_{1}\,d\alpha_{2}\,d\alpha_{3}\,\delta\left(1-\sum_{i=1}^{3}\alpha_{i}\right)\,\alpha_{3}^{1-D/2}. \tag{14}\] Then we apply the \(V_{x}\)-operator to the Green function, which results in \[V_{x}\Big{[}G_{\mathcal{O}}^{(2)}(x_{1},x_{2};z_{1},z_{2})\Big{]}\equiv\int d^{D}x_{1}d^{D}x_{2}\delta(x_{1}-x_{2})\widehat{\square}_{x_{2}}\left[G_{\mathcal{O}}^{(2)}(x_{1},x_{2};z_{1},z_{2})\right]=\] \[G(1,1)\frac{\Gamma(4-D)}{\Gamma(2-D/2)}\int_{0}^{1}\frac{d\mu(\alpha)}{\big{[}\alpha_{3}\overline{\alpha}_{3}\big{]}^{4-D}}\int(d^{D}k_{1})\,\frac{e^{+i\alpha_{3}k_{1}(z_{1}-z_{2})}}{[k_{1}^{2}]^{5-D}}. \tag{15}\] Ultimately, the action of the \(V_{z}\)-operator leads to the following representation: \[V_{z}\Big{\{}V_{x}\Big{[}G_{\mathcal{O}}^{(2)}(x_{1},x_{2};z_{1},z_{2})\Big{]}\Big{\}}\equiv\int d^{D}z_{1}\,d^{D}z_{2}\Delta_{F}(z_{1}-z_{2})\Big{\{}V_{x}\Big{[}G_{\mathcal{O}}^{(2)}(x_{1},x_{2};z_{1},z_{2})\Big{]}\Big{\}}\] \[=G(1,1)\frac{\Gamma(4-D)}{\Gamma(2-D/2)}\frac{\Gamma(D/2-4)\Gamma(D-2)}{\Gamma(3D/2-6)}\frac{\delta(6-3D/2)}{\Gamma(6-D)}. \tag{16}\] Multiplying Eqn. (16) by \([\lambda^{(a)}]^{3D/2-4}\) and then inserting the result into the _r.h.s._ of Eqn. (17), we obtain the corresponding effective potential, which is represented by Eqn. (15). In Eqn. (16), the dimension \(D\in\mathbb{R}\), _i.e._ \(D=4-2\epsilon\), but the delta-function \(\delta(6-3D/2)=\delta(3\epsilon)\) extracts only the highest singularity in the \(\epsilon\)-expansion of the pre-delta \(\Gamma\)-function combination. As the next step, including the prefactor \([\lambda^{(a)}]^{3D/2-4}\), we have to make an expansion in \(\epsilon\). As a result, we derive the following expression \[\Gamma^{(2)}_{(III),\,B}[\varphi_{c}] = [\lambda^{(a)}]^{3D/2-4}\,\overline{\Gamma}^{(2)}_{(III),\,B}[\varphi_{c}]=\frac{[\lambda^{(a)}]^{3D/2-4}}{C^{(2)}(D)}\,V_{z,x}\Big{\{}G^{(2)}_{\mathcal{O}}(x_{1},x_{2};z_{1},z_{2})\Big{\}}\Big{|}_{\epsilon\to 0} \tag{12}\] \[\stackrel{{\mathcal{R}^{\prime}}}{{=}} [\lambda^{(a)}]^{2}\Big{\{}\Gamma^{(2)\,\text{sing}}_{(III),\,B}[\varphi_{c}]+\Gamma^{(2)\,\text{fin}}_{(III),\,B}[\varphi_{c}]\Big{\}},\] where \(\mathcal{R}^{\prime}\) implies that the \(\mathcal{R}^{\prime}\)-operation has been used, and \[\Gamma^{(2)\,\text{sing.}}_{(III),\,B}[\varphi_{c}]=\frac{c_{2}^{(III),\,B}}{\epsilon^{2}}+\frac{c_{1}^{(III),\,B}}{\epsilon}, \tag{13}\] \[\Gamma^{(2)\,\text{fin.}}_{(III),\,B}[\varphi_{c}]=\tilde{c}^{(III),\,B}_{0}+\tilde{c}^{(III),\,B}_{1}\ln\frac{[\lambda^{(a)}]}{\mu^{2}}+\tilde{c}^{(III),\,B}_{2}\ln^{2}\frac{[\lambda^{(a)}]}{\mu^{2}},\] (14) \[\ln\frac{[\lambda^{(a)}]}{\mu^{2}}=\ln\frac{m^{2}}{\mu^{2}}+\sum_{n=1}^{\infty}\frac{[\lambda]^{n}}{n}\Big{(}\frac{\varphi_{c}^{2}}{m^{2}}\Big{)}^{n}. \tag{15}\] In Eqn. (12), the local singular part of the effective potential should be cancelled by the introduction of corresponding counterterms (the \(Z\)-factor in the RG-method).
Meanwhile, the non-local singular terms related to the \(\epsilon\)-expansion of \([\lambda^{(a)}]^{\ell\,\epsilon}\) have to be eliminated by the \(\mathcal{R}^{\prime}\)-procedure. Notice that Eqns. (11) and (12) can be generalized up to any order of \(\lambda\) (multi-loop accuracy). ## 4 Vacuum \(V_{z,x}\)-operation and anomalous dimensions The advantage of representation (11) is that, assuming the anomalous dimension (evolution kernel) of the non-local operator Green functions is known at \(\ell\)-loop accuracy, one can replace the direct calculation of the evolution kernels associated with the corresponding vacuum diagrams at \((\ell+1)\)-loop accuracy by a rather simple (mostly algebraic) \(V_{z,x}\)-operation. Indeed, let us return again to the non-local operator Green function at the second order of \([\lambda]\), see Eqn. (21). We now extract the anomalous dimension given by the coefficient \(C_{1}^{(G_{\mathcal{O}})}\) if the corresponding \(Z^{(G_{\mathcal{O}})}\)-factor takes the form \[Z^{(G_{\mathcal{O}})}=1+\sum_{n=1}^{\infty}\frac{C_{n}^{(G_{\mathcal{O}})}(\lambda)}{\epsilon^{n}}. \tag{16}\] For our aim, it is enough to make the replacement \(\mathbb{B}(k_{1},k_{2},\alpha)\to 1\) because the \(\epsilon\)-expansion of \(\mathbb{B}^{\epsilon}\) does not affect our extraction procedure. So, after the \(\epsilon\)-expansion of the \(\Gamma\)-combination, we get that \[G^{(2),1/\epsilon}_{\mathcal{O}}(x_{1},x_{2};z_{1},z_{2})=\frac{1}{2\epsilon}\,\int_{0}^{1}d\mu(\alpha)\Delta_{F}(x_{1}-z_{12}^{\alpha_{1}})\Delta_{F}(z_{21}^{\alpha_{2}}-x_{2}). \tag{24}\] We stress that, at the given accuracy, this \(1/\epsilon\)-term is the highest singular term because of the nonlocality of the considered operator \(\mathcal{O}\). The _r.h.s._ of Eqn. (24) (modulo the coupling constant prefactor, which is omitted) coincides with the corresponding expression for the evolution kernel in \(\varphi^{4}\)-theory [2], provided suitable replacements are made. Indeed, we have the following relation \[G_{\mathcal{O}}^{(2),1/\epsilon}(x_{1},x_{2};z_{1},z_{2})\Big{|}^{\Delta_{F}(x_{1}-z_{12}^{\alpha_{1}})\to\varphi(z_{12}^{\alpha_{1}})}_{\Delta_{F}(z_{21}^{\alpha_{2}}-x_{2})\to\varphi(z_{21}^{\alpha_{2}})}=\big{[}\mathbb{H}_{12}\,\mathcal{O}\big{]}(z_{1},z_{2}) \tag{25}\] which represents the needed matching. It is interesting to notice that, since \(\mathbb{B}(k_{1},k_{2},\alpha)\) accumulates the full information on the loop structure of the given Green function, the same result for the \(1/\epsilon\)-term as in Eqn. (24) can be derived in \(p\)-space, working directly with the local operator case and with the sub-divergence subtraction. To show that, we arrange the so-called momentum flux in the diagram of Fig. 3 with the help of \(k_{1}=0\) and the local limit, \(z_{1}=z_{2}\). As a result, the diagram becomes of propagator type and the loop integration is given by \[\mathcal{X}^{\text{loc.}}(k_{2})=\int(d^{D}k_{3})\frac{1}{k_{3}^{2}(k_{3}+k_{2})^{2}}\Big{\{}\int(d^{D}p)\frac{1}{p^{2}(p-k_{3})^{2}}-\frac{1}{\epsilon}\Big{\}}=\] \[G(1,1)G(1,3-D/2)\left[k_{2}^{2}\right]^{D-4}-\frac{1}{\epsilon}G(1,1)\left[k_{2}^{2}\right]^{D/2-2}\approx\frac{1}{\epsilon}\Big{(}\frac{5}{2}-2\Big{)}+...=\frac{1}{2\epsilon}+..., \tag{26}\] where the ellipses imply the other possible terms in the \(\epsilon\)-expansion.
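The \(1/(2\epsilon)\) coefficient claimed in Eqn. (26) can be verified symbolically. The sketch below assumes the standard normalization of the massless one-loop master integral, \(\int(d^{D}k)/[(k^{2})^{a}((k-p)^{2})^{b}]=G(a,b)\,(p^{2})^{D/2-a-b}\) (the overall \(\pi^{D/2}\) factor is irrelevant here); the \(\ln k_{2}^{2}\) terms cancel between the two contributions, the simple-pole coefficient comes out as \(1/2\), and the remaining local double pole is the piece removed by the counterterm/\(\mathcal{R}^{\prime}\)-subtraction:

```python
import sympy as sp

eps, L = sp.symbols('epsilon L')   # L stands for log(k_2^2)
D = 4 - 2*eps

def G(a, b):
    # massless one-loop master integral G(a, b) (assumed normalization)
    return (sp.gamma(a + b - D/2) * sp.gamma(D/2 - a) * sp.gamma(D/2 - b)
            / (sp.gamma(a) * sp.gamma(b) * sp.gamma(D - a - b)))

# X^loc of Eqn. (26): two-loop integral with its subdivergence subtracted.
X = (G(1, 1) * G(1, 3 - D/2) * sp.exp((D - 4)*L)    # [k_2^2]^(D-4)
     - G(1, 1) / eps * sp.exp((D/2 - 2)*L))         # [k_2^2]^(D/2-2)

print(sp.series(X, eps, 0, 0))
# -> -1/(2*epsilon**2) + 1/(2*epsilon) + O(1): the log terms cancel and the
#    simple pole is 1/(2*epsilon), in agreement with Eqn. (26).
```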
Further, applying the \(V_{z,x}\)-operation to \(G_{\mathcal{O}}^{(2)}(x_{1},x_{2};z_{1},z_{2})\), we obtain that \[V_{z,x}\Big{\{}G_{\mathcal{O}}^{(2),1/\epsilon}(x_{1},x_{2};z_{1},z_{2})\Big{\}}=\] \[\frac{1}{2\epsilon}\int d\mu(\alpha)\int(d^{D}z_{1})(d^{D}z_{2})\Delta_{F}(z_{1}-z_{2})\Delta_{F}\big{(}\alpha_{3}(z_{2}-z_{1})\big{)}=\] \[\frac{1}{4\epsilon}\delta^{(D)}(0)\,\delta(\epsilon) \tag{27}\] where it has been used that \[\int(d^{D}z_{1})(d^{D}z_{2})\Delta_{F}(z_{1}-z_{2})\Delta_{F}\big{(}\alpha_{3}(z_{2}-z_{1})\big{)}=\delta^{(D)}(0)\frac{1}{\alpha_{3}^{2}}\delta(2-D/2). \tag{28}\] The prefactor \(\delta^{(D)}(0)\) finally gives the space-time volume \(V\times T\), which connects the effective action with the effective potential. In Eqn. (27), the vacuum integration (28), _i.e._ the integrations over \(z_{1}\) and \(z_{2}\), leads to \(\delta(2-D/2)\), which behaves as \(1/\varepsilon\) within the sequential approach if \(D=4-2\varepsilon\) (see for example [8; 9]). Therefore, the \(V_{z,x}\)-operation results in the coefficient \(c_{2}\) for the effective potential, see Eqn. (3.13). As the last step, in order to obtain the anomalous dimension (the coefficient \(c_{1}\) of Eqn. (3.13)) for the effective potential \(\Gamma^{(2)}[\varphi_{c}]\), we have to use the corresponding pole relations, _i.e._ \[c_{1}=\mathrm{P}_{\Gamma}(c_{2}),\quad c_{2}=V_{z,x}\Big{\{}\left[\mathbb{H}_{12}\mathcal{O}\right](z_{1},z_{2})\Big{\}}, \tag{4.7}\] where \(\mathrm{P}_{\Gamma}\) denotes the corresponding pole relations for the effective potential. ### The pole relations To conclude this section, we study the important consequences of the pole relations, which not only relate the different coefficients \(c_{i}\), but also can fix the arbitrary constants \(a_{(i)}\), see Eqn. (2.8). We begin with a schematic derivation of the pole relations. The pole relations for \(\Gamma[\varphi_{c}]\) stem from the \(\mu\partial_{\mu}\)-differentiation of the effective potential \(Z\)-factors, \(Z_{m}\) and \(Z_{\lambda}\), defined as \[\Gamma_{0}[\varphi_{c}]=Z^{\Gamma[\varphi_{c}]}\Gamma[\varphi_{c}],\quad Z^{\Gamma[\varphi_{c}]}=1+\sum_{n=1}^{\infty}\frac{C_{n}([\lambda])}{\varepsilon^{n}}. \tag{4.8}\] Having calculated the \(\mu\partial_{\mu}\)-derivative of the \(Z\)-factor, see Eqn. (4.8), we obtain that \[\Big{\{}1+\sum_{n=1}^{\infty}\frac{C_{n}([\lambda])}{\varepsilon^{n}}\Big{\}}\gamma_{\Gamma[\varphi_{c}]}=\beta_{\lambda}([\lambda])\partial_{\lambda}\sum_{n=1}^{\infty}\frac{C_{n}([\lambda])}{\varepsilon^{n}}\,\,\,\text{with}\,\,\gamma_{\Gamma[\varphi_{c}]}\equiv\mu\partial_{\mu}\ln Z^{\Gamma[\varphi_{c}]} \tag{4.9}\] and, as a consequence, we have the following pole relations \[\text{at}\,\,\varepsilon^{0} : \quad\gamma_{\Gamma[\varphi_{c}]}=-\lambda\partial_{\lambda}C_{1}(\lambda), \tag{4.10}\] \[\text{at}\,\,\varepsilon^{-1} : \quad C_{1}(\lambda)\gamma_{\Gamma[\varphi_{c}]}=-\lambda\partial_{\lambda}C_{2}(\lambda)+\beta_{\lambda}\partial_{\lambda}C_{1}(\lambda),\quad\text{etc.} \tag{4.11}\] On the one hand, Eqn. (4.11) gives the definition of the \(\mathrm{P}_{\Gamma}\)-operator, see Eqn. (4.7). On the other hand, the pole relations allow us to fix the uncertainties associated with the \(\delta(0)\)-singularity, see Eqn. (2.8). To demonstrate this, let us first write down the charge and mass terms of \(\Gamma[\varphi_{c}]\), see Eqn.
(2.16), in the form of \[\Gamma_{2,4}[\varphi_{c}]=\begin{pmatrix}m_{0}^{2}\varphi_{c}^{2}/2\\ \mu^{2\varepsilon}\lambda_{0}\varphi_{c}^{4}/4!\end{pmatrix}\Big{\{}1+\begin{pmatrix}d_{(I)}^{\{m\}}\\ d_{(I)}^{\{\lambda\}}\end{pmatrix}\lambda\,Z_{\lambda}(\lambda)+\begin{pmatrix}d_{(III,A)}^{\{m\}}\\ d_{(III,A)}^{\{\lambda\}}\end{pmatrix}\lambda^{\,2}\,Z_{\lambda}^{2}(\lambda)\,G(1,1)\] \[+\begin{pmatrix}d_{(III,B)}^{\{m\}}\\ d_{(III,B)}^{\{\lambda\}}\end{pmatrix}\lambda^{\,3}\,Z_{\lambda}^{3}(\lambda)\,G(1,1)G(1,2-D/2)\Big{\}}\frac{\delta(0)}{\Gamma(2)}\] \[\equiv\begin{pmatrix}Z_{m}^{-1}(\lambda)\,\,m_{0}^{2}\varphi_{c}^{2}/2\\ Z_{\lambda}^{-1}(\lambda)\,\,\mu^{2\varepsilon}\lambda_{0}\varphi_{c}^{4}/4!\end{pmatrix}, \tag{4.12}\] where \(d_{(i)}^{\{m;\lambda\}}\) denote the numerical coefficients associated with the mass and charge terms of the given diagrams, and the charge has been re-expressed via the renormalized quantity in the diagram contributions which form the \(Z^{-1}\)-factor [10]. We stress that, in contrast to QED/QCD, the renormalization of \(\Gamma\,[\varphi_{c}]\) is given by the same set of diagrams. We recall that, in QED/QCD, the renormalization of mass and fields is ensured by the two-point Green functions, while the charge is renormalized with the help of the three-point Green functions, etc. For brevity, it is convenient to rewrite Eqn. (4.12) as \[\Gamma[\varphi_{c}]=\sum_{i=(I)\dots}\lambda^{n_{(i)}}\,Z_{\lambda}^{n_{(i)}}(\lambda)\,F^{(i)}(\Gamma;\epsilon)\delta(0), \tag{4.13}\] where the \(Z_{\lambda}\)-factor is represented by Eqn. (2.17) and \[F^{(I)}(\Gamma;\epsilon)=a_{0}+a_{1}\epsilon+a_{2}\epsilon^{2}+o(\epsilon^{3}), \tag{4.14}\] \[F^{(III,A)}(\Gamma;\epsilon)=\frac{b_{-1}}{\epsilon}+b_{0}+b_{1}\epsilon+b_{2}\epsilon^{2}+o(\epsilon^{3}),\] (4.15) \[F^{(III,B)}(\Gamma;\epsilon)=\frac{c_{-1}}{\epsilon}+c_{0}+c_{1}\epsilon+c_{2}\epsilon^{2}+o(\epsilon^{3}). \tag{4.16}\] In Eqn. (4.13) we take into account the possibility of the dimensional extension for the pre-delta functions mentioned above. At the order of \([\lambda]^{2}\), focusing on the \(1/\epsilon^{2}\)- and \(1/\epsilon\)-singularities, the pole relations of Eqns. (4.10) and (4.11) generate the following relation \[C_{22}^{\{\lambda\}}=\big{(}C_{11}^{\{\lambda\}}\big{)}^{2} \tag{4.17}\] which leads to the relation given by \[a_{(III,A)}^{\{\lambda\}}b_{-1}=\big{(}a_{(I)}^{\{\lambda\}}\big{)}^{2}a_{0}^{2}, \tag{4.18}\] where \(b_{-1}\) and \(a_{0}\) are known from direct calculations, while \(a_{(III,A)}^{\{\lambda\}}\) and \(a_{(I)}^{\{\lambda\}}\) have to be determined. Without losing generality, one can normalize the effective action/potential so as to get \(a_{(I)}^{\{\lambda\}}=1\) for the diagram of \(I\)-type. Hence, from Eqn. (4.18), the constant \(a_{(III,A)}^{\{\lambda\}}\) can be readily fixed. In a similar way, the pole relations for the \(Z_{m}\)-factor give \[2C_{22}^{\{m\}}=\big{(}C_{11}^{\{m\}}\big{)}^{2}+C_{11}^{\{\lambda\}}C_{11}^{\{m\}} \tag{4.19}\] and, hence, the uncertainty-fixing relation takes the form \[2a_{(III,A)}^{\{m\}}b_{-1}=a_{(I)}^{\{m\}}\big{(}a_{(I)}^{\{\lambda\}}+a_{(I)}^{\{m\}}\big{)}a_{0}^{2}. \tag{4.20}\] In Eqn. (4.20), the coefficients \(a_{(i)}^{\{m\}}\) and \(a_{(i)}^{\{\lambda\}}\) have been taken to be different. However, there is an extra condition which re-expresses one coefficient through another.
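Relation (4.17) can be illustrated independently of the diagrammatic input. With a generic normalization \(\lambda_{0}=\mu^{2\epsilon}\lambda Z_{\lambda}\) (our assumption; the numerical factors in the text may differ by convention), finiteness of the \(\beta\)-function as \(\epsilon\to 0\) forces the second pole coefficient to be the square of the first; a sympy sketch:

```python
import sympy as sp

# Finiteness of beta = mu d(lambda)/d(mu) reproduces the pole relation
# C_22 = (C_11)^2 of Eqn. (4.17), assuming lambda_0 = mu^(2 eps) lambda Z.
eps, lam, c11, c22 = sp.symbols('epsilon lambda c_11 c_22')

Z = 1 + c11*lam/eps + c22*lam**2/eps**2
beta = -2*eps*lam*Z / sp.diff(lam*Z, lam)      # from mu d(lambda_0)/d(mu) = 0

beta3 = sp.expand(sp.series(beta, lam, 0, 4).removeO())
print(sp.factor(beta3.coeff(eps, -1)))
# -> 4*lambda**3*(c_22 - c_11**2): the 1/eps part of beta vanishes
#    iff c_22 = c_11**2.
```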
Based on the stationary phase method, we have the functional extremum condition \(\delta\Gamma\,[\varphi_{c}]/\delta\varphi_{c}=0\), which leads to \(m^{2}+\lambda\varphi_{c}^{2}/6=0\). As a result, the coefficients \(a_{(i)}^{\{m\}}\) and \(a_{(i)}^{\{\lambda\}}\) cannot be independent. As the next step, concentrating on the order of \([\lambda]^{3}\), from Eqns. (4.10) and (4.11) we can readily calculate the coefficient giving the anomalous dimension. We obtain that \[C_{12}^{\{\lambda\}}=\frac{3}{7}\frac{C_{23}^{\{\lambda\}}}{C_{11}^{\{\lambda\}}} \tag{4.21}\] which also defines the operation \(\mathrm{P}_{\Gamma}\) of Eqn. (4.7). ## 5 Generalization of the \(V_{z,x}\)-operation to higher orders In the preceding section, we demonstrated the \(V_{z,x}\)-operation applied to the non-local operator Green function where the standard interaction vertex \((c)\) has been taken into account up to the order of \([\lambda]^{2}\). In this section, we present the generalization of the \(V_{z,x}\)-operation to higher orders of \([\lambda]\). Let \(G_{\mathcal{O}}^{(n\geq 2)}(x_{1},x_{2};z_{1},z_{2})\) be the non-local operator Green function corresponding to a higher order of \([\lambda]\). Focusing on the singular part of this function, we have 6 Footnote 6: We use the following shorthand notation: \((x_{i},z_{j})=(x_{1},x_{2};z_{1},z_{2})\). \[G_{\mathcal{O}}^{(n\geq 2)\,\text{sing.}}(x_{i};z_{j})=\sum_{k}G_{\mathcal{O}}^{(n\geq 2)}(x_{i};z_{j}|1/\epsilon^{k})\Rightarrow\frac{c_{k}^{G}}{\epsilon^{k}}+\frac{c_{k-1}^{G}}{\epsilon^{k-1}}+...+\frac{c_{1}^{G}}{\epsilon}+c_{0}^{G}+o^{G}(\epsilon). \tag{38}\] In the \(\epsilon\)-expansion, the prefactor \(C^{(n\geq 2)}(D)\) of Eqn. (19), being a combination of \(\Gamma\)-functions 7, has the form of a series, Footnote 7: The exact form of \(C(D)\) depends on the order of \([\lambda]\) \[C^{(n\geq 2)}(D)=1+o_{1}(\epsilon), \tag{39}\] where \(o_{1}(\epsilon)\) denotes a certain series in \(\epsilon\) depending on the order; the exact form of the series is irrelevant for our consideration, see below. With these, Eqn. (17) for an arbitrary order takes the following form \[\overline{\Gamma}^{(k\geq 2)}[\varphi_{c}]=\frac{1}{C^{(k\geq 2)}(D)}\,V_{z,x}\Big{\{}G_{\mathcal{O}}^{(n\geq 2)}(x_{i};z_{j})\Big{\}} \tag{40}\] or, in other words, we have \[\frac{c_{k+1}^{\Gamma}}{\epsilon^{k+1}}+\frac{c_{k}^{\Gamma}}{\epsilon^{k}}+...+\frac{c_{1}^{\Gamma}}{\epsilon}+c_{0}^{\Gamma}+o^{\Gamma}(\epsilon)=\] \[\Big{\{}1+o_{1}(\epsilon)\Big{\}}\,V_{z,x}\Big{\{}\frac{c_{k}^{G}}{\epsilon^{k}}+\frac{c_{k-1}^{G}}{\epsilon^{k-1}}+...+\frac{c_{1}^{G}}{\epsilon}+c_{0}^{G}+o^{G}(\epsilon)\Big{\}}\] \[\equiv\Big{\{}1+o_{1}(\epsilon)\Big{\}}\,\Big{\{}\frac{c_{k+1}^{VG}}{\epsilon^{k+1}}+\frac{c_{k}^{VG}}{\epsilon^{k}}+...+\frac{c_{1}^{VG}}{\epsilon}+c_{0}^{VG}+o^{VG}(\epsilon)\Big{\}}. \tag{41}\] From Eqn. (41), concentrating on the highest singular terms, one can see that \[c_{k+1}^{\Gamma}=c_{k+1}^{VG}. \tag{42}\] Of course, such a simple relation is valid only for the highest singular terms, due to the universal form of Eqn. (39). For the other singular terms, one needs the exact form of the expansion, including the finite terms with respect to \(\epsilon\).
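The content of Eqn. (42) is elementary to verify: multiplying a Laurent series in \(\epsilon\) by a prefactor of the form \(1+o_{1}(\epsilon)\) shifts all subleading pole coefficients but leaves the highest one intact. A two-line sympy check:

```python
import sympy as sp

# A prefactor {1 + o_1(eps)} leaves only the highest pole coefficient unchanged.
eps, a1, ck, ckm1 = sp.symbols('epsilon a_1 c_k c_km1')

prod = sp.expand((1 + a1*eps) * (ck/eps**2 + ckm1/eps))
print(prod)
# -> c_k/eps**2 + (c_km1 + a_1*c_k)/eps + a_1*c_km1: the 1/eps**2 term
#    keeps the coefficient c_k, cf. Eqn. (42).
```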
If the anomalous dimension of \(G_{\mathcal{O}}^{(n\geq 2)}(x_{i};z_{j})\), _i.e._ the coefficient \(c_{1}^{G}\), is somehow known, we use the pole relations to transform the coefficient \(c_{1}^{G}\) into the coefficient \(c_{k}^{G}\) of the highest singular term and, then, we immediately get the highest singular term of \(\overline{\Gamma}^{(k\geq 2)}[\varphi_{c}]\) with the help of the \(V_{z,x}\)-operation, see Eqn. (42). Afterwards, we again use the pole relations for \(\overline{\Gamma}^{(k\geq 2)}[\varphi_{c}]\) to derive the coefficient \(c_{1}^{\Gamma}\). That is, we have the following chain of operations: \[c_{1}^{G}\stackrel{{\text{P}_{G}}}{{\longrightarrow}}c_{k}^{G}\stackrel{{ V_{z,x}}}{{\longrightarrow}}c_{k+1}^{\Gamma}\stackrel{{\text{P}_{\Gamma}}}{{\longrightarrow}}c_{1}^{\Gamma}. \tag{43}\] As a result, we can derive the anomalous dimension for the effective potential provided we know the anomalous dimension of the corresponding non-local operator Green function. It is important that this procedure is an almost algebraic one, which is very useful for higher orders of corrections. ## 6 Conclusion As a starting point, the generating functional for the scalar \(\varphi^{4}\)-theory has been reformulated in an alternative way where the mass parameter has been considered as a sort of interaction. In our approach, the new \(V_{z,x}\)-operation, which transforms the Green function of a non-local operator into the corresponding vacuum integration, has been described in detail. With the help of this operation and the pole relations, the anomalous dimension of the effective potential at \(\ell\)-loop accuracy can be almost algebraically derived from the known anomalous dimension of the corresponding non-local operator at \((\ell-1)\)-loop accuracy. For the effective potential calculations, a special role has been played by the treatment of the \(\delta(0)\)-singularity/uncertainty within the sequential approach. ## Acknowledgements We thank M. Hnatic, A. Manashov, S.V. Mikhailov and L. Szymanowski for very useful discussions.
2305.15623
The Nonlinear Theory of Sound
We prove the existence of ``pure tone'' nonlinear sound waves of all frequencies. These are smooth, space and time periodic, oscillatory solutions of the $3\times3$ compressible Euler equations in one space dimension. Being perturbations of solutions of a linear wave equation, they provide a rigorous justification for the centuries old theory of Acoustics. In particular, Riemann's celebrated 1860 proof that compressions always form shocks holds for isentropic and barotropic flows, but for generic entropy profiles, shock-free periodic solutions containing nontrivial compressions and rarefactions exist for every wavenumber $k$.
Blake Temple, Robin Young
2023-05-25T00:05:24Z
http://arxiv.org/abs/2305.15623v1
# The nonlinear theory of sound ###### Abstract We prove the existence of "pure tone" nonlinear sound waves of all frequencies. These are smooth, space and time periodic, oscillatory solutions of the \(3\times 3\) compressible Euler equations in one space dimension. Being perturbations of solutions of a linear wave equation, they provide a rigorous justification for the centuries-old theory of Acoustics. In particular, Riemann's celebrated 1860 proof that compressions always form shocks holds for isentropic and barotropic flows, but for generic entropy profiles, shock-free periodic solutions containing nontrivial compressions and rarefactions exist for every wavenumber \(k\). ## 1. Introduction We prove the existence of \(1\)-dimensional space and time periodic solutions of the \(3\times 3\) compressible Euler equations, thereby providing the first existence proof for globally bounded solutions of Euler's equations exhibiting sustained nonlinear interactions and large total variation. By this, parallel to the classical linear theory of sound, there really is a nonlinear theory of sound, even though it had been thought not to exist since Stokes and Riemann considered the problem in the mid-19th century. Specifically, we prove that under an arbitrarily small perturbation of any given entropy profile, the equations obtained by linearization about stationary (weak) solutions of constant pressure and zero velocity admit \(k\)-mode solutions for every wave number \(k\), and each such linear sinusoidal \(k\)-mode perturbs to a one-parameter family of _pure tone_ space and time periodic solutions of the nonlinear equations admitting the same frequency in space, but with time periods \(T_{k}\) depending on \(k\), determined by the linearized equation. By a _pure tone_ nonlinear solution we mean a solution of the \(3\times 3\) compressible Euler equations which agrees with a linear sinusoidal \(k\)-mode solution of the linearized equations, to leading order in the perturbation parameter. This nonlinear theory of sound requires both genuine nonlinearity _and_ varying entropy profiles. Thus the \(2\times 2\) theory of shock wave formation for barotropic equations of state (\(p=p(\rho)\), including isentropic and isothermal flows), established by Riemann in 1860, and made definitive in the Glimm-Lax decay result of 1970, is _not_ indicative of what happens in the full \(3\times 3\) system of compressible Euler when the entropy is not constant. For \(3\times 3\) systems, there are two competing physical effects at play: on the one hand, waves steepen due to genuine nonlinearity, which is the dependence of sound speed on the state. On the other hand, entropy variations cause nonlinear interaction effects which are manifest as _echoes_, resulting in a scattering of waves which mitigates the steepening, and which can ultimately prevent shock formation even when compressions are present. Our results show that periodic entropy variations can act to bring compression and rarefaction of waves into perfect balance and thus prevent shock formation. ### Scientific Context Our results resolve a long-standing open problem in the theory of Acoustics which dates to the mid-nineteenth century. Namely, how is music possible when nonlinearities always drive oscillatory solutions into shock waves?
Recall that in the 1750's, Euler developed the correct extension of Newton's laws of motion to the continuum, and then linearized the equations to produce the wave equation which D'Alembert had earlier derived to describe infinitesimal displacements of a vibrating string. By this Euler solved arguably the greatest intellectual problem of his time - he gave a _mechanical explanation for music_: vibrations of an instrument produce sinusoidal oscillations in air pressure, frequencies of which correspond to the pure tones of sound we hear when, say, a violin is played. But in the mid-19th century, mathematicians including Stokes and Riemann discovered a problem with this theory: solutions containing compressions could not be sustained, and this would destroy the musical tones of the linear theory. After Challis identified the issue in 1848, Stokes in his paper _"On a difficulty in the theory of sound"_ [33], showed that oscillations break down in finite time, and proposed a resolution using shock waves. In 1860 Riemann proved that a compression _always_ produces shock waves in isentropic flows. A century later, this was made definitive in the celebrated Glimm-Lax decay result of 1970, which established that space periodic solutions of isentropic Euler, or any genuinely nonlinear barotropic system, necessarily form shocks and decay to average at rate \(1/t\). At that time, it was believed that the same result was true for \(3\times 3\) non-isentropic Euler as well. Thus Euler's original question, why does music resonate so beautifully, remained unexplained at the level of the fully nonlinear equations - until now. In this context, our results establish that the theory of Acoustics and music based on linear modes of propagation is _not_ inconsistent with nonlinear evolution. Persistence of sound waves is inherent in compressible Euler, but only if the entropy is _non-constant_, so that echoes are present. A region of shock-free periodic sound wave propagation opens up around every _non-resonant_, non-constant entropy profile. The echoes, which are nonlinear waves scattered by the entropy profile, are on the order of the incident nonlinear waves for large entropy jumps, and nonlinear periodic solutions overcome Glimm-Lax shock formation via characteristics moving ergodically through the periods, balancing compression and rarefaction, _on average_. This is a new point of view for shock-free wave propagation in compressible Euler: instead of Riemann invariants propagating as constant coordinates along characteristics (as in \(2\times 2\) isentropic and barotropic systems [32]), in this new \(3\times 3\) regime, every characteristic cycles through a dense set of values of each Riemann invariant. Our results raise the interesting question as to whether this shock-free regime is the actual regime of ordinary sounds of speech and musical tones heard in nature and everyday life. Glimm-Lax theory is based on approximating smooth solutions by weak shock waves, but as far as we can tell, only strong shocks are actually observed in nature. Our results and the success of the field of Acoustics indicate that this regime of nonlinear shock-free wave propagation is more fundamental to ordinary sounds and musical tones than the formation and propagation of "weak shock waves". Equal-temperament tuning of the piano makes frequencies irrationally related, which is precisely our _non-resonance_ condition, sufficient to imply perturbation of linear pure tones to nonlinear pure tones.
The essential physical ideas in this paper were first understood by the authors within the context of the theory of nonlinear wave interactions introduced by Glimm and Lax in [9, 10]. The interaction of a nonlinear (acoustic) wave of strength \(\gamma\) with an entropy jump \([s]\) produces an "echo", which is a reflected acoustic wave, whose strength is \(O([s]\,\gamma)\), on the order of the incident acoustic wave [35]. On the other hand, the interaction of any two (weak) acoustic waves is linear, with an error which is _cubic_ in wave strength. We began this project with the insight that _the echoes produced by finite entropy jumps are at the critical order sufficient to balance rarefaction and compression_. By this we might also expect that a theory of nonlinear superposition of the "pure tone" nonlinear sound waves constructed here could also produce perturbations which provide general shock-free solutions of the nonlinear equations, although these would no longer be periodic in time. Mathematically, this raises the question as to whether, by the same mechanism, quasi-periodic mixed modes of the linearized theory also perturb to nonlinear solutions. Regarding the mathematical methods employed, our results demonstrate that the problem of expunging resonances inherent in the Nash-Moser method can be overcome when enough symmetries are present to impose periodicity by _projection_, rather than by _periodic return_. Thus taking into account all of the physical symmetries in the problem has led to a dramatic simplification of the mathematical techniques and tools needed. ### Statement of Results The compressible Euler equations are the generalization of Newton's laws of motion to a continuous medium, in the absence of viscous or thermal dissipation. In a spatial (Eulerian) frame, they consist of equations representing conservation of mass, momentum and energy, and in one space dimension take the form \[\begin{gathered}\rho_{t}+\big{(}\rho\,u\big{)}_{X}=0,\\ (\rho\,u)_{t}+\big{(}\rho\,u^{2}+p\big{)}_{X}=0,\\ \big{(}\tfrac{1}{2}\,\rho\,u^{2}+\rho\,e\big{)}_{t}+\big{(}\tfrac{1}{2}\,\rho\,u^{3}+\rho\,e\,u+u\,p\big{)}_{X}=0.\end{gathered} \tag{1.1}\] Here \(X\) is the spatial variable and \(u\) is the fluid velocity, while \(\rho\), \(p\) and \(e\) are the fluid density, pressure and specific energy, respectively. These constitutive variables are related through the Second Law of Thermodynamics, \[de=\Theta\,ds-p\,dv, \tag{1.2}\] in which \(v=1/\rho\) is the specific volume, \(\Theta\) is the temperature, and \(s\) is the specific entropy. For reversible solutions, which do not contain shocks, (1.2) is equivalent to the _entropy equation_, \[(\rho\,s)_{t}+\big{(}\rho\,u\,s\big{)}_{X}=0, \tag{1.3}\] which states that the entropy is preserved along particle paths [6]. To rewrite the equations in _Lagrangian_ form, introduce the _material coordinate_ \(x\) by \[x=\int_{0}^{X}\rho(\chi)\;d\chi, \tag{1.4}\] which after manipulation yields the equivalent system \[\begin{gathered} v_{t}-u_{x}=0,\\ u_{t}+p_{x}=0,\\ \big{(}\tfrac{1}{2}\,u^{2}+e\big{)}_{t}+\big{(}u\,p\big{)}_{x}=0,\end{gathered} \tag{1.5}\] see [6, 32]. In this frame, for reversible solutions, in which \(p\) and \(u\) are globally continuous, the entropy equation takes the simple form \[s_{t}=0, \tag{1.6}\] which is solved by \(s=s(x)\).
Because the solutions we construct here are time reversible, we may assume that \(s=s(x)\) has been prescribed and can then drop the third energy equation, so that the system is fully described as \[\begin{gathered} v\big{(}p,s(x)\big{)}_{t}-u_{x}=0,\\ u_{t}+p_{x}=0,\end{gathered} \tag{1.7}\] in which the specific volume \(v=v(p,s)\) is our explicitly given constitutive law [6, 32]. We can eliminate \(u\) in (1.7) to obtain the nonlinear wave equation \[v\big{(}p,s(x)\big{)}_{tt}+p_{xx}=0. \tag{1.8}\] We make the standard physical assumption that \(v_{p}(p,s)<0\), which implies that (1.8) is hyperbolic and thus a wave equation. We will use the Lagrangian frame throughout the paper, but because systems (1.1) and (1.5) are equivalent, our results also apply in the Eulerian frame. We note that the _stationary solution_ given by \[s=s(x),\qquad p(x,t)=\overline{p},\qquad u(x,t)=0,\] is a time reversible exact solution of the system (1.5) or (1.7), even when \(s(x)\) and \(\rho(x)\) are discontinuous, and we refer to this as a _quiet state_. Our periodic sound wave solutions are perturbations of this quiet state in \(p\) and \(u\), whose leading order is a continuous \(k\)-mode solution of the linearization of (1.7) or (1.8) around this quiet state. Because we are regarding the entropy profile \(s=s(x)\) as given, and we wish to find time periodic solutions, we treat the material variable \(x\) as the evolution variable. This allows us to describe the initial data and corresponding solutions at any fixed \(x\) in terms of Fourier series in time, and in particular, allows an efficient description of the linearized operator. Fundamental to our analysis is the observation that the symmetry condition \(p\) even and \(u\) odd as functions of time \(t\) is preserved under both nonlinear and linearized evolution in \(x\). Our final breakthrough was the realization that a corresponding symmetry in \(x\) then allows for a _reflection principle_ for generating a periodic tiling of the plane from solutions of a reduced boundary value problem. The reduced problem is to solve the compressible Euler equations, or equivalently the \(2\times 2\) system (1.7), evolving in \(x\), from an initial condition \[u(0,\cdot)=0,\qquad p(0,\cdot)\ \text{even}, \tag{1.9}\] to \(x=\ell\), where we impose the boundary condition \[\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{T/4}\,u(\ell,\cdot)=0,\qquad \tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{T/4}\,p(\ell,\cdot)=0. \tag{1.10}\] Here \(T\) is the time period and we have defined the _reflection operator_\(\mathcal{R}\) and _shift operator_\(\mathcal{S}^{T/4}\) by \[\mathcal{R}\,f(t):=f(-t),\quad\text{and}\quad\mathcal{S}^{T/4}\,f(t):=f\big{(}t -T/4\big{)}, \tag{1.11}\] respectively, so that \(\tfrac{\mathcal{I}-\mathcal{R}}{2}\) is the projection onto the odd part of a function. **Theorem 1.1**.: _A solution of the nonlinear boundary value problem (1.7), (1.9), (1.10) determines a space and time periodic solution to the compressible Euler equations via a reflection symmetry principle._ We solve the boundary value problem as a perturbation from quiet state solutions. In order to do so, we must first develop a detailed understanding of the linearized problem. We first accomplished this for the simplest non-trivial entropy profile consisting of a single jump between two constant entropy states. Our previous work [36, 39, 37] led to the understanding that the key nonlinear effect is balancing of compression and rarefactions to avoid shock formation. 
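To make the operators in (1.11) concrete, the following numpy sketch (purely illustrative, not taken from the paper) realizes \(\mathcal{R}\), \(\mathcal{S}^{T/4}\) and the odd-part projection \(\tfrac{\mathcal{I}-\mathcal{R}}{2}\) on a uniformly sampled \(T\)-periodic trace, and confirms that the quarter-period shift turns the odd mode \(\sin(2\pi t/T)\) into an even function annihilated by the projection:

```python
import numpy as np

# Operators of Eqn. (1.11) on a T-periodic trace sampled at t_j = j*T/N.
T, N = 2*np.pi, 256
t = np.arange(N) * T / N

reflect = lambda f: np.roll(f[::-1], 1)        # R f(t) = f(-t) on the periodic grid
shift_quarter = lambda f: np.roll(f, N // 4)   # S^{T/4} f(t) = f(t - T/4)
odd_part = lambda f: (f - reflect(f)) / 2      # (I - R)/2

u = np.sin(2*np.pi*t/T)                        # odd trace with omega*T = 2*pi
print(np.max(np.abs(odd_part(u))))                  # ~1: sin is purely odd
print(np.max(np.abs(odd_part(shift_quarter(u)))))   # ~0: shifted trace is even
```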
Identifying the simplest periodic pattern that balances rarefaction and compression gave the intuition that led to the discovery of the boundary conditions (1.9) and (1.10) and the corresponding reflection principle for generating periodic solutions. In this paper, we develop these ideas first in this simplest case, and then successively generalize to piecewise constant profiles and finally to general entropy profiles. In doing the general case, we realized that the boundary conditions are self-adjoint, which allows us to analyze the linearization as a Sturm-Liouville system, and the previous cases can be incorporated into this general Sturm-Liouville framework. Here we state our results for the general case. Given a quiet state with entropy profile \(s(x)\) based at constant pressure \(\overline{p}\), we define the _inverse linearized wavespeed_\(\sigma=\sigma(x)\) to be \[\sigma(x):=\sqrt{-v_{p}\bigl{(}\overline{p},s(x)\bigr{)}}, \tag{1.12}\] recalling that we are evolving in the material variable \(x\). Define the set of allowable entropy profiles \(s=s(x)\) to be \[\mathcal{B}:=\Bigl{\{}s\in L^{1}[0,\ell]\;\Bigl{|}\;\sigma\in L^{1},\;\log \sigma\in BV\Bigr{\}},\] together with the \(L^{1}\) topology. Note that \(s(x)\) is general, and we do not require it to be periodic over \([0,\ell]\). For any \(s\in\mathcal{B}\), linearizing the compressible Euler equations (1.7) around the quiet state \(\overline{p}\) yields the linear wave equation \[P_{x}+U_{t}=0,\qquad U_{x}+\sigma^{2}(x)\,P_{t}=0, \tag{1.13}\] or equivalently \[P_{xx}-\sigma^{2}(x)\,P_{tt}=0,\] in which \(x\) is the evolution variable. We separate variables with the ansatz \[P(x,t):=\varphi(x)\,\mathrm{c}(\omega\,t),\qquad U(x,t):=\psi(x)\,\mathrm{s}( \omega\,t), \tag{1.14}\] where \(\mathrm{c}\) and \(\mathrm{s}\) denote cosine and sine, respectively. Together with the self-adjoint boundary conditions (1.9) and (1.10) this yields a Sturm-Liouville eigenvalue problem. In a solution of (1.14), the _eigenfrequency_\(\omega\) is the square root of the corresponding Sturm-Liouville eigenvalue. **Theorem 1.2**.: _For \(s\in\mathcal{B}\), there is a monotone increasing set \(\omega_{k}\) of eigenfrequencies with \(\omega_{k}/k\) bounded, and corresponding eigenfunctions \(\varphi_{k}\) and \(\psi_{k}\), such that the functions_ \[P_{k}(x,t):=\varphi_{k}(x)\,\mathrm{c}(\omega_{k}\,t),\qquad U_{k}(x,t):= \psi_{k}(x)\,\mathrm{s}(\omega_{k}\,t),\] _solve the linear wave equation (1.13) together with boundary conditions (1.9) and (1.10). We call these pure tone solutions of the linearized equation._ Our main result is that under a generic nonresonance assumption, _each of_ these linearized pure tone solutions perturbs to a one-parameter family of pure tone solutions of the _nonlinear_ compressible Euler equations, with the same space and time periods, parameterized by amplitude. 
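To illustrate the Sturm-Liouville structure behind Theorem 1.2, the following finite-difference sketch computes approximate eigenfrequencies \(\omega_{k}\) of \(\varphi^{\prime\prime}+\omega^{2}\sigma^{2}(x)\varphi=0\) for a single-jump profile. Under the separated ansatz (1.14), the condition \(u(0,\cdot)=0\) of (1.9) forces \(\psi(0)=0\), i.e. \(\varphi^{\prime}(0)=0\); at \(x=\ell\) we use the Dirichlet stand-in \(\varphi(\ell)=0\), one consistent reading of (1.10) (our simplification, since the paper's condition involves the quarter-period shift):

```python
import numpy as np
from scipy.linalg import eigh

# Finite-difference Sturm-Liouville sketch: -phi'' = omega^2 sigma(x)^2 phi
# on [0, ell], with phi'(0) = 0 (Neumann) and phi(ell) = 0 (Dirichlet).
ell, N = 1.0, 400
h = ell / N
x = (np.arange(N) + 0.5) * h                  # cell centers
sigma = np.where(x < ell/2, 0.7, 1.3)         # profile with a single entropy jump

A = (2.0*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2   # -d^2/dx^2
A[0, 0] = 1.0 / h**2                          # ghost-point Neumann at x = 0
A[-1, -1] = 3.0 / h**2                        # ghost-point Dirichlet at x = ell
B = np.diag(sigma**2)

omega2, _ = eigh(A, B)                        # generalized eigenvalue problem
print(np.sqrt(omega2[:5]))                    # lowest eigenfrequencies omega_k
```

Consistent with Theorem 1.2, the computed \(\omega_{k}\) grow roughly linearly in \(k\), so that \(\omega_{k}/k\) stays bounded.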
We say a linearized \(k\)-mode is _nonresonant_ if its frequency \(\omega_{k}\) is not a rational multiple of any other eigenfrequency, \[\frac{\omega_{j}}{\omega_{k}}\notin\mathbb{Q},\quad\text{for all}\quad j\neq k.\] **Theorem 1.3**.: _For each constant pressure \(\overline{p}>0\) and nonresonant linearized \(k\)-mode, there exists \(\overline{\alpha}_{k}>0\) such that the \(k\)-mode perturbs to a periodic solution of the nonlinear compressible Euler equations with the same space and time periods, taking the form_ \[p(x,t) =\overline{p}+\alpha\,\varphi_{k}(x)\,\mathrm{c}(\omega_{k}\,t)+O(\alpha^{2}),\] \[u(x,t) =\alpha\,\psi_{k}(x)\,\mathrm{s}(\omega_{k}\,t)+O(\alpha^{2}),\] _for each \(|\alpha|<\overline{\alpha}_{k}\). Here \(p\) and \(u\) are the pressure and velocity in the Lagrangian frame, \(\alpha\) is the amplitude, used as a perturbation parameter, and the modulations \(\varphi_{k}\) and \(\psi_{k}\) are the linearized eigenfunctions of the Sturm-Liouville problem._ Note that the amplitude bound \(\overline{\alpha}_{k}>0\) for which the perturbation is proven to hold depends on \(k\) through the entire eigenfrequency structure of the linearized problem. Our next theorem shows that for _generic_ non-constant entropy profiles, _all_ linearized \(k\)-modes are nonresonant, and so every \(k\)-mode perturbs to periodic sound wave solutions of the nonlinear compressible Euler equations. **Theorem 1.4**.: _The set of completely nonresonant entropy profiles, which consists of those entropy profiles for which every \(k\)-mode is nonresonant, and so perturbs, is generic in the sense that it is residual, or second Baire category, in \(\mathcal{B}\)._ Recall that a set is residual if its complement is the countable union of nowhere dense sets. Moreover, when we restrict to the set of piecewise constant entropy profiles, the completely nonresonant set also has full measure, so that almost every piecewise constant entropy profile with \(n\) jumps is such that _all_ linearized \(k\)-modes perturb to nonlinear periodic sound wave solutions of the compressible Euler equations. Because the leading order terms of our nonlinear pure tone solutions solve the wave equation, an immediate corollary is the first _rigorous_ mathematical justification for the field of Acoustics, which has used the wave equation to study sound propagation since the time of Euler. **Corollary 1.5**.: _The use of the linear wave equation_ \[\frac{1}{c^{2}(x)}\,p_{tt}-p_{xx}=0,\qquad c(x):=\Big{(}-\frac{\partial v}{\partial p}\Big{)}^{-1/2},\] _as an approximation for the propagation of one-dimensional sound waves is mathematically justified, for nonconstant entropy._ We note that no such statement can be made for constant entropy, as has been known since Riemann. Indeed, if the entropy is constant, the isentropic case, or if the fluid is barotropic, \(p=p(\rho)\), then all modes are _fully resonant_, in that they are all rational multiples of each other. Thus Theorem 1.3 does not apply for any mode, and our results are consistent with the celebrated results of Riemann, Lax and Glimm-Lax, which establish that spatially periodic solutions to any \(2\times 2\) genuinely nonlinear system always form shock waves and subsequently decay to average at rate \(1/t\) [30, 18, 10]. In particular, our results imply that the complete \(3\times 3\) system of compressible Euler is fundamentally different from the isentropic \(2\times 2\) system.
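Since nonresonance is a number-theoretic condition, a numerical spectrum can only be screened for it up to finite precision; still, a simple rationality test illustrates the dichotomy between the fully resonant isentropic spectrum and a generic one (an illustrative sketch with made-up frequency lists):

```python
from fractions import Fraction

# Flag (near-)rational ratios omega_j/omega_k up to a denominator cutoff,
# as a finite-precision proxy for the nonresonance condition.
def resonant_pairs(omegas, max_den=50, tol=1e-9):
    hits = []
    for j, wj in enumerate(omegas):
        for k, wk in enumerate(omegas):
            if j < k:
                r = Fraction(wj / wk).limit_denominator(max_den)
                if abs(wj / wk - float(r)) < tol:
                    hits.append((j, k, r))
    return hits

print(resonant_pairs([1.0, 2.0, 3.0]))              # fully resonant (isentropic-like)
print(resonant_pairs([1.0, 2**0.5, 3.14159265]))    # no low-order resonances found
```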
**Corollary 1.6**.: _Generically, space periodic solutions of the \(3\times 3\) compressible Euler equations will not decay to constant (or quiet state), and in particular, solutions containing compressions need not form shock waves._ The breakthrough in solving the problem was the realization that we can impose periodicity within a smaller more symmetric class of solutions by _projection_, rather than by _periodic return_. The standard way to impose periodicity is by periodic return, by which we pose data \(U_{0}\) at \(x=0\), evolve nonlinearly in \(x\) through one period by \(\mathcal{N}\), and set \(\mathcal{N}\,U_{0}=U_{0}\). This yields a nonlinear equation of the form \[\mathcal{F}_{1}(U_{0}):=\left(\mathcal{N}-\mathcal{I}\right)U_{0}=0,\] and we wish to solve for \(U_{0}\). Our initial attempt to do so was by a Newton-Nash-Moser iteration argument, as in [7, 24, 8, 29, 41]. The Newton method requires inversion of nearby linearized operators \(D\mathcal{F}_{1}(U^{(k)})[\cdot]\) at each approximation \(U^{(k)}\) of the iteration. In our previous work, we identified the main technical difficulty of this approach, namely, the decay rate of the small divisors does not depend continuously on the constant state \(\overline{p}\) and the resonances, at which divisors vanish, cannot be controlled or predicted. In [35], we formulated a strategy for expunging parameters in order to effectively control the small divisors, and carried this out in a scalar warm-up problem. Although this approach is plausible, implementing it is a daunting project because of the very delicate estimates required to expunge small divisors. In this paper we identify symmetry properties of the nonlinear solution which allow us to impose periodicity by _projection_. The idea is that within a smaller more symmetric class of solutions, periodicity can be imposed by a projection consistent with the nonlinear symmetries. With this reformulation the problem is effectively changed from periodic return \(\mathcal{N}\,U_{0}=U_{0}\) to periodicity by projection, \[\mathcal{F}_{2}(U_{0}):=\Pi\,\mathcal{N}\,U_{0}=0,\] where again \(\mathcal{N}\) is nonlinear evolution, and \(\Pi\) is a half-space projection. A remarkable further simplification is that this nonlinear operator now also factors as \[\mathcal{F}_{2}(U_{0})=\Pi\,\mathcal{L}\left(\mathcal{L}^{-1}\,\mathcal{N} \right)U_{0}=0,\] where \(\mathcal{L}\) is linearized evolution by \(D\mathcal{N}(\overline{p})[\cdot]\), and \(\mathcal{L}^{-1}\,\mathcal{N}\) is a _bounded invertible_ nonlinear operator satisfying \[D\big{(}\mathcal{L}^{-1}\,\mathcal{N}\big{)}(\overline{p})=\mathcal{I},\] the identity. Here \(D\mathcal{F}_{2}(\overline{p})=\Pi\,\mathcal{L}\) is the _fixed_ operator obtained by linearizing \(\mathcal{F}_{2}\) around the quiet state \((\overline{p},0)\), and this _uniformly_ encodes the small divisors. In this way, we are able to avoid Nash-Moser altogether and treat the problem as a regular bifurcation problem which can be solved using the implicit function theorem. This simplification allows us to dramatically extend the analysis to general entropy profiles and wave numbers, subject only to a non-resonance condition which is generically satisfied. Although the background pressure \(\overline{p}\) is perturbed to \(\overline{p}+\alpha\,z\) in solving the problem, the linear factor \(\Pi\,\mathcal{L}\), which is the linearization at \(\overline{p}\), is constant, and hence so are the small divisors, independent of the perturbation parameter \(\alpha\). 
### History of the Problem

We give a contextual history of the problem, focusing on the development of the theory of sound and its relation to that of continuum mechanics and shock waves. Our main sources are several works of Truesdell [44, 45], together with the collections of Lindsay [19, 20] and Johnson and Cheret [15]; see also [12, 27, 25]. The wave-like nature of sound and the fact that it is caused by vibrations of solid objects such as bells and musical instruments was known by the ancients. It was also known that these vibrations caused corresponding vibrations in the air or other medium, which were then sensed by the eardrum as sounds of various sorts. Newton's Principia, which consists of three books, is regarded as the origin of modern theories of mechanics. In Book I, Newton sets out his famous Laws of Motion and uses his Calculus to solve many problems of the dynamics of a single small body or point-mass. Newton realized that in order to obtain accurate results, he needed to take into account resistive forces, which led him to begin a development of fluid mechanics in Book II, and Book III revolved around his Law of Gravitation. In Book II, Newton attempted to describe both the dynamics of continuous media and the propagation of sound waves, but he was ultimately unable to get these quite right. In the century after the appearance of the Principia, many of the problems of continua and sound raised by Newton were understood, driven especially by the Bernoullis and Euler. This led to d'Alembert's derivation of the wave equation for a vibrating string in 1748, and to Euler's equation of 1751, expressing conservation of linear momentum, which is \[\rho\,\frac{Du}{Dt}=-\nabla p+\rho\,b, \tag{1.15}\] in which \(u\) is the velocity, \(\rho\) the density, \(p\) the pressure, and \(b\) is the body force per unit mass. Euler correctly defined the pressure and identified the internal force as the negative pressure gradient, but his real triumph was to get the convective derivative \[\frac{D}{Dt}=\frac{\partial}{\partial t}+u\cdot\nabla\] right. In 1755, Euler coupled this with the continuity equation, which expresses conservation of mass, \[\frac{D\rho}{Dt}+\rho\,\nabla\cdot u=0. \tag{1.16}\] If the pressure \(p(\rho)\) is given, (1.15) and (1.16) close, and together constitute _Euler's equations_ for a barotropic fluid. In 1759, Euler expressed these equations in a material coordinate and linearized to get the wave equation of d'Alembert. In so doing, Euler provided a _mechanical explanation_ for the propagation of sound waves. The vibration of a solid object, such as a bell, drum or string, causes the surrounding fluid (that is, air) to vibrate, and these vibrations propagate through the fluid medium according to (1.15) and (1.16), and are in turn sensed at the ear. Given the central role that music played in Europe at that time, this explanation of musical tones in terms of linear sinusoidal oscillation and superposition was one of the greatest intellectual achievements of that era. One of the big questions of the late 18-th and early 19-th centuries was to understand the constitutive relation \(p=p(\rho)\), from which the speed of sound could be calculated. Poisson in 1808 developed what is now known as the method of characteristics, and obtained implicit equations for the solution of (1.15), (1.16) in one space dimension.
In 1848, Challis pointed out that in some cases, Poisson's solution breaks down, which led Stokes, also in 1848 [33], to propose discontinuous solutions containing shock waves. In 1860, Earnshaw introduced simple waves, and Riemann independently showed that _any_ non-trivial solution to Euler's equations suffers gradient blowup; indeed,1 Footnote 1: from B. Riemann, "The Propagation of Planar Air Waves of Finite Amplitude", 1860, transl. in [15] _The compression wave, that is, the portions of the wave where the density decreases in the direction of propagation, will accordingly become increasingly more narrow as it progresses, and finally goes over into compression shocks; but the width of the expansion or release wave grows proportional with time._ A large part of the later 19-th century was spent in understanding thermodynamics and the roles of energy and entropy, by which the correct shock speeds could be found. This culminated in the development of the Rankine-Hugoniot conditions in the 1880's, which allowed for the successful treatment of discontinuous solutions and shock waves. The upshot is that a third equation, namely conservation of energy, \[\frac{\partial}{\partial t}\Big{(}\frac{1}{2}\,\rho\,u^{2}+\rho\,e\Big{)}+\nabla\cdot\Big{(}\frac{1}{2}\,\rho\,u^{2}\,u+\rho\,e\,u+p\,u\Big{)}=0, \tag{1.17}\] together with the Second Law of Thermodynamics, \[\Theta\,ds=de+p\,dv,\qquad v=\frac{1}{\rho},\] is needed to fully describe the system. Here \(\Theta\), \(e\) and \(s\) are the temperature, specific internal energy, and specific entropy, respectively, and for smooth solutions, (1.17) is equivalent to the _entropy equation_, \[\frac{\partial}{\partial t}(\rho\,s)+\nabla\cdot(\rho\,s\,u)=0. \tag{1.18}\] If discontinuities are present, (1.18) is replaced by an inequality, which becomes a selection criterion for shock waves. In the first decades of the twentieth century, researchers investigated more general constitutive laws and effects of viscosity, while the 1930's focussed attention on shock waves. In their renowned monograph [6], Courant and Friedrichs collected most of the results of shock wave theory which was developed up to and during the Second World War, and which included early numerical simulations. The modern theory of conservation laws was initiated by Peter Lax in [17], in which he considered the abstract system \[U_{t}+F(U)_{x}=0,\qquad U\in\mathbb{R}^{n},\quad F:\mathbb{R}^{n}\to\mathbb{R}^{n}, \tag{1.19}\] for which he defined weak solutions and solved the Riemann problem. In his celebrated 1965 paper [9], Glimm proved the global existence of weak solutions to (1.19), provided the total variation of the initial data \(U_{0}\) is small enough. Shortly afterwards, Glimm and Lax showed that weak solutions of \(2\times 2\) systems decay as \(1/t\) in [10], being driven by the decay of shock waves. Thereafter the prevailing view in the community was that solutions to generic hyperbolic systems should always form shocks and decay to a constant state. For systems of more than two equations, F. John, and later Tai-Ping Liu, proved that shocks form given sufficiently compressive initial data [14, 21], and this was later improved by Geng Chen and collaborators [2, 4, 3].
The first indications that there may be non-decaying solutions of the \(3\times 3\) Euler equations arose from the method of weakly nonlinear geometric optics in the mid-80s [23, 26, 13], with further suggestions coming from numerical studies [31, 46], and exact solutions to certain simplified model systems [47, 48]. In [35], the authors extended Glimm's existence theory to large time existence for the \(3\times 3\) Euler equations, provided the initial data has (arbitrarily large) finite total variation and small amplitude. To our knowledge, this is the largest class of data for which large-time existence has been proved prior to the current work. In [36], the authors understood the physical phenomenon that may prevent shock formation, namely an echoing effect due to changes in entropy. That is, when the (linearly degenerate) entropy varies, nonlinear simple waves necessarily interact with this entropy field, resulting in a partial transmission and partial reflection of nonlinear waves. These reflected waves, or echoes, are then (to leading order) superimposed on all other nonlinear waves. If this superposition of waves fits a certain pattern, then compressions never steepen completely, and periodic solutions could then ensue. In [39, 37, 38], we described this structure in the linearized Euler equations, and expressed the problem of finding periodic solutions as a bifurcation problem, albeit with small divisors and a loss of derivative. In [40, 41], we then set up a Nash-Moser framework for solving this bifurcation problem. In [42, 43], we understood the effects of derivative loss and small divisors in a scalar model problem, and described a strategy to treat the small divisors uniformly.

### Structure of the paper

We briefly outline the layout of the rest of the paper. In Section 2, we recall the nonlinear equations, and state the periodicity problem as a projection for square wave entropy profiles. This mirrors the nonlinear effects and periodic structure developed in our earlier papers [36, 39, 38]. In Section 3 we linearize around a quiet state and calculate the small divisors for this simplified profile. In Section 4, we carry out the bifurcation argument for the lowest mode in the simplest case of a square wave entropy profile; this establishes the first nontrivial periodic solution of the compressible Euler equations. In Section 5 we allow the time period to vary, and repeat the bifurcation argument for all nonresonant \(k\)-modes, thus providing an infinite number of distinct periodic pure tone solutions for a nonresonant square wave entropy field. In Section 6 we extend the analysis to general piecewise constant entropy profiles and show that the set of fully nonresonant profiles has full measure and is residual. This establishes that generically, _all_ linearized \(k\)-modes perturb to pure tone solutions of the nonlinear problem. In Section 7 we further generalize to _arbitrary_ \(BV\) entropy profiles. For this, we reduce the corresponding linear problem to a Sturm-Liouville system and carry out the bifurcation argument in this most general case. This includes the statement that generic profiles are again fully nonresonant. Finally in Section 8, we describe and solve for the second derivative \(D^{2}\mathcal{E}\) of the nonlinear evolution operator, which requires the solution of an inhomogeneous linear system.
Although of interest in its own right because it demonstrates that nonlinear evolution in \(H^{s}\) is twice differentiable, this provides an essential step in the solution of the bifurcation problem used in Section 7.

## 2. The System

Our starting point is the compressible Euler equations in a Lagrangian frame, which is the \(3\times 3\) system in conservative form, \[v_{t}-u_{x}=0,\quad u_{t}+p_{x}=0,\quad\left(\tfrac{1}{2}\,u^{2}+e\right)_{t}+(u\,p)_{x}=0. \tag{2.1}\] Here \(x\) is the material coordinate and \(u\) is the Eulerian velocity, and the thermodynamic variables are specific volume \(v\), pressure \(p\) and specific internal energy \(e\). The system is closed by a _constitutive relation_ which satisfies the Second Law of Thermodynamics, \[de=\Theta\,ds-p\,dv,\quad\text{so that}\quad\Theta=e_{s}(s,v)\quad\text{and}\quad p=-e_{v}(s,v),\] where \(\Theta\) is the temperature and \(s\) the specific entropy. It follows that for classical smooth solutions, the third (energy) equation can be replaced by the simpler _entropy equation_, \[e_{t}+p\,v_{t}=0,\quad\text{or}\quad s_{t}=0.\] We initially consider the fundamental case of a polytropic ideal (or \(\gamma\)-law) gas, which is described by \[p=v^{-\gamma}\,e^{s/c_{v}},\quad\text{or}\quad v=p^{-1/\gamma}\,e^{s/c_{p}}, \tag{2.2}\] where \(\gamma=c_{p}/c_{v}\) is the ratio of specific heats, assumed constant; in Section 7 we generalize to an arbitrary equation of state. To start, we assume a piecewise constant entropy field, so that the system is _isentropic_ on finite \(x\)-intervals, and we use only the first two equations of (2.1), which yields a closed system in \(u\) and \(p\). We treat \(x\) as the evolution variable, so we write the equation as \[u_{x}-\left(p^{-1/\gamma}\,e^{s/c_{p}}\right)_{t}=0,\qquad p_{x}+u_{t}=0, \tag{2.3}\] and regard \(s\) as constant on each subinterval. In our earlier development, we introduced a non-dimensionalization which translates linear evolution into rotation, and thus finds the essential geometry of the solutions of the linearized wave equation. This allowed us to identify the simplest periodic wave structure, and the nonlinear functional which imposes periodicity. This rotation structure allowed us to explicitly understand the resonances and small divisors in the linearized operator. In our later development using Sturm-Liouville theory in Section 7 below, we again find the correct angle variable in which the evolution can be interpreted as rotation.

### Non-dimensionalization

We briefly recall the non-dimensionalization of the isentropic system (2.3), which was initially derived in [39]. The advantage in non-dimensionalizing is that the nonlinear evolution becomes independent of the constant value of the entropy. In this formulation the jump conditions in the rescaled variables are imposed at each entropy jump. The simplest weak solution which incorporates a given entropy profile is one in which \((u,p)=(u_{0},p_{0})\) is constant. By a Galilean transformation, we can take \(u_{0}=0\) without loss of generality, so our constant state is characterized by \(p_{0}\).
**Lemma 2.1**.: _On an interval on which the entropy \(s\) is constant, the rescaling_ \[w :=\Big{(}\frac{p}{p_{0}}\Big{)}^{\frac{\gamma-1}{2\gamma}}-1,\] \[\hat{w} :=\frac{\gamma-1}{2\sqrt{\gamma}}\,e^{-s/2c_{p}}\,p_{0}^{-\frac{\gamma-1}{2\gamma}}\,u,\] \[X :=\frac{1}{\sqrt{\gamma}}\,e^{s/2c_{p}}\,p_{0}^{-\frac{\gamma+1}{2\gamma}}\,x,\] _transforms the isentropic system (2.3) into the non-dimensional system_ \[\hat{w}_{X}+(1+w)^{-\nu}\,w_{t}=0,\qquad w_{X}+(1+w)^{-\nu}\,\hat{w}_{t}=0, \tag{2.4}\] _with \(\nu=\frac{\gamma+1}{\gamma-1}\). Conversely, any classical solution \((w,\hat{w})\) of (2.4) yields a solution of the isentropic system (2.3), in which the physical variables are given by_ \[p :=p_{0}\,\big{(}w+1\big{)}^{\frac{2\gamma}{\gamma-1}},\] \[u :=\frac{2\sqrt{\gamma}}{\gamma-1}\,e^{s/2c_{p}}\,p_{0}^{\frac{\gamma-1}{2\gamma}}\,\hat{w},\] \[x :=\sqrt{\gamma}\,e^{-s/2c_{p}}\,p_{0}^{\frac{\gamma+1}{2\gamma}}\,X.\]

Proof.: We begin by introducing a thermodynamic parameter \(h=h(p)\), so that \(p=p(h)\) is determined in such a way that the nonlinear wavespeed appears the same in both equations. Setting \(p=p(h)\) in (2.3) and manipulating yields \[u_{x}+\frac{1}{\gamma}\,e^{s/c_{p}}\,p^{-\frac{\gamma+1}{\gamma}}\,p^{\prime}(h)\,h_{t}=0,\qquad h_{x}+\frac{1}{p^{\prime}(h)}\,u_{t}=0,\] so we choose \(h\) such that \[\frac{1}{p^{\prime}(h)}=k_{1}\,p^{\prime}(h)\,p^{-\frac{\gamma+1}{\gamma}},\quad\text{or}\quad h^{\prime}(p)=k_{2}\,p^{-\frac{\gamma+1}{2\gamma}}.\] It suffices to take \[h:=p^{\frac{\gamma-1}{2\gamma}},\quad\text{so that}\quad p=h^{\frac{2\gamma}{\gamma-1}},\] which yields a wavespeed of \[\frac{k_{3}}{p^{\prime}(h)}=k_{4}\,h^{-\nu},\quad\text{with}\quad\nu:=\frac{\gamma+1}{\gamma-1},\] where \(k_{i}\) are appropriate constants. To non-dimensionalize, we proceed as follows: first, scale \(h\) by \(h_{0}\), then scale \(u\) by a constant (depending on \(h_{0}\) and \(s\)), and finally, rescale the material coordinate \(x\) to get the simplest possible nonlinear system. Thus, given constant \(p_{0}\), we set \[\widetilde{w}:=\left(\frac{p}{p_{0}}\right)^{\frac{\gamma-1}{2\gamma}},\quad\text{so}\quad p=p_{0}\,\widetilde{w}^{\frac{2\gamma}{\gamma-1}},\quad v=e^{s/c_{p}}\,p_{0}^{-1/\gamma}\,\widetilde{w}^{-\frac{2}{\gamma-1}},\] where we write \(\widetilde{w}\) to avoid confusion with the rescaled velocity \(\hat{w}\) below. Plugging in to (2.3), for classical solutions we get \[u_{x}+\frac{2}{\gamma-1}\,e^{s/c_{p}}\,p_{0}^{-1/\gamma}\,\widetilde{w}^{-\nu}\,\widetilde{w}_{t}=0,\] \[\widetilde{w}_{x}+\frac{\gamma-1}{2\gamma}\,p_{0}^{-1}\,\widetilde{w}^{-\nu}\,u_{t}=0,\] which we rewrite as \[\sqrt{\gamma}\,e^{-s/2c_{p}}\,p_{0}^{\frac{\gamma+1}{2\gamma}}\,\Big{(}\frac{\gamma-1}{2\sqrt{\gamma}}\,e^{-s/2c_{p}}\,p_{0}^{-\frac{\gamma-1}{2\gamma}}\,u\Big{)}_{x}+\widetilde{w}^{-\nu}\,\widetilde{w}_{t}=0,\] \[\sqrt{\gamma}\,e^{-s/2c_{p}}\,p_{0}^{\frac{\gamma+1}{2\gamma}}\,\widetilde{w}_{x}+\widetilde{w}^{-\nu}\,\Big{(}\frac{\gamma-1}{2\sqrt{\gamma}}\,e^{-s/2c_{p}}\,p_{0}^{-\frac{\gamma-1}{2\gamma}}\,u\Big{)}_{t}=0.\] From this it is evident how we should rescale \(u\) and \(x\): namely, we set \[\hat{w}:=\frac{\gamma-1}{2\sqrt{\gamma}}\,e^{-s/2c_{p}}\,p_{0}^{-\frac{\gamma-1}{2\gamma}}\,u, \tag{2.5}\] and rescale the material variable by \[x\mapsto X:=\frac{1}{\sqrt{\gamma}}\,e^{s/2c_{p}}\,p_{0}^{-\frac{\gamma+1}{2\gamma}}\,x,\] and as a final simplification, we set \(w:=\widetilde{w}-1\), so that our base constant state is the origin in the \((w,\hat{w})\) coordinates. Our system then becomes (2.4).
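As a quick numerical sanity check on Lemma 2.1 (a sketch of our own, with arbitrary test values of \(\gamma\) and \(p_{0}\)), the rescaling \(p\mapsto w\) and the stated inverse should compose to the identity:

```python
import numpy as np

# Round-trip check of the pressure rescaling in Lemma 2.1 for a gamma-law gas.
gamma, p0 = 1.4, 2.0                              # arbitrary test values
p = np.linspace(0.5, 5.0, 11)
w = (p / p0)**((gamma - 1) / (2 * gamma)) - 1     # rescaled thermodynamic variable
p_back = p0 * (1 + w)**(2 * gamma / (gamma - 1))  # inverse map from the lemma
assert np.allclose(p, p_back)
print(w[np.argmin(abs(p - p0))])                  # ~ 0 near p = p0, as expected
```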
For convenience, we revert to using \(x\) rather than \(X\) for the rescaled material variable. After this rescaling, on a region of constant entropy, in a Lagrangian frame, the Euler equations (2.4) can be written as the non-dimensional quasilinear \(2\times 2\) system \[\partial_{x}W+\widehat{\sigma}(w)\,H\,\partial_{t}W=0,\quad W(t,0)=W^{0}(t), \tag{2.6}\] with \[W=\left(\begin{array}{c}w\\ \hat{w}\end{array}\right),\quad H=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\quad\widehat{\sigma}(w)=(1+w)^{-\nu},\] and we denote evolution of the data \(W^{0}\) from \(x=0\) to \(x=\theta\) by \(\widetilde{\mathcal{E}}^{\theta}(W^{0})\). Here \(w\) and \(\hat{w}\) are the rescaled thermodynamic variable and Eulerian velocity, respectively, and the system is independent of the (constant) value of the entropy. Because we want to find solutions that are pure tones in time, we evolve in the material variable, and the data \(W^{0}\) is taken to be a periodic function of time with period \(T\), which we will initially take to be \(2\pi\). Since the entropy satisfies the equation \(s_{t}=0\), any entropy jump is stationary in the Lagrangian frame, and the effect of crossing the jump is described by the Rankine-Hugoniot conditions, \([u]=[p]=0\). In non-dimensional coordinates, this becomes \[w_{R}=w_{L},\quad\hat{w}_{R}=J\,\hat{w}_{L},\quad\text{or}\quad W_{R}=\mathcal{J}\,W_{L},\] where the linear operator \(\mathcal{J}\) is defined by \[\mathcal{J}\,W:=M(J)\,W,\quad\text{where}\quad M(J)=\left(\begin{array}{cc}1&0\\ 0&J\end{array}\right), \tag{2.7}\] and where \(J=e^{-[s]/2c_{p}}\), with \([s]:=s_{R}-s_{L}=s(x+)-s(x-)\) denoting the entropy jump. We note that although this description is exact for an ideal polytropic (\(\gamma\)-law) gas, given by (2.2), for a general constitutive law, the Rankine-Hugoniot condition takes the form \(p(v_{R},s_{R})=p(v_{L},s_{L})\). In [36, 39], the authors described the simplest structure of possible space and time periodic solutions of the compressible Euler equations which formally balance compression and rarefaction. These occur when there are exactly two entropy levels, with separate jumps between them, and they should satisfy the nonlinear equation \[\left(\mathcal{N}-\mathcal{I}\right)W=0,\quad\text{where}\quad\mathcal{N}:=\mathcal{S}^{T/2}\,\mathcal{J}^{-1}\,\widetilde{\mathcal{E}}^{\theta}\,\mathcal{J}\,\widetilde{\mathcal{E}}^{\widetilde{\theta}}, \tag{2.8}\] where the evolutions and jumps are as above, and \(\mathcal{S}^{T/2}\) denotes a half-period shift, \[\mathcal{S}^{T/2}\,W(t):=W(t-T/2).\] In this paper we find space and time periodic solutions based on a modification of (2.8) that takes advantage of symmetries preserved by nonlinear evolution in space, namely \(p\) even and \(u\) odd as functions of \(t\). The key to managing the small divisors in (2.8) is based on a reformulation tailored to this more symmetric class of allowable solutions. We call this new approach _periodicity by projection_, while we refer to (2.8) as describing _periodicity by periodic return_.
### Riemann Invariants and Symmetry

It is convenient to use Riemann invariants to describe the system; in terms of our rescaled variables, these are \[\begin{split}& y=w+\hat{w},\quad\hat{y}=w-\hat{w},\quad\text{so that}\\ & w=\frac{y+\hat{y}}{2},\quad\hat{w}=\frac{y-\hat{y}}{2}.\end{split} \tag{2.9}\] In Riemann invariants, the evolution equations (2.6) are \[\begin{split}&\partial_{x}\hat{y}-\widehat{\sigma}\big{(}\frac{\hat{y}+y}{2}\big{)}\,\partial_{t}\hat{y}=0,\\ &\partial_{x}y+\widehat{\sigma}\big{(}\frac{\hat{y}+y}{2}\big{)}\,\partial_{t}y=0,\end{split} \tag{2.10}\] and we write the state as \(U=(\hat{y},y)^{T}\). We let \(\mathcal{E}^{\theta}(U^{0})\) denote evolution of the initial data \(U^{0}\) from \(x=0\) to \(x=\theta\). It follows that \[\widetilde{\mathcal{E}}^{\theta}\mathcal{Q}=\mathcal{Q}\,\mathcal{E}^{\theta},\quad\text{where}\quad\mathcal{Q}=\left(\begin{array}{cc}1&1\\ -1&1\end{array}\right),\] and the two evolutions (2.6) in \(W\) and (2.10) in \(U\) are equivalent. Observe that the time symmetry \[w(x,\cdot)\text{ even,}\qquad\hat{w}(x,\cdot)\text{ odd} \tag{2.11}\] is respected by the nonlinear evolution (2.6). That is, if it holds at some \(x_{0}\), it remains true at any other \(x\). Translating this into Riemann invariants using (2.9), this is \[\begin{split}\hat{y}(x,-t)&=w(x,-t)-\hat{w}(x,-t)\\ &=w(x,t)+\hat{w}(x,t)=y(x,t).\end{split}\] It is convenient to express this in terms of a reflection operator: denote the action of reflecting time \(t\to-t\) by \(\mathcal{R}\), so that \[[\mathcal{R}\,v](t):=v(-t). \tag{2.12}\] In this notation, the \(w\) even, \(\hat{w}\) odd symmetry (2.11) becomes the condition \[\hat{y}(x,\cdot)=\mathcal{R}\,y(x,\cdot)\quad\text{for any}\quad x, \tag{2.13}\] and imposing this at any point \(x_{0}\) implies that it holds throughout the evolution. On the other hand, imposing (2.13) on \(y\) and \(\hat{y}\) and defining \(w\) and \(\hat{w}\) to be the even and odd parts of \(y\), respectively, implies the symmetry (2.11). This means that we can impose the symmetry (2.11) in terms of Riemann invariants by allowing \(y^{0}\) to be arbitrary and setting \(\hat{y}^{0}=\mathcal{R}\,y^{0}\). Similarly, once \(y(x,\cdot)\) is known, we use (2.13) to get \(\hat{y}(x,\cdot)\) and (2.9) to reconstruct the full solution \(U\). The upshot is that as long as the solutions of the \(2\times 2\) system (2.6) remain regular, and the even/odd symmetry (2.11) is imposed in the data, then the system (2.6) is equivalent to a single _nonlocal_ and nonlinear scalar equation, namely \[\partial_{x}y+\sigma(y)\,\partial_{t}y=0,\quad\text{where} \tag{2.14}\] \[\sigma(y):=\widehat{\sigma}\big{(}\tfrac{\mathcal{I}+\mathcal{R}}{2}y\big{)}=\big{(}1+\tfrac{\mathcal{I}+\mathcal{R}}{2}y\big{)}^{-\nu}.\] We can interpret this as a scalar transport equation, in which the nonlinear wavespeed \(\sigma\) is nonlocal because \(\tfrac{\mathcal{I}+\mathcal{R}}{2}\) is a nonlocal operator. Note that (2.14) is the second equation of (2.10); since \(\mathcal{R}\partial_{t}=-\partial_{t}\mathcal{R}\), we recover the first equation by simply applying reflection \(\mathcal{R}\) to (2.14) and using (2.13).

### Solution of the scalar equation at constant entropy

We briefly describe the nonlinear evolution given by the nonlocal scalar equation (2.14).
We treat it as a simple nonlinear transport problem, by solving along characteristics as usual. These are given by \[\frac{dt}{dx}=\sigma,\quad\text{along which}\quad\frac{dy}{dx}=0,\] so \(y\) is constant along characteristics, \(y(x,t)=y^{0}(t_{0})\). Note that in contrast to a scalar conservation law, we cannot conclude that the characteristics are straight lines; this is due to the nonlocality of the problem, because \(\sigma(x,t)\) is a function of both \(y(x,t)\) and \(y(x,-t)\). Note also that \(\sigma\) is always strictly positive. Denote the characteristic through the reference point \((x_{*},t_{*})\), parameterized by \(x\), by \(t=\tau_{x}:=\tau_{x}(x_{*},t_{*})\). Here the subscript is a position locator rather than a partial derivative. Then \(\tau_{x}\) satisfies the equation \[\frac{d\tau_{x}}{dx}=\sigma\big{(}y(x,\tau_{x})\big{)},\quad\tau_{x_{*}}=t_{*},\] and integrating gives \[\tau_{x}(x_{*},t_{*})=t_{*}+\int_{x_{*}}^{x}\sigma\big{(}y(\xi,\tau_{\xi}) \big{)}\ d\xi. \tag{2.15}\] This fully describes the characteristic field before gradient blowup, which is the regime we are working in. For later reference we record the group property of the characteristic field, \[\tau_{x_{*}}(x_{*},t_{*})=t_{*}\quad\text{and}\quad\tau_{\xi}\big{(}\eta,\tau _{\eta}(x_{*},t_{*})\big{)}=\tau_{\xi}(x_{*},t_{*}). \tag{2.16}\] The characteristic field yields the solution of the initial value problem (2.14), namely \[y(x,t)=y^{0}\big{(}\tau_{0}(x,t)\big{)}.\] This in turn defines the evolution operator \(\mathcal{E}^{\theta}(y^{0})\), namely \[\mathcal{E}^{\theta}(y^{0})(t)=y^{0}\big{(}\tau_{0}(\theta,t)\big{)},\quad \text{so that}\quad\mathcal{E}^{\theta}=\mathcal{S}_{\phi}, \tag{2.17}\] where the (linear) _shift operator_\(\mathcal{S}_{\phi}\) is defined by \[\mathcal{S}_{\phi}\big{[}y\big{]}:=y\circ\phi,\qquad\mathcal{S}_{\phi}\big{[} y\big{]}(t)=y\big{(}\phi(t)\big{)}, \tag{2.18}\] and where \(\phi=\tau_{0}(\theta,\cdot)\) is the (nonlinear) _shift_, implicitly given by \[\phi(t)=\tau_{0}(\theta,t)=t+\int_{\theta}^{0}\sigma\big{(}y(\xi,\tau_{\xi}) \big{)}\ d\xi. \tag{2.19}\] It follows that the nonlinear evolution \(\mathcal{E}^{\theta}(y^{0})=\mathcal{S}_{\phi}[y^{0}]\) can be regarded as acting linearly _once the characteristic field \(\phi\) is known_, while the nonlinearity (and nonlocality) is manifest in the determination of the characteristics. The above reasoning establishes the following lemma, which describes the evolution under equation (2.14). **Lemma 2.2**.: _The evolution operator \(\mathcal{E}^{\theta}\), which evolves data \(y^{0}\) through a spatial interval of width \(\theta\), is given by (2.17), where the shift \(\phi\) is a smooth function given implicitly by (2.19). This solution is valid and unique as long as the derivative \(\partial_{t}y\) remains finite, which holds as long as different characteristics don't intersect, that is, for values of \(\theta\) such that (2.15) can be solved uniquely for \(t_{*}\), uniformly for \(x\in[0,\theta]\)._ We note that the characteristics vary smoothly, because the wavespeed \(\sigma(y)\) is a smooth function of \(y\), and so can be expanded as needed. On the other hand, our prior work in [40] argues that one cannot expand the linear (composition) \(\mathcal{S}_{\phi}[y]\) in \(\phi\), because any such expansion up to \(k\)-th derivatives \(y^{(k)}\) cannot be controlled in a Nash-Moser iteration [41], because the error is \(O\big{(}y^{(k+1)}\big{)}\). 
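The following is a naive numerical sketch of Lemma 2.2 (discretization, grid sizes, and iteration scheme are all our own choices, not the paper's): it approximates the characteristic field by iterating the integral equation (2.15) to a fixed point on a periodic grid, and then reads off the evolution \(\mathcal{E}^{\theta}(y^{0})=y^{0}\circ\tau_{0}(\theta,\cdot)\).

```python
import numpy as np

# Fixed-point sketch for the nonlocal scalar equation (2.14):
#   y_x + sigma(y) y_t = 0,  sigma(y)(t) = (1 + (y(t) + y(-t))/2)**(-nu).
nu, T, theta = 3.0, 2*np.pi, 0.5
nt, nx = 256, 64
t = np.linspace(0.0, T, nt, endpoint=False)
x = np.linspace(0.0, theta, nx)
dx = x[1] - x[0]

y0 = lambda s: 0.05 * np.cos(s)                   # small even data
val = lambda f, s: np.interp(s, t, f, period=T)   # periodic interpolation

y = np.tile(y0(t), (nx, 1))                       # initial guess on the (x, t) grid
for _ in range(30):
    y_new = np.empty_like(y)
    y_new[0] = y0(t)
    for i in range(1, nx):
        tau = t.copy()                            # characteristics through (x_i, t)
        for k in range(i, 0, -1):                 # integrate d tau/dx = sigma back to x = 0
            sig = (1 + 0.5*(val(y[k], tau) + val(y[k], -tau)))**(-nu)
            tau -= sig * dx
        y_new[i] = y0(tau)                        # y is constant along characteristics
    if np.max(np.abs(y_new - y)) < 1e-12:
        break
    y = y_new

print(np.max(np.abs(y[-1])))   # evolved data E^theta(y0), still of size ~ 0.05
```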
### Initial setup: finding the smallest tile By analyzing the linearization of (2.8), we characterized the resonances and small divisors in the problem as eigenvalues of the linearization of \(\mathcal{G}(U)=\mathcal{N}(U)-U\). We showed how a Liapunov-Schmidt decomposition, coupled to a Nash-Moser iteration to handle the small divisors, provides a consistent methodology for solving (2.8), see [38, 41]. In [42, 43], we further analyzed a scalar model problem which isolated the problem of small divisors, and we proposed a strategy for implementing a Nash-Moser iteration to solve (2.8). Preliminary numerical simulations indicated that solutions of (2.8) appear to all lie inside a smaller class of data which satisfy restrictive symmetry properties. The difficulty with (2.8) as stated is that this restricted class of solutions is unidentifiable, because that formulation does not impose enough symmetry that is respected by the nonlinear problem. In order to proceed further, we look for a modification of (2.8) based on symmetries that are respected by the _nonlinear_ evolution with appropriate boundary conditions, which are also satisfied by the linearized solutions. In this more restricted class, the half-period time shift is placed at axes of symmetry in (material) space \(x\). Our first restriction is to the space of solutions satisfying the basic symmetry (2.11), so that \(w\) is even and \(\hat{w}\) is odd as functions of \(t\) throughout the evolution. Our second symmetry follows by imposing the natural physical "acoustic boundary condition" \(u=0\) at \(x=0\). This further restricts the domain of the nonlinear operator and imposes a spatial reflection property. Our proof ultimately shows that the periodic solutions we seek live within this smaller domain. The spatial reflection symmetry which follows from \(u(0,\cdot)=0\) is \[w(-x,t)=w(x,t),\qquad\hat{w}(-x,t)=-\hat{w}(x,t), \tag{2.20}\] and continuity at \(x=0\) requires \(\hat{w}=0\) there, which is precisely the acoustic boundary condition. We thus impose the conditions \(\hat{w}^{0}(t)=0\), \(w^{0}\) even, and evolve this data forward in space to \(x=\overline{\theta}:=\widetilde{\theta}/2\). We then obtain the solution on the entire entropy level \(-\overline{\theta}<x<\overline{\theta}\) by reflection through (2.20): by uniqueness, these must coincide. We note that the boundary symmetry \(\hat{w}(x,\cdot)=0\) holds only at \(x=0\), while the \(w\) even, \(\hat{w}\) odd symmetry (2.11) holds throughout the evolution. It remains to impose a periodicity condition at the end of the evolution. For this we use the following principle: an even or odd periodic function admits _two_ axes of symmetry, at both the endpoint \(0\) and midpoint \(T/2\) of the interval, so that even or odd periodic functions remain even or odd after a half-period shift, respectively. That is, if we set \(\widetilde{f}(s):=f(s-T/2)\), then \[\widetilde{f}(-s)=f(-s-T/2)=\pm f(s+T/2)=\pm f(s-T/2)=\pm\widetilde{f}(s). \tag{2.21}\] For periodicity, it thus suffices to impose the additional _shifted_ reflection symmetry at the end of the evolution, which becomes the shifted symmetry axis. This means that the practical interval of evolution is actually only half the width of that of problem (2.8). 
This shifted reflection, analogous to (2.20), is \[\begin{split} w(\ell+x,t)&=w(\ell-x,t+T/2),\\ \hat{w}(\ell+x,t)&=-\hat{w}(\ell-x,t+T/2),\end{split} \tag{2.22}\] and setting \(x=0\), continuity at \(\ell\) requires \[w(\ell,t+T/2)=w(\ell,t),\qquad\hat{w}(\ell,t+T/2)=-\hat{w}(\ell,t). \tag{2.23}\] In other words, if \(w(x,\cdot)\) and \(\hat{w}(x,\cdot)\) are defined for say \(0<x<\ell\), and satisfy the symmetry condition (2.23) at \(x=\ell\), then use of (2.22) allows us to extend the solution to the interval \(0<x<2\ell\). Assuming these symmetries, it follows that we can generate a periodic solution from a single tile evolving from the center of one entropy level to the center of the other, with a single jump in between. The full periodic tile is then generated by the series of reflections, first (2.20) and then (2.22), so the full spatial period of the solution is \(4(\underline{\theta}+\overline{\theta})\). We note that one reflection converts the jump operator \(\mathcal{J}\) to \(\mathcal{J}^{-1}\), because \(\mathcal{J}\) is the jump from \(U(x-)\) to \(U(x+)\), and these are switched under reflection in \(x\). We illustrate the repeated reflections of the generating tile in Figure 1. Recalling that two orthogonal reflections of the plane yield a rotation of \(\pi\) around the intersection point, the standard reflection at \(x=0\) is labelled '1', and the shifted reflection at \(x=\ell\) is labeled '2'. The composition of these two reflections is then translated periodically by \(4\ell\) and \(2\pi\) to give the wave pattern on the plane. A sketch of the perturbed tile, showing the perturbed characteristics to leading order, is also shown. The density of sketched characteristics indicates the presence of nonlinear rarefaction and compression. Only the forward characteristics are shown; the backward characteristics are obtained by applying the time reflection \(\mathcal{R}\). In order to precisely write down the reduced nonlinear equation which determines the minimal tile, we need to describe the double boundary value problem which imposes periodicity. We do this by expressing the relevant operators in terms of the scalar Riemann invariant \(y\). Assume the first entropy level \(\overline{s}\) extends over \(-\overline{\theta}<x<\overline{\theta}\), and the second \(\underline{s}\) extends over \(\overline{\theta}<x<\overline{\theta}+2\underline{\theta}\). We define a reduced nonlinear operator taking data at \(x=0\) to evolved solution at \(x=\overline{\theta}+\underline{\theta}\) and impose boundary conditions which imply periodicity. Recall that the action of the jump \(\mathcal{J}\) is \(w\to w\), \(\hat{w}\to J\hat{w}\), (\(\rightarrow\) indicating the jump from left to right) where the scalar jump \(J\) is a measure of how far apart the entropy levels are. Using (2.9) and (2.13), the action of the jump on \(y\) is \[y\to\mathcal{J}y,\quad\text{where}\quad\mathcal{J}:=\tfrac{\mathcal{I}+\mathcal{R}}{2}+J\,\tfrac{\mathcal{I}-\mathcal{R}}{2}, \tag{2.24}\] where \(J\) is again the (scalar) size of the entropy jump, and \(\tfrac{\mathcal{I}+\mathcal{R}}{2}\) and \(\tfrac{\mathcal{I}-\mathcal{R}}{2}\) are the projections onto the even and odd parts \(w\) and \(\hat{w}\) of \(y\), respectively. We next express the boundary conditions (2.20) and (2.22) in terms of the scalar Riemann invariant \(y\). Setting \(x=0\) in (2.20) gives \(\hat{w}(0,t)=0\), which is \[y(0,t)=\hat{y}(0,t)=y(0,-t)\quad\text{or}\quad\overline{y}^{0}=\mathcal{R}\,\overline{y}^{0}, \tag{2.25}\] so that \(\overline{y}^{0}:=y(0,\cdot)\) is even.
Figure 1. Reflections of the generating tile.

For the second entropy level, we set \(x=\ell:=\overline{\theta}+\underline{\theta}\) in (2.22), so we get \(w(\ell,t)=w(\ell,t+T/2)\) and \(\hat{w}(\ell,t)=-\hat{w}(\ell,t+T/2)\), or equivalently \[\begin{gathered} y(\ell,t+T/2)=\hat{y}(\ell,t)=y(\ell,-t),\quad\text{or}\\ y^{\ell}\big{(}\tau+\tfrac{T}{4}\big{)}=y^{\ell}\big{(}-\tau+\tfrac{T}{4}\big{)},\quad\text{with}\quad\tau=t+\tfrac{T}{4}.\end{gathered} \tag{2.26}\] It is convenient to define the shift or translation operator \(\mathcal{S}^{\theta}\), which translates \(v\) by a fixed amount \(\theta\), by \[[\mathcal{S}^{\theta}\,v](t):=v(t-\theta). \tag{2.27}\] We use the convention that \(\mathcal{S}^{\theta}\) is a constant or uniform shift, while \(\mathcal{S}_{\phi}\) is a non-uniform shift given by (2.18). These are then related by \[\mathcal{S}^{\theta}=\mathcal{S}_{\phi_{\theta}},\quad\text{where}\quad\phi_{\theta}(t):=t-\theta.\] Noting that \[[\mathcal{S}^{\theta}\,\mathcal{R}\,v](t)=v(-t+\theta)=[\mathcal{R}\,\mathcal{S}^{-\theta}\,v](t),\] our second boundary condition (2.26) can be written as \[\mathcal{S}^{-T/4}\,y^{\ell}=\mathcal{S}^{T/4}\,\mathcal{R}\,y^{\ell}=\mathcal{R}\,\mathcal{S}^{-T/4}\,y^{\ell}, \tag{2.28}\] so that the target \(y^{\ell}\) is the quarter-period shift of an even function, \[y^{\ell}=\mathcal{S}^{T/4}\,\underline{y}^{\ell},\quad\text{where}\quad\underline{y}^{\ell}:=\mathcal{S}^{-T/4}\,y^{\ell}=\mathcal{R}\,\underline{y}^{\ell},\] so that \(\underline{y}^{\ell}\) is even.

### The nonlinear equation

We now give the reduced nonlinear operator whose solutions solve the above double boundary value problem which generates a periodic solution. We treat the problem as an evolution in which we use the first boundary condition (2.25) to restrict the data of the nonlinear operator, and the second boundary condition (2.28) as the target for the nonlinear evolution. The boundary condition (2.25) can be written \(\frac{\mathcal{I}-\mathcal{R}}{2}y^{0}=0\), so that \(y^{0}\) is even, and so we restrict our domain to (arbitrary periodic) even data \(y^{0}\). This data is then evolved a distance \(\overline{\theta}\), subjected to the jump (2.24), and evolved a further distance \(\underline{\theta}\). We then impose the second boundary condition (2.28), namely \(\frac{\mathcal{I}-\mathcal{R}}{2}\mathcal{S}^{-T/4}y=0\). Thus our periodic tile is generated by the fully nonlinear (and nonlocal) reduced evolution equation \[\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{-T/4}\,\mathcal{E}^{\underline{\theta}}\,\mathcal{J}\,\mathcal{E}^{\overline{\theta}}\,y^{0}=:\mathcal{F}(y^{0})=0, \tag{2.29}\] where the data \(y^{0}\) is even and \(T\)-periodic. At this stage, the parameters \(T\), \(\overline{\theta}\), \(\underline{\theta}\) and \(J\) are yet to be determined; each solution of (2.29) determines a periodic solution. We note that we have replaced the subtraction \(\mathcal{N}-\mathcal{I}\) in (2.8) by the linear projection \(\frac{\mathcal{I}-\mathcal{R}}{2}\) to impose periodicity. In other words, we have replaced the _periodic return_ problem (2.8) with the _periodicity by projection_ problem (2.29). Remarkably, this converts out-of-control small divisors to uniform small divisors, thus making the whole problem tractable. Our re-expression of the nonlinear problem using projection allows a factorization of the nonlinear operator, whereas the expression (2.8) has no obvious factorization.
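Before turning to the main consequence, here is a small numerical sketch (grid and test data our own) of the reflection projections \(\tfrac{\mathcal{I}\pm\mathcal{R}}{2}\) and of the jump (2.24) acting on a sampled \(T\)-periodic signal: the even part passes through unchanged, while the odd part is scaled by \(J\).

```python
import numpy as np

# The jump (2.24) on a sampled periodic signal: J y = even(y) + J * odd(y).
T, n = 2*np.pi, 128
t = np.linspace(0.0, T, n, endpoint=False)
reflect = lambda f: np.concatenate(([f[0]], f[:0:-1]))  # samples of f(-t) on this grid

y = 0.3*np.cos(t) + 0.1*np.cos(2*t) + 0.2*np.sin(t)     # arbitrary test data
even, odd = (y + reflect(y))/2, (y - reflect(y))/2      # the projections (I +- R)/2
J = 0.63
Jy = even + J*odd
assert np.allclose(Jy, 0.3*np.cos(t) + 0.1*np.cos(2*t) + J*0.2*np.sin(t))
```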
**Theorem 2.3**.: _Any solution of the scalar equation (2.29) with even data \(y^{0}\) provides a minimal tile which generates, by reflections, a solution of the compressible Euler equations which is \(4\ell\)-periodic in space and \(T\)-periodic in time._

By a _minimal tile_, we mean a solution of (2.29) defined on \([0,\ell]\times[0,T]\), which generates a space and time periodic solution by this series of reflections and shifted reflections.

Proof.: Given a solution \(y(x,t)\) defined on \([0,\ell]\times[0,T]\) and satisfying the stated conditions, we extend this to a full periodic solution of the system (2.4), illustrated in Figure 1, as follows. Recalling that \(y\) is a Riemann invariant, we generate the state \((w,\hat{w})\) from \(y\), and then reflect this \(2\times 2\) state using the boundary conditions. Recalling (2.13) and (2.9), we define \[w(x,t):=\tfrac{\mathcal{I}+\mathcal{R}}{2}y(x,t)=\frac{y(x,t)+y(x,-t)}{2},\] \[\hat{w}(x,t):=\tfrac{\mathcal{I}-\mathcal{R}}{2}y(x,t)=\frac{y(x,t)-y(x,-t)}{2},\] for \((x,t)\in[0,\ell]\times[0,T]\). We then extend \(w\) using (2.22) and (2.20) as \[w(x,t):=\begin{cases}w(2\ell-x,t+T/2),&\ell\leq x\leq 2\ell,\\ w(x-2\ell,t+T/2),&2\ell\leq x\leq 3\ell,\\ w(4\ell-x,t),&3\ell\leq x\leq 4\ell,\end{cases}\] and similarly extend \(\hat{w}\) as \[\hat{w}(x,t):=\begin{cases}-\hat{w}(2\ell-x,t+T/2),&\ell\leq x\leq 2\ell,\\ \hat{w}(x-2\ell,t+T/2),&2\ell\leq x\leq 3\ell,\\ -\hat{w}(4\ell-x,t),&3\ell\leq x\leq 4\ell.\end{cases}\] By construction, the solution so given is periodic and continuous at all boundaries \(x=j\ell\), \(j=0,\dots,4\). Note that this uses the extra symmetry point of functions that are both periodic and even or odd; see (2.21).

## 3. Linearization and Small Divisors

Our first task is to linearize our main equation (2.29) around \(y=0\), which is itself a solution of the equation. By our non-dimensionalization, \(y=0\) corresponds to the quiet state solution \(p=p_{0}\), \(u=0\) of (2.1) over a variable entropy profile. We note that the jump \(\mathcal{J}\), the shift \(\mathcal{S}^{-T/4}\) and the projections \(\tfrac{\mathcal{I}\pm\mathcal{R}}{2}\) are already linear operators, so it suffices to linearize the evolution \(\mathcal{E}^{\theta}\). Since \(\sigma(0)=1\), the linearization of the nonlinear transport equation (2.14) around \(y=0\) is just the linear transport equation \[\partial_{x}Y+\partial_{t}Y=0,\] which has solution \[Y(x,t)=Y^{0}(t-x)=\mathcal{S}^{x}\big{[}Y^{0}\big{]}(t),\] where \(\mathcal{S}^{x}\) is translation by \(x\), as in (2.27). It follows that the linearization of the evolution operator around \(y=0\) through \(\theta\) is \[\mathcal{L}^{\theta}:=D\mathcal{E}^{\theta}(0),\quad\text{so}\quad\mathcal{L}^{\theta}[Y]=D\mathcal{E}^{\theta}(0)[Y]=\mathcal{S}^{\theta}Y. \tag{3.1}\] Here and throughout the paper, we will use square brackets \([\cdot]\) for inputs of (multi-)linear operators, and parentheses \((\cdot)\) to denote inputs to nonlinear operators, and we will use upper case to refer to arguments of (multi-)linear operators when convenient.
It follows that the linearization of \(\mathcal{F}\) around \(y=0\), acting on \(Y\), is \[D\mathcal{F}(0)[Y]=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{-T/4}\,\mathcal{L}^{\underline{\theta}}\mathcal{J}\,\mathcal{L}^{\overline{\theta}}[Y]=:\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}[Y], \tag{3.2}\] and because we have restricted the domain, we take \(Y\) even, and we have set \[\mathcal{L}_{0}:=\mathcal{S}^{-T/4}\,\mathcal{L}^{\underline{\theta}}\mathcal{J}\,\mathcal{L}^{\overline{\theta}}, \tag{3.3}\] and we note that each factor of \(\mathcal{L}_{0}\) is invertible. Our goal is to fully understand the kernel of \(D\mathcal{F}(0)=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\), and choose parameters so that the kernel consists of constant states and isolated modes. This will allow us to perturb the kernel to get a nontrivial solution of (2.29). Because \(\mathcal{L}_{0}\) consists of translations and jumps, it respects Fourier \(k\)-modes in \(t\); this in turn allows us to explicitly calculate the kernel and small divisors of \(D\mathcal{F}(0)\). We use the following notation: for a given (reference) time period \(T\), denote the \(k\)-mode by the \(1\times 2\) matrix \[\mathcal{T}_{k}=\mathcal{T}_{k}(T):=\left(\begin{array}{cc}\operatorname{c}(k\tfrac{2\pi}{T}t)&\operatorname{s}(k\tfrac{2\pi}{T}t)\end{array}\right), \tag{3.4}\] where \(\operatorname{c}/\operatorname{s}\) abbreviate the trigonometric functions \(\cos/\sin\). It then follows that any \(T\)-periodic function can be represented as \[f(t)=\sum_{k\geq 0}\mathcal{T}_{k}\left(\begin{array}{c}a_{k}\\ b_{k}\end{array}\right)=\sum a_{k}\operatorname{c}(k\tfrac{2\pi}{T}t)+b_{k}\operatorname{s}(k\tfrac{2\pi}{T}t),\] uniquely with \(b_{0}=0\). Identifying a mode as a \(k\)-mode requires us to fix the reference period \(T\): clearly \(\mathcal{T}_{jk}(jT)=\mathcal{T}_{k}(T)\) for any \(j\geq 1\). This notation allows us to express (3.1) in simple matrix terms, as stated in the following lemma, which can readily be verified.

**Lemma 3.1**.: _The above linear operators act on \(\mathcal{T}_{k}\), as follows:_ \[\mathcal{R}\,\mathcal{T}_{k} =\mathcal{T}_{k}\,M(-1),\] \[\mathcal{J}\,\mathcal{T}_{k} =\mathcal{T}_{k}\,M(J),\] \[\mathcal{S}^{-T/4}\,\mathcal{T}_{k} =\mathcal{T}_{k}\,P^{-k},\] \[\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{T}_{k} =\operatorname{s}(k\tfrac{2\pi}{T}t)\left(\begin{array}{cc}0&1\end{array}\right),\quad\text{and}\] \[\mathcal{L}^{\theta}\,\mathcal{T}_{k} =\mathcal{S}^{\theta}\,\mathcal{T}_{k}=\mathcal{T}_{k}\,R(k\tfrac{2\pi}{T}\theta),\] _where \(R(\theta)\) is the usual rotation matrix, \(M(\cdot)\) is the jump matrix from (2.7), and \(P=R(\pi/2)\), that is_ \[R(\theta)=\left(\begin{array}{cc}\mathrm{c}\theta&-\mathrm{s}\theta\\ \mathrm{s}\theta&\mathrm{c}\theta\end{array}\right),\quad M(J)=\left(\begin{array}{cc}1&0\\ 0&J\end{array}\right),\quad\text{and}\quad P=\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right).\]

**Corollary 3.2**.: _The action of \(D\mathcal{F}(0)\) on any even \(Y=\sum a_{k}\,\mathrm{c}(k\frac{2\pi}{T}t)\) is_ \[D\mathcal{F}(0)\Bigl{[}\sum a_{k}\,\mathrm{c}(k\frac{2\pi}{T}t)\Bigr{]}=\sum a_{k}\,\delta_{k}\,\mathrm{s}(k\frac{2\pi}{T}t),\] _where the \(k\)-th divisor \(\delta_{k}=\delta_{k}\bigl{(}\overline{\theta},\underline{\theta},J;T\bigr{)}\) is the number_ \[\delta_{k}=\left(\begin{array}{cc}0&1\end{array}\right)\,P^{-k}\,R(k\frac{2\pi}{T}\underline{\theta})\,M(J)\,R(k\frac{2\pi}{T}\overline{\theta})\left(\begin{array}{c}1\\ 0\end{array}\right). \tag{3.5}\]
Proof.: We write the data as \(Y=\sum a_{k}\,\mathrm{c}(k\frac{2\pi}{T}t)=\sum a_{k}\,\mathcal{T}_{k}\left(\begin{array}{c}1\\ 0\end{array}\right)\), and use Lemma 3.1 in (3.2), to get the matrix expression for \(\delta_{k}\). This in turn can be easily multiplied out to get an explicit expression.

### The Kernel

Because \(D\mathcal{F}(0)\) respects \(k\)-modes, it follows that it has a \(k\)-mode kernel if and only if \(\delta_{k}=0\). There is always a 0-mode kernel, because constant states solve the linearized (and nonlinear) equation. Our initial strategy, to construct the simplest solution as in [39], is to choose parameters so that there is a 1-mode kernel, \(\delta_{1}=0\), but no higher modes appear in the kernel, \(\delta_{k}\neq 0\) for each \(k>1\). We call this the _nonresonant case_. Because the 1-mode kernel is then isolated, our methods will show that it will perturb to a solution of the nonlinear problem (2.29). To ease notation, until we choose otherwise, we will assume a time period of \(T=2\pi\); putting the factor \(\frac{2\pi}{T}\) back in is straightforward.

**Lemma 3.3**.: _The linearization \(D\mathcal{F}(0)\) has a 1-mode kernel if and only if_ \[J=\frac{\mathrm{c}\underline{\theta}}{\mathrm{s}\underline{\theta}}\,\frac{\mathrm{c}\overline{\theta}}{\mathrm{s}\overline{\theta}}. \tag{3.6}\] _Moreover, for almost every pair \((\underline{\theta},\overline{\theta})\in\mathbb{R}^{2}\), \(\delta_{k}\neq 0\) for all \(k\geq 2\), so that there are no other \(k\)-modes in the kernel._ _In the diagonal case \(\underline{\theta}=\overline{\theta}=:\theta\), some \(\delta_{k}=0\) if and only if \(\theta\) is a rational multiple of \(\pi\), and if \(\theta\notin\pi\mathbb{Q}\) satisfies the diophantine condition_ \[\Bigl{|}\frac{\theta}{\pi}-\frac{p}{q}\Bigr{|}\geq\frac{C}{q^{r}}, \tag{3.7}\] _then the divisors \(\delta_{k}\) satisfy the explicit bound_ \[\bigl{|}\delta_{k}\bigr{|}\geq\frac{K}{k^{2(r-1)}}. \tag{3.8}\]

Proof.: The proof requires us to calculate each \(\delta_{k}\). First, we calculate \(\delta_{1}\) to be \[\delta_{1} =\left(\begin{array}{cc}-1&0\end{array}\right)R(\underline{\theta})\,M(J)\,R(\overline{\theta})\left(\begin{array}{c}1\\ 0\end{array}\right)\] \[=\left(\begin{array}{cc}-\mathrm{c}\underline{\theta}&\mathrm{s}\underline{\theta}\end{array}\right)\left(\begin{array}{c}\mathrm{c}\overline{\theta}\\ J\,\mathrm{s}\overline{\theta}\end{array}\right)\] \[=J\,\mathrm{s}\underline{\theta}\,\mathrm{s}\overline{\theta}-\mathrm{c}\underline{\theta}\,\mathrm{c}\overline{\theta},\] and there is a 1-mode kernel if and only if \(\delta_{1}=0\), which is equivalent to (3.6). This in turn allows us to write \[M(J)=\frac{1}{\mathrm{s}\underline{\theta}\,\mathrm{s}\overline{\theta}}\left(\begin{array}{cc}\mathrm{s}\underline{\theta}&0\\ 0&\mathrm{c}\underline{\theta}\end{array}\right)\left(\begin{array}{cc}\mathrm{s}\overline{\theta}&0\\ 0&\mathrm{c}\overline{\theta}\end{array}\right),\] and we use this in the calculation of \(\delta_{k}\).
To calculate \(\delta_{k}\), at the \(\overline{\theta}\) level, we have \[\left(\begin{array}{cc}\text{s}\overline{\theta}&0\\ 0&\text{c}\overline{\theta}\end{array}\right)R(k\overline{\theta})\left(\begin{array}{c}1\\ 0\end{array}\right)=\left(\begin{array}{c}\text{s}\overline{\theta}\,\text{c}(k\overline{\theta})\\ \text{c}\overline{\theta}\,\text{s}(k\overline{\theta})\end{array}\right)=\frac{\text{s}\big{(}(k+1)\overline{\theta}\big{)}}{2}\left(\begin{array}{c}1\\ 1\end{array}\right)+\frac{\text{s}\big{(}(k-1)\overline{\theta}\big{)}}{2}\left(\begin{array}{c}-1\\ 1\end{array}\right).\] Similarly, at the \(\underline{\theta}\) level, for \(k\) even, we get \[(-1)^{k/2}\left(\begin{array}{cc}0&1\end{array}\right)P^{-k}\,R(k\underline{\theta})\left(\begin{array}{cc}\text{s}\underline{\theta}&0\\ 0&\text{c}\underline{\theta}\end{array}\right)=\left(\begin{array}{cc}\text{s}(k\underline{\theta})\text{s}\underline{\theta}&\text{c}(k\underline{\theta})\text{c}\underline{\theta}\end{array}\right)=\frac{\text{c}\big{(}(k+1)\underline{\theta}\big{)}}{2}\left(\begin{array}{cc}-1&1\end{array}\right)+\frac{\text{c}\big{(}(k-1)\underline{\theta}\big{)}}{2}\left(\begin{array}{cc}1&1\end{array}\right),\] while for \(k\) odd, \[(-1)^{(k+1)/2}\left(\begin{array}{cc}0&1\end{array}\right)P^{-k}\,R(k\underline{\theta})\left(\begin{array}{cc}\text{s}\underline{\theta}&0\\ 0&\text{c}\underline{\theta}\end{array}\right)=\left(\begin{array}{cc}\text{c}(k\underline{\theta})\text{s}\underline{\theta}&-\text{s}(k\underline{\theta})\text{c}\underline{\theta}\end{array}\right)=\frac{\text{s}\big{(}(k+1)\underline{\theta}\big{)}}{2}\left(\begin{array}{cc}1&-1\end{array}\right)+\frac{\text{s}\big{(}(k-1)\underline{\theta}\big{)}}{2}\left(\begin{array}{cc}-1&-1\end{array}\right).\] It now follows from (3.5) that for \(k\) even, we have \[\delta_{k}=\frac{(-1)^{k/2}}{2\,\text{s}\underline{\theta}\,\text{s}\overline{\theta}}\left(\text{c}\big{(}(k+1)\underline{\theta}\big{)}\,\text{s}\big{(}(k-1)\overline{\theta}\big{)}+\text{c}\big{(}(k-1)\underline{\theta}\big{)}\,\text{s}\big{(}(k+1)\overline{\theta}\big{)}\right),\] while for \(k\) odd, \[\delta_{k}=\frac{(-1)^{(k-1)/2}}{2\,\text{s}\underline{\theta}\,\text{s}\overline{\theta}}\left(\text{s}\big{(}(k+1)\underline{\theta}\big{)}\,\text{s}\big{(}(k-1)\overline{\theta}\big{)}+\text{s}\big{(}(k-1)\underline{\theta}\big{)}\,\text{s}\big{(}(k+1)\overline{\theta}\big{)}\right).\] For each fixed \(k\geq 2\), the equation \(\delta_{k}=0\) holds at the zeros of a nontrivial periodic function, so its solution set is a countable union of regular curves in the \((\underline{\theta},\overline{\theta})\)-plane, each of which has measure \(0\) in \(\mathbb{R}^{2}\). Thus, being a countable union of measure zero sets, the full "resonant" set \[\Big{\{}(\underline{\theta},\overline{\theta})\mid\delta_{k}=0\text{ for some }k\geq 2\Big{\}}\] also has measure \(0\) in \(\mathbb{R}^{2}\). Restricting to the diagonal case \(\underline{\theta}=\overline{\theta}=:\theta\), we get \[\begin{split}\delta_{k}&=\frac{(-1)^{k/2}}{2\,{\rm s}^{2}\theta}\,{\rm s}(2k\theta)\quad\text{or}\\ \delta_{k}&=\frac{(-1)^{(k-1)/2}}{{\rm s}^{2}\theta}\,{\rm s}\big{(}(k+1)\theta\big{)}\,{\rm s}\big{(}(k-1)\theta\big{)},\end{split} \tag{3.9}\] for \(k\) even or odd, respectively, so that \(\delta_{k}=0\) for some \(k\geq 2\) if and only if \(\theta\) is some rational multiple of \(\pi\).
Since \({\rm s}(\vartheta)\geq 2\vartheta/\pi\) for \(0\leq\vartheta\leq\pi/2\), it follows that for all \(\vartheta\), we have the lower bound \[\big{|}{\rm s}(\vartheta)\big{|}\geq\frac{2}{\pi}\,\min_{j\in\mathbb{Z}}\big{|}\vartheta-\pi\,j\big{|},\] so, provided \(\theta\) satisfies (3.7), we have for \(q\geq 2\), \[\big{|}{\rm s}(q\theta)\big{|}\geq\frac{2}{\pi}\,\min\big{|}q\,\theta-\pi\,j\big{|}\geq 2q\,\min\Big{|}\frac{\theta}{\pi}-\frac{j}{q}\Big{|}\geq\frac{2\,C}{q^{r-1}}.\] Using this estimate in (3.9) now yields (3.8).

## 4. Factorization of the Nonlinear Operator

Our use of symmetry replaces the periodic return problem (2.8) with the periodicity by projection problem (2.29). In this section we show that the fully nonlinear operator \(\mathcal{F}\) factors into the fixed linearized operator \(\frac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\) times a regular invertible nonlinear factor. In this sense, by exploiting symmetry and restricting to a smaller domain, we have been able to "shrink-wrap" the problem by reducing it while retaining the fundamental nonlinear principle that leads to the existence of periodic solutions, which is the balance of compression and rarefaction. We thus focus on the reduced equation (2.29), namely \[\mathcal{F}(y^{0})=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{-T/4}\,\mathcal{E}^{\underline{\theta}}\,\mathcal{J}\,\mathcal{E}^{\overline{\theta}}\,y^{0}=0,\] for \(y^{0}\) even. Our first observation is that, according to Corollary 3.2 and Lemma 3.3, the small divisors are a fundamental effect of the leading order linearization around the base state \(y=0\), which will persist when perturbing to a nonlinear solution. By factoring the nonlinear operator, we show that the small divisors remain uniform under perturbation. Indeed, because of the simple structure of \(\mathcal{F}\) as a composition of operators, and since each linear and nonlinear evolution is invertible by backwards evolution, we are able to factor the linearization \(\mathcal{L}_{0}\), which generates the small divisors, out of the fully nonlinear operator \(\mathcal{F}\), as follows.

**Theorem 4.1**.: _The nonlinear operator \(\mathcal{F}\) given in (2.29) can be factored as_ \[\mathcal{F}=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,\mathcal{N},\quad\text{where}\quad\mathcal{N}:=\underline{\mathcal{N}}\,\overline{\mathcal{N}}. \tag{4.1}\] _Here \(\mathcal{L}_{0}\) is given by (3.3), and for \(y=O(\alpha)\), \(\underline{\mathcal{N}}\) and \(\overline{\mathcal{N}}\) are regular nonlinear operators, with_ \[D\overline{\mathcal{N}}=\mathcal{I}+O(\alpha),\qquad D\underline{\mathcal{N}}=\mathcal{I}+O(\alpha). \tag{4.2}\] _It follows that the fully nonlinear equation (2.29) can be rewritten as_ \[\mathcal{F}(y^{0})=0\quad\text{iff}\quad\mathcal{N}(y^{0})\in\ker\big{\{}\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\big{\}}. \tag{4.3}\]

The upshot of this factorization is that the small divisors, which are determined by the leading order linearization \(\tfrac{\mathcal{I}-\mathcal{R}}{2}\mathcal{L}_{0}\), have been explicitly factored out, and so are necessarily uniform. Moreover, (4.3) can be regarded as containing only a projection with no small divisors, and \(\mathcal{N}=\underline{\mathcal{N}}\,\overline{\mathcal{N}}\) is a regular perturbation of the identity. Because of this, we are able to solve (4.3) using the standard implicit function theorem, without having to resort to a more technical Nash-Moser iteration.
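The divisors of Lemma 3.3 are easy to compute numerically. The following sketch (our own code, with an arbitrary test angle) evaluates \(\delta_{k}\) directly from the matrix product (3.5) with \(T=2\pi\), chooses \(J\) by (3.6), and checks the diagonal-case closed forms (3.9):

```python
import numpy as np

def R(a):                                   # rotation matrix
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def delta(k, th_u, th_o, J):                # the k-th divisor from (3.5), T = 2*pi
    P_inv_k = np.linalg.matrix_power(R(-np.pi/2), k)   # P^{-k}, P = R(pi/2)
    v = P_inv_k @ R(k*th_u) @ np.diag([1.0, J]) @ R(k*th_o) @ np.array([1.0, 0.0])
    return v[1]                             # the row (0 1) picks the second component

th = 0.9                                    # diagonal case; not a rational multiple of pi
J = (np.cos(th) / np.sin(th))**2            # (3.6) with both angles equal
print(delta(1, th, th, J))                  # ~ 0: the 1-mode kernel
for k in range(2, 8):                       # compare against the closed forms (3.9)
    if k % 2 == 0:
        closed = (-1)**(k//2) * np.sin(2*k*th) / (2*np.sin(th)**2)
    else:
        closed = (-1)**((k-1)//2) * np.sin((k+1)*th) * np.sin((k-1)*th) / np.sin(th)**2
    assert np.isclose(delta(k, th, th, J), closed)
```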
Proof.: The proof is a direct computation, using the fact that the linearized evolutions \(\mathcal{L}^{\theta}\) and jump \(\mathcal{J}\) are invertible. Using (2.29) and (3.3), we write \[\mathcal{F}(y^{0}) =\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{-T/4}\,\big{\{}\mathcal{L}^{\underline{\theta}}\mathcal{J}\mathcal{L}^{\overline{\theta}}\,\mathcal{L}^{\overline{\theta}-1}\mathcal{J}^{-1}\mathcal{L}^{\underline{\theta}-1}\big{\}}\,\mathcal{E}^{\underline{\theta}}\,\mathcal{J}\mathcal{E}^{\overline{\theta}}\,y^{0}\] \[=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,\big{(}\mathcal{J}\mathcal{L}^{\overline{\theta}}\big{)}^{-1}\,\mathcal{L}^{\underline{\theta}-1}\mathcal{E}^{\underline{\theta}}\big{(}\mathcal{J}\mathcal{L}^{\overline{\theta}}\big{)}\,\mathcal{L}^{\overline{\theta}-1}\mathcal{E}^{\overline{\theta}}\,y^{0},\] where \(\{\square\}=\mathcal{I}\). This is (4.1), provided we define \[\underline{\mathcal{N}}:=\big{(}\mathcal{J}\mathcal{L}^{\overline{\theta}}\big{)}^{-1}\,\mathcal{L}^{\underline{\theta}-1}\mathcal{E}^{\underline{\theta}}\big{(}\mathcal{J}\mathcal{L}^{\overline{\theta}}\big{)}\quad\text{and}\quad\overline{\mathcal{N}}:=\mathcal{L}^{\overline{\theta}-1}\mathcal{E}^{\overline{\theta}}. \tag{4.4}\] Recall that, as long as derivatives remain bounded, \(\mathcal{E}^{\theta}\) is a regular operator which propagates the value of \(y^{0}\) along characteristics. Also, if \(y^{0}=0\), the characteristics for \(\mathcal{L}^{\theta}\) and \(\mathcal{E}^{\theta}\) coincide, and \(\mathcal{L}^{\theta}=D\mathcal{E}^{\theta}(0)\). From this it follows that \(\mathcal{N}^{\theta}:=\mathcal{L}^{\theta-1}\mathcal{E}^{\theta}\) is a regular perturbation of the identity, and moreover \[D\mathcal{N}^{\theta}(y)=\mathcal{L}^{\theta-1}D\mathcal{E}^{\theta}(y)=\mathcal{I}+O(\alpha)\quad\text{if}\quad y=O(\alpha),\] because \(\mathcal{L}^{\theta}=D\mathcal{E}^{\theta}(0)\). Thus (4.2) follows for \(\overline{\mathcal{N}}\), and it follows for \(\underline{\mathcal{N}}\) since this property is preserved under conjugation by (fixed) linear operators.

### Removal of the small divisors

Our fully nonlinear equation now has the factored form (4.1), namely \[\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,\mathcal{N}\,y^{0}=0,\quad y^{0}\text{ even},\] where the nonlinear part \(\mathcal{N}\) is bounded invertible for small regular data. We now introduce the Hilbert spaces which allow us to find solutions which are perturbations of the constant state \(y=0\). To leading order, these have the form \(\alpha\,{\rm c}(t)\in\ker\{\frac{\mathcal{I}-\mathcal{R}}{2}\mathcal{L}_{0}\}\), and are parameterized by the amplitude \(\alpha\) provided (3.6) holds. Our program is to construct solutions of the form \[y^{0}(t)=\alpha\,{\rm c}(t)+z+\sum_{j>1}a_{j}\,{\rm c}(jt), \tag{4.5}\] where the \(0\)-mode \(z\) (also in \(\ker\{\frac{\mathcal{I}-\mathcal{R}}{2}\mathcal{L}_{0}\}\)), and the higher mode coefficients \(a_{j}\) of order \(O(\alpha^{2})\), are unknowns to be found. Observe first that there is one free variable and one equation corresponding to each mode because the operator (4.1) takes even modes to odd modes. That is, after projection by \(\frac{\mathcal{I}-\mathcal{R}}{2}\), we have one equation for each coefficient of \({\rm s}(jt)\) in \(\mathcal{L}_{0}\,\mathcal{N}\,y^{0}\), \(j\geq 1\). On the other hand, the free parameters are the \(a_{j}\), \(j>1\), and the \(0\)-mode \(z\), so each equation corresponds uniquely to an unknown.
Thus formally we expect to get a solution of (4.1) of the form (4.5) for any \(\alpha\) sufficiently small in the nonresonant case. The nonresonant case is characterized by the conditions \(\delta_{1}=0\), \(\delta_{j}\neq 0\) for all \(j>1\). According to Lemma 3.3, this in turn holds for almost every pair \((\underline{\theta},\overline{\theta})\), provided \(J\) is chosen according to (3.6). We thus fix a nonresonant pair \((\underline{\theta},\overline{\theta})\). The following development is an explicit version of the Liapunov-Schmidt decomposition of the nonlinear operator \(\mathcal{F}\) into the auxiliary equation and corresponding bifurcation equation in the nonresonant case. As a roadmap, we briefly describe the abstract bifurcation problem and its solution. Consider the problem of solving equations of the form \[f(\alpha,z,w)=0,\quad\text{with}\quad f(0,0,0)=0,\] for solutions parameterized by the small amplitude parameter \(\alpha\) (for us, \(\mathcal{F}\) plays the role of \(f\)). We cannot do this with a direct application of the implicit function theorem because the gradient \(\nabla_{(z,w)}f\big{|}_{0}\) is not invertible. Here we assume \[z\in\ker\Big{\{}\nabla_{(z,w)}f\big{|}_{0}\Big{\}}\quad\text{and}\quad w\in \ker\Big{\{}\nabla_{(z,w)}f\big{|}_{0}\Big{\}}^{\perp}.\] Assuming \(\frac{\partial f}{\partial w}\big{|}_{0}\) is invertible, we first find \[w(\alpha,z)\quad\text{so that}\quad f\big{(}\alpha,z,w(\alpha,z)\big{)}=0;\] this is known as the _auxiliary equation_. Next, we wish to solve for \[z=z(\alpha)\quad\text{such that}\quad f\big{(}\alpha,z(\alpha),w(\alpha,z( \alpha))\big{)}=0.\] In our problem, \(\frac{\partial f}{\partial z}\big{|}_{0}\) is not invertible, so we replace \(f\) by the equivalent function \[g(\alpha,z):=\begin{cases}\frac{1}{\alpha}\,f\big{(}\alpha,z,w(\alpha,z)\big{)},&\alpha\neq 0,\\ \frac{\partial f}{\partial\alpha}\big{(}0,z,w(0,z)\big{)},&\alpha=0,\end{cases}\] and the equation \(g(\alpha,z)=0\) is known as the _bifurcation equation_. This bifurcation equation can be solved by the implicit function theorem, provided \[\frac{\partial g}{\partial z}\Big{|}_{(0,0)}\equiv\frac{\partial^{2}f}{\partial z \,\partial\alpha}\Big{|}_{(0,0,0)}\neq 0.\] The decomposition of the domain of the function \(f\) into a direct sum of the kernel and its orthogonal complement is known as the Liapunov-Schmidt decomposition [11, 38]. In our application, the (infinite dimensional) gradient, although invertible, has unbounded inverse because of the presence of small divisors \(\delta_{k}\). However, the factorization (4.1) means that these small divisors are uniform, and so we can handle them by an appropriate adjustment of the associated Hilbert space norms. It is remarkable that by factoring the nonlinear operator, we are able to avoid difficult technical issues, like diophantine estimates such as (3.7), (3.8), which are common in Nash-Moser iterations. When solving the bifurcation equation, we must calculate the second derivative of an infinite dimensional nonlinear evolution operator, which is an important technical part of the overall argument. Assume \(y^{0}\) lies in the Sobolev space \(H^{s}\), so that for small data \(y^{0}\), the evolution \(\mathcal{E}^{x}(y^{0})\) stays in \(H^{s}\), by the local existence theory [22, 34].
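To make the roadmap concrete, the following toy computation carries out the same auxiliary/bifurcation split for a hypothetical two-component function \(f(\alpha,z,w)\) exhibiting the degeneracies described above; the specific \(f\) is invented purely for illustration and has nothing to do with the Euler operator \(\mathcal{F}\).

```python
import numpy as np

# Hypothetical toy f : R^3 -> R^2 with f(0,0,0) = 0 and singular gradient in (z, w):
#   f1(a, z, w) = w - a**2            (df1/dw|_0 = 1: the auxiliary direction)
#   f2(a, z, w) = a*z + a**3 + w**2   (df2/dz|_0 = 0: the kernel direction)
def f(a, z, w):
    return np.array([w - a**2, a * z + a**3 + w**2])

def solve_auxiliary(a, z):
    """Newton's method in w alone: find w(a, z) with f1(a, z, w) = 0."""
    w = 0.0
    for _ in range(50):
        r = f(a, z, w)[0]
        if abs(r) < 1e-14:
            break
        w -= r / 1.0          # df1/dw = 1 identically for this toy f
    return w

def g(a, z):
    """Bifurcation function g = f2(a, z, w(a, z)) / a, defined for a != 0."""
    return f(a, z, solve_auxiliary(a, z))[1] / a

def z_of_alpha(a, lo=-1.0, hi=1.0):
    """Bisection on z -> g(a, z); dg/dz|_(0,0) = 1 != 0 guarantees a root."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(a, lo) * g(a, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for a in (0.1, 0.05, 0.01):
    z = z_of_alpha(a)
    w = solve_auxiliary(a, z)
    print(a, z, f(a, z, w))   # full residual vanishes along the solution curve z(a)
```

Here the auxiliary equation is solved first for each \((\alpha,z)\), and the one remaining scalar equation \(g(\alpha,z)=0\) then determines \(z(\alpha)\); in the infinite dimensional setting below, the only extra work is the weighted norm that absorbs the small divisors.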
To apply the implicit function theorem, define \[\begin{split}\mathcal{H}_{1}&:=\big{\{}z+\alpha\, \mathrm{c}(t)\ \big{|}\ z,\alpha\in\mathbb{R}\big{\}}\quad\text{and}\\ \mathcal{H}_{2}&:=\Big{\{}\sum_{j>1}a_{j}\,\mathrm{ c}(jt)\ \Big{|}\ \sum a_{j}^{2}\,j^{2s}<\infty\Big{\}},\end{split} \tag{4.6}\] so that \(y^{0}\in\mathcal{H}_{1}\oplus\mathcal{H}_{2}\), and define \[\widehat{\mathcal{F}}:\mathcal{H}_{1}\times\mathcal{H}_{2}\to H ^{s}\quad\text{by}\] \[\widehat{\mathcal{F}}(y_{1},y_{2}):=\mathcal{F}(y_{1}+y_{2})= \tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,\mathcal{N}\,(y_{1}+y_{ 2}),\] so that \(\widehat{\mathcal{F}}\) is a continuous (nonlinear) operator. It follows that the partial Frechet derivative \[D_{y_{2}}\widehat{\mathcal{F}}(0,0):\mathcal{H}_{2}\to H^{s}\quad\text{is} \quad D_{y_{2}}\widehat{\mathcal{F}}(0,0)=\tfrac{\mathcal{I}-\mathcal{R}}{2} \,\mathcal{L}_{0},\] and according to Corollary 3.2, this acts as \[D_{y_{2}}\widehat{\mathcal{F}}(0,0)\Big{[}\sum_{j>1}a_{j}\,\mathrm{c}(jt) \Big{]}=\sum_{j>1}a_{j}\,\delta_{j}\,\mathrm{s}(jt).\] By our choice of parameters, we have \(\delta_{j}\neq 0\) for \(j>1\), so that \(D_{y_{2}}\widehat{\mathcal{F}}(0,0)\) is injective, but the inverse is not bounded as a map \(H^{s}\to H^{s}\) because of the presence of the small divisors \(\delta_{k}\). However, because the small divisors are uniform, we can define a new norm on the target space \(H^{s}\) so that \(D_{y_{2}}\widehat{\mathcal{F}}(0,0)\) becomes an isometry, and in particular is bounded invertible on its range, as in [43]. Thus we set \[\mathcal{H}_{+} :=\Big{\{}y=\sum_{j>1}c_{j}\operatorname{s}(jt)\Bigm{|}\|y\|<\infty \Big{\}},\quad\text{and} \tag{4.7}\] \[\mathcal{H} :=\big{\{}\beta\operatorname{s}(t)\big{\}}\oplus\mathcal{H}_{+}, \quad\text{with norm}\] \[\|y\|^{2} :=\beta^{2}+\sum_{j>1}c_{j}^{2}\,\delta_{j}^{-2}\,j^{2s},\] where for convenience we set \(c_{1}:=\beta\), which isolates the \(1\)-mode kernel. Referring to (3.5), we see that each divisor \(\delta_{j}\) is bounded above, \(\delta_{j}\leq\max\{1,J\}\), so that \[\|y\|_{H^{s}}^{2}\leq\max\{1,J^{2}\}\,\|y\|^{2},\quad\text{and}\quad \mathcal{H}\subset H^{s},\] and it is clear that \(\mathcal{H}\) is a Hilbert space. Finally, let \(\Pi\) denote the projection \[\Pi:\mathcal{H}\to\mathcal{H}_{+},\quad\Pi\Big{[}\beta\operatorname{s}(t)+ \sum_{j>1}a_{j}\operatorname{s}(jt)\Big{]}:=\sum_{j>1}a_{j}\operatorname{s}( jt).\] Note that we have constructed these spaces so that \[\ker\{D_{y_{2}}\widehat{\mathcal{F}}(0,0)\}=\mathcal{H}_{1},\quad\operatorname {ran}\{D_{y_{2}}\widehat{\mathcal{F}}(0,0)\}=\mathcal{H}_{+},\] so that \(\Pi D_{y_{2}}\widehat{\mathcal{F}}(0,0)=D_{y_{2}}\widehat{\mathcal{F}}(0,0)\), and moreover \(D_{y_{2}}\widehat{\mathcal{F}}(0,0):\mathcal{H}_{2}\to\mathcal{H}_{+}\) is an isometry, and thus bounded invertible on its range \(\mathcal{H}_{+}\). The invertibility of \(D_{y_{2}}\widehat{\mathcal{F}}(0,0)\) allows a regular application of the classical implicit function theorem on Hilbert spaces. **Lemma 4.2**.: _There is a neighborhood \(\mathcal{U}\subset\mathcal{H}_{1}\) of the origin and a unique \(C^{1}\) map_ \[W:\mathcal{U}\to\mathcal{H}_{2},\quad\text{written}\quad W\big{(}z+\alpha \operatorname{c}(t)\big{)}=:W(\alpha,z)\in\mathcal{H}_{2},\] _such that, for all \(z+\alpha\operatorname{c}(t)\in\mathcal{U}\), we have_ \[\Pi\,\widehat{\mathcal{F}}\big{(}z+\alpha\operatorname{c}(t),W(\alpha,z) \big{)}=\Pi\,\mathcal{F}\big{(}z+\alpha\operatorname{c}(t)+W(\alpha,z)\big{)}=0.
\tag{4.8}\] Proof.: This is a direct application of the implicit function theorem. Recall that this states that if \(\mathcal{H}_{1}\), \(\mathcal{H}_{2}\) and \(\mathcal{H}_{+}\) are Hilbert spaces, and \[\mathcal{G}:=\Pi\widehat{\mathcal{F}}:\Omega\subset\mathcal{H}_{1}\times \mathcal{H}_{2}\to\mathcal{H}_{+}\] is a continuously differentiable map defined on an open neighborhood \(\Omega\) of \((0,0)\), and satisfying \(\mathcal{G}(0,0)=0\), and if the linear (partial derivative) map \(D_{y_{2}}\mathcal{G}(0,0):\mathcal{H}_{2}\to\mathcal{H}_{+}\) is bounded invertible, then there is an open neighborhood \(\mathcal{U}_{1}\subset\mathcal{H}_{1}\) of \(0\) and a unique differentiable map \(W:\mathcal{U}_{1}\to\mathcal{H}_{2}\), such that \(\mathcal{G}\big{(}x,W(x)\big{)}=0\) for all \(x\in\mathcal{U}_{1}\), see e.g. [16]. Because we have built our Hilbert spaces so that \(D_{y_{2}}\mathcal{G}\) is an isometry, the result follows immediately. It is remarkable that we do _not_ require any estimates on the decay rate of the small divisors, because we have reframed the problem as the vanishing of a composition of operators, (4.3). Indeed, the faster the small divisors decay, the smoother the corresponding periodic solution must be, as seen by the norm (4.7). By splitting the Hilbert spaces into orthogonal complements in (4.6) and (4.7), we have explicitly carried out the Liapunov-Schmidt decomposition of the nonlinear operator \(\mathcal{F}\) around \(0\), as anticipated in [38]. ### Solution of the bifurcation equation To complete the solution of equation (2.29), (4.3), it remains to show that we can ensure, after use of (4.8), that the remaining component of \(\mathcal{F}(y^{0})\), which is the component orthogonal to the range of \(D_{y_{2}}\hat{\mathcal{F}}(0,0)\), also vanishes. From the decomposition (4.7), this is the (scalar) _bifurcation equation_, \[f(\alpha,z):=\Big{\langle}\mathrm{s}(t),\hat{\mathcal{F}}\big{(}z+\alpha\, \mathrm{c}(t),W(\alpha,z)\big{)}\Big{\rangle}=0. \tag{4.9}\] Here \(\alpha\) is the amplitude of the linearized solution, and \(z\) is a \(0\)-mode adjustment which can be regarded as bringing compression and rarefaction back into balance, as described in [43]. We will see presently that the existence of a solution of (4.9) is a consequence of the genuine nonlinearity of the system, which states that the nonlinear wavespeed depends nontrivially on the state, and is therefore controlled by \(z\). The scalar function \(f(\alpha,z)\) given by (4.9) is defined on the neighborhood \(\mathcal{U}\), which with a slight abuse of notation can be regarded as \(\mathcal{U}\subset\mathbb{R}^{2}\), so we write \[f:\mathcal{U}\subset\mathbb{R}^{2}\to\mathbb{R},\quad\text{with}\quad f(0,0)=0.\] As in our description of the Liapunov-Schmidt method, we would like to apply the implicit function theorem to \(f\), to get a curve \(z=z(\alpha)\) on which \(f\big{(}\alpha,z(\alpha)\big{)}=0\). We cannot apply this directly, because \(\frac{\partial f}{\partial z}\big{|}_{(0,0)}=0\), since all \(0\)-modes are killed by the projection \(\frac{\mathcal{I}-\mathcal{R}}{2}\). Thus we consider the second derivative \(\frac{\partial^{2}f}{\partial z\partial\alpha}\big{|}_{(0,0)}\), and if this is nonzero, we can conclude the existence of a solution. 
One way of effectively calculating the second derivative is to replace \(f\) with the function \[\begin{split} g(\alpha,z)&:=\frac{1}{\alpha}\,f( \alpha,z),\quad\alpha\neq 0,\\ g(0,z)&:=\frac{\partial f}{\partial\alpha}(0,z), \end{split} \tag{4.10}\] which is consistently defined because \(W(0,z)=0\), and so also \(f(0,z)=0\), for all \(z\) near \(0\). It is then clear that \[f(\alpha,z)=0\quad\text{iff}\quad g(\alpha,z)=0\quad\text{for}\quad\alpha\neq 0,\] and it suffices to apply the implicit function theorem to \(g\). To begin, we first show that \(W\) does not essentially affect the argument. **Lemma 4.3**.: _The map \(W(\alpha,z)\) found in Lemma 4.2 satisfies the estimate_ \[W(\alpha,z)=o(|\alpha|),\quad\text{so that}\quad\frac{\partial W}{\partial \alpha}\to 0, \tag{4.11}\] _uniformly for \(z\) in a neighborhood of \(0\)._ Proof.: Since \(\mathcal{F}(z)=0\), we have \[W(\alpha,z)\in\mathcal{H}_{2},\quad\text{with}\quad W(0,z)=0,\] and, setting \(y=z+\alpha\,\mathrm{c}(t)+W(\alpha,z)\), we have by (4.8) \[\Pi\,\mathcal{F}(y)=0\quad\text{identically in }\alpha.\] Differentiating with respect to \(\alpha\) and setting \(\alpha=0\), we get \[0=\frac{\partial\Pi\mathcal{F}(y)}{\partial\alpha}\Big{|}_{\alpha=0} =\Pi\,\frac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,D \mathcal{N}(z)\Big{[}\mathrm{c}(t)+\frac{\partial W}{\partial\alpha}\Big{|}_{ \alpha=0}\Big{]}\] \[=\Pi\,\frac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,D \mathcal{N}(z)\Big{[}\frac{\partial W}{\partial\alpha}\Big{|}_{\alpha=0} \Big{]},\] since, in addition to \(\mathcal{L}_{0}\), \(D\mathcal{N}(z)\) respects modes, although for \(z\neq 0\) the linear wavespeed is changed. Since \(\Pi\,\frac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}:\mathcal{H}_{2} \to\mathcal{H}_{+}\) is invertible, and \(D\mathcal{N}(z)=\mathcal{I}+O(z)\), the result follows. To calculate \(\partial g/\partial z\) at \((0,0)\), we first calculate \(\partial\mathcal{F}/\partial\alpha\) and set \(\alpha=0\). As above, we have \[\frac{\partial\mathcal{F}(y)}{\partial\alpha}\Big{|}_{\alpha=0} =\frac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,D\mathcal{N} \big{(}z+\alpha\,\mathrm{c}(t)+W(\alpha,z)\big{)}\Big{[}\mathrm{c}(t)+\frac{ \partial W}{\partial\alpha}\Big{]}\Big{|}_{\alpha=0}\] \[=\frac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,D\mathcal{N }\big{(}z\big{)}\big{[}\mathrm{c}(t)\big{]},\] where we have used Lemma 4.3, and \(D\mathcal{N}(z)\) is diagonal. Differentiating this in \(z\), setting \(z=0\), and using (4.10) and (4.9), we get \[\frac{\partial g}{\partial z}\Big{|}_{(0,0)}=\Big{\langle}\mathrm{s}(t),\frac {\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,D^{2}\mathcal{N}\big{(}0\big{)} \big{[}1,\mathrm{c}(t)\big{]}\Big{\rangle}.
\tag{4.12}\] From (4.1), we have \(\mathcal{N}=\underline{\mathcal{N}}\,\overline{\mathcal{N}}\), so by the chain rule we have \[D\mathcal{N}(y)[Y]=D\underline{\mathcal{N}}(\overline{\mathcal{N}}y)\big{[}D \overline{\mathcal{N}}(y)[Y]\big{]},\] and this in turn implies \[D^{2}\mathcal{N}(y)\big{[}Y_{1},Y_{2}\big{]} =D^{2}\underline{\mathcal{N}}(\overline{\mathcal{N}}y)\big{[}D \overline{\mathcal{N}}(y)[Y_{1}],D\overline{\mathcal{N}}(y)[Y_{2}]\big{]}\] \[\qquad\qquad+D\underline{\mathcal{N}}(\overline{\mathcal{N}}y) \big{[}D^{2}\overline{\mathcal{N}}(y)[Y_{1},Y_{2}]\big{]},\] and since \(D\mathcal{N}=\mathcal{I}+O(\alpha)\), when we set \(y=0\), \(Y_{1}=1\) and \(Y_{2}=\mathrm{c}(t)\), (4.12) becomes \[\frac{\partial g}{\partial z}\Big{|}_{(0,0)}=\Big{\langle}\mathrm{s}(t), \mathcal{L}_{0}\,D^{2}\underline{\mathcal{N}}(0)\big{[}1,\mathrm{c}(t)\big{]} +\mathcal{L}_{0}\,D^{2}\overline{\mathcal{N}}(0)\big{[}1,\mathrm{c}(t)\big{]} \Big{\rangle}, \tag{4.13}\] where we have dropped \(\frac{\mathcal{I}-\mathcal{R}}{2}\) because \(\mathrm{s}(t)\) is odd. The final step in the proof of existence of space and time periodic solutions by the Liapunov-Schmidt method is to show that this derivative (4.13) is nonzero, \[\frac{\partial g}{\partial z}\Big{|}_{(0,0)}\neq 0. \tag{4.14}\] In order to establish (4.14), referring to (4.4), we need to calculate the second Frechet derivative of the evolution operator, \(D^{2}\mathcal{E}^{\theta}(y^{0})\big{[}Y^{0}_{1},Y^{0}_{2}\big{]}\). To do this, we use the solution formula (2.17), (2.18), (2.19). **Lemma 4.4**.: _The linearization (or Frechet derivative) of the evolution operator \(\mathcal{E}^{\theta}\) given by (2.17) at \(y^{0}\) in the direction \(Y^{0}\) is given by_ \[D\mathcal{E}^{\theta}(y^{0})\big{[}Y^{0}\big{]}=\mathcal{S}_{\phi}\big{[}Y^{0} \big{]}+\mathcal{S}_{\phi}\Big{[}\frac{d}{dt}y^{0}\Big{]}\cdot D\phi(y^{0}) \big{[}Y^{0}\big{]}, \tag{4.15}\] _where the linearization of the shift is given by_ \[D\phi(y^{0})\big{[}Y^{0}\big{]}=\sigma^{\prime}\big{(}y^{0}(\tau_{0})\big{)} \,\int_{\theta}^{0}\tfrac{\mathcal{I}+\mathcal{R}}{2}\Big{(}Y(\eta,\tau_{\eta })+\frac{\partial y}{\partial t}\Big{|}_{(\eta,\tau_{\eta})}\,D\tau_{\eta} \big{[}Y^{0}\big{]}\Big{)}\;d\eta, \tag{4.16}\] _with \(Y\) the linearized evolution of \(Y^{0}\), given in (4.22) below. The second Frechet derivative is given by_ \[\begin{split}D^{2}\mathcal{E}^{\theta}(y^{0})\big{[}Y_{1}^{0},Y_ {2}^{0}\big{]}=&\mathcal{S}_{\phi}\Big{[}\frac{d}{dt}Y_{2}^{0} \Big{]}\,D\phi(y^{0})\big{[}Y_{1}^{0}\big{]}+\mathcal{S}_{\phi}\Big{[}\frac{d} {dt}Y_{1}^{0}\Big{]}\,D\phi(y^{0})\big{[}Y_{2}^{0}\big{]}\\ &+\mathcal{S}_{\phi}\Big{[}\frac{d}{dt}y^{0}\Big{]}\,D^{2}\phi(y^ {0})\big{[}Y_{1}^{0},Y_{2}^{0}\big{]}.\end{split} \tag{4.17}\] Proof.: Fix the reference point \((x,t)\). By the solution formula (2.17), the evolution propagates the data along characteristics, \[\mathcal{E}^{x}(y^{0})(t)=\mathcal{S}_{\tau_{0}}\big{[}y^{0}\big{]}(t)=y^{0} \big{(}\tau_{0}(x,t)\big{)}, \tag{4.18}\] where the characteristic field \(\tau_{\xi}=\tau_{\xi}(x,t)\) through \((x,t)\) satisfies \[\tau_{\xi}=t+\int_{x}^{\xi}\sigma\big{(}y(\eta,\tau_{\eta})\big{)}\;d\eta. \tag{4.19}\] Now perturb the data \(y^{0}\) by \(Y^{0}\), and let \(\tau_{0}+T_{0}\) denote the corresponding perturbed characteristic, so that \[\mathcal{E}^{x}(y^{0}+Y^{0})=\mathcal{S}_{\tau_{0}+T_{0}}\big{[}y^{0}+Y^{0} \big{]}=\mathcal{S}_{\tau_{0}}\big{[}y^{0}\big{]}+\mathcal{S}_{\tau_{0}}\big{[} Y^{0}\big{]}+T_{0}\,\mathcal{S}_{\tau_{0}}\Big{[}\frac{d}{dt}y^{0}\Big{]}+o \big{(}\|Y^{0}\|\big{)},\] where we have taken \(T_{0}=O(\|Y^{0}\|)\) because the characteristic field (4.19) is regular. Taking the limit of small \(\|Y^{0}\|\) then yields the linearization \[D\mathcal{E}^{x}(y^{0})\big{[}Y^{0}\big{]}=\mathcal{S}_{\tau_{0}}\big{[}Y^{0} \big{]}+D\tau_{0}(y^{0})\big{[}Y^{0}\big{]}\cdot\mathcal{S}_{\tau_{0}}\Big{[} \frac{d}{dt}y^{0}\Big{]}, \tag{4.20}\] where again the reference point for \(\tau_{0}\) and \(D\tau_{0}\) is \((x,t)\).
To linearize the characteristic field, fix the reference point \((x,t)\), and let \(\tau_{\xi}+T_{\xi}\) denote the perturbed characteristic field (4.19), so that \[\tau_{\xi}+T_{\xi}=t+\int_{x}^{\xi}\sigma\big{(}(y+Y)(\eta,\tau_{\eta}+T_{\eta })\big{)}\;d\eta,\] and subtracting (4.19) gives \[T_{\xi}(x,t)=\int_{x}^{\xi}\sigma\big{(}(y+Y)(\eta,\tau_{\eta}+T_{\eta})\big{)} -\sigma\big{(}y(\eta,\tau_{\eta})\big{)}\;d\eta,\] so we must linearize the integrand. By (2.6), we have \(\sigma(y)=\big{(}1+\frac{\mathcal{I}+\mathcal{R}}{2}y\big{)}^{-\nu}\), and so \[\sigma\big{(}(y+Y)(\eta,\tau_{\eta}+T_{\eta})\big{)}-\sigma\big{(} y(\eta,\tau_{\eta})\big{)}\] \[\qquad\approx\sigma^{\prime}\big{(}y(\eta,\tau_{\eta})\big{)}\, \tfrac{\mathcal{I}+\mathcal{R}}{2}\Big{(}Y(\eta,\tau_{\eta}+T_{\eta})+y(\eta, \tau_{\eta}+T_{\eta})-y(\eta,\tau_{\eta})\Big{)}\] with \(\sigma^{\prime}(y):=-\nu\left(1+\frac{\mathcal{I}+\mathcal{R}}{2}y\right)^{- \nu-1}\). To linearize we drop higher order terms, which results in \[\approx\sigma^{\prime}\big{(}y(\eta,\tau_{\eta})\big{)}\,\tfrac{\mathcal{I}+ \mathcal{R}}{2}\Big{(}Y(\eta,\tau_{\eta})+\frac{\partial y}{\partial t}\Big{|} _{(\eta,\tau_{\eta})}\,D\tau_{\eta}[Y^{0}]\Big{)}, \tag{4.21}\] where \(Y\) is the linearized evolution of \(Y^{0}\), namely \[Y(\eta,\tau_{\eta}) :=D\mathcal{E}^{\eta}(y^{0})\big{[}Y^{0}\big{]}(\tau_{\eta})\] \[=\mathcal{S}_{\tau_{0}(\eta,\tau_{\eta})}\big{[}Y^{0}\big{]} +D\tau_{0}(\eta,\tau_{\eta})(y^{0})\big{[}Y^{0}\big{]}\cdot\mathcal{S}_{\tau_ {0}(\eta,\tau_{\eta})}\Big{[}\frac{d}{dt}y^{0}\Big{]}\] \[=\mathcal{S}_{\tau_{0}}\big{[}Y^{0}\big{]}+D\tau_{0}(y^{0}) \big{[}Y^{0}\big{]}\cdot\mathcal{S}_{\tau_{0}}\Big{[}\frac{d}{dt}y^{0}\Big{]}. \tag{4.22}\] Here \(\tau_{\eta}\) has the reference point \((x,t)\), and we have used the important group property (2.16), so that \[\tau_{0}\big{(}\eta,\tau_{\eta}(x,t)\big{)}=\tau_{0}(x,t),\] so \(\tau_{0}\) has simplified reference point \((x,t)\). We use this again in (4.21), to get the first term \[\sigma^{\prime}\big{(}y(\eta,\tau_{\eta})\big{)}=\sigma^{\prime}\big{(}y^{0} \big{(}\tau_{0}(\eta,\tau_{\eta}(x,t))\big{)}\big{)}=\sigma^{\prime}\big{(}y^{ 0}\big{(}\tau_{0}(x,t)\big{)}\big{)};\] although we can similarly simplify the higher order \(\frac{\partial y}{\partial t}\) term, this derivative is not preserved along characteristics, and we have no need to do so. It now follows from (4.21) that the linearization of the characteristic field satisfies the linear integral equation \[D\tau_{\xi}[Y^{0}]=\sigma^{\prime}\big{(}y^{0}(\tau_{0})\big{)}\,\int_{x}^{\xi }\tfrac{\mathcal{I}+\mathcal{R}}{2}\Big{(}Y(\eta,\tau_{\eta})+\frac{\partial y }{\partial t}\Big{|}_{(\eta,\tau_{\eta})}\,D\tau_{\eta}[Y^{0}]\Big{)}\;d\eta, \tag{4.23}\] with \(Y\) given by (4.22), and since \(\phi(t)=\tau_{0}(\theta,t)\), substituting in gives (4.16). We now wish to differentiate \(D\mathcal{E}^{\theta}\) a second time. We fix \(Y_{2}^{0}\) and perturb \(y^{0}\) by \(Y_{1}^{0}\), and again let \(\phi+\Phi\) denote the perturbed shift.
Then by (4.15) we have \[\begin{split} D\mathcal{E}^{\theta}(y^{0}&+Y_{1}^{0 })\big{[}Y_{2}^{0}\big{]}=\mathcal{S}_{\phi+\Phi}\big{[}Y_{2}^{0}\big{]}\\ &+\mathcal{S}_{\phi+\Phi}\Big{[}\frac{d}{dt}\big{(}y^{0}+Y_{1}^{0 }\big{)}\Big{]}\,D(\phi+\Phi)(y^{0}+Y_{1}^{0})\big{[}Y_{2}^{0}\big{]},\end{split}\] and subtracting (4.15) (evaluated at \(Y_{2}^{0}\)) yields, after rearranging, \[\begin{split} D\mathcal{E}^{\theta}(y^{0}&+Y_{1}^{0 })\big{[}Y_{2}^{0}\big{]}=\mathcal{S}_{\phi+\Phi}\big{[}Y_{2}^{0}\big{]}- \mathcal{S}_{\phi}\big{[}Y_{2}^{0}\big{]}\\ &+\mathcal{S}_{\phi+\Phi}\Big{[}\frac{d}{dt}Y_{1}^{0}\Big{]}\,D( \phi+\Phi)(y^{0}+Y_{1}^{0})\big{[}Y_{2}^{0}\big{]}\\ &+\Big{(}\mathcal{S}_{\phi+\Phi}-\mathcal{S}_{\phi}\Big{)}\Big{[} \frac{d}{dt}y^{0}\Big{]}\,D(\phi+\Phi)(y^{0}+Y_{1}^{0})\big{[}Y_{2}^{0}\big{]} \\ &+\mathcal{S}_{\phi}\Big{[}\frac{d}{dt}y^{0}\Big{]}\,\Big{(}D( \phi+\Phi)(y^{0}+Y_{1}^{0})\big{[}Y_{2}^{0}\big{]}-D\phi(y^{0})\big{[}Y_{2}^{0 }\big{]}\Big{)}.\end{split}\] Now again taking the limit of small \(\|Y_{1}^{0}\|\), we retain only linear terms (or terms bilinear in \([Y_{1}^{0},Y_{2}^{0}]\)) to get \[\begin{split}D^{2}\mathcal{E}^{\theta}(y^{0})\big{[}Y_{1}^{0},Y_ {2}^{0}\big{]}=&\mathcal{S}_{\phi}\Big{[}\frac{d}{dt}Y_{2}^{0} \Big{]}\,D\phi(y^{0})\big{[}Y_{1}^{0}\big{]}+\mathcal{S}_{\phi}\Big{[}\frac{d} {dt}Y_{1}^{0}\Big{]}\,D\phi(y^{0})\big{[}Y_{2}^{0}\big{]}\\ &+\mathcal{S}_{\phi}\Big{[}\frac{d}{dt}y^{0}\Big{]}\,D^{2}\phi(y^ {0})\big{[}Y_{1}^{0},Y_{2}^{0}\big{]},\end{split}\] which is (4.17). To complete the solution of the bifurcation equation, we now isolate the terms appearing in equations (4.12) and (4.13). We state the result in slightly generalized form so that we can apply it again later in a more general context. The first argument 1 in the bilinear operator below corresponds to differentiation of the \(0\)-mode, that is \(\frac{\partial}{\partial z}\). **Corollary 4.5**.: _Suppose, as in (4.4), that the nonlinear operator \(\mathcal{N}^{\theta}\) is defined by_ \[\mathcal{N}^{\theta}(y):=\mathcal{L}^{\theta-1}\,\mathcal{E}^{\theta}(y),\quad \text{where}\quad\mathcal{L}^{\theta}:=D\mathcal{E}^{\theta}(0)=\mathcal{S}^{ \theta}, \tag{4.24}\] _and let \(\mathcal{B}\) be any fixed constant invertible linear operator which preserves \(k\)-modes. Then \(\mathcal{B}^{-1}\,\mathcal{N}^{\theta}\,\mathcal{B}\) is twice Frechet differentiable at \(0\), with_ \[\begin{split} D\big{(}\mathcal{B}^{-1}\,\mathcal{N}^{\theta}\, \mathcal{B}\big{)}(0)&=\mathcal{I},\quad\text{and}\\ D^{2}\big{(}\mathcal{B}^{-1}\,\mathcal{N}^{\theta}\,\mathcal{B} \big{)}(0)[1,Y^{0}]&=\nu\,\theta\,\mathcal{B}1\,\frac{d}{dt}Y^{0}.\end{split} \tag{4.25}\] Note that in all contexts in which we apply this corollary, including (4.4), where \(\mathcal{B}=\mathcal{J}\,\mathcal{L}^{\overline{\theta}}\), the operator \(\mathcal{B}\) is a combination of linear evolutions \(\mathcal{L}\) and jumps \(\mathcal{J}\), for which we have \(\mathcal{B}1=1\). Proof.: Setting \(y^{0}=0\) in (4.19), (2.19), we get \[\tau_{\xi}(x,t)=t+\xi-x\quad\text{and}\quad\phi(t)=t-\theta,\] and using (4.15), we get \[D\mathcal{E}^{\theta}(0)=\mathcal{S}_{\phi}=\mathcal{S}^{\theta}=\mathcal{L}^ {\theta},\quad\text{so also}\quad D\mathcal{N}^{\theta}(0)=\mathcal{I},\] and the first part of (4.25) follows.
Next, since \(\mathcal{B}\) is fixed, taking \(y^{0}=0\), \(Y^{0}_{1}=1\) and \(Y^{0}_{2}=Y^{0}\) in (4.17), only the first term persists, and we get \[D^{2}\big{(}\mathcal{B}^{-1}\,\mathcal{N}^{\theta}\,\mathcal{B} \big{)}(0)\big{[}1,Y^{0}\big{]} =\mathcal{B}^{-1}\,\mathcal{L}^{\theta-1}\,\Big{\{}\mathcal{S}_{ \phi}\Big{[}\frac{d}{dt}\mathcal{B}\,Y^{0}\big{]}\cdot D\phi(\mathcal{B}0)[ \mathcal{B}1]\Big{\}}\] \[=\frac{d}{dt}Y^{0}\cdot D\phi(0)[\mathcal{B}1],\] since \(\mathcal{B}\) and \(\mathcal{L}^{\theta}\) preserve \(0\)-modes, and \(\mathcal{L}^{\theta}=\mathcal{S}_{\phi}\); here \(\mathcal{B}\square\) denotes the result of \(\mathcal{B}\) acting on \(\square\). Finally, setting \(y^{0}=0\) and \(Y^{0}_{1}=1\) in (4.16), we get \[D\phi(0)[\mathcal{B}1]=\sigma^{\prime}(0)\int_{\theta}^{0}\frac{\mathcal{I}+ \mathcal{R}}{2}\mathcal{B}1\;d\xi=\nu\,\theta\,\mathcal{B}1,\] which yields (4.25). We can now state our first theorem on the simplest space and time periodic solutions of the compressible Euler equations. **Theorem 4.6**.: _Let \((\underline{\theta},\overline{\theta})\in\mathbb{R}^{2}\) be given such that the divisors \(\delta_{k}\) given by (3.5), with \(J\) defined by (3.6), are nonzero for all \(k\geq 2\). Then there is a number \(\overline{\alpha}_{1}>0\) and \(C^{2}\) function \(z=z(\alpha)\) satisfying \(z(0)=0\), such that for each \(\alpha\in(-\overline{\alpha}_{1},\overline{\alpha}_{1})\), the even function_ \[y^{0}(t)=\alpha\,\mathrm{c}(t)+z(\alpha)+W\big{(}\alpha,z(\alpha)\big{)}\in \mathcal{H}_{1}\oplus\mathcal{H}_{2},\] _is a solution of the equation_ \[\mathcal{F}(y^{0})=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\, \underline{\mathcal{N}}\,\overline{\mathcal{N}}\;y^{0}=0.\] _By reflection symmetry this defines a minimal tile which generates a classical space and time periodic solution of the compressible Euler equations with stationary square wave entropy profile._ Thus we have a one-parameter family of solutions, which are perturbations of a cosine \(1\)-mode, parameterized by the amplitude \(\alpha\) of the \(1\)-mode of the solution. Our identification and use of symmetry has turned the problem into a regular bifurcation problem which has been handled by a modified Liapunov-Schmidt reduction, which has been explicitly carried out here. This has led both to an easier proof and to a more robust result than we would get by applying the Nash-Moser method, which would require expunging resonant parameter values to obtain diophantine conditions, rather than the simpler irrationality condition, cf. [43]. Proof.: We use Lemma 4.2 to find the function \(W\) which solves the auxiliary equation, and in order to complete the proof we must solve the bifurcation equation (4.9). To do so it suffices to solve \(g(\alpha,z)=0\), where \(g\) is given by (4.10). Since \(g(0,0)=0\), and the partial derivative \(g_{z}(0,0)\) is given by (4.13), we get a function \(z(\alpha)\) from the implicit function theorem if we can show that \(g_{z}(0,0)\neq 0\). Using (4.25) with \(Y^{0}=\operatorname{c}(t)\) and \(\mathcal{B}1=1\) in (4.13), we get \[\frac{\partial g}{\partial z}\Big{|}_{(0,0)}=-\nu\left(\underline{\theta}+ \overline{\theta}\right)\big{\langle}\operatorname{s}(t),\mathcal{L}_{0} \operatorname{s}(t)\big{\rangle}\neq 0, \tag{4.26}\] because \(\big{\langle}\operatorname{s}(t),\mathcal{L}_{0}\operatorname{c}(t)\big{\rangle}=0\) and \(\mathcal{L}_{0}\) is invertible on \(1\)-modes, which verifies condition (4.14) and completes the Liapunov-Schmidt argument.
In fact, using Lemma 3.1 it is easy to calculate \[\big{\langle}\operatorname{s}(t),\mathcal{L}_{0}\operatorname{s}(t)\big{\rangle} =\big{(}\begin{array}{cc}-1&0\end{array}\big{)}\,R(\underline{ \theta})\,M(J)\,R(\overline{\theta})\left(\begin{array}{c}0\\ 1\end{array}\right)\] \[=\operatorname{c}\underline{\theta}\operatorname{s}\overline{ \theta}+J\operatorname{s}\underline{\theta}\operatorname{c}\overline{\theta}= \frac{\operatorname{c}\underline{\theta}}{\operatorname{s}\overline{\theta}} \neq 0,\] by our choice of \((\underline{\theta},\overline{\theta})\), and where we have used (3.6). Thus the function \(z(\alpha)\) is determined and the proof is complete. Theorem 4.6 provides the first proof of the existence of a space and time periodic solution of the compressible Euler equations exhibiting sustained nonlinear interactions. This completes the author's initial program proposed in [36, 39]. Indeed, to the author's knowledge, this also represents the first global existence theorem for a non-monotone solution of the \(3\times 3\) compressible Euler equations having large (spatial) total variation. ## 5. Higher Mode Periodic Solutions We now consider the divisors \(\delta_{k}(\overline{\theta},\underline{\theta},J;T)\) as a function of time period \(T\), while fixing the entropy field parameters \(\overline{\theta}\), \(\underline{\theta}\), and \(J\). The divisor \(\delta_{k}\) is again given by (3.5), but the functional dependence is more complicated because the variable \(T\) appears in each evolution component \(R(k\frac{2\pi}{T}\theta)\). We thus consider the divisors \[\delta_{k}(\Theta;T)=\big{(}\begin{array}{cc}0&1\end{array}\big{)}\ P^{-k} \,R(k\,\tfrac{2\pi}{T}\,\underline{\theta})\,M(J)\,R(k\,\tfrac{2\pi}{T}\, \overline{\theta})\left(\begin{array}{c}1\\ 0\end{array}\right), \tag{5.1}\] in which we regard \(\Theta:=(\underline{\theta},J,\overline{\theta})\) as fixed, and look for _all_ values of \(T>0\) for which \(\delta_{k}=0\). We begin by recalling the redundancy in our basis vectors (3.4): that is, \[\mathcal{T}_{k}(T)=\big{(}\begin{array}{cc}\operatorname{c}\bigl{(}k\tfrac {2\pi}{T}t\bigr{)}&\operatorname{s}\bigl{(}k\tfrac{2\pi}{T}t\bigr{)}\end{array} \big{)}=\mathcal{T}_{jk}(jT),\] which immediately yields (since \(P^{2}=-I\)) \[\delta_{k}\big{(}\Theta;T\big{)}=-\delta_{k-2}\Big{(}\Theta;\frac{k-2}{k}\,T \Big{)},\] and which in turn implies \[\delta_{2j-1}(\Theta;T\big{)} =(-1)^{j-1}\,\delta_{1}\Big{(}\Theta;\frac{1}{2j-1}\,T\Big{)}, \quad\text{and} \tag{5.2}\] \[\delta_{2j}(\Theta;T\big{)} =(-1)^{j}\,\delta_{2}\Big{(}\Theta;\frac{1}{j}\,T\Big{)},\] for odd and even indices, respectively. It thus suffices to find all solutions to \(\delta_{1}=0\) and \(\delta_{2}=0\), and scaling then yields all solutions of \(\delta_{k}=0\). Because each \(2\times 2\) matrix in (5.1) is invertible, the intermediate vectors in the composition (5.1) never vanish, and it suffices to consider the _angle_ made by these vectors in \(\mathbb{R}^{2}\): these are \[0\to k\,\tfrac{2\pi}{T}\,\overline{\theta}\to\gamma\to\gamma+k\,\tfrac{2\pi}{ T}\,\underline{\theta},\] where \(\gamma\) is the angle obtained after applying the jump corresponding to \(M(J)\) to the angle \(k\,\tfrac{2\pi}{T}\,\overline{\theta}\).
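Numerically, the reduction to \(\delta_{1}\) and \(\delta_{2}\) is easy to see by scanning in \(T\). The sketch below reuses the matrix conventions assumed earlier (rotation \(R\), \(M(J)={\rm diag}(1,J)\), \(P=R(\pi/2)\)) and simply brackets sign changes of \(T\mapsto\delta_{k}(\Theta;T)\) from (5.1); the parameter values are arbitrary. The final line checks the scaling relation (5.2) in the instance \(\delta_{3}(\Theta;T)=-\delta_{1}(\Theta;T/3)\).

```python
import numpy as np

def R(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def delta_T(k, T, th_lo, th_hi, J):
    """delta_k(Theta; T) from (5.1), with R a rotation, M(J) = diag(1, J), P = R(pi/2)."""
    w = k * 2 * np.pi / T
    v = R(w * th_hi) @ np.array([1.0, 0.0])
    v = R(w * th_lo) @ np.diag([1.0, J]) @ v
    return (np.linalg.matrix_power(R(np.pi / 2), 4 - (k % 4)) @ v)[1]   # apply P^{-k}

th_lo, th_hi, J = 0.7, 0.9, 2.0     # arbitrary illustrative parameters
for k in (1, 2):
    Ts = np.linspace(0.5, 20.0, 4000)
    ds = [delta_T(k, T, th_lo, th_hi, J) for T in Ts]
    zeros = [round(T, 2) for T, d0, d1 in zip(Ts, ds, ds[1:]) if d0 * d1 < 0]
    print(k, zeros)                  # brackets for the roots of delta_k in T

T = 7.0
print(delta_T(3, T, th_lo, th_hi, J), -delta_T(1, T / 3, th_lo, th_hi, J))  # equal, by (5.2)
```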
We simplify by making the substitutions \[\theta:=\frac{\overline{\theta}}{\underline{\theta}+\overline{\theta}},\quad \omega:=k\,\tfrac{2\pi}{T}\,(\underline{\theta}+\overline{\theta}), \tag{5.3}\] which yield \[k\,\tfrac{2\pi}{T}\,\overline{\theta}=\omega\,\theta\quad\text{and}\quad k\, \tfrac{2\pi}{T}\,\underline{\theta}=\omega\,(1-\theta),\] and we define the function \[h:\mathbb{R}_{+}\times\mathbb{R}\to\mathbb{R}\quad\text{by}\quad h(J,x):= \arctan\big{(}J\,\tan x\big{)},\] chosen so that \(h\) is smooth. The angles then vary from \[0\to\omega\,\theta\to\gamma\to\gamma+\omega\,(1-\theta),\quad\text{with}\quad \gamma:=h(J,\omega\,\theta).\] To be precise, for \(J>0\), we define \[h(J,x):=\begin{cases}\text{Arctan}(J\,\tan x\big{)}+k\,\pi,&-\pi/2<x-k\,\pi< \pi/2,\\ x&x=k\,\pi\pm\pi/2,\end{cases} \tag{5.4}\] where \(\text{Arctan}(\square)\in(-\pi/2,\pi/2)\) is the principal branch. Geometrically, \(h\) gives the angle that results from a vector with angle \(x\) being acted upon by \(M(J)\); in particular, \(h(J,x)\) is always in the same quadrant as \(x\), and \(h(J,x)=x\) for \(x\) on the coordinate axes. We can now give the angle associated with rotations \(x_{1}\) and \(x_{2}\) separated by jump \(J\), namely \[R(x_{2})\,M(J)\,R(x_{1})\left(\begin{array}{c}1\\ 0\end{array}\right)\quad\text{is represented by}\quad x_{2}+h(J,x_{1}). \tag{5.5}\] This is illustrated in Figure 2 below for a more general piecewise constant entropy profile. Using (5.3), and referring to (5.1), we now see that for any \(\Theta\), \(T\) provides a solution of \(\delta_{k}(\Theta;T)=0\) if the angle \(\gamma+\omega\,(1-\theta)\) points along the \(k\)-th coordinate axis, counting anti-clockwise: that is, if \[\omega\,(1-\theta)+\gamma=k\,\frac{\pi}{2},\quad\text{with}\quad\gamma=h(J, \omega\,\theta).\] More precisely, we define the \(k\)_-th base frequency_ \(\omega^{(k)}=\omega^{(k)}(\Theta)\) to be the solution of the equation \[\omega^{(k)}\,(1-\theta)+h\big{(}J,\omega^{(k)}\,\theta\big{)}=k\,\frac{\pi}{2}. \tag{5.6}\] Note that the parity of \(k\) determines which axis is met; this is consistent with (5.2). **Lemma 5.1**.: _For any fixed positive step entropy parameter \(\Theta=(\underline{\theta},J,\overline{\theta})\), and for any \(k\geq 1\), there is a unique base reference period \(T^{(k)}\), such that the \(k\)-mode \(\mathcal{T}_{k}\big{(}T^{(k)}\big{)}\) has vanishing multiplier,_ \[\delta_{k}\big{(}\Theta;T^{(k)}\big{)}=0.\] _If in addition this \(k\)-mode is nonresonant, that is_ \[\delta_{j}\big{(}\Theta;T^{(k)}\big{)}\neq 0\quad\text{for all}\quad j\geq 1,\quad j\neq k, \tag{5.7}\] _then the corresponding \(k\)-mode solution of the linearized equation perturbs to a pure tone solution of the nonlinear Euler equations with reference period \(T=T^{(k)}\)._ Proof.: We begin with the observation that the function \(h\) given by (5.4) is a smooth function, with \[\frac{\partial h}{\partial x}=\frac{J\,(1+\operatorname{t}^{2}(x))}{1+J^{2} \operatorname{t}^{2}(x)}>0,\qquad\frac{\partial h}{\partial J}=\frac{ \operatorname{t}(x)}{1+J^{2}\operatorname{t}^{2}(x)}, \tag{5.8}\] where \(\operatorname{t}:=\tan\). In particular \(h\) is strictly increasing, one-to-one and onto, and so smoothly invertible, as a function of \(x\), and invertible as a function of \(J\) as long as \(\operatorname{t}(x)\neq 0,\;\pm\infty\), that is as long as \(x\neq j\frac{\pi}{2}\).
Considering (5.6), we define \[f\big{(}\omega,J,\theta\big{)}:=\omega\,(1-\theta)+h(J,\omega\,\theta),\] so \(\omega^{(k)}\) solves \(f(\omega,J,\theta)=k\,\frac{\pi}{2}\). Since \(0<\theta<1\), \(f\) is a sum of two smooth strictly monotone increasing and surjective functions, so there is a unique positive solution \(\omega^{(k)}\) of (5.6). Using (5.3), it follows that our desired reference period is \[T^{(k)}:=k\,\frac{2\pi}{\omega^{(k)}}\,(\underline{\theta}+\overline{\theta}). \tag{5.9}\] The proof that nonresonant modes perturb follows in the same manner as Theorem 4.6 above. We omit the details because we treat a more general case in Theorem 6.2 below. We now examine the nonresonance condition (5.7) in more detail. The reference period \(T^{(k)}\) is defined by (5.9), (5.6), and satisfies \(\delta_{k}\big{(}\Theta;T^{(k)}\big{)}=0\). The \(j\)-th mode is resonant with the given \(k\)-mode, if also \(\delta_{j}\big{(}\Theta;T^{(k)}\big{)}=0\). According to (5.1), (5.3), (5.5), it follows that the angle associated with the \(j\)-mode, namely \[j\,\frac{2\pi}{T^{(k)}}\,\overline{\theta}=\frac{j}{k}\,\omega^{(k)}\,\theta, \qquad j\,\frac{2\pi}{T^{(k)}}\,\underline{\theta}=\frac{j}{k}\,\omega^{(k)} \,(1-\theta),\] must also be a solution of (5.6), say \[q\,\omega^{(k)}=\omega^{(p)},\quad\text{with}\quad q:=\frac{j}{k}\in\mathbb{Q} _{+}. \tag{5.10}\] This states that the \(j\)-th and \(k\)-th modes (with respect to reference period \(T^{(k)}\)) are in resonance, if and only if the distinct roots \(\omega^{(k)}\) and \(\omega^{(p)}\) are rationally related by (5.10). Finally, note that because there are countably many solutions \(\{\omega^{(k)}\}\) of (5.6), rational combinations of these span a "small" set over the whole solution space, and so for a generic \(\Theta\), all modes will be nonresonant and so perturb to a finite amplitude nonlinear solution. **Theorem 5.2**.: _Let the single step function entropy profile be parameterized by \(\Theta=(\underline{\theta},J,\overline{\theta})\in\mathbb{R}_{+}^{3}\). Generically, such entropy profiles are totally nonresonant, in that all \(k\)-modes with associated reference period \(T^{(k)}\) are nonresonant, and so perturb to finite amplitude smooth solutions of (2.29), which in turn generate pure tone periodic solutions of the nonlinear system (2.1). That is, there is a set \(\widehat{\mathcal{Z}}\subset\mathbb{R}_{+}^{3}\), which is both measure zero and meagre, such that any \(\Theta\notin\widehat{\mathcal{Z}}\) is totally nonresonant._ Proof.: We work with the reduced parameters \((\omega,J,\theta)\), given by (5.3), for which redundancies due to scale-invariance of the system (2.1) have been removed. The condition that the \(j\)-th mode be resonant with the \(k\)-th mode, with respect to reference period \(T^{(k)}\), is (5.10). Thus for fixed integers \(j\), \(k\) and \(p\), we define the set \[\mathcal{Z}_{k,j,p}:=\Big{\{}(J,\theta)\,\Big{|}\,\omega^{(p)}=q\,\omega^{(k) }\Big{\}},\quad q:=\frac{j}{k},\] where \(\omega^{(k)}=\omega^{(k)}(J,\theta)\) is regarded as a known function defined by (5.6). We show that \(\mathcal{Z}_{k,j,p}\) is a smooth one-dimensional manifold in \(\mathbb{R}_{+}^{2}\). We first make our description of \(\omega^{(k)}\) more explicit, by noting that we can write \(\omega=\omega^{(k)}\) in (5.6) as \[\omega=x-h(J,x)+k\,\frac{\pi}{2},\qquad\omega\,\theta=x, \tag{5.11}\] and treating \(x\) as a variable with \((J,\theta)\) fixed. Thus \((x,\omega)\) is the point of intersection of two explicit, regular curves. 
The slopes of these curves are \[1-h_{x}<1\quad\text{and}\quad\frac{1}{\theta}>1,\] respectively, so there is a unique solution, as expected. From (5.11) we write \[\omega^{(k)} =x^{(k)}-h(J,x^{(k)})+k\,\frac{\pi}{2},\qquad\omega^{(k)}\,\theta=x^ {(k)},\] \[\omega^{(p)} =x^{(p)}-h(J,x^{(p)})+p\,\frac{\pi}{2},\qquad\omega^{(p)}\,\theta=x ^{(p)},\] and so \(\mathcal{Z}_{k,j,p}\) becomes the set \[\omega^{(p)}=q\,\omega^{(k)},\qquad x^{(p)}=q\,x^{(k)},\] and \[g(J,x):=h(J,q\,x)-q\,h(J,x)=(p-j)\frac{\pi}{2},\] where we have written \(x:=x^{(k)}\), \(q\,x=x^{(p)}\). To show that \(\mathcal{Z}_{k,j,p}\) is a submanifold, it suffices to show that \(\nabla g\neq 0\) on \(\left\{g=(p-j)\frac{\pi}{2}\right\}\). From (5.8), we calculate \[\frac{\partial g}{\partial x} =\frac{J\left(1+\operatorname{t}^{2}(qx)\right)}{1+J^{2} \operatorname{t}^{2}(qx)}\,q-q\,\frac{J\left(1+\operatorname{t}^{2}(x)\right) }{1+J^{2}\operatorname{t}^{2}(x)}\] \[=\frac{q\,J\left(J^{2}-1\right)\left(\operatorname{t}^{2}(x)- \operatorname{t}^{2}(qx)\right)}{\left(1+J^{2}\operatorname{t}^{2}(qx) \right)\left(1+J^{2}\operatorname{t}^{2}(x)\right)},\] and so \[\frac{\partial g}{\partial x}=0\quad\text{if and only if}\quad\operatorname{t} (qx)=\pm\operatorname{t}(x),\] unless \(J=1\), which is the degenerate isentropic case. Similarly, \[\frac{\partial g}{\partial J} =\frac{\operatorname{t}(qx)}{1+J^{2}\operatorname{t}^{2}(qx)}-q \,\frac{\operatorname{t}(x)}{1+J^{2}\operatorname{t}^{2}(x)}\] \[=\frac{\operatorname{t}(qx)-q\operatorname{t}(x)+J^{2} \operatorname{t}(qx)\operatorname{t}(x)\left(\operatorname{t}(x)-q \operatorname{t}(qx)\right)}{\left(1+J^{2}\operatorname{t}^{2}(qx)\right) \left(1+J^{2}\operatorname{t}^{2}(x)\right)},\] and so if \(\frac{\partial g}{\partial J}=0\) _and_ \(\frac{\partial g}{\partial x}=0\), we must have \(\operatorname{t}(qx)=\pm\operatorname{t}(x)\) and, after simplifying, \[\operatorname{t}(x)\left(\pm 1-q\right)\left(1+J^{2}\operatorname{t}^{2}(x) \right)=0,\quad\text{so that}\quad\operatorname{t}(qx)=\operatorname{t}(x)=0.\] This condition (or \(J=1\)) then implies that \(g=0\neq(p-j)\frac{\pi}{2}\), so that \[\nabla g(J,\theta)\neq 0\quad\text{for all}\quad(J,\theta)\in\mathcal{Z}_{k,j,p}.\] The implicit function theorem now implies that \(\mathcal{Z}_{k,j,p}\) is a smooth submanifold of codimension one. This implies that \(\mathcal{Z}_{k,j,p}\) is both a measure zero and nowhere dense set in \(\mathbb{R}^{2}_{+}\). Since the integers \(k\), \(j\) and \(p\) were arbitrary, it follows that the resonant set \[\mathcal{Z}:=\bigcup_{k,p,j}\mathcal{Z}_{k,p,j},\] being a countable union, is also measure zero and meagre. Thus the complement \(\mathcal{Z}^{c}\), which is the set of _fully nonresonant_ parameters, is generic in both the measure and Baire category senses. Finally, we use scale invariance of the system and (5.3) to transform from \((J,\theta)\) back to \(\Theta=(\underline{\theta},J,\overline{\theta})\) coordinates, which preserves both the measure zero and meagre properties, and thus completes the proof. ## 6. Piecewise constant entropy Having understood the evolution and its linearization for two constant entropy levels separated by a single jump, we now extend the argument to a general piecewise constant entropy profile. We again use the non-dimensional system (2.4), and consider perturbations of the equilibrium state \((w,\overset{\star}{w})=(0,0)\), which corresponds to the quiet state stationary solution \((p,u)=(p_{0},0)\).
Let the entropy profile \(s(x)\) be a piecewise constant function on \([0,\ell]\), continuous at the endpoints \(x=0\) and \(x=\ell\). To be precise, we introduce \(n+1\) _entropy widths_ \(\theta_{m}>0\), \(m=0,\ldots,n\), separated by \(n\) _entropy jumps_ \(J_{m}=e^{-[s]_{m}/2c_{p}}\neq 1\), as above. Assume the same symmetries as before, namely \(w\), \(\overset{\star}{w}\) even/odd in \(t\) and time periodic with period \(T\), together with the spatial/material reflection symmetries (2.20) at \(x=0\) and (2.22) at \(x=\ell\). Then our generalized setup becomes: \(x=0\) is the center of the \(\theta_{0}\) entropy level, boundary condition (2.20) is imposed there, and the initial data is evolved through \(\theta_{0}\) by the nonlinear evolution (2.6). Then inductively, at \(x=\sum_{j=0}^{m-1}\theta_{j}\), we place entropy jump \(\mathcal{J}_{m}\) of size \(J_{m}\), given by (2.7), and evolve this from \(x\) to \(x+\theta_{m}\) by (2.6). After the final jump \(J_{n}\), we evolve the amount \(\theta_{n}\) to \(x=\ell=\sum_{j=0}^{n}\theta_{j}\), where we impose the shifted boundary condition (2.22) which generates a minimal tile. As above, after non-dimensionalization we can leverage the even/odd symmetry to reduce to the nonlocal and nonlinear scalar evolution (2.14) together with jumps (2.24), and our boundary conditions again become (2.25) and (2.28), respectively. Since the entropy field is stationary, the non-dimensionalized jumps \(\mathcal{J}_{m}\) are the same for the nonlinear evolution. Thus our nonlinear equation for \(y^{0}\), given arbitrary piecewise constant entropy profile, which is a direct generalization of (2.29), becomes \[\begin{split}\mathcal{F}(y^{0})=0,\quad y^{0}\text{ even, where}\\ \mathcal{F}(y^{0}):=\frac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S }^{-T/4}\,\mathcal{E}^{\theta_{n}}\,\mathcal{J}_{n}\,\mathcal{E}^{\theta_{n- 1}}\,\ldots\,\mathcal{J}_{1}\,\mathcal{E}^{\theta_{0}}\,y^{0}.\end{split} \tag{6.1}\] Our goal is to show that generically, \(k\)-mode solutions of the corresponding linearized equation are nonresonant, and perturb to finite amplitude pure tone solutions of the nonlinear system. Because the structure is very similar to that of Sections 3 and 4, we can solve the equation (6.1) in much the same way. In particular, Lemmas 3.1 and 4.4 and Corollary 4.5 continue to hold unchanged, and the only essential difference is in the base linear operator \(\mathcal{L}_{0}\), which, however, has a similar structure. **Lemma 6.1**.: _For a given reference period \(T\), and parameters_ \[J:=\big{(}J_{1},\ldots,J_{n}\big{)}\in\mathbb{R}_{+}^{n}\quad\text{and}\quad \Theta:=\big{(}\theta_{0},\ldots,\theta_{n}\big{)}\in\mathbb{R}_{+}^{n+1},\] _let \(\mathcal{F}\) be given by (6.1). Then \(\mathcal{F}(0)=0\), and the Frechet derivative of \(\mathcal{F}\) at \(0\) is_ \[D\mathcal{F}(0)=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0},\quad\text{ that is}\quad D\mathcal{F}(0)\big{[}Y^{0}\big{]}=\tfrac{\mathcal{I}-\mathcal{R}}{2}\, \mathcal{L}_{0}\big{[}Y^{0}\big{]},\] _where \(\mathcal{L}_{0}\) is the invertible linear operator_ \[\mathcal{L}_{0}:=\mathcal{S}^{-T/4}\,\mathcal{L}^{\theta_{n}}\,\mathcal{J}_{n }\,\mathcal{L}^{\theta_{n-1}}\,\dots\,\mathcal{J}_{1}\,\mathcal{L}^{\theta_{0 }}, \tag{6.2}\] _and the input \(Y^{0}\) is an even function of \(t\) with period \(T\).
The nonlinear operator (6.1) can again be factored,_ \[\mathcal{F}=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\,\mathcal{N},\quad\text{where}\quad\mathcal{N}:=\mathcal{N}_{n}\,\mathcal{N}_{n-1}\cdots \mathcal{N}_{0}, \tag{6.3}\] _and each \(\mathcal{N}_{m}\) is a nonlinear operator given by_ \[\mathcal{N}_{m}=\mathcal{B}_{m}^{-1}\,\mathcal{L}^{\theta_{m}-1}\,\mathcal{E} ^{\theta_{m}}\,\mathcal{B}_{m},\] _where \(\mathcal{B}_{m}\) is a bounded linear operator that respects \(k\)-modes, with \(\mathcal{B}_{m}1=1\)._ _The operators \(\mathcal{L}_{0}\) and \(D\mathcal{F}(0)\) respect \(k\)-modes, with_ \[D\mathcal{F}(0)\Big{[}\mathrm{c}\big{(}k\tfrac{2\pi}{T}t\big{)}\Big{]}=\tfrac {\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\Big{[}\mathrm{c}\big{(}k\tfrac{ 2\pi}{T}t\big{)}\Big{]}=\delta_{k}(J,\Theta;T)\,\mathrm{s}\big{(}k\tfrac{2\pi} {T}t\big{)},\] _where the divisors are the scalars_ \[\begin{array}{rll}\delta_{k}(J,\Theta;T):=\big{(}\begin{array}{cc}0&1\end{array} \big{)}P^{-k}\,R(k\tfrac{2\pi}{T}\theta_{n})\,M(J_{n})\,\dots\\ &\dots R(k\tfrac{2\pi}{T}\theta_{1})\,M(J_{1})\,R(k\tfrac{2\pi}{T}\theta_{0}) \left(\begin{array}{c}1\\ 0\end{array}\right),\end{array} \tag{6.4}\] _and the \(2\times 2\) matrices \(P\), \(R(\cdot)\) and \(M(\cdot)\) are defined in Lemma 3.1._ Proof.: It is clear that \(\mathcal{F}(0)=0\), and (6.2) and (6.4) follow directly from (3.1) and Lemma 3.1, as in Corollary 3.2. We show the factorization (6.3) by induction: to begin, set \[\mathcal{B}_{0}:=\mathcal{I},\quad\mathcal{N}_{0}:=\mathcal{L}^{\theta_{0}-1} \,\mathcal{E}^{\theta_{0}},\quad\text{so that}\quad\mathcal{E}^{\theta_{0}}= \mathcal{L}^{\theta_{0}}\,\mathcal{N}_{0}.\] Next, assume inductively that \[\mathcal{N}_{j}=\mathcal{B}_{j}^{-1}\,\mathcal{L}^{\theta_{j}-1}\,\mathcal{E} ^{\theta_{j}}\,\mathcal{B}_{j}\quad\text{for}\quad j<m,\] which implies \[\mathcal{N}_{j}\dots\mathcal{N}_{0}=\mathcal{B}_{j}^{-1}\,\mathcal{L}^{\theta _{j}}{}^{-1}\,\mathcal{E}^{\theta_{j}}\,\mathcal{B}_{j}\,\mathcal{B}_{j-1}^{- 1}\,\mathcal{L}^{\theta_{j-1}}{}^{-1}\,\mathcal{E}^{\theta_{j-1}}\,\mathcal{B} _{j-1}\dots\mathcal{L}^{\theta_{0}-1}\,\mathcal{E}^{\theta_{0}}, \tag{6.5}\] and comparing this to \(\mathcal{F}\), we require each \(\mathcal{B}_{j}\) to satisfy \[\mathcal{B}_{j}\,\mathcal{B}_{j-1}^{-1}\,\mathcal{L}^{\theta_{j-1}}{}^{-1}= \mathcal{J}_{j}.\] Thus, at the inductive step, we choose \[\mathcal{B}_{m}:=\mathcal{J}_{m}\,\mathcal{L}^{\theta_{m-1}}\,\mathcal{B}_{m-1 }\quad\text{and}\quad\mathcal{N}_{m}=\mathcal{B}_{m}^{-1}\,\mathcal{L}^{\theta _{m}-1}\,\mathcal{E}^{\theta_{m}}\,\mathcal{B}_{m}.\] This implies that for each \(m\leq n\), we have \[\mathcal{B}_{m}=\mathcal{J}_{m}\,\mathcal{L}^{\theta_{m-1}}\,\mathcal{J}_{m-1 }\dots\mathcal{L}^{\theta_{1}}\,\mathcal{J}_{1}\,\mathcal{L}^{\theta_{0}},\] and (6.5) then yields \[\begin{split}\mathcal{S}^{-T/4}\,\mathcal{E}^{\theta_{n}}\,\mathcal{ J}_{n}\,\mathcal{E}^{\theta_{n-1}}\,\dots\,\mathcal{J}_{1}\,\mathcal{E}^{ \theta_{0}}&=\mathcal{S}^{-T/4}\,\mathcal{L}^{\theta_{n}}\, \mathcal{B}_{n}\,\mathcal{N}_{n}\,\dots\mathcal{N}_{0}\\ &=\mathcal{L}_{0}\,\mathcal{N}_{n}\,\dots\mathcal{N}_{0},\end{split}\] and (6.3) follows, completing the proof. It follows that, for fixed reference period \(T\), the base linearized operator \(D\mathcal{F}(0)=\frac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}_{0}\) has a \(k\)-mode kernel if and only if \(\delta_{k}(J,\Theta;T)=0\).
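The divisors (6.4) are again finite products of explicit \(2\times 2\) matrices, so they are cheap to evaluate. The following sketch generalizes the earlier one to an arbitrary piecewise constant profile; as before, it assumes the conventions \(R(\theta)\) a rotation, \(M(J)={\rm diag}(1,J)\), \(P=R(\pi/2)\), and the parameter values shown are hypothetical.

```python
import numpy as np

def R(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def delta_k(k, J, Theta, T):
    """Divisor (6.4) for a piecewise constant entropy profile.

    J     = (J_1, ..., J_n):         entropy jumps
    Theta = (theta_0, ..., theta_n): entropy widths
    """
    w = k * 2 * np.pi / T
    v = R(w * Theta[0]) @ np.array([1.0, 0.0])     # evolve through theta_0
    for Jm, th in zip(J, Theta[1:]):
        v = R(w * th) @ np.diag([1.0, Jm]) @ v     # jump M(J_m), then evolve R(w theta_m)
    return (np.linalg.matrix_power(R(np.pi / 2), 4 - (k % 4)) @ v)[1]   # row (0 1) P^{-k}

# example: three entropy levels separated by n = 2 jumps
J, Theta, T = (1.5, 0.6), (0.3, 0.4, 0.3), 4.0
print([round(delta_k(k, J, Theta, T), 4) for k in range(1, 8)])
```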
As above, our purpose is to perturb this to a nontrivial solution \(y^{0}\) of \(\mathcal{F}(y^{0})=0\) with finite amplitude \(\alpha\). We proceed as in the earlier case. For fixed \(T\), denote the zero set of \(\delta_{k}\) by \(\Lambda_{k}=\Lambda_{k}(T)\), that is, set \[\Lambda_{k}:=\big{\{}(J,\Theta)\in\mathbb{R}^{2n+1}\,\big{|}\,\delta_{k} \big{(}J,\Theta;T\big{)}=0\big{\}},\quad k\geq 1, \tag{6.6}\] and we declare a parameter \((J,\Theta)\in\Lambda_{k}\) to be _nonresonant_ if it is in no other zero set \(\Lambda_{j}\), \[(J,\Theta)\in\Lambda_{k},\quad\text{but}\quad(J,\Theta)\notin\Lambda_{j}, \quad j\geq 1,\ j\neq k. \tag{6.7}\] Then, as above, any nonresonant \((J,\Theta)\in\Lambda_{k}\) provides a \(k\)-mode that perturbs to a solution of the nonlinear problem. **Theorem 6.2**.: _Suppose that \(T\) is fixed, and let \((J,\Theta)\in\Lambda_{k}(T)\) be nonresonant. Then the \(k\)-mode \(\alpha\,\mathrm{c}(k\frac{2\pi}{T}t)\) perturbs to a pure tone solution of the nonlinear problem \(\mathcal{F}(y^{0})=0\). More precisely, there is an \(\overline{\alpha}_{k}>0\) and functions_ \[W(\alpha,z)=\sum_{j\geq 1,j\neq k}a_{j}\,\mathrm{c}(j\tfrac{2\pi}{T}t)\quad \text{and}\quad z=z(\alpha)\in\mathbb{R},\] _defined for \(|\alpha|<\overline{\alpha}_{k}\) and some interval \(z\in(-\overline{z}_{k},\overline{z}_{k})\), such that \(W(0,z)=0\), \(z(0)=0\), and_ \[\mathcal{F}\Big{(}\alpha\,\mathrm{c}\big{(}k\tfrac{2\pi}{T}t\big{)}+z(\alpha )+W\big{(}\alpha,z(\alpha)\big{)}\Big{)}=0,\] _and this generates a space and time periodic solution of the compressible Euler equations._ Proof.: The proof proceeds exactly as in that of Theorem 4.6. The Hilbert spaces are defined as in (4.6) and (4.7), with the \(k\)-mode being substituted for the \(1\)-mode. The analog of Lemma 4.2 then follows exactly as above to yield the solution \(W(\alpha,z)\) of the auxiliary equation. The bifurcation equation (4.9) is replaced by \[f(\alpha,z):=\Big{\langle}\mathrm{s}(k\tfrac{2\pi}{T}t),\widehat{\mathcal{F}} \big{(}z+\alpha\,\mathrm{c}(k\tfrac{2\pi}{T}t)+W(\alpha,z)\big{)}\Big{\rangle} =0,\] and we again define \(g\) by (4.10). The analog of (4.13) is \[\frac{\partial g}{\partial z}\Big{|}_{(0,0)}=\Big{\langle}\mathrm{s}\big{(}k \tfrac{2\pi}{T}t\big{)},\sum_{m=0}^{n}\mathcal{L}_{0}\,D^{2}\mathcal{N}_{m}(0) \big{[}1,\mathrm{c}(k\tfrac{2\pi}{T}t)\big{]}\Big{\rangle}, \tag{6.8}\] and Lemma 4.4 and Corollary 4.5 continue to hold. Since \[D^{2}\mathcal{N}_{m}(0)\big{[}1,Y^{0}\big{]}=\nu\,\theta_{m}\,\frac{d}{dt}Y^{0},\] the analog of (4.26) is \[\frac{\partial g}{\partial z}\Big{|}_{(0,0)}=-\nu\,k\tfrac{2\pi}{T}\Big{(} \sum_{m=0}^{n}\theta_{m}\Big{)}\Big{\langle}\mathrm{s}\big{(}k\tfrac{2\pi}{T}t \big{)},\mathcal{L}_{0}\,\mathrm{s}\big{(}k\tfrac{2\pi}{T}t\big{)}\Big{\rangle} \neq 0,\] where again the coefficient is nonzero because \[\Big{\langle}\mathrm{s}\big{(}k\tfrac{2\pi}{T}t\big{)},\mathcal{L}_{0}\, \mathrm{c}\big{(}k\tfrac{2\pi}{T}t\big{)}\Big{\rangle}=0,\] and \(\mathcal{L}_{0}\) is invertible on \(k\)-modes, spanned by \(\mathrm{c}\big{(}k\tfrac{2\pi}{T}t\big{)}\) and \(\mathrm{s}\big{(}k\tfrac{2\pi}{T}t\big{)}\). ### Structure of the sets \(\Lambda_{k}(T)\) In Theorem 6.2, we showed that non-resonant modes perturb, but we have not yet shown the existence of such modes. We now study the structure of the sets \(\Lambda_{k}(T)\), and show that a small perturbation of any piecewise constant entropy profile yields the existence of some \(k\) for which \((J,\Theta)\in\Lambda_{k}\), that is \(\delta_{k}(J,\Theta;T)=0\) by (6.6). Moreover, we can choose this perturbed entropy profile to be nonresonant, so that the corresponding \(k\)-mode solution indeed perturbs.
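A rough numerical version of the nonresonance condition (6.7) can be run directly from the divisor formula: locate a reference period \(T\) with \(\delta_{k}(J,\Theta;T)=0\) by bisection, then confirm that no other divisor vanishes there, up to a mode cutoff. The sketch below is self-contained and repeats the \(2\times 2\) conventions assumed above; the bracket for the sign change and the cutoff \(j<40\) are ad hoc choices, and a genuine verification of (6.7) would of course have to control all \(j\).

```python
import numpy as np

def R(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def delta_k(k, J, Theta, T):
    """Divisor (6.4), with R a rotation, M(J) = diag(1, J), P = R(pi/2)."""
    w = k * 2 * np.pi / T
    v = R(w * Theta[0]) @ np.array([1.0, 0.0])
    for Jm, th in zip(J, Theta[1:]):
        v = R(w * th) @ np.diag([1.0, Jm]) @ v
    return (np.linalg.matrix_power(R(np.pi / 2), 4 - (k % 4)) @ v)[1]

def find_T(k, J, Theta, lo, hi, steps=60):
    """Bisect a sign change of T -> delta_k(J, Theta; T) on [lo, hi]."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if delta_k(k, J, Theta, lo) * delta_k(k, J, Theta, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

J, Theta = (1.5, 0.6), (0.3, 0.4, 0.3)     # hypothetical profile
T1 = find_T(1, J, Theta, 2.0, 6.0)         # delta_1 changes sign on [2, 6] here
others = [abs(delta_k(j, J, Theta, T1)) for j in range(2, 40)]
print(T1, min(others))                     # nonresonant (up to the cutoff) if min > 0
```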
**Lemma 6.3**.: _For each \(k\geq 1\), and fixed \(T>0\), the set \(\Lambda_{k}(T)\) is a \(C^{\infty}\) submanifold of \(\mathbb{R}^{2n+1}\) of codimension 1, and for each \(j\neq k\), the intersection \(\Lambda_{k}\cap\Lambda_{j}\) is a submanifold of \(\mathbb{R}^{2n+1}\) of codimension 2 if it is nonempty._ Proof.: Without loss of generality we can assume \(T=2\pi\). The set \(\Lambda_{k}\) is defined in (6.6) as the zero set of the single \(C^{\infty}\) scalar equation \(\delta_{k}(J,\Theta)=0\), given by (6.4). Thus to show it is a manifold of codimension 1, it suffices to show that the gradient \[\nabla\delta_{k}\neq 0,\qquad\nabla:=\nabla_{(J,\Theta)},\] everywhere on \(\Lambda_{k}\). To isolate the dependence of \(\delta_{k}(J,\Theta)\) on the localized triple \((\theta_{m},J_{m},\theta_{m-1})\), \(m\in\{1,\ldots,n\}\), note that for each such \(m\), (6.4) can be written as \[\delta_{k}(J,\Theta;T)=\underline{z_{m}^{(k)}}^{T}\,M(J_{m})\,\overline{z_{m }^{(k)}}, \tag{6.9}\] where the vectors \(\overline{z_{m}^{(k)}}\) and \(\underline{z_{m}^{(k)}}\) are defined inductively by \[\overline{z_{1}^{(k)}} :=R(k\theta_{0})\left(\begin{array}{c}1\\ 0\end{array}\right),\] \[\overline{z_{m+1}^{(k)}} :=R(k\theta_{m})\,M(J_{m})\,\overline{z_{m}^{(k)}},\quad m=1, \ldots,n-1,\] and \[\underline{z_{n}^{(k)}} :=R(-k\theta_{n})\,P^{k}\,\left(\begin{array}{c}0\\ 1\end{array}\right),\] \[\underline{z_{m-1}^{(k)}} :=R(-k\theta_{m-1})\,M(J_{m})\,\underline{z_{m}^{(k)}},\quad m= n,\ldots,2,\] respectively. Each of \(\overline{z_{m}^{(k)}}\) and \(\underline{z_{m}^{(k)}}\) depends only on those variables that determine the entropy profile to the left or right of the jump \(J_{m}\), respectively. For any fixed \(m\), we further simplify by noting that since rotations and jumps are invertible, the \(z_{m}^{(k)}\)'s never vanish, so we can write them as \[\underline{z_{m}^{(k)}}:=\underline{r_{k}}\left(\begin{array}{c}\mathrm{c} \big{(}\underline{\varphi_{k}}\big{)}\\ \mathrm{s}\big{(}\underline{\varphi_{k}}\big{)}\end{array}\right) \quad\text{and}\quad\overline{z_{m}^{(k)}}:=\overline{r_{k}}\left(\begin{array} []{c}\mathrm{c}\big{(}\overline{\varphi_{k}}\big{)}\\ \mathrm{s}\big{(}\overline{\varphi_{k}}\big{)}\end{array}\right), \tag{6.10}\] for some given angles \(\underline{\varphi_{k}}\) and \(\overline{\varphi_{k}}\), and scalar amplitudes \(\underline{r_{k}}\) and \(\overline{r_{k}}\), where we have simplified the subscripts for readability; recall that each of these is defined for each \(m\).
We calculate the components of the gradient by differentiating (6.9) directly: it is immediate that \[\frac{\partial\delta_{k}}{\partial J_{m}}=\big{(}\begin{array}{cc}0&1\end{array} \big{)}\,\underline{z_{m}^{(k)}}\,\big{(}\begin{array}{cc}0&1\end{array} \big{)}\,\overline{z_{m}^{(k)}},\] while since \(\frac{\partial}{\partial\theta}R(\theta)=PR(\theta)=R(\theta)P\), we have \[\frac{\partial\delta_{k}}{\partial\theta_{m}}=\underline{z_{m}^{(k)}}^{T}\,kP \,M(J_{m})\,\overline{z_{m}^{(k)}},\qquad\frac{\partial\delta_{k}}{\partial \theta_{m-1}}=\underline{z_{m}^{(k)}}^{T}\,M(J_{m})\,kP\,\overline{z_{m}^{(k) }}.\] If we use (6.10), we get the explicit formulas \[\begin{split}\frac{\partial\delta_{k}}{\partial J_{m}}& =\underline{r_{k}}\,\overline{r_{k}}\,\mathrm{s}(\underline{\varphi_{k}})\, \mathrm{s}(\overline{\varphi_{k}}),\\ \frac{\partial\delta_{k}}{\partial\theta_{m}}& =\underline{r_{k}}\,\overline{r_{k}}\,k\,\Big{(}\mathrm{s}( \underline{\varphi_{k}})\,\mathrm{c}(\overline{\varphi_{k}})-J_{m}\,\mathrm{c} (\underline{\varphi_{k}})\,\mathrm{s}(\overline{\varphi_{k}})\Big{)},\\ \frac{\partial\delta_{k}}{\partial\theta_{m-1}}& =\underline{r_{k}}\,\overline{r_{k}}\,k\,\Big{(}J_{m}\,\mathrm{s}( \underline{\varphi_{k}})\,\mathrm{c}(\overline{\varphi_{k}})-\mathrm{c}( \underline{\varphi_{k}})\,\mathrm{s}(\overline{\varphi_{k}})\Big{)},\end{split} \tag{6.11}\] for each \(m\). To show that \(\Lambda_{k}\) is a manifold, we first suppose that \(\nabla\delta_{k}=0\), so that the right hand sides in (6.11) all vanish at every \(m\). However, if all three vanish for just one \(m\), the first equation implies either \(\mathrm{s}(\underline{\varphi_{k}})=0\) or \(\mathrm{s}(\overline{\varphi_{k}})=0\), and the second equation then implies that both of these are zero. It then follows that \(\mathrm{c}(\underline{\varphi_{k}})=\pm 1\) and \(\mathrm{c}(\overline{\varphi_{k}})=\pm 1\), and plugging these into (6.9), we must have \(\delta_{k}(J,\Theta)\neq 0\), so \((J,\Theta)\notin\Lambda_{k}\). We similarly show that for \(j\neq k\), if \(\Lambda_{k}\cap\Lambda_{j}\) is nonempty, it is a manifold in \(\mathbb{R}^{2n+1}\) of codimension \(2\), which will imply that it is a submanifold of \(\Lambda_{k}\) of codimension \(1\).
This will follow if we show that the gradients of \(\delta_{k}\) and \(\delta_{j}\) are independent at each \[(J,\Theta)\in\Lambda_{k}\cap\Lambda_{j}=\Big{\{}(J,\Theta)\,\Big{|}\,\,\delta_ {k}(J,\Theta)=\delta_{j}(J,\Theta)=0\Big{\}}.\] We thus suppose that the gradients \(\nabla\delta_{k}\) and \(\nabla\delta_{j}\) are _dependent_, so that \[\nabla\delta_{j}=c\,\nabla\delta_{k},\quad\text{and set}\quad C=c\,\frac{ \underline{r_{k}}\,\overline{r_{k}}}{\underline{r_{j}}\,\overline{r_{j}}}.\] By comparing the gradients, it follows from (6.11) that we have \[\operatorname{s}(\underline{\varphi_{j}})\operatorname{s}(\overline{ \varphi_{j}}) =C\operatorname{s}(\underline{\varphi_{k}})\operatorname{s}( \overline{\varphi_{k}}),\] \[\operatorname{s}(\underline{\varphi_{j}})\operatorname{c}( \overline{\varphi_{j}})-J_{m}\operatorname{c}(\underline{\varphi_{j}}) \operatorname{s}(\overline{\varphi_{j}}) =C\,\frac{k}{j}\,\Big{(}\operatorname{s}(\underline{\varphi_{k} })\operatorname{c}(\overline{\varphi_{k}})-J_{m}\operatorname{c}( \underline{\varphi_{k}})\operatorname{s}(\overline{\varphi_{k}})\Big{)},\] \[J_{m}\operatorname{s}(\underline{\varphi_{j}})\operatorname{c}( \overline{\varphi_{j}})-\operatorname{c}(\underline{\varphi_{j}}) \operatorname{s}(\overline{\varphi_{j}}) =C\,\frac{k}{j}\,\Big{(}J_{m}\operatorname{s}(\underline{\varphi _{k}})\operatorname{c}(\overline{\varphi_{k}})-\operatorname{c}( \underline{\varphi_{k}})\operatorname{s}(\overline{\varphi_{k}})\Big{)}.\] Eliminating \(J_{m}\), the last two equations simplify, so we have \[\operatorname{s}(\underline{\varphi_{j}})\operatorname{s}( \overline{\varphi_{j}}) =C\operatorname{s}(\underline{\varphi_{k}})\operatorname{s}( \overline{\varphi_{k}}),\] \[\operatorname{s}(\underline{\varphi_{j}})\operatorname{c}( \overline{\varphi_{j}}) =C\,\frac{k}{j}\operatorname{s}(\underline{\varphi_{k}}) \operatorname{c}(\overline{\varphi_{k}}),\] \[\operatorname{c}(\underline{\varphi_{j}})\operatorname{s}( \overline{\varphi_{j}}) =C\,\frac{k}{j}\operatorname{c}(\underline{\varphi_{k}}) \operatorname{s}(\overline{\varphi_{k}}),\] and multiplying the last two equations and dividing by the first also yields \[\operatorname{c}(\underline{\varphi_{j}})\operatorname{c}(\overline{\varphi_{ j}}) =C\,\frac{k^{2}}{j^{2}}\operatorname{c}(\underline{\varphi_{k}}) \operatorname{c}(\overline{\varphi_{k}}),\] unless \(\operatorname{s}(\underline{\varphi_{k}})\operatorname{s}(\overline{\varphi_{ k}})=0\). It now follows from (6.9) that \(\delta_{k}\) and \(\delta_{j}\) cannot both vanish unless possibly \(\operatorname{s}(\underline{\varphi_{k}})\operatorname{s}(\overline{\varphi_ {k}})=0\); however, as above, in this case neither \(\delta_{k}\) nor \(\delta_{j}\) vanishes, and the proof is complete. The piecewise constant entropy profile can be viewed as being fully parameterized by \((J,\Theta)\in\mathbb{R}_{+}^{2n+1}\), although this degenerates if some \(J_{m}=1\). The following lemma follows directly from this parameterization of the piecewise constant entropy profile, denoted \(s(J,\Theta)\), recalling that the jump is given by \(J=e^{-[s]/2c_{p}}\) in (2.7).
**Lemma 6.4**.: _The parameter change_ \[\Theta\to\widetilde{\Theta}(\eta,m)\quad\text{given by}\] \[\theta_{m}\to\theta_{m}+\eta,\quad\theta_{m-1}\to\theta_{m-1}-\eta,\] _has the effect of shifting the position of the jump \(J_{m}\) to the right by an amount \(\eta\), while for \(m<n\), the change_ \[J\to\widetilde{J}(h,m)\quad\text{given by}\] \[J_{m+1}\to J_{m+1}\,e^{h},\quad J_{m}\to J_{m}\,e^{-h},\] _increases the value of the entropy at level \(\theta_{m}\) by \(2c_{p}\,h\). The corresponding \(L^{1}\) norm of the change in entropy is_ \[\big{\|}s\big{(}J,\widetilde{\Theta}\big{)}-s(J,\Theta)\big{\|}_{L^{1}} =2c_{p}\,|\log J_{m}|\,|\eta|,\quad\text{or}\] \[\big{\|}s\big{(}\widetilde{J},\Theta\big{)}-s(J,\Theta)\big{\|}_{L^{1}} =2c_{p}\,\theta_{m}\,|h|,\] _respectively._ A consequence of these lemmas is the following theorem, which states that generically, zero divisors are nonresonant, so generate nonlinear pure tones of finite amplitude. Moreover, nonresonant profiles are dense in the set of measurable profiles. This means that given any entropy profile \(s(x)\) on \((0,\ell)\), an arbitrarily small perturbation of \(s(x)\) yields a profile that supports finite amplitude pure tone space and time periodic solutions of the compressible Euler equations. **Theorem 6.5**.: _Given any \(T\), we define the resonant set of \(k\)-modes to be_ \[\Lambda_{k}^{res}(T):=\Big{\{}(J,\Theta)\in\Lambda_{k}\ \Big{|}\ \delta_{j}(J,\Theta)=0\ \text{for some}\ j\neq k\Big{\}}\subset\Lambda_{k}(T).\] _Then the set of nonresonant \(k\)-modes is generic in the \(2n\)-dimensional manifold \(\Lambda_{k}(T)\), in that the set \(\Lambda_{k}^{res}(T)\) is both meagre (or first Baire category) in \(\Lambda_{k}\) and has \(2n\)-dimensional Hausdorff measure 0._ _Moreover, given any entropy profile, an arbitrarily small \(L^{1}\) perturbation yields a piecewise constant profile with infinitely many nonresonant zero divisors, each of which generates a corresponding periodic solution. That is, the set of entropy profiles which support infinitely many finite amplitude periodic solutions is dense in the set of all (measurable) entropy profiles._ Proof.: We write \[\Lambda_{k}^{res}(T)=\bigcup_{j\neq k}\big{(}\Lambda_{k}\cap\Lambda_{j}\big{)},\] and by Lemma 6.3, each of these intersections is both nowhere dense and measure zero with respect to \(2n\)-dimensional Hausdorff measure in \(\mathbb{R}^{2n+1}\), and the first part of the theorem follows. For the second part, we note that the equation \(\delta_{k}(J,\Theta)=0\) can be regarded as an equation for the unknown \(\theta_{m}\) (or \(k\frac{2\pi}{T}\theta_{m}\)), which is \(2\pi\)-periodic, namely \[\delta_{k}=\underline{z}\,R(k\tfrac{2\pi}{T}\theta_{m})\,\overline{z}=0, \tag{6.12}\] similar to (6.9), for some nonzero vectors \(\underline{z}\) and \(\overline{z}\). Note that as \(k\) increases, solutions of (6.12) get closer together.
Given an arbitrary measurable entropy profile \(S(x)\) and \(\epsilon>0\), we can approximate the entropy profile by a piecewise constant profile \(s(J,\Theta)\), with \[\big{\|}S-s(J,\Theta)\big{\|}_{L^{1}}<\epsilon/3.\] Next, by choosing \(k\) large enough, we find a small perturbation \(\widehat{\Theta}\) of \(\Theta\) with \(\theta_{m}\) a solution of (6.12), so that \[(J,\widehat{\Theta})\in\Lambda_{k}\quad\text{with}\quad\big{\|}s(J,\widehat{\Theta})-s(J,\Theta)\big{\|}_{L^{1}}<\epsilon/3.\] Finally, by genericity, we find a small perturbation \((\widetilde{J},\widetilde{\Theta})\in\Lambda_{k}\) of \((J,\widehat{\Theta})\) which is nonresonant, and for which \[\big{\|}s(\widetilde{J},\widetilde{\Theta})-s(J,\widehat{\Theta})\big{\|}_{L^{1}}<\epsilon/3;\] combining these completes the proof.

### Dependence on Reference Period

As in Section 5, we again allow the piecewise constant entropy field, parameterized by \((J,\Theta)\), to be arbitrary, and consider the dependence of the divisors \(\delta_{k}(J,\Theta;T)\) on the reference period \(T\). The total width of the evolution (in the material frame) is \(\sum_{m}\theta_{m}=\ell\), and by scale invariance of our system (2.1), we can without loss of generality take \(\ell=1\). This means that our parameter space which fully describes any piecewise constant entropy profile on \([0,\ell]=[0,1]\) is \[J\in\mathbb{R}_{+}^{n},\qquad\Theta\in\Delta^{n}\subset\mathbb{R}_{+}^{n+1},\] where \(\Delta^{n}\) is the \(n\)-simplex, \[\Delta^{n}:=\Big{\{}\Theta\in\mathbb{R}_{+}^{n+1}\;\Big{|}\;\sum_{m=0}^{n}\theta_{m}=1\Big{\}},\] which is the convex hull of the standard coordinate vectors. It will be convenient to parameterize \(\Delta^{n}\) by the open solid simplex in \(\mathbb{R}^{n}\), \[\Delta^{n}_{\circ}:=\Big{\{}\Theta^{\circ}\in\mathbb{R}_{+}^{n}\;\Big{|}\;\sum_{m=0}^{n-1}\theta_{m}<1\Big{\}},\quad\text{with}\quad\theta_{n}:=1-\sum_{m=0}^{n-1}\theta_{m}>0;\] here we write \[\Theta^{\circ}:=(\theta_{0},\dots,\theta_{n-1})\quad\text{and}\quad\Theta:=(\theta_{0},\dots,\theta_{n})=(\Theta^{\circ},\theta_{n}).\] Note that whenever any \(J_{m}=1\), this description becomes degenerate, and in fact the actual number of entropy jumps is \(\#\big{\{}J_{m}\neq 1\big{\}}\). For a given \(T\), the basis vectors \(\mathcal{T}_{k}(T)\) and divisors \(\delta_{k}(J,\Theta;T)\) are given by (3.4) and (6.4), respectively, and so again satisfy the degeneracy (5.2). As in our earlier development, we write \(\omega=k\frac{2\pi}{T}\), recalling we have assumed \(\ell=1\), and we express each \(\delta_{k}(J,\Theta;T)\) in terms of \(\omega\), as \[\delta_{k}(J,\Theta;T)=\big{(}\begin{array}{cc}0&1\end{array}\big{)}\,P^{-k}\,R(\omega\,\theta_{n})\,M(J_{n})\cdots R(\omega\,\theta_{1})\,M(J_{1})\,R(\omega\,\theta_{0})\,\left(\begin{array}{c}1\\ 0\end{array}\right).\] Since we are interested in those values of \(T\) for which \(\delta_{k}\) vanishes, and since the matrices \(R(\omega\,\theta_{m})\) and \(M(J_{m})\) are invertible, we again use the function \(h\) defined in (5.4) to describe the angles at each jump. Proceeding inductively, we thus define \[\begin{split}\gamma_{1}(\omega)&:=h(J_{1},\omega\theta_{0}),\quad\text{and}\\ \gamma_{m+1}(\omega)&:=h\big{(}J_{m+1},\omega\theta_{m}+\gamma_{m}\big{)},\quad m=1,\dots,n-1.\end{split} \tag{6.13}\] Thus \(\gamma_{n}(\omega)\) is the angle obtained after the final jump \(J_{n}\), and our condition that \(\delta_{k}=0\) is again that \(\gamma_{n}+\omega\theta_{n}\) lie exactly on a coordinate axis.
Thus, in analogy with (5.6), we again define the \(k\)-th _base frequency_ \(\omega^{(k)}=\omega^{(k)}(J,\Theta)\) by the implicit equation \[\omega^{(k)}\theta_{n}+\gamma_{n}\big{(}\omega^{(k)}\big{)}=k\,\frac{\pi}{2}. \tag{6.14}\] Here we have suppressed dependence on the entropy profile parameter \((J,\Theta)\). Note, however, that \(\gamma_{m}\) depends only on the initial part of the parameter, \[\gamma_{m}=\gamma_{m}\big{(}\omega;J_{1},\ldots,J_{m},\theta_{0},\ldots,\theta_{m-1}\big{)}=:\gamma_{m}(\omega).\] To illustrate, in Figure 2, we show the corresponding rotations and jumps (scaled for visibility) for the first four nonzero eigenfrequencies. Here the entropy field consists of four constant states separated by three jumps. The circular arcs represent linear evolution of the \(k\)-mode through the entropy level \(\theta_{m}\), so are rotations by \(\omega^{(k)}\theta_{m}\), and the vertical segments represent the action of the jumps, in which the second component is scaled but the first remains constant. The four curves are color coded, and some arcs are labeled: for example, \(\omega^{(1)}\theta_{1}\) (blue) is the evolution of the \(1\)-mode between jumps \(J_{1}\) and \(J_{2}\).

Figure 2. Rotations and jumps generating frequencies

**Lemma 6.6**.: _For any given \(J\in\mathbb{R}_{+}^{n}\), \(\Theta\in\Delta^{n}\), there are infinitely many bounded time periods_ \[T^{(k)}:=k\,\frac{2\pi}{\omega^{(k)}}\,\ell,\quad\text{or}\quad T^{(k)}:=k\,\frac{2\pi}{\omega^{(k)}},\quad\ell=1,\] _such that the \(k\)-mode with corresponding reference period \(T^{(k)}\), that is \(\mathcal{T}_{k}\big{(}T^{(k)}\big{)}\), has a zero divisor, \(\delta_{k}(J,\Theta;T^{(k)})=0\)._

Proof.: We have seen that \(\delta_{k}(J,\Theta;T)=0\) if and only if (6.14) holds for appropriate \(k\). To show that this uniquely defines \(\omega^{(k)}\), we differentiate (6.13): inductively, we have \[\frac{\partial\gamma_{1}}{\partial\omega}=\frac{\partial h}{\partial x}\Big{|}_{(J_{1},\omega\theta_{0})}\,\theta_{0}\quad\text{and}\quad\frac{\partial\gamma_{m+1}}{\partial\omega}=\frac{\partial h}{\partial x}\Big{|}_{(J_{m+1},\omega\theta_{m}+\gamma_{m})}\,\Big{(}\theta_{m}+\frac{\partial\gamma_{m}}{\partial\omega}\Big{)}.\] Now by (5.8), each of these terms is positive, and so differentiating the left hand side of (6.14), we get \[\frac{\partial}{\partial\omega}\Big{(}\gamma_{n}(\omega)+\omega\,\theta_{n}\Big{)}>\theta_{n}>0,\] so by the implicit function theorem there exists a unique positive solution \(\omega^{(k)}\) for each \(k\geq 1\). We now show that \(\omega^{(k)}\) grows like \(k\) for \(k\) large. Note from (5.4) that \(h(J,x)\) is always in the same quadrant as \(x\), and \(\left|h(J,x)-x\right|<\frac{\pi}{2}\). Applying this to (6.13), we get \[\left|\gamma_{1}-\omega\theta_{0}\right|<\frac{\pi}{2},\quad\text{and}\quad\left|\gamma_{m+1}-(\omega\theta_{m}+\gamma_{m})\right|<\frac{\pi}{2},\] for \(m=1,\ldots,n-1\). Telescoping and using the triangle inequality, we get \[\left|\gamma_{n}-\omega\sum_{m=0}^{n-1}\theta_{m}\right|<n\,\frac{\pi}{2},\] and finally using (6.14) to eliminate \(\gamma_{n}\) yields \[\left|k\,\frac{\pi}{2}-\omega^{(k)}\sum_{m=0}^{n}\theta_{m}\right|<n\,\frac{\pi}{2},\] and since \(\sum_{m}\theta_{m}=1\), this yields \[(k-n)\,\frac{\pi}{2}<\omega^{(k)}<(k+n)\,\frac{\pi}{2},\] and the boundedness of reference periods \(T^{(k)}\) follows.
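To see these formulas in action, here is a minimal numerical sketch (in Python, for a hypothetical three-jump profile) that evaluates the divisor \(\delta_{k}\) by the matrix product above, and solves (6.14) for \(\omega^{(k)}\) by bisection on the bracket \((k-n)\frac{\pi}{2}<\omega^{(k)}<(k+n)\frac{\pi}{2}\) from the proof. It assumes, consistent with the formulas of this section, that \(R(\theta)\) is the standard rotation, that \(M(J)\) fixes the first component and scales the second by \(J\), and that \(h(J,x)\) of (5.4) is \(\arctan(J\tan x)\) lifted to the branch within \(\pi/2\) of \(x\); the last identification is our reading of the stated properties of \(h\), so treat the sketch as illustrative rather than definitive.

```python
import numpy as np

def R(theta):
    """Standard 2x2 rotation; consistent with dR/dtheta = P R above."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def M(J):
    """Assumed jump matrix: first component fixed, second scaled by J."""
    return np.array([[1.0, 0.0], [0.0, J]])

def delta_k(k, J, Theta, T):
    """Divisor delta_k(J, Theta; T) via the matrix product above (ell = 1).
    Note that P^{-k} = R(-k pi/2)."""
    omega = k * 2 * np.pi / T
    v = np.array([1.0, 0.0])
    for Jm, thm in zip(J, Theta[:-1]):
        v = M(Jm) @ (R(omega * thm) @ v)
    v = R(-k * np.pi / 2) @ (R(omega * Theta[-1]) @ v)
    return v[1]

def h(J, x):
    """Assumed form of h from (5.4): arctan(J tan x), lifted so |h - x| < pi/2."""
    h0 = np.arctan2(J * np.sin(x), np.cos(x))
    return h0 + 2 * np.pi * np.round((x - h0) / (2 * np.pi))

def gamma_n(omega, J, Theta):
    """The recursion (6.13): angle after the final jump J_n."""
    g = h(J[0], omega * Theta[0])
    for Jm, thm in zip(J[1:], Theta[1:-1]):
        g = h(Jm, omega * thm + g)
    return g

def base_frequency(k, J, Theta, tol=1e-13):
    """Solve (6.14) by bisection; the left side is strictly increasing in omega,
    and the proof above brackets omega^(k) in ((k-n) pi/2, (k+n) pi/2)."""
    n = len(J)
    f = lambda w: w * Theta[-1] + gamma_n(w, J, Theta) - k * np.pi / 2
    lo, hi = max((k - n) * np.pi / 2, 1e-9), (k + n) * np.pi / 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical profile: three jumps, four entropy levels, widths summing to 1.
J = [1.4, 0.7, 1.2]
Theta = [0.3, 0.2, 0.3, 0.2]
for k in (1, 2, 3):
    wk = base_frequency(k, J, Theta)
    Tk = k * 2 * np.pi / wk
    print(k, wk, delta_k(k, J, Theta, Tk))
```

At each computed \(\omega^{(k)}\) the printed divisor vanishes to machine precision, a useful consistency check between the angle recursion (6.13) and the matrix product form of \(\delta_{k}\).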
Having described the frequencies \(\omega^{(k)}\) and corresponding periods \(T^{(k)}\), we again look for the conditions that determine resonance of distinct frequencies. We have shown above that each \(\omega^{(k)}\) can be regarded as a single-valued, non-degenerate smooth function \[\omega^{(k)}:\mathbb{R}_{+}^{n}\times\Delta^{n}\to\mathbb{R},\qquad\omega^{(k)}=\omega^{(k)}\big{(}J,\Theta\big{)},\] in which the simplex \(\Delta^{n}\subset\mathbb{R}_{+}^{n+1}\) is independently parameterized by the \(\Theta^{\circ}\in\Delta_{\circ}^{n}\subset\mathbb{R}^{n}\). Because the addition of extra entropy intervals introduces more degrees of freedom, the following generalization of Theorem 5.2 to a general step entropy profile is unsurprising.

**Theorem 6.7**.: _There is a meagre subset \(\mathcal{Z}\subset\mathbb{R}_{+}^{n}\times\Delta^{n}\) of (\(2n\)-dimensional) measure zero, such that, for any \((J,\Theta)\notin\mathcal{Z}\), every frequency \(\omega^{(k)}\) is non-resonant. For each of the corresponding piecewise constant entropy profiles, the \(k\)-mode with reference period \(T^{(k)}\), \(k\geq 1\), is non-resonant, and so perturbs to a finite amplitude solution of the nonlinear equation (6.1). This in turn generates a space and time periodic solution of the nonlinear Euler equations._

Proof.: We proceed as in the proof of Theorem 5.2. Thus we fix a \(k\)-mode, and assume that the \(j\)-mode resonates with this \(k\)-mode. The \(k\)-mode determines the reference period \(T^{(k)}=k\,2\pi/\omega^{(k)}\), and resonance of the \(j\)-mode means that also \(\delta_{j}\big{(}J,\Theta;T^{(k)}\big{)}=0\). This in turn corresponds to \(\omega=j\,2\pi/T^{(k)}\) being a solution of (6.14), say \[\omega=j\,\frac{2\pi}{T^{(k)}}=\omega^{(p)},\quad\text{so that}\quad\omega^{(p)}=q\,\omega^{(k)},\quad q:=\frac{j}{k}\in\mathbb{Q}_{+}. \tag{6.15}\] Thus, fixing positive integers \(k\), \(j\neq k\) and \(p\), we define \[\mathcal{Z}_{k,p,j}:=\Big{\{}(J,\Theta)\in\mathbb{R}_{+}^{n}\times\Delta^{n}\ \Big{|}\ \omega^{(p)}=q\,\omega^{(k)}\Big{\}},\] where \(\omega^{(k)}\) and \(\omega^{(p)}\) are regarded as known functions of \((J,\Theta)\). We wish to show that this is a non-degenerate constraint on the parameters, which will imply that \(\mathcal{Z}_{k,p,j}\) is a codimension one submanifold, and so is small, in both the measure and Baire senses. We again get a more explicit version of the angles, and so also \(\omega^{(k)}\), by using the fact that \(\theta_{n}=1-\sum_{m<n}\theta_{m}\) in (6.14), which gives, with \(\omega=\omega^{(k)}\), \[\omega=\sum_{m=0}^{n-1}\omega\theta_{m}-\gamma_{n}(\omega)+k\,\frac{\pi}{2}. \tag{6.16}\] Now whenever \(\theta_{m}\) appears in (6.13), it is scaled by \(\omega\), and this is the only context in which it appears. We thus make the substitution \(\omega\theta_{m}\to x_{m}\) for \(m<n\). That is, we use \(\Theta^{\circ}\) to parameterize the simplex, and scale this up to a vector \[x=(x_{0},\ldots,x_{n-1})\in\mathbb{R}^{n},\quad\text{by}\quad x_{m}=\omega\,\theta_{m},\quad\text{or}\quad x=\omega\,\Theta^{\circ}.\] We now use (6.13) to define \(\gamma_{m}\) as a function of \(x\), namely \[\begin{split}\gamma_{1}(x)&:=h(J_{1},x_{0}),\quad\text{and}\\ \gamma_{m+1}(x)&:=h\big{(}J_{m+1},x_{m}+\gamma_{m}\big{)},\quad m=1,\ldots,n-1,\end{split} \tag{6.17}\] which now yields the explicit function \(\gamma_{n}=\gamma_{n}(J,x)\). We can combine this function and the scaling \(x=\omega\,\Theta^{\circ}\) to get a geometric description of \(\omega^{(k)}\), as follows.
Referring to (6.16), we see that (6.14) is equivalent to the coupled expressions \[\omega^{(k)}=\sum_{m=0}^{n-1}x_{m}^{(k)}-\gamma_{n}(x^{(k)})+k\,\frac{\pi}{2},\qquad x_{m}^{(k)}=\omega^{(k)}\,\theta_{m}. \tag{6.18}\] We thus interpret \(\omega^{(k)}\) as being determined by the intersection of the ray \(\omega\,\Theta^{\circ}\) with the graph of the explicit function \(\sum x_{m}-\gamma_{n}(x)+k\frac{\pi}{2}\). We now use (6.18) to re-express the resonance condition (6.15) as a single equation: that is, we have \[\omega^{(p)}=q\,\omega^{(k)}\quad\text{if and only if}\quad g(J,x)=(p-j)\,\frac{\pi}{2},\] where we have defined the function \[g:\mathbb{R}_{+}^{2n}\to\mathbb{R}\quad\text{by}\quad g(J,x):=\gamma_{n}(q\,x)-q\,\gamma_{n}(x), \tag{6.19}\] and written \[x:=x^{(k)},\quad\omega:=\omega^{(k)},\quad\omega^{(p)}=q\,\omega,\quad\text{and}\quad x^{(p)}=q\,x,\] with \(q=j/k\in\mathbb{Q}_{+}\) fixed and \(\gamma_{n}\) given by (6.17). Here the vector identity \(x^{(p)}=q\,x^{(k)}\) follows from the second equation in (6.18). Thus we have effectively described the resonant set as a level surface of the explicit function \(g\), namely \[\mathcal{Z}_{k,p,j}=\Big{\{}(J,x)\in\mathbb{R}^{2n}\,\Big{|}\;g(J,x)=(p-j)\,\frac{\pi}{2}\Big{\}},\] and we recover the original coordinates \((J,\Theta)\in\mathbb{R}_{+}^{n}\times\Delta^{n}\) by setting \[\Theta^{\circ}:=\frac{x}{\omega^{(k)}},\quad\text{and}\quad\theta_{n}=1-\sum_{m=0}^{n-1}\theta_{m},\] in which \(\omega^{(k)}\) is given explicitly by (6.18). We will show that \(g\) is non-degenerate, that is \[\nabla_{(J,x)}g\neq 0\quad\text{whenever}\quad g(J,x)=b,\] for any constant \(b\neq 0\). From the definition (6.19), and using (5.8), we calculate \[\begin{split}\frac{\partial g}{\partial x_{n-1}}&=\frac{\partial h}{\partial x}\Bigg{|}_{\big{(}J_{n},q\,x_{n-1}+\gamma_{n-1}(qx)\big{)}}\,q-q\,\frac{\partial h}{\partial x}\Bigg{|}_{\big{(}J_{n},x_{n-1}+\gamma_{n-1}(x)\big{)}}\\ &=\frac{q\,J_{n}\,(1+\Gamma_{n-1}^{2}(qx))}{1+J_{n}^{2}\,\Gamma_{n-1}^{2}(qx)}-\frac{q\,J_{n}\,(1+\Gamma_{n-1}^{2}(x))}{1+J_{n}^{2}\,\Gamma_{n-1}^{2}(x)}\\ &=\frac{q\,J_{n}\,(J_{n}^{2}-1)\,\big{(}\Gamma_{n-1}^{2}(x)-\Gamma_{n-1}^{2}(qx)\big{)}}{\big{(}1+J_{n}^{2}\,\Gamma_{n-1}^{2}(qx)\big{)}\,\big{(}1+J_{n}^{2}\,\Gamma_{n-1}^{2}(x)\big{)}},\end{split}\] where we have set \[\Gamma_{n-1}(x):=\text{t}\big{(}x_{n-1}+\gamma_{n-1}(x)\big{)}.\] It follows that \[\frac{\partial g}{\partial x_{n-1}}=0\quad\text{iff}\quad\Gamma_{n-1}(qx)=\pm\Gamma_{n-1}(x),\] unless \(J_{n}=1\).
Similarly, \[\frac{\partial g}{\partial J_{n}}=\frac{\partial h}{\partial J}\bigg{|}_{\left(J_{n},q\,x_{n-1}+\gamma_{n-1}(qx)\right)}-q\,\frac{\partial h}{\partial J}\bigg{|}_{\left(J_{n},x_{n-1}+\gamma_{n-1}(x)\right)}=\frac{\Gamma_{n-1}(qx)}{1+J_{n}^{2}\,\Gamma_{n-1}^{2}(qx)}-q\,\frac{\Gamma_{n-1}(x)}{1+J_{n}^{2}\,\Gamma_{n-1}^{2}(x)}.\] Thus, if we assume \(J_{n}\neq 1\), and assume both \[\frac{\partial g}{\partial x_{n-1}}\bigg{|}_{(J,x)}=0\quad\text{and}\quad\frac{\partial g}{\partial J_{n}}\bigg{|}_{(J,x)}=0,\] then we must have both \[\Gamma_{n-1}(qx)=\pm\Gamma_{n-1}(x)\quad\text{and}\quad\Gamma_{n-1}(qx)-q\,\Gamma_{n-1}(x)=0,\] which in turn implies \(\Gamma_{n-1}(qx)=\Gamma_{n-1}(x)=0\), so that, by (6.19), (6.17) and (5.4), we get \[g(J,x)=\arctan\left(J_{n}\,\Gamma_{n-1}(qx)\right)-q\,\arctan\left(J_{n}\,\Gamma_{n-1}(x)\right)=0.\] On the other hand, if \(J_{n}=1\), which is the degenerate case, we carry out the same calculation for \(\frac{\partial g}{\partial x_{n-2}}\) and \(\frac{\partial g}{\partial J_{n-1}}\), and continue by backward induction as necessary. If all \(J_{m}=1\), which is the most degenerate isentropic case, then since \(h(1,x)=x\), we get \(g=0\) identically. We have thus shown that, as long as at least one \(J_{m}\neq 1\), \[\nabla_{(J,x)}g=0\quad\text{implies}\quad g(J,x)=0.\] The implicit function theorem now tells us that non-zero level sets of \(g\), and in particular \(\mathcal{Z}_{k,p,j}\), are codimension one submanifolds of \(\mathbb{R}^{2n}\), and so these are again both measure zero and nowhere dense. As before, we now set \[\mathcal{Z}:=\bigcup_{k,p,j}\mathcal{Z}_{k,p,j},\] and since this is a countable union, \(\mathcal{Z}\) has measure zero and is meagre. Since \(\mathcal{Z}\) contains all possible resonances, the proof is complete.

## 7. Generalizations

We return now to the compressible Euler equations (2.1) with a general constitutive law, \(p=p(v,s)\), in which we solve for \(v\), so that \(v=v(p,s)\). For classical solutions, in a Lagrangian frame, we write the system as \[p_{x}+u_{t}=0,\qquad u_{x}-v(p,s)_{t}=0, \tag{7.1}\] or, in quasilinear form, \[p_{x}+u_{t}=0,\qquad u_{x}-v_{p}(p,s)\,p_{t}=0,\] where we are again evolving in \(x\), and since \(s_{t}=0\), we regard the entropy as a given fixed function \(s(x)\). As above, we make the observation that the nonlinear equations respect the symmetry \(p\) even, \(u\) odd, as functions of \(t\). That is, if we specify \(p(x_{0},\cdot)\) even, \(u(x_{0},\cdot)\) odd at one point \(x_{0}\), then this is satisfied throughout the interval of classical existence, that is as long as gradients remain finite.

### Minimal Tile for Periodicity and Boundary Conditions

We again build a tile which generates a periodic solution by assuming the data is time periodic and imposing extra symmetry conditions at the ends \(x=0\) and \(x=\ell\) of the evolution. As in (2.20), (2.25), we impose a regular reflective boundary condition at \(x=0\): that is, we impose the _acoustic reflection boundary condition_ \(u(0,\cdot)=0\), with \(p(0,\cdot)\) even and periodic. Physically, this acoustic boundary condition describes pure (lossless) reflection of a sound wave off a wall. On the other hand, in (2.22), (2.28), the boundary condition, which is a shifted reflection, provides a mechanism for the generation of periodic tiles, but is not realizable as a simple physical boundary condition.
This is because when reflecting the profile around \(x=\ell\) to generate a (half) periodic tile, the velocity need not actually vanish at \(x=\ell\). This can be explained by noting the effect of the quarter-period shift \(\mathcal{S}^{T/4}\) on a \(k\)-mode: we have \[\mathcal{S}^{T/4}\mathrm{s}\big{(}k\tfrac{2\pi}{T}t\big{)}=\mathrm{s}\big{(}k\tfrac{2\pi}{T}t-k\tfrac{\pi}{2}\big{)}=\begin{cases}\pm\mathrm{c}\big{(}k\tfrac{2\pi}{T}t\big{)},&k\ \mathrm{odd},\\ \pm\mathrm{s}\big{(}k\tfrac{2\pi}{T}t\big{)},&k\ \mathrm{even},\end{cases}\] and similarly for \(\mathcal{S}^{T/4}\mathrm{c}\big{(}k\tfrac{2\pi}{T}t\big{)}\). It follows that, for \(k\)-modes with \(k\) even, \[\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{T/4}\,\mathcal{T}_{k}=0\quad\mathrm{iff}\quad\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{T}_{k}=0,\] while this fails for odd \(k\)-modes. As a consequence, at least at the linear level, if we double the time frequency of tones, then we can drop the \(1/4\)-period shift in posing the right boundary condition, and this then becomes an acoustic reflection boundary condition at \(x=\ell\), exactly as at \(x=0\). It follows that we can pose two problems that effectively have the same method of solution: as in previous sections, we assume \(p\) even and \(u\) odd as functions of \(t\), and set \[y(x,t):=p(x,t)+u(x,t), \tag{7.2}\] and define a nonlocal scalar flux by \[g(y):=u+v(p,s)=\tfrac{\mathcal{I}-\mathcal{R}}{2}y+v\big{(}\tfrac{\mathcal{I}+\mathcal{R}}{2}y,s\big{)},\] so that \(y\) solves the nonlocal scalar conservation law \[y_{x}+g(y)_{t}=0. \tag{7.3}\] Note that this is distinct from our earlier treatment of piecewise constant entropy, in which we used the isentropic equations at each entropy level together with the rescaling of \(x\), whereas here we use the full equations and unscaled \(x\) directly. The two problems we then pose are:

* the _periodic tile problem_, namely \[\mathcal{F}_{P}(y^{0}):=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{-T/4}\,\mathcal{E}^{\ell}\,y^{0}=0; \tag{7.4}\]
* the _acoustic boundary value problem_, namely \[\mathcal{F}_{A}(y^{0}):=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{E}^{\ell}\,y^{0}=0. \tag{7.5}\]

In both of these problems, as above, \(\mathcal{E}^{\ell}\) denotes nonlinear evolution through the varying entropy profile from \(x=0\) to \(x=\ell\), and the data \(y^{0}=y^{0}(t)\) is again assumed to be even and \(T\)-periodic. For a given entropy profile \(s(x)\), each solution of (7.4) generates a space and time periodic solution, generated by a reflection at the left boundary and a shifted reflection on the right, so the space period of the solution is \(4\ell\). On the other hand, a solution of (7.5) generates a periodic solution by just one reflection in \(x\), so has space period \(2\ell\). The two problems are related as follows: any \(k\)-mode solution of the linearization of (7.4) with \(k\) even is also a solution of that of (7.5), and conversely an even \(2j\)-mode linearized solution of (7.5) can also be realized as a solution of the linearization of (7.4). Moreover, if these linearized solutions perturb, then by uniqueness they will coincide as solutions of the corresponding nonlinear problems. It appears that (7.4) generates more solutions than (7.5), but we regard (7.5) as a more physically relevant problem. This is because the acoustic boundary condition (7.5) is simply a reflection off a wall, so the system models sound waves bouncing between two walls which bound an unrestricted varying entropy profile on \([0,\ell]\).
On the other hand, the velocity \(u\) need not vanish at \(x=\ell\) with (7.4), and it is hard to ascribe (7.4) to a physical condition, because we expect that controlling an entropy profile to be perfectly symmetric between two walls (at \(x=0\) and \(x=2\ell\)) is practically infeasible. However, we regard periodic solutions in which compression and rarefaction are in perfect balance as being of fundamental importance.

### Linearization and Sturm-Liouville systems

We now analyze (7.1) following the same steps as before. We regard the entropy field \(s(x)\) as a given piecewise continuous function on the interval \(x\in[0,\ell]\), continuous at the endpoints, and assume a constitutive law \[v=v(p,s),\quad\text{with}\quad v_{p}<0,\] and we will make further assumptions as necessary. We begin by noting that the quiet state \((\overline{p},0)\) is a solution of the Euler equations satisfying the boundary conditions; although it is a constant solution of (7.1), it is part of a non-constant standing wave solution of (2.1). Linearizing (7.1) around \((\overline{p},0)\) gives the system \[\begin{split}& P_{x}+U_{t}=0,\qquad U_{x}+\sigma^{2}\,P_{t}=0,\\ &\text{where}\quad\sigma=\sigma(x):=\sqrt{-v_{p}(\overline{p},s)},\end{split} \tag{7.6}\] in which we use the convention that \((P,U)\) solve linearized equations, while \((p,u)\) solve the nonlinear system. We solve (7.6) by separation of variables, looking for \(T\)-periodic solutions in which \(P\) and \(U\) are even and odd, respectively, so we set \[\begin{split} P(x,t)&:=\sum_{n\geq 0}\varphi_{n}(x)\operatorname{c}\bigl{(}n\tfrac{2\pi}{T}t\bigr{)},\\ U(x,t)&:=\sum_{n>0}\psi_{n}(x)\operatorname{s}\bigl{(}n\tfrac{2\pi}{T}t\bigr{)}.\end{split} \tag{7.7}\] Plugging into (7.6) and simplifying, we get the ODE system \[\boldsymbol{\dot{\varphi}}_{n}+\omega\,\psi_{n}=0,\qquad\boldsymbol{\dot{\psi}}_{n}-\sigma^{2}\,\omega\,\varphi_{n}=0,\qquad\omega:=n\tfrac{2\pi}{T}, \tag{7.8}\] where \(\boldsymbol{\dot{\square}}\) denotes \(\frac{d}{dx}\square\). We denote nonlinear evolution through \(x\) according to (7.3) by \(\mathcal{E}^{x}\), and we denote the linearization of this around the constant quiet state \((\overline{p},0)\) by \(\mathcal{L}^{x}:=D\mathcal{E}^{x}(\overline{p},0)\). That is, \(\mathcal{L}^{x}\) denotes evolution by (7.6), or more precisely by the linear nonlocal scalar equation for \(Y:=P+U\), \[Y_{x}+\bigl{(}\tfrac{\mathcal{I}-\mathcal{R}}{2}Y+\sigma^{2}\,\tfrac{\mathcal{I}+\mathcal{R}}{2}Y\bigr{)}_{t}=0. \tag{7.9}\] We again use the notation (3.4), so that (7.7) can be written \[Y(x,t)=P+U=\sum_{n\geq 0}\mathcal{T}_{n}\,\left(\begin{array}{c}\varphi_{n}(x)\\ \psi_{n}(x)\end{array}\right), \tag{7.10}\] and we obtain an analogue of Lemma 3.1, whose proof is omitted.

**Lemma 7.1**.: _The linearized evolution operator \(\mathcal{L}^{x}=D\mathcal{E}^{x}(\overline{p},0)\) acts on the \(k\)-mode row vector \(\mathcal{T}_{k}=\mathcal{T}_{k}(T)\) given by (3.4), by_ \[\mathcal{L}^{x}\,\mathcal{T}_{k}=\mathcal{T}_{k}\,\Psi\bigl{(}x;k\tfrac{2\pi}{T}\bigr{)},\quad\text{so}\quad\mathcal{L}^{x}\,\mathcal{T}_{k}\left(\begin{array}{c}a\\ b\end{array}\right)=\mathcal{T}_{k}\,\Psi\left(\begin{array}{c}a\\ b\end{array}\right),\] _where \(\Psi=\Psi(x;\omega)\) is the fundamental solution of (7.8), so satisfies_ \[\boldsymbol{\dot{\Psi}}(x;\omega)=\omega\,\left(\begin{array}{cc}0&-1\\ \sigma^{2}(x)&0\end{array}\right)\,\Psi(x;\omega),\qquad\Psi(0;\omega)=I. \tag{7.11}\]
Alternatively, we can express the linearized system (7.6) as a wave equation with varying speed, namely \[P_{xx}-\sigma^{2}\,P_{tt}=0, \tag{7.12}\] and after use of (7.7), by separation of variables, we get the second-order linear system \[\boldsymbol{\ddot{\varphi}}_{n}+\bigl{(}n\tfrac{2\pi}{T}\bigr{)}^{2}\,\sigma^{2}\,\varphi_{n}=0, \tag{7.13}\] which is equivalent to (7.8). We now consider boundary values for these ODE systems. In both (7.4) and (7.5), the data \(y^{0}\) (and so also \(Y^{0}\)) posed at \(x=0\) is even, so using (7.2) and (7.7), this becomes the condition \[U(0,\cdot)=0,\quad\text{equivalently}\quad\psi_{n}(0)=\dot{\varphi}_{n}(0)=0. \tag{7.14}\] If we are solving (7.5), we get the same condition at \(x=\ell\), namely \[U(\ell,\cdot)=0,\quad\text{equivalently}\quad\psi_{n}(\ell)=\dot{\varphi}_{n}(\ell)=0. \tag{7.15}\] On the other hand, if we are solving (7.4), the boundary condition is \[0 =\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{-T/4}\big{(}P(\ell,\cdot)+U(\ell,\cdot)\big{)}=\tfrac{\mathcal{I}-\mathcal{R}}{2}\sum\Big{(}\varphi_{n}(\ell)\operatorname{c}\!\left(n\tfrac{2\pi}{T}t-n\tfrac{\pi}{2}\right)+\psi_{n}(\ell)\operatorname{s}\!\left(n\tfrac{2\pi}{T}t-n\tfrac{\pi}{2}\right)\Big{)},\] which yields the conditions \[\dot{\varphi}_{n}(\ell)=\psi_{n}(\ell)=0,\quad n\text{ even}, \tag{7.16}\] \[\varphi_{n}(\ell)=\dot{\psi}_{n}(\ell)=0,\quad n\text{ odd}.\] Our boundary conditions can now be expressed succinctly using the fundamental solution (7.11), in analogy with Lemma 6.1, as follows.

**Lemma 7.2**.: _Suppose that constant ambient pressure \(\overline{p}\), entropy profile \(s(x)\) and reference period \(T\) are given. Then the constant quiet state \(y^{0}=\overline{p}\) solves (7.4) and (7.5), and the respective linearizations around \(\overline{p}\) are_ \[D\mathcal{F}_{P}(\overline{p})=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{-T/4}\,\mathcal{L}^{\ell}\quad\text{and}\quad D\mathcal{F}_{A}(\overline{p})=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}^{\ell},\] _and moreover the nonlinear functionals \(\mathcal{F}_{P}\) and \(\mathcal{F}_{A}\) factor as_ \[\mathcal{F}_{P}=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{-T/4}\,\mathcal{L}^{\ell}\,\mathcal{N}^{\ell}\quad\text{and}\quad\mathcal{F}_{A}=\tfrac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{L}^{\ell}\,\mathcal{N}^{\ell}, \tag{7.17}\] _respectively, where \(\mathcal{N}^{\ell}:=\big{(}\mathcal{L}^{\ell}\big{)}^{-1}\mathcal{E}^{\ell}\) in both cases.
The linearized operators respect \(k\)-modes, and we get_ \[D\mathcal{F}_{P}(\overline{p})\Big{[}\mathrm{c}\!\left(k\tfrac{ 2\pi}{T}t\right)\Big{]} =\delta_{P,k}(T)\operatorname{s}\!\left(k\tfrac{2\pi}{T}t\right)\quad \text{and}\] \[D\mathcal{F}_{A}(\overline{p})\Big{[}\mathrm{c}\!\left(k\tfrac{ 2\pi}{T}t\right)\Big{]} =\delta_{A,k}(T)\operatorname{s}\!\left(k\tfrac{2\pi}{T}t\right),\] _where the \(k\)-th divisor is given by_ \[\delta_{P,k}(T) :=\big{(}\begin{array}{cc}0&1\end{array}\big{)}\,\,P^{-k}\, \Psi\big{(}\ell;k\tfrac{2\pi}{T}\big{)}\,\left(\begin{array}{c}1\\ 0\end{array}\right)\quad\text{or} \tag{7.18}\] \[\delta_{A,k}(T) :=\big{(}\begin{array}{cc}0&1\end{array}\big{)}\,\,\Psi\big{(} \ell;k\tfrac{2\pi}{T}\big{)}\,\left(\begin{array}{c}1\\ 0\end{array}\right),\] _for the periodic or acoustic boundary conditions, respectively._ It follows that the single mode \(\mathrm{c}\!\left(k\tfrac{2\pi}{T}t\right)\) provides a solution given by \(\mathcal{T}_{k}\,\Psi\big{(}x;k\tfrac{2\pi}{T}\big{)}\) of the linearized problem satisfying the boundary conditions if and only if \[\delta_{P,k}(T)=0\quad\text{or}\quad\delta_{A,k}(T)=0,\] respectively. In this case, \(\Psi\big{(}x;k\frac{2\pi}{T}\big{)}\binom{1}{0}\) solves (7.8), and since it satisfies the boundary conditions, it is an eigenfunction of the Sturm-Liouville problem (7.13), corresponding to eigenvalue \(\lambda_{k}:=\big{(}k\frac{2\pi}{T}\big{)}^{2}\). As in previous sections, we expect that if the corresponding mode is nonresonant, then this should perturb to a nonlinear pure tone solution of (7.1). We are thus led to introduce the Sturm-Liouville (SL) operator \[\mathcal{L}:=-\frac{1}{\sigma^{2}}\,\frac{d^{2}}{dx^{2}},\quad\text{so that}\quad \mathcal{L}\phi=-\frac{1}{\sigma^{2}}\,\overset{\makebox[0.0pt]{\mbox{\tiny$ \bullet$}}}{\phi}, \tag{7.19}\] and to introduce the associated SL eigenvalue problem, \[\mathcal{L}\,\varphi=\lambda\,\varphi,\quad\text{which is}\quad-\,\not{\! \varphi}=\lambda\,\sigma^{2}\,\varphi,\quad\text{on}\quad 0<x<\ell, \tag{7.20}\] and subject to the boundary conditions (7.14) at \(x=0\) and either (7.16) or (7.15) at \(x=\ell\). We note that the boundary conditions imply self-adjointness, implying that this is a _regular SL problem_ as long as the weight \(\sigma^{2}(x)=-v_{p}\big{(}\overline{p},s(x)\big{)}\) is a positive, bounded piecewise continuous function [28, 1, 5]. In particular, the setup we have here is a direct generalization of the cases studied earlier in this paper, and the results here apply unchanged in that context. Moreover, because the corresponding ODEs are linear, the solutions can be expressed as integrals, and our methods extend to BV entropy profiles with only minor technical changes. We collect some well-known classical results for regular SL eigenvalue problems in the following lemma; see [28, 5] for details. This is the direct generalization of the first statement of Lemma 5.1; indeed, the existence of infinitely many reference periods for a piecewise constant entropy profile is a special case of the classical Sturm-Liouville theory. Many of the statements in this lemma will be explicitly verified below. **Lemma 7.3**.: _Assume that the entropy profile is piecewise continuous. The Sturm-Liouville system (7.20), with boundary conditions (7.14) and (7.16) or (7.15), has infinitely many eigenvalues \(\lambda_{k}\). These are positive, simple, monotone increasing and isolated, and satisfy the growth condition \(\lambda_{k}=O(k^{2})\). 
The corresponding eigenfunctions form an orthogonal \(L^{2}\) basis in the Hilbert space with weight function \(\sigma^{2}\)._ We label the eigenvalues \(\lambda_{k}\) and corresponding eigenfunctions \(\varphi_{k}\), scaled so that \(\varphi_{k}(0)=1\), so that (7.20) and (7.8) hold for each \(k\geq 1\), and the \(\lambda_{k}\)'s are increasing with \(k\). Because we prefer to work with the frequencies, we define the \(k\)-th _eigenfrequency_ \(\omega_{k}\) and corresponding _reference period_ \(T_{k}\) by \[\omega_{k}:=\sqrt{\lambda_{k}},\quad\text{and}\quad T_{k}:=k\,\frac{2\pi}{\omega_{k}}=k\,\frac{2\pi}{\sqrt{\lambda_{k}}}, \tag{7.21}\] respectively. It follows that the \(k\)-modes \[Y_{k}(x,t):=\mathcal{T}_{k}(T_{k})\,\Psi\big{(}x;k\frac{2\pi}{T_{k}}\big{)}\,\binom{1}{0}\quad\text{or}\quad P_{k}(x,t):=\varphi_{k}(x)\,\mathrm{c}\big{(}k\frac{2\pi}{T_{k}}t\big{)}=\varphi_{k}(x)\,\mathrm{c}(\omega_{k}t),\] solve (7.9) or (7.12), respectively, while also satisfying the appropriate boundary conditions. These \(k\)-modes lie in the kernel of the linearized operator \(D\mathcal{F}(\overline{p})\), and we will show that these perturb to pure tone solutions of the nonlinear problem. We note that for given \(k\), the frequency \(\omega_{k}\) is determined by the entropy profile, and this in turn yields the reference period. Consideration of other resonant and nonresonant modes then refers to this fixed period \(T_{k}\).

### Nonresonant modes

As in our above development, we wish to perturb \(Y_{k}\) (or \(P_{k}\)) to a time periodic solution of the corresponding nonlinear equation. Following (4.5), and having identified the reference period, we consider perturbations of the initial data of the form \[y^{0}(t)=p^{0}(t):=\overline{p}+\alpha\,\mathrm{c}\big{(}k\tfrac{2\pi}{T_{k}}t\big{)}+z+\sum_{j\geq 1,j\neq k}a_{j}\,\mathrm{c}\big{(}j\tfrac{2\pi}{T_{k}}t\big{)}, \tag{7.22}\] in which the reference period \(T_{k}\) is determined by the choice of SL eigenvalue \(\lambda_{k}\) in (7.21), and we now regard this as fixed. We again declare the \(k\)-mode to be _nonresonant_ if no other \(j\)-modes satisfy the boundary condition, that is \[\delta_{P,j}(T_{k})\neq 0\quad\text{or}\quad\delta_{A,j}(T_{k})\neq 0,\quad\text{for all}\quad j\neq k,\] respectively for the appropriate boundary conditions (7.4) or (7.5). Here we note that the reference period \(T_{k}\) is fixed, while \(j\) varies, and the collection of \(j\)-mode basis elements \[\big{\{}\mathcal{T}_{j}(T_{k})\mid j\geq 1\big{\}}=\Big{\{}\,\Big{(}\,\,\mathrm{c}(j\tfrac{2\pi}{T_{k}}t)\quad\text{s}(j\tfrac{2\pi}{T_{k}}t)\,\,\Big{)}\,\mid j\geq 1\Big{\}}\] span the even and odd functions of period \(T_{k}\), respectively.

**Lemma 7.4**.: _The \(j\)-mode is resonant with the fixed \(k\)-mode if and only if two distinct SL frequencies are rationally related: that is,_ \[\delta_{j}(T_{k})=0\quad\text{iff}\quad k\,\omega_{p}=j\,\omega_{k},\] _for some index \(p=p(j)\neq k\)._

Proof.: By construction, we have \[\delta_{k}(T_{k})=0,\quad\text{with}\quad T_{k}:=k\,\frac{2\pi}{\omega_{k}}.\] If \(\delta_{j}(T_{k})=0\), then \(j\tfrac{2\pi}{T_{k}}\) also corresponds to some SL frequency, and since \(j\neq k\), we must have \[\omega_{p}=j\frac{2\pi}{T_{k}}\quad\text{for some}\quad p\neq k,\] and substituting in \(T_{k}\) gives the result. 

The resonance or nonresonance of modes corresponding to SL eigenvalues depends on the entropy profile and constitutive law.
Moreover, because the SL eigenvalues, and hence also frequencies, vary continuously with the entropy profile, we expect that in a strong sense, as in the previous cases, the set of profiles with resonant modes should be small, so that generically, all modes should be nonresonant.

### Angle Variable

Once again we treat the general entropy profile as given, and solve for the base frequencies \(\omega_{k}\) that yield \(k\)-mode solutions of the linearized equation. Starting with the linearization (7.6), we separate variables by setting \[P(x,t):=\varphi(x)\,\mathrm{c}(\omega\,t),\qquad U(x,t):=\psi(x)\,\mathrm{s}(\omega\,t),\] which yields the SL system (7.8), namely \[\dot{\varphi}+\omega\,\psi=0,\qquad\dot{\psi}-\omega\,\sigma^{2}\,\varphi=0, \tag{7.23}\] with \(\sigma^{2}=-v_{p}(\overline{p},s)=\sigma^{2}(x)\). According to (7.14), we first consider initial values \[\varphi(0)=c_{0}\quad\text{and}\quad\psi(0)=0,\] for appropriate \(c_{0}\neq 0\). Finding the values of \(\omega_{k}\) that meet the boundary conditions (7.15) or (7.16) will then determine the appropriate reference period \(T_{k}:=k\,\frac{2\pi}{\omega_{k}}\). As we have seen earlier, the most important part of the evolution is the angle \(\theta=\theta(x)\) of the vector \((\varphi,\psi)\). This can be effectively captured with the use of _modified Prüfer variables_, which are \[\varphi(x):=r(x)\,\frac{1}{\rho(x)}\,\mathrm{c}\big{(}\theta(x)\big{)},\qquad\psi(x):=r(x)\,\rho(x)\,\mathrm{s}\big{(}\theta(x)\big{)}, \tag{7.24}\] see [28, 5, 1]. Here we interpret \(r(x)\) as the radial length or amplitude, \(\rho(x)\) as the eccentricity or aspect, and \(\theta(x)\) as the angle variable. This is a degenerate description in which we are free to choose the aspect \(\rho(x)>0\), and having done so, both \(r(x)\) and \(\theta(x)\) will be determined by the equations. Plugging (7.24) into (7.23) and simplifying, we get the system \[\frac{\dot{\boldsymbol{r}}}{r}\,\mathrm{c}(\theta)-\frac{\dot{\boldsymbol{\rho}}}{\rho}\,\mathrm{c}(\theta)-\mathrm{s}(\theta)\,\dot{\boldsymbol{\theta}}+\omega\,\rho^{2}\,\mathrm{s}(\theta) =0,\] \[\frac{\dot{\boldsymbol{r}}}{r}\,\mathrm{s}(\theta)+\frac{\dot{\boldsymbol{\rho}}}{\rho}\,\mathrm{s}(\theta)+\mathrm{c}(\theta)\,\dot{\boldsymbol{\theta}}-\omega\,\frac{\sigma^{2}}{\rho^{2}}\,\mathrm{c}(\theta) =0,\] with initial conditions \[\theta(0)=0\quad\text{and}\quad r(0)=1,\quad\text{so}\quad c_{0}=\frac{1}{\rho(0)}.\] After use of elementary trig identities, this in turn becomes \[\frac{\dot{\boldsymbol{r}}}{r}-\frac{\dot{\boldsymbol{\rho}}}{\rho}\,\mathrm{c}(2\theta)+\omega\,\Big{(}\rho^{2}-\frac{\sigma^{2}}{\rho^{2}}\Big{)}\mathrm{s}(\theta)\,\mathrm{c}(\theta) =0,\] \[\dot{\boldsymbol{\theta}}+\frac{\dot{\boldsymbol{\rho}}}{\rho}\,\mathrm{s}(2\theta)-\omega\,\Big{(}\frac{\sigma^{2}}{\rho^{2}}\,\mathrm{c}^{2}(\theta)+\rho^{2}\,\mathrm{s}^{2}(\theta)\Big{)} =0.\] Thus the system is reduced to a nonlinear scalar ODE for the angle \(\theta(x)\), coupled with an integration for the amplitude \(r(x)\). Moreover, it is now clear that we should choose \(\rho\) such that \[\rho^{2}=\frac{\sigma^{2}}{\rho^{2}},\quad\text{that is}\quad\rho(x):=\sqrt{\sigma(x)}.
\tag{7.25}\] With this choice the equations simplify further, and we get the scalar equation for \(\theta(x)\), \[\boldsymbol{\dot{\theta}}=\omega\,\sigma-\frac{\boldsymbol{\dot{\sigma}}}{2 \sigma}\,\mathrm{s}(2\theta),\qquad\theta(0)=0, \tag{7.26}\] coupled with a linear homogeneous equation for \(r(x)\), namely \[\frac{\boldsymbol{\dot{r}}}{r}=\frac{\boldsymbol{\dot{\sigma}}}{2\sigma}\, \mathrm{c}(2\theta),\qquad r(0)=1,\] which immediately yields the quadrature \[r(x)=\exp\Big{\{}\int_{0}^{x}\mathrm{c}\big{(}2\theta(y)\big{)}\;d\log\sqrt{ \sigma}(y)\Big{\}}. \tag{7.27}\] We note that (7.26) is consistent with our treatment of piecewise constant entropy; in that case, we have \(\boldsymbol{\dot{\theta}}=const\) on intervals, while each jump contributes a \(\delta\)-function. Thus the angle changes linearly on each entropy interval, with a finite jump in angle (independent of \(\omega\)) at each entropy jump. In particular, our framework extends without change to bounded measurable entropy profiles for which \(\log\sigma\) has bounded variation. Moreover, in the prototypical case of a \(\gamma\)-law gas, \(\log\sigma\) is a multiple of the entropy. As above, we are interested in characterizing the base \(k\)-mode frequencies and periods, that yield periodic solutions of the linearized equations. That is, we want to characterize periods \(T_{k}\), such that the \(k\)-mode \(\mathcal{T}_{k}(T_{k})\) satisfies the boundary conditions (7.15) or (7.16), respectively, or equivalently \[\delta_{A,k}(T_{k})=0\quad\text{or}\quad\delta_{P,k}(T_{k})=0,\] respectively, these being given by (7.18). Using (7.24), we express the boundary conditions in terms of the angle variable \(\theta(x)\), as follows. For the periodic boundary condition (7.16), we require \[\varphi_{k}(\ell)=0,\quad\text{so}\quad\theta(\ell)=\Big{(}n+\frac{1}{2}\Big{)}\pi, \tag{7.28}\] for \(k\) odd, and for \(k\) even or the acoustic boundary condition, we need \[\psi_{k}(\ell)=0,\quad\text{so}\quad\theta(\ell)=n\,\pi, \tag{7.29}\] for some \(n\). We can methodically enumerate all such conditions in terms of the frequency \(\omega\) and angle \(\theta(x)=\theta(x,\omega)\), using the _angle boundary condition_ \[\theta(\ell,\omega)=k\,\frac{\pi}{2}, \tag{7.30}\] which we interpret as an implicit condition for the coefficient \(\omega=\omega_{k}\) of (7.26). Integrating, this becomes the condition \[k\,\frac{\pi}{2}=\omega_{k}\,\int_{0}^{\ell}\sigma\;dy-\int_{0}^{\ell}\mathrm{s} \big{(}2\theta(y,\omega_{k})\big{)}\;d\log\sqrt{\sigma}(y), \tag{7.31}\] which is an implicit equation for \(\omega_{k}\). Thus, given an entropy profile, we find the base reference period \(T_{k}\) of the \(k\)-mode by solving (7.31) for \(\omega_{k}\), where \(\theta(x,\omega)\) solves (7.26), and then using (7.21). Each such \(k\)-mode with reference period \(T_{k}\) then determines a periodic problem of the linearized equation. Note that (7.30) is equivalent to the periodic boundary condition (7.16), but permits more frequencies than the acoustic reflection boundary condition (7.15) allows, these latter two being equivalent only for even values of \(k\). However, the even modes form a closed subspace, so if we start from \(a_{m}=0\) for all odd \(m\), this persists when we perturb to the nonlinear problem. Thus for notational convenience, we use (7.31) to identify all frequencies, with the understanding that if we are using (7.15), all linearizations and perturbations are restricted to even modes. 
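Before stating the monotonicity lemma, we record a minimal numerical sketch of this procedure (in Python; the smooth profile \(\sigma(x)\) is a hypothetical stand-in for \(\sqrt{-v_{p}(\overline{p},s(x))}\)): it integrates the angle equation (7.26) by classical RK4 and solves the angle boundary condition (7.30) for \(\omega_{k}\) by bisection. The strict monotonicity of \(\theta(\ell,\omega)\) in \(\omega\) that justifies the bisection is the content of the next lemma.

```python
import numpy as np

def theta_at_ell(w, sigma, dsigma, ell=1.0, steps=4000):
    """Integrate (7.26): theta' = w sigma - (sigma'/(2 sigma)) sin(2 theta),
    theta(0) = 0, by classical RK4, returning theta(ell, w)."""
    rhs = lambda x, th: w * sigma(x) - dsigma(x) / (2.0 * sigma(x)) * np.sin(2.0 * th)
    th, x, dx = 0.0, 0.0, ell / steps
    for _ in range(steps):
        k1 = rhs(x, th)
        k2 = rhs(x + dx / 2, th + dx * k1 / 2)
        k3 = rhs(x + dx / 2, th + dx * k2 / 2)
        k4 = rhs(x + dx, th + dx * k3)
        th += dx * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += dx
    return th

def omega_k(k, sigma, dsigma, ell=1.0, tol=1e-10):
    """Solve the angle boundary condition (7.30), theta(ell) = k pi/2, by
    bisection in omega; valid since theta(ell, .) is strictly increasing."""
    target = k * np.pi / 2
    lo, hi = 1e-9, 1.0
    while theta_at_ell(hi, sigma, dsigma, ell) < target:
        hi *= 2.0  # expand the bracket until it straddles the target angle
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if theta_at_ell(mid, sigma, dsigma, ell) < target else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical smooth wave-speed profile on [0, 1].
sigma = lambda x: 1.0 + 0.3 * np.sin(2 * np.pi * x)
dsigma = lambda x: 0.6 * np.pi * np.cos(2 * np.pi * x)
for k in (1, 2, 3):
    wk = omega_k(k, sigma, dsigma)
    print(k, wk, k * 2 * np.pi / wk)  # eigenfrequency omega_k and period T_k
```

For piecewise constant entropy the smooth integration would be supplemented by discrete angle updates at the jumps, reflecting the \(\delta\)-function contributions to (7.26) noted above.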
**Lemma 7.5**.: _For each integer \(k\geq 1\), and any entropy profile \(s(x)\) such that \(\log\sigma\in BV[0,\ell]\), there is a unique \(\omega_{k}\) such that the solution \(\theta(x)\) of (7.26) satisfies (7.30). Moreover, \(\{\omega_{k}\}\) is a monotone increasing sequence which grows like \(k\), that is \(\omega_{k}/k\in[1/C,C]\) for some constant \(C>0\)._

Proof.: We will show that for any fixed \(x>0\), the function \(\theta(x)\) is strictly monotone increasing as a function of \(\omega\). In particular, taking \(x=\ell\) in (7.30) implies that \(\omega_{k}\) exists for each \(k\geq 1\) and is increasing. To get the growth rate for \(\omega_{k}\), we use (7.31) to write \[\bigg{|}\,\omega_{k}\,\int_{0}^{\ell}\sigma\;dy-k\,\frac{\pi}{2}\bigg{|}\leq\int_{0}^{\ell}\big{|}\,d\log\sqrt{\sigma}(y)\big{|}.\] To show monotonicity of \(\theta(\cdot,\omega)\), we note that \(\sigma\) is determined by the entropy \(s(x)\), and differentiate the ODE (7.26) with respect to \(\omega\). Denoting \(\frac{\partial\theta}{\partial\omega}\) by \(\zeta\), this yields \[\dot{\zeta}=\sigma-\frac{\dot{\sigma}}{\sigma}\,\mathrm{c}(2\theta)\,\zeta,\qquad\zeta(0)=0, \tag{7.32}\] which by (7.27) can also be written \[\dot{\zeta}=\sigma-2\,\frac{\boldsymbol{\dot{r}}}{r}\,\zeta,\qquad\zeta(0)=0.\] It follows that \(r^{2}\) is an integrating factor, and we integrate to get \[r^{2}(x)\,\zeta(x)=\int_{0}^{x}r(y)^{2}\,\sigma(y)\;dy.\] Thus \(\zeta(x)>0\) for \(x>0\), and the proof is complete.

### Perturbation and Auxiliary equation

Assuming now that the \(k\)-th mode is nonresonant, we show that the \(k\)-mode bifurcates to a time periodic solution of the compressible Euler equations with periodic or acoustic reflective boundary conditions (7.16) or (7.15), respectively. As noted above, it is enough to consider the periodic condition (7.16) only, and restrict our attention to even modes when using the reflective condition (7.15). Our strategy is unchanged: we build Hilbert spaces as in (4.6) and (4.7), but separating the \(k\)-mode rather than the \(1\)-mode. The analogues of Lemmas 4.2 and 4.3 then follow in exactly the same way. For clarity and brevity, we define the spaces and state the lemmas without proof. For \(k\) fixed, the angle boundary condition (7.30), or equivalently (7.31), determines the base frequency \(\omega_{k}\), and by (7.21) this gives both the SL eigenvalue \(\lambda_{k}\) and reference period \(T=T_{k}\), namely \[\lambda_{k}:=\omega_{k}^{2}\quad\text{and}\quad T_{k}:=k\,\frac{2\pi}{\omega_{k}},\] respectively, and while considering other modes, this period \(T=T_{k}\) remains fixed. Following (4.6), we define \[\begin{split}\mathcal{H}_{1}&:=\big{\{}z+\alpha\operatorname{c}(k\tfrac{2\pi}{T_{k}}t)\bigm{|}z,\alpha\in\mathbb{R}\big{\}}\quad\text{and}\\ \mathcal{H}_{2}&:=\Big{\{}\sum_{j\neq k}a_{j}\operatorname{c}(j\tfrac{2\pi}{T_{k}}t)\Bigm{|}\sum_{j\neq k}a_{j}^{2}\,j^{2s}<\infty\Big{\}},\end{split} \tag{7.33}\] so the domain is \(H^{s}=\mathcal{H}_{1}\oplus\mathcal{H}_{2}\).
Similarly, from (4.7), we describe the range \(\mathcal{H}\) by \[\begin{split}\mathcal{H}_{+}&:=\Big{\{}y=\sum_{j\neq k}a_{j}\operatorname{s}(j\tfrac{2\pi}{T_{k}}t)\Bigm{|}\|y\|<\infty\Big{\}},\quad\text{and}\\ \mathcal{H}&:=\big{\{}\beta\operatorname{s}(k\tfrac{2\pi}{T_{k}}t)\big{\}}\oplus\mathcal{H}_{+},\quad\text{with norm}\\ \|y\|^{2}&:=\beta^{2}+\sum_{j\neq k}a_{j}^{2}\,\delta_{j}^{-2}\,j^{2s},\end{split} \tag{7.34}\] where we have set \[\delta_{j}:=\delta_{P,j}(T_{k})\quad\text{or}\quad\delta_{j}:=\delta_{A,j}(T_{k}),\] respectively for the periodic or acoustic boundary condition. We similarly define the orthogonal projection, \[\Pi:\mathcal{H}\to\mathcal{H}_{+}\quad\text{by}\quad\Pi\Big{[}\beta\operatorname{s}(k\tfrac{2\pi}{T_{k}}t)+\sum_{j\neq k}a_{j}\operatorname{s}(j\tfrac{2\pi}{T_{k}}t)\Big{]}:=\sum_{j\neq k}a_{j}\operatorname{s}(j\tfrac{2\pi}{T_{k}}t),\] which projects onto all but the \(k\)-mode. Now let \(\mathcal{F}\) denote \(\mathcal{F}_{P}\) or \(\mathcal{F}_{A}\), given by (7.17), depending on whether periodic or acoustic boundary conditions (7.16) or (7.15) are used, respectively. Note that the data consists only of even functions and \(\mathcal{F}\) incorporates the projection onto odd modes, and the Hilbert spaces respect this structure, being spaces of even and odd functions for the domain and range, respectively. As above, the nonlinear problem is treated as a bifurcation problem, which consists of an infinite dimensional _auxiliary equation_, namely \[\Pi\,\mathcal{F}\big{(}y^{0}\big{)}=0,\quad\text{with}\quad y^{0}=\overline{p}+z+\alpha\,\mathrm{c}(k\tfrac{2\pi}{T_{k}}t)+W, \tag{7.35}\] together with a scalar _bifurcation equation_, namely \[\Big{\langle}\mathrm{s}(k\tfrac{2\pi}{T_{k}}t),\mathcal{F}\big{(}y^{0}\big{)}\Big{\rangle}=0. \tag{7.36}\] Here the amplitude \(\alpha\) parameterizes the \(k\)-mode linear solution that we are perturbing, \(z\) is the free parameter of the 0-mode, which is always in the kernel, and \(W\) is the nonlinear correction off the kernel. As usual, we first solve (7.35) to get a solution \(W(\alpha,z)\) uniform in \(\alpha\) and \(z\), and then we solve for the 0-mode correction \(z\) as a function of \(\alpha\). Here \(z\) is the 0-mode correction that gives a nonzero derivative in the bifurcation equation as a consequence of genuine nonlinearity. Physically, this is a correction to the ambient pressure that balances rarefaction and compression, and ultimately this is the physical mechanism for avoiding shock formation and generating time periodic solutions. The following summarizes the conclusions of Lemmas 4.2 and 4.3, and follows from the implicit function theorem exactly as those lemmas do.

**Lemma 7.6**.: _If the \(k\)-mode is nonresonant, there is a neighborhood \(\mathcal{U}\subset\mathcal{H}_{1}\) of the origin and a unique \(C^{1}\) map_ \[W:\mathcal{U}\to\mathcal{H}_{2},\quad\text{written}\quad W\big{(}\overline{p}+z+\alpha\,\mathrm{c}(k\tfrac{2\pi}{T_{k}}t)\big{)}=:W(\alpha,z)\in\mathcal{H}_{2},\] _such that, for all \(z+\alpha\,\mathrm{c}(k\tfrac{2\pi}{T_{k}}t)\in\mathcal{U}\), we have a solution of the auxiliary equation (7.35), given by_ \[\Pi\,\mathcal{F}\Big{(}\overline{p}+z+\alpha\,\mathrm{c}(k\tfrac{2\pi}{T_{k}}t)+W(\alpha,z)\Big{)}=0.\] _Moreover, the map \(W(\alpha,z)\) satisfies the estimate_ \[W(\alpha,z)=o(|\alpha|),\] _uniformly for \(z\) in a neighborhood of 0._

### Bifurcation equation

It remains to solve the bifurcation equation (7.36), which is scalar.
Proceeding as before, we define scalar functions \(f\) and \(g\) in analogy with (4.9), (4.10). That is, we set \[f(\alpha,z):=\Big{\langle}\mathrm{s}(\omega\,t),\mathcal{F}\big{(}\overline{p}+z+\alpha\,\mathrm{c}(\omega\,t)+W(\alpha,z)\big{)}\Big{\rangle}, \tag{7.37}\] where \(\omega:=\omega_{k}=k\,2\pi/T_{k}\), and we set \[g(\alpha,z):=\frac{1}{\alpha}\,f(\alpha,z),\quad\alpha\neq 0,\] \[g(0,z):=\frac{\partial f}{\partial\alpha}(0,z).\] Our goal is to show that there is some \(z=z(\alpha)\) such that \[f\big{(}\alpha,z(\alpha)\big{)}=0,\quad\text{or equivalently}\quad g\big{(}\alpha,z(\alpha)\big{)}=0. \tag{7.38}\] As above, we cannot apply the implicit function theorem to \(f\) directly, because \(\frac{\partial f}{\partial z}\big{|}_{\alpha=0}=0\), but we can apply it to \(g\): to do so, we must show that \[\frac{\partial g}{\partial z}\Big{|}_{(0,0)}\neq 0,\quad\text{which is}\quad\frac{\partial^{2}f}{\partial z\,\partial\alpha}\Big{|}_{(0,0)}\neq 0. \tag{7.39}\] We can rewrite \(f(\alpha,z)\) using (7.4), to get \[f(\alpha,z)=\Big{\langle}\mathrm{s}\big{(}\omega\,t\big{)},\frac{\mathcal{I}-\mathcal{R}}{2}\,\mathcal{S}^{-T/4}\,\mathcal{E}^{\ell}y^{0}\Big{\rangle}=\Big{\langle}\mathrm{s}\big{(}\omega\,t-k\,\frac{\pi}{2}\big{)},\mathcal{E}^{\ell}y^{0}\Big{\rangle},\] where we have used self-adjointness of \(\frac{\mathcal{I}-\mathcal{R}}{2}\) and the fact that \(\mathrm{s}(k\frac{2\pi}{T_{k}}t)\) is odd as a function of \(t\), together with \[\big{(}\mathcal{S}^{-T/4}\big{)}^{\dagger}\big{[}\mathrm{s}(\omega\,t)\big{]}=\mathcal{S}^{T/4}\big{[}\mathrm{s}(\omega\,t)\big{]}=\mathrm{s}\Big{(}\omega\,\big{(}t-\frac{T}{4}\big{)}\Big{)}=\mathrm{s}\big{(}\omega\,t-k\,\frac{\pi}{2}\big{)},\] where \(\square^{\dagger}\) denotes the adjoint (in \(t\)). It follows that \[\frac{\partial^{2}f}{\partial z\,\partial\alpha}\Big{|}_{(0,0)}=\Big{\langle}\mathrm{s}\big{(}\omega\,t-k\,\tfrac{\pi}{2}\big{)},\frac{\partial^{2}}{\partial z\,\partial\alpha}\mathcal{E}^{\ell}y^{0}\Big{|}_{(0,0)}\Big{\rangle}, \tag{7.40}\] where \[y^{0}:=\overline{p}+z+\alpha\,\mathrm{c}(\omega\,t)+W(\alpha,z). \tag{7.41}\] Recall that the acoustic boundary problem (7.5) is obtained by limiting ourselves to \(k\) even. To proceed, we must thus calculate the second derivative of the evolution, evaluated at the constant state, namely \[\frac{\partial^{2}}{\partial z\,\partial\alpha}\mathcal{E}^{\ell}y^{0}\Big{|}_{(0,0)}. \tag{7.42}\] We postpone this calculation to the next section so that we can complete the bifurcation argument. The proof of the following lemma is the topic of Section 8 below.

**Lemma 7.7**.: _For a genuinely nonlinear constitutive equation, the second derivative given in (7.40) is not equal to zero, that is_ \[\frac{\partial^{2}f}{\partial z\,\partial\alpha}\Big{|}_{(0,0)}=\Big{\langle}\mathrm{s}\big{(}\omega\,t-k\,\tfrac{\pi}{2}\big{)},\frac{\partial^{2}}{\partial z\,\partial\alpha}\mathcal{E}^{\ell}y^{0}\Big{|}_{(0,0)}\Big{\rangle}\neq 0.\]

Applying the lemma to (7.40) now reduces the solvability of the bifurcation equation to a single non-degeneracy condition, namely that the \(k\)-mode be nonresonant.

**Theorem 7.8**.: _Assume that the \(k\)-mode is nonresonant. Then there exists a one-parameter family of solutions of the form (7.22) of equation (7.4) or (7.5), respectively, parameterized by the amplitude \(\alpha\) in a neighborhood of \(0\).
This in turn generates a periodic pure tone solution of the compressible Euler equations._

Proof.: Lemma 7.6 shows that the auxiliary equation has a unique solution in a neighborhood of the origin. It remains only to show that the bifurcation equation (7.38), namely \(f\big{(}\alpha,z\big{)}=0\), for the \(k\)-mode can always be solved uniquely in a neighborhood of the origin. By (7.39), Lemma 7.7 and the implicit function theorem imply the existence of a unique \(z(\alpha)\) such that (7.41) gives the data \(y^{0}\) which solves the equation (7.38) for \(\alpha\) in a neighborhood of the origin.

### Nonresonance conditions

As in our earlier development, we now consider conditions for resonance or non-resonance of linear modes. As in Section 5, we regard the resonance conditions as conditions on the infinite dimensional entropy field, and show that the set of entropy fields for which any resonance condition holds is small. According to Lemma 7.4, the \(j\)-mode is resonant with the \(k\)-mode if and only if there is some \(p\) such that \[\omega_{p}=q\,\omega_{k},\quad\text{with}\quad q:=\frac{j}{k}. \tag{7.43}\] The frequencies \(\omega_{k}\) and \(\omega_{p}\) are in turn chosen by the condition (7.30) or (7.31), which we rewrite as the implicit conditions \[k\,\frac{\pi}{2}=\omega_{k}\,\int_{0}^{\ell}\sigma\;dy-\int_{0}^{\ell}\mathrm{s}\big{(}2\theta(y,\omega_{k})\big{)}\;d\log\sqrt{\sigma}(y),\qquad p\,\frac{\pi}{2}=\omega_{p}\,\int_{0}^{\ell}\sigma\;dy-\int_{0}^{\ell}\mathrm{s}\big{(}2\theta(y,\omega_{p})\big{)}\;d\log\sqrt{\sigma}(y).\] Assuming (7.43) and eliminating the linear term, we get the condition \[\int_{0}^{\ell}\Big{(}q\,\mathrm{s}\big{(}2\theta(y,\omega_{k})\big{)}-\mathrm{s}\big{(}2\theta(y,q\,\omega_{k})\big{)}\Big{)}\;d\log\sqrt{\sigma}(y)=(p-j)\,\frac{\pi}{2}, \tag{7.44}\] which is necessary and sufficient for (7.43) to hold. We view (7.44) as a restriction on the entropy profile \(s(x)\), and as before we set \[\mathcal{Z}_{k,j,p}:=\Big{\{}s(x)\;\Big{|}\;\omega_{p}=\frac{j}{k}\,\omega_{k}\Big{\}},\] so that \(s(x)\in\mathcal{Z}_{k,j,p}\) if and only if (7.44) holds. As before, we then form the union \[\mathcal{Z}:=\bigcup_{k,j,p}\mathcal{Z}_{k,j,p},\] which is the set of all entropy profiles having _some_ resonant mode. In order to show that the resonant set is small, we need to introduce a convenient topology on the set of entropy profiles. We thus define the subset \[\mathcal{B}:=\Big{\{}s\in L^{1}[0,\ell]\;\Big{|}\;\sigma\in L^{1},\;\log\sigma\in BV\Big{\}}, \tag{7.45}\] together with the \(L^{1}\) topology, where \(\sigma\) is given by (7.6).

**Lemma 7.9**.: _Each of the sets \(\mathcal{Z}_{k,j,p}\) is nowhere dense in \(\mathcal{B}\), and the resonant set \(\mathcal{Z}\) is meagre in \(\mathcal{B}\)._

Proof.: Since the map \(\mathcal{B}\to\mathbb{R}\) given by (7.44), \[s(\cdot)\mapsto\int_{0}^{\ell}\Big{(}q\,\mathrm{s}\big{(}2\theta(y,\omega_{k})\big{)}-\mathrm{s}\big{(}2\theta(y,q\,\omega_{k})\big{)}\Big{)}\;d\log\sqrt{\sigma}(y)\] is evidently continuous, each of the sets \(\mathcal{Z}_{k,j,p}\) is closed in \(\mathcal{B}\). Let \(\epsilon>0\) and \(s=s(x)\in\mathcal{Z}_{k,j,p}\) be given.
Since piecewise constant functions are dense, we use Theorem 6.5 to find a piecewise constant entropy profile \(\overline{s}(J,\Theta)\) which approximates \(s(x)\) but is fully nonresonant, so in particular, \[\big{\|}s(x)-\overline{s}(J,\Theta)\big{\|}_{L^{1}}<\epsilon,\quad\text{with}\quad\overline{s}(J,\Theta)\notin\mathcal{Z}_{k,j,p}.\] It follows that the interior of the closure of \(\mathcal{Z}_{k,j,p}\) is empty, that is \(\mathcal{Z}_{k,j,p}\) is nowhere dense, and, being a countable union of nowhere dense sets, \(\mathcal{Z}\) is meagre. 

We cannot use this proof to show directly that \(\mathcal{Z}\) is nowhere dense, because \(\mathcal{Z}\) is not itself closed in \(\mathcal{B}\). We expect that, similar to our earlier results, if one were to regard \(\mathcal{B}\) as a measure or probability space, then the resonant set \(\mathcal{Z}\) would also have zero measure. However we will not pursue this here. Our next theorem is a summary of the foregoing lemmas.

**Theorem 7.10**.: _Given any entropy profile \(s(x)\) with \(s\in\mathcal{B}\), the linearization (7.1) around the constant state solution \((\overline{p},0)\) determines a Sturm-Liouville operator (7.8), (7.19). Imposing the periodic tiling boundary conditions (7.4), respectively the acoustic reflection boundary conditions (7.5), determines an increasing sequence \(\omega_{k}\), respectively \(\omega_{2k}\), of linearized SL frequencies. To each of these frequencies corresponds a time periodic solution of the linearized equations (7.6), (7.12). For any such frequency which is nonresonant, there is a one-parameter family of perturbations of the linearized \(k\)-mode (resp. \(2k\)-mode) to a pure tone solution of the nonlinear system (2.1), parameterized by the amplitude \(\alpha\) of the \(k\)-mode component of the linearized data. The set of fully nonresonant profiles, for which every \(k\)-mode (resp. \(2k\)-mode) perturbs to a solution of the nonlinear problem, is generic in \(\mathcal{B}\), in that it is residual, or the complement of a countable union of nowhere dense sets._

## 8. Differentiation of the Evolution Operator

To complete the proof of the existence of periodic solutions of the compressible Euler equations with generic entropy profile, we must prove Lemma 7.7, stated and used above, which requires calculation of the second derivative of the evolution operator. In differentiating the solution twice, we cannot apply Lemma 4.4 or Corollary 4.5 directly, because we no longer have exact expressions for the nonlinear solutions, although this can be carried out: see [49]. However, because our gradients remain finite and our solutions to the PDE are classical, we can expand the solution and evaluate the derivative at the origin. We briefly describe our strategy for evaluating (7.42). Here we view \(\mathcal{E}^{\ell}\) as a Banach space valued function of the two real variables \(\alpha\) and \(z\), evaluated on data given by (7.22). Formally, we regard \((p,u)\) as functions depending smoothly on the parameters \((\alpha,z)\) and satisfying (7.1), and we denote the derivatives with respect to \(\alpha\) and \(z\) as \[\widehat{p_{\alpha}}:=\frac{\partial p}{\partial\alpha},\quad\widehat{p_{z}}:=\frac{\partial p}{\partial z},\quad\text{and}\quad\widehat{p_{z\alpha}}:=\frac{\partial^{2}p}{\partial z\,\partial\alpha},\] respectively, and similarly for \(u\).
Differentiating (7.1) in \(\alpha\), we get \[\widehat{p_{\alpha}}_{x}+\widehat{u_{\alpha}}_{t}=0,\qquad\widehat{u_{\alpha}}_{x}-\left(v_{p}(p)\,\widehat{p_{\alpha}}\right)_{t}=0,\] which are the linearized equations. Now differentiating in \(z\) yields \[\widehat{p_{z\alpha}}_{x}+\widehat{u_{z\alpha}}_{t}=0,\qquad\widehat{u_{z\alpha}}_{x}-\left(v_{p}(p)\,\widehat{p_{z\alpha}}+v_{pp}(p)\,\widehat{p_{\alpha}}\,\widehat{p_{z}}\right)_{t}=0, \tag{8.1}\] which captures the second derivative. In the context we are working in, we have \(\widehat{p_{z}}\big{|}_{(0,0)}=1\), and the second derivative equation is linear inhomogeneous, with varying coefficients \(v_{p}\big{(}\overline{p},s(x)\big{)}\) and \(v_{pp}\big{(}\overline{p},s(x)\big{)}\neq 0\) which encode the leading order effects of the nonlinear interaction of acoustic waves with the varying entropy field. **Lemma 8.1**.: _The second derivative (7.42) of the evolution operator acting on \(y^{0}\) given by (7.41) is given by the solution of the linear inhomogeneous SL system_ \[\begin{split}\dot{\widehat{\varphi}}+\omega\,\widehat{\psi}&=0,\\ \dot{\widehat{\psi}}-\sigma^{2}\,\omega\,\widehat{\varphi}&=-v_{pp}\,\omega\,\varphi_{k},\end{split} \tag{8.2}\] _with vanishing initial data \(\widehat{\varphi}(0)=\widehat{\psi}(0)=0\), with coefficient \(\sigma^{2}=-v_{p}(\overline{p},s)\), and where \(v_{pp}=v_{pp}(\overline{p},s)\neq 0\). More precisely, we have_ \[\frac{\partial^{2}}{\partial z\,\partial\alpha}\mathcal{E}^{\ell}y^{0}\Big{|}_{(0,0)}=\widehat{\varphi}(\ell)\,\mathrm{c}(\omega t)+\widehat{\psi}(\ell)\,\mathrm{s}(\omega t), \tag{8.3}\] _where \(\omega=\omega_{k}\) is given by (7.21)._ Proof.: For the data as given by (7.35), (7.41), we write the corresponding solution of (7.1) as \[\begin{split}p(x,t)&=\overline{p}+z+\alpha\,\varphi_{k}(x)\,\mathrm{c}(\omega t)+\widehat{p}(x,t),\\ u(x,t)&=\alpha\,\psi_{k}(x)\,\mathrm{s}(\omega t)+\widehat{u}(x,t),\end{split} \tag{8.4}\] where \(\omega=k\frac{2\pi}{T_{k}}\), and \(\widehat{p}\), \(\widehat{u}=o(|\alpha|)\). Here we have scaled the eigenfunction by \(\varphi_{k}(0)=1\), and we have \[\widehat{p}(0,\cdot)+\widehat{u}(0,\cdot)=W(\alpha,z).\] To rederive equation (8.1), we make the ansatz (8.4), expand the nonlinear (classical) solution, and differentiate, first in \(\alpha\) and setting \(\alpha=0\), then in \(z\) and setting \(z=0\), to get the derivatives \(\widehat{p_{\alpha z}}\) and \(\widehat{u_{\alpha z}}\). This is allowed because our solutions are small amplitude and locally defined, so do not suffer gradient blowup in the region \(0\leq x\leq\ell\). In this notation, it follows that \[\frac{\partial^{2}}{\partial z\,\partial\alpha}\mathcal{E}^{\ell}y^{0}\Big{|}_{(0,0)}=\widehat{p_{\alpha z}}(\ell,\cdot)+\widehat{u_{\alpha z}}(\ell,\cdot),\] where \(\cdot\) denotes a function of \(t\).
We use (8.4) to expand \(v(p,s)\) as \[v(p,s)=v(\overline{p}+z,s)+v_{p}(\overline{p}+z,s)\left[\alpha\,\varphi_{k}(x)\,\mathrm{c}(\omega t)+\widehat{p}\right]+O(|\alpha|^{2}),\] and substitute into (7.1) to get \[\alpha\,\dot{\varphi}_{k}\,\mathrm{c}(\omega t)+\widehat{p}_{x}+\alpha\,\omega\,\psi_{k}\,\mathrm{c}(\omega t)+\widehat{u}_{t}=0,\] \[\alpha\,\dot{\psi}_{k}\,\mathrm{s}(\omega t)+\widehat{u}_{x}-v_{p}(\overline{p}+z,s)\left(-\,\alpha\,\omega\,\varphi_{k}\,\mathrm{s}(\omega t)+\widehat{p}_{t}\right)=O(\alpha^{2}).\] Differentiating with respect to \(\alpha\) and setting \(\alpha=0\), we get \[\dot{\varphi}_{k}\,\mathrm{c}(\omega t)+\widehat{p_{\alpha}}_{x}+\omega\,\psi_{k}\,\mathrm{c}(\omega t)+\widehat{u_{\alpha}}_{t}=0,\] \[\dot{\psi}_{k}\,\mathrm{s}(\omega t)+\widehat{u_{\alpha}}_{x}-v_{p}(\overline{p}+z,s)\left(-\,\omega\,\varphi_{k}\,\mathrm{s}(\omega t)+\widehat{p_{\alpha}}_{t}\right)=0.\] We now differentiate this in \(z\) and set \(z=0\), to get \[\begin{split}\widehat{p_{\alpha z}}_{x}+\widehat{u_{\alpha z}}_{t}&=0,\\ \widehat{u_{\alpha z}}_{x}+\sigma^{2}\,\widehat{p_{\alpha z}}_{t}&=-v_{pp}(\overline{p},s)\,\omega\,\varphi_{k}\,\mathrm{s}(\omega t),\end{split} \tag{8.5}\] where we have used (7.6) and the fact that \(\widehat{p_{\alpha}}\big{|}_{(0,0)}=0\). This is a restatement of the second derivative equation (8.1). According to Lemma 7.6, and referring to our ansatz (7.41), namely \[y^{0}:=\overline{p}+z+\alpha\,\mathrm{c}(\omega\,t)+W(\alpha,z),\] we have \(W(\alpha,z)=o(|\alpha|)\), which yields \[\frac{\partial W}{\partial\alpha}\bigg{|}_{(0,0)}=0,\quad\text{so also}\quad\frac{\partial^{2}W}{\partial\alpha\,\partial z}\bigg{|}_{(0,0)}=0.\] This implies that the initial conditions for (8.5) are \[\widehat{p_{\alpha z}}(0,t)=0,\quad\widehat{u_{\alpha z}}(0,t)=0,\] and we wish to integrate to \(x=\ell\). Finally, we make the ansatz \[\widehat{p_{\alpha z}}=\widehat{\varphi}(x)\,\mathrm{c}(\omega t),\quad\widehat{u_{\alpha z}}=\widehat{\psi}(x)\,\mathrm{s}(\omega t),\quad\text{with}\quad\omega=k\tfrac{2\pi}{T_{k}},\] and separate variables in (8.5) to get the inhomogeneous system (8.2), as required. We solve (8.2) using Duhamel's principle with the fundamental solution \(\Psi(x;\omega)\) given in (7.11): it is straightforward to check that the solution of (8.2) is \[\left(\begin{array}{c}\widehat{\varphi}(x)\\ \widehat{\psi}(x)\end{array}\right)=-\omega\,\int_{0}^{x}\Psi(x-x^{\prime};\omega)\left(\begin{array}{c}0\\ 1\end{array}\right)\varphi_{k}(x^{\prime})\,v_{pp}\;dx^{\prime}, \tag{8.6}\] where \(v_{pp}=v_{pp}\big{(}\overline{p},s(x^{\prime})\big{)}\). We carry out this calculation explicitly in Lemma 8.3 below. This last integrand requires us to find the full fundamental solution \(\Psi(x;\omega)\) of the linear system (7.23). To do so, we consider initial data of the form \[\widetilde{\varphi}(0)=0\quad\text{and}\quad\widetilde{\psi}(0)=\widetilde{c}_{0},\] the solution of which provides the second column of \(\Psi\).
As in (7.24), we again use modified Prufer coordinates, and so we make the ansatz \[\widetilde{\varphi}(x):=-\widetilde{r}(x)\,\frac{1}{\widetilde{\rho}(x)}\, \mathrm{s}\big{(}\widetilde{\theta}(x)\big{)},\qquad\widetilde{\psi}(x):= \widetilde{r}(x)\,\widetilde{\rho}(x)\,\mathrm{c}\big{(}\widetilde{\theta}(x )\big{)}, \tag{8.7}\] As above, plugging in (8.7) into (7.23) and simplifying yields \[-\frac{\overset{\,\bullet}{\widetilde{r}}}{\widetilde{r}}\, \mathrm{s}(\widetilde{\theta})+\frac{\overset{\,\bullet}{\widetilde{\rho}}}{ \widetilde{\rho}}\,\mathrm{s}(\widetilde{\theta})-\mathrm{c}(\widetilde{\theta })\overset{\,\bullet}{\widetilde{\theta}}+\omega\,\widetilde{\rho}^{2}\, \mathrm{c}(\widetilde{\theta}) =0,\] \[\frac{\overset{\,\bullet}{\widetilde{r}}}{\widetilde{r}}\, \mathrm{c}(\widetilde{\theta})+\frac{\overset{\,\bullet}{\widetilde{\rho}}}{ \widetilde{\rho}}\,\mathrm{c}(\widetilde{\theta})-\mathrm{s}(\widetilde{\theta })\overset{\,\bullet}{\widetilde{\theta}}+\omega\,\frac{\sigma^{2}}{ \widetilde{\rho}^{2}}\,\mathrm{s}(\widetilde{\theta}) =0,\] with initial conditions \[\widetilde{\theta}(0)=0\quad\text{and}\quad\widetilde{r}(0)=1,\quad\text{so} \quad\widetilde{c}_{0}=\frac{1}{\widetilde{\rho}(0)}.\] Again choosing \(\widetilde{\rho}:=\sqrt{\sigma}=\rho(x)\), after simplifying we get \[\frac{\overset{\,\bullet}{\widetilde{r}}}{\widetilde{r}}=-\frac{\overset{ \,\bullet}{\sigma}}{2\sigma}\,\mathrm{c}(2\widetilde{\theta}),\qquad\overset{ \,\bullet}{\widetilde{\theta}}=\frac{\overset{\,\bullet}{\sigma}}{2\sigma}\, \mathrm{s}(2\widetilde{\theta})+\omega\,\sigma, \tag{8.8}\] and the first of these can again be integrated, giving \[\widetilde{r}(x)=\exp\Big{\{}-\int_{0}^{x}\mathrm{c}\big{(}2\widetilde{\theta }(y)\big{)}\;d\log\sqrt{\sigma}(y)\Big{\}}.\] We summarize the foregoing, which gives a full description of a fundamental solution of the SL system. **Lemma 8.2**.: _A fundamental matrix of the system (7.11) is_ \[\Psi(x;\omega)=\left(\begin{array}{cc}\varphi&\widetilde{\varphi}\\ \psi&\widetilde{\psi}\end{array}\right)=\left(\begin{array}{cc}r(x)\,\frac{ 1}{\rho(x)}\,\mathrm{c}\big{(}\theta(x)\big{)}&-\widetilde{r}(x)\,\frac{1}{ \rho(x)}\,\mathrm{s}\big{(}\widetilde{\theta}(x)\big{)}\\ r(x)\,\rho(x)\,\mathrm{s}\big{(}\theta(x)\big{)}&\widetilde{r}(x)\,\rho(x)\, \mathrm{c}\big{(}\widetilde{\theta}(x)\big{)}\end{array}\right),\] _with_ \[\rho(x)=\sqrt{\sigma(x)}\quad\text{and}\quad\Psi(0;\omega)=\left(\begin{array} []{cc}\frac{1}{\rho(0)}&0\\ 0&\rho(0)\end{array}\right),\] _where \(\theta\) and \(\widetilde{\theta}\) solve_ \[\dot{\theta}=\omega\,\sigma-\frac{\dot{\sigma}}{2\sigma}\,\mathrm{s}(2\theta), \qquad\dot{\widetilde{\theta}}=\omega\,\sigma+\frac{\dot{\sigma}}{2\sigma}\, \mathrm{s}(2\widetilde{\theta}),\] _with \(\theta(0)=\widetilde{\theta}(0)=0\) respectively, and \(r\) and \(\widetilde{r}\) are given explicitly by_ \[r(x)=\exp\Big{\{}\int_{0}^{x}\mathrm{c}\big{(}2\theta(y)\big{)} \;d\log\sqrt{\sigma}(y)\Big{\}},\] \[\widetilde{r}(x)=\exp\Big{\{}-\int_{0}^{x}\mathrm{c}\big{(}2 \widetilde{\theta}(y)\big{)}\;d\log\sqrt{\sigma}(y)\Big{\}},\] _respectively. 
Moreover, we have_ \[\det\Psi(x;\omega)=r(x)\,\widetilde{r}(x)\,\mathrm{c}\big{(}\theta(x)-\widetilde{\theta}(x)\big{)}=1\] _for all \(x\), and in particular_ \[|\theta(x)-\widetilde{\theta}(x)|<\frac{\pi}{2}\quad\text{for all $x$.}\] In this representation of the fundamental solution, the first column describes the linearized evolution of the even modes \(\mathrm{c}(\omega\,t)\), and the second column describes the evolution of the odd modes \(\mathrm{s}(\omega\,t)\). Proof.: Combining (7.25), (7.26) and (7.27) with (8.8) yields the fundamental solution, and \(\Psi(0;\omega)\) follows from our choices \(\theta(0)=\widetilde{\theta}(0)=0\) and \(r(0)=\widetilde{r}(0)=1\). Abel's theorem implies the determinant is constant, and the last inequality follows by continuity of \(\theta\) and \(\widetilde{\theta}\). Having calculated the fundamental matrix \(\Psi\) of (7.11), we now solve the inhomogeneous system (8.2). Rather than use Duhamel's principle (8.6) directly, we rederive it explicitly in the Prufer variables using variation of parameters. **Lemma 8.3**.: _The solution of (8.2) can be written as_ \[\left(\begin{array}{c}\widehat{\varphi}(x)\\ \widehat{\psi}(x)\end{array}\right)=\Psi(x;\omega)\,\left(\begin{array}{c}a(x)\\ b(x)\end{array}\right), \tag{8.9}\] _for appropriate functions \(a\) and \(b\), and we have \(b(x)<0\) for all \(x>0\)._ Proof.: Because \(\Psi\) is a fundamental solution, using the ansatz (8.9) in (8.2) yields the simplified system \[\left(\begin{array}{c}\dot{a}\\ \dot{b}\end{array}\right)=-v_{pp}\,\omega\,\varphi\,\Psi^{-1}\,\left(\begin{array}{c}0\\ 1\end{array}\right),\qquad a(0)=b(0)=0. \tag{8.10}\] Next, since \(\det\Psi\equiv 1\), we have \[\Psi^{-1}=\left(\begin{array}{cc}\widetilde{\psi}&-\widetilde{\varphi}\\ -\psi&\varphi\end{array}\right),\] and the second component of (8.10) simplifies to \[\dot{b}=-v_{pp}\,\omega\,\varphi^{2}<0.\] It follows that \(b(x)<0\) for all \(x>0\), as required. Lemma 8.1 gives the second derivative of the evolution in terms of the fundamental solution, which is calculated in Lemma 8.2. In Lemma 8.3, we explicitly calculate a nonvanishing term due to genuine nonlinearity, as seen by the \(v_{pp}\) term in (8.10). This in turn allows us to solve the bifurcation equation as stated in Lemma 7.7 above. Proof of Lemma 7.7.: We substitute (8.3) into (7.40), to get \[\Big{\langle}\mathrm{s}\big{(}\omega\,t-k\,\tfrac{\pi}{2}\big{)},\;\frac{\partial^{2}}{\partial z\,\partial\alpha}\mathcal{E}^{\ell}y^{0}\Big{|}_{(0,0)}\Big{\rangle}=\begin{cases}\widehat{\varphi}(\ell),&k\ \mathrm{odd},\\ \widehat{\psi}(\ell),&k\ \mathrm{even},\end{cases}\] respectively, where \((\widehat{\varphi},\widehat{\psi})\) solve (8.2). Evaluating (8.9) at \(x=\ell\), we get \[\left(\begin{array}{c}\widehat{\varphi}(\ell)\\ \widehat{\psi}(\ell)\end{array}\right)=\left(\begin{array}{cc}\varphi(\ell)&\widetilde{\varphi}(\ell)\\ \psi(\ell)&\widetilde{\psi}(\ell)\end{array}\right)\,\left(\begin{array}{c}a(\ell)\\ b(\ell)\end{array}\right),\quad\mathrm{with}\quad b(\ell)<0.\] By our choice (7.30) of \(\omega\), for \(k\) odd we have (7.28), so that \(\varphi(\ell)=0\), which implies that \[\widehat{\varphi}(\ell)=\widetilde{\varphi}(\ell)\,b(\ell)\neq 0,\] and similarly for \(k\) even, we have (7.29), which is \(\psi(\ell)=0\), so we must have \[\widehat{\psi}(\ell)=\widetilde{\psi}(\ell)\,b(\ell)\neq 0,\] and the proof is complete.
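For readers who want to check these formulas concretely, the following is a small numerical sketch of Lemmas 8.2 and 8.3: it integrates the Prufer angle equations for \(\theta\) and \(\widetilde{\theta}\), assembles the fundamental matrix \(\Psi(x;\omega)\), verifies \(\det\Psi=1\), and evaluates the sign of \(b(\ell)\). The profile \(\sigma(x)\), the constant \(v_{pp}\), and the values of \(\omega\) and \(\ell\) below are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative smooth coefficient sigma(x) > 0 (the role of sqrt(-v_p));
# this profile and the constants below are assumptions for the sketch.
def sigma(x):
    return 1.0 + 0.3 * np.sin(2 * np.pi * x)

def dlogsqrt(x):                 # d/dx log sqrt(sigma) = sigma'/(2 sigma)
    return 0.3 * np.pi * np.cos(2 * np.pi * x) / sigma(x)

v_pp, omega, ell = 1.0, 5.0, 1.0

def rhs(x, y):
    th, tht, lr, lrt = y         # theta, theta-tilde, log r, log r-tilde
    d = dlogsqrt(x)
    return [omega * sigma(x) - d * np.sin(2 * th),
            omega * sigma(x) + d * np.sin(2 * tht),
            d * np.cos(2 * th),
            -d * np.cos(2 * tht)]

sol = solve_ivp(rhs, (0.0, ell), [0.0, 0.0, 0.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

def Psi(x):                      # fundamental matrix of Lemma 8.2
    th, tht, lr, lrt = sol.sol(x)
    r, rt, rho = np.exp(lr), np.exp(lrt), np.sqrt(sigma(x))
    return np.array([[r * np.cos(th) / rho, -rt * np.sin(tht) / rho],
                     [r * rho * np.sin(th),  rt * rho * np.cos(tht)]])

print("det Psi(ell) =", np.linalg.det(Psi(ell)))   # Abel: should equal 1

# Lemma 8.3: b'(x) = -v_pp * omega * phi(x)^2, so b(ell) < 0 when v_pp > 0
xs = np.linspace(0.0, ell, 2001)
phi = np.array([Psi(x)[0, 0] for x in xs])
b_ell = -v_pp * omega * float(np.sum(phi**2) * (xs[1] - xs[0]))
print("b(ell) =", b_ell)
```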
2308.15807
ACNPU: A 4.75TOPS/W 1080P@30FPS Super Resolution Accelerator with Decoupled Asymmetric Convolution
Deep learning-driven superresolution (SR) outperforms traditional techniques but also faces the challenge of high complexity and memory bandwidth. This challenge leads many accelerators to opt for simpler and shallow models like FSRCNN, compromising performance for real-time needs, especially for resource-limited edge devices. This paper proposes an energy-efficient SR accelerator, ACNPU, to tackle this challenge. The ACNPU enhances image quality by 0.34dB with a 27-layer model, but needs 36% less complexity than FSRCNN, while maintaining a similar model size, with the decoupled asymmetric convolution and split-bypass structure. The hardware-friendly 17K-parameter model enables holistic model fusion instead of localized layer fusion to remove external DRAM access of intermediate feature maps. The on-chip memory bandwidth is further reduced with the input stationary flow and parallel-layer execution to reduce power consumption. Hardware is regular and easy to control to support different layers by processing elements (PEs) clusters with reconfigurable input and uniform data flow. The implementation in the 40 nm CMOS process consumes 2333 K gate counts and 198KB SRAMs. The ACNPU achieves 31.7 FPS and 124.4 FPS for x2 and x4 scales Full-HD generation, respectively, which attains 4.75 TOPS/W energy efficiency.
Tun-Hao Yang, Tian-Sheuan Chang
2023-08-30T07:23:32Z
http://arxiv.org/abs/2308.15807v1
# ACNPU: A 4.75TOPS/W 1080P@30FPS Super Resolution Accelerator with Decoupled Asymmetric Convolution ###### Abstract Deep learning-driven superresolution (SR) outperforms traditional techniques but also faces the challenge of high complexity and memory bandwidth. This challenge leads many accelerators to opt for simpler and shallow models like FSRCNN, compromising performance for real-time needs, especially for resource-limited edge devices. This paper proposes an energy-efficient SR accelerator, ACNPU, to tackle this challenge. The ACNPU enhances image quality by 0.34dB with a 27-layer model, but needs 36% less complexity than FSRCNN, while maintaining a similar model size, with the _decoupled asymmetric convolution and split-bypass structure_. The hardware-friendly 17K-parameter model enables _holistic model fusion_ instead of localized layer fusion to remove external DRAM access of intermediate feature maps. The on-chip memory bandwidth is further reduced with the _input stationary flow_ and _parallel-layer execution_ to reduce power consumption. Hardware is regular and easy to control to support different layers by _processing elements (PEs) clusters with reconfigurable input and uniform data flow_. The implementation in the 40 nm CMOS process consumes 2333 K gate counts and 198 KB SRAMs. The ACNPU achieves 31.7 FPS and 124.4 FPS for \(\times\)2 and \(\times\)4 scales Full-HD generation, respectively, which attains 4.75 TOPS/W energy efficiency. Keywords: convolution neural network, super resolution, asymmetric convolution neural network, AI accelerator ## I Introduction Deep learning-based SR models [1, 2, 3, 4] have become popular in recent years due to their superior performance over traditional approaches. As these models evolve, becoming deeper, broader, and more intricate, they also become more computationally demanding and consume more memory bandwidth. The computational load further intensifies with high-definition (HD) or larger input sizes and the corresponding intermediate feature maps. This poses a great challenge for real-time low-power applications on resource-limited edge devices, which demands co-design of hardware acceleration and lightweight model designs. Several hardware accelerators for SR have been introduced [5, 6, 7, 8, 9, 10] tailored for real-time HD applications. Yet, many current solutions prioritize ease of hardware design over image quality by adopting simpler network architectures like the commonly employed FSRCNN or FSRCNN-s [11] and their basic variants. This choice, driven by complexity concerns, often results in the compromised quality of the reconstructed images. Moreover, the significant challenge posed by memory bandwidth remains unaddressed in many designs. The prevalent layer-by-layer processing technique often necessitates storing intermediate data in DRAM and subsequently reloading them for each layer, or demands a substantial buffer for temporary storage. Both approaches are impractical given the vast feature maps. To address the bandwidth dilemma, works like [7, 8] have employed tile-based layer fusion. Although this reduces bandwidth requirements, substantial intermediate data bandwidth and buffer storage are still needed, especially for tile boundary data, owing to their model configurations. Several lightweight SR models have been developed to balance image quality against computational complexity. Fig. 1 contrasts the operations, parameters, and performance of these models.
Within this comparison, FSRCNN-s [11] offers compactness but at the cost of image quality. SRNPU [7], with over 100K parameters, employs dynamic processing, segmenting image tiles to manage processing workload effectively. HPAN [10] introduces a simplified pixel attention model and non-overlapping block processing for \(\times 2\) scaling. However, it wrestles with computational demands and artifacts at \(\times 4\) scaling. Additionally, SR models incorporate lightweight techniques such as depthwise convolution [12], group convolution [13], and asymmetric convolution [14, 15]. These models, designed primarily for software, possess intricate architectures that aren't friendly for hardware implementation, leading to increased buffer demands. Prior models also tend to intertwine asymmetric convolutions or combine them with standard ones, which hinders optimizing the benefits of asymmetric convolution and raises buffer requirements. Addressing the above issues, this paper introduces a real-time SR neural processing unit, ACNPU, optimized through a synergy of hardware and software. The SR accelerator enhances image quality by 0.34dB using a 27-layer architecture, ACNet, yet it boasts 36% less complexity than the FSRCNN, while maintaining a similar model size. This is achieved using a _decoupled asymmetric convolution and split-bypass structure_. Designed to be hardware-friendly, the model leverages regular structures and localized connections, avoiding long connections to save buffer cost. Its 17K model size allows the accelerator to bypass external memory access for intermediate feature maps, thanks to the _holistic model fusion_. Additional efficiencies are realized by parallel execution of 1x3/1x1 layers and a _local input stationary flow_, reducing internal memory access. The core of this system is built on regular processing element (PE) clusters, which have a uniform data flow and reconfigurable input across layers. The implementation is capable of real-time full-HD processing, offering an energy efficiency of 4.75 TOPS/W. The remainder of the paper is organized as follows. Section II presents the proposed ACNet model. Section III shows the hardware design. Section IV illustrates the experimental results. Finally, we conclude this paper in Section V. ## II Proposed ACNet for superresolution ### _Proposed ACNet_ The primary objective of the target model is to enhance image quality using hardware-optimized structures. It aims to achieve this while maintaining a model size and complexity comparable to other existing SR accelerators. Notably, previous lightweight SR models, optimized primarily for software, still demand parameters nearing the mega scale. In contrast, the models typically employed in other SR accelerators don't emphasize quality optimization. Fig. 2 shows the proposed model, ACNet, to meet the above goal, which contains one 3x1 convolution layer, eight channel-bypass blocks (CBBs) with 1x3/1x1/1x1 layers, two 3x1 convolution layers and one pixel shuffle layer for final reconstruction output. For quality, this model cascades eight CBBs to extract features and form a deeper model than before for better performance. However, even with deeper models, we reduce the complexity and model size with _decoupled asymmetric convolution and split-bypass structure_, which is also hardware-friendly. The model uses local short-cut connections instead of global short-cut connections to avoid large buffers. 
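To make the topology concrete, here is a minimal PyTorch sketch of an ACNet-like network following the composition just described and detailed in the following paragraphs. The input-channel count, activation choice, and the grouping factor of the tail convolutions are assumptions for illustration, so this sketch will not reproduce the paper's exact 17K-parameter count.

```python
import torch
import torch.nn as nn

class CBB(nn.Module):
    """Channel-bypass block: split the 48 channels into three groups of 16,
    process two groups with a 1x3 conv followed by a 1x1 conv, bypass the
    third group, then merge all 48 channels with a final 1x1 conv."""
    def __init__(self, ch=48, grp=16):
        super().__init__()
        self.cut = 2 * grp
        self.conv1x3 = nn.Conv2d(self.cut, self.cut, (1, 3), padding=(0, 1))
        self.conv1x1 = nn.Conv2d(self.cut, self.cut, 1)
        self.merge = nn.Conv2d(ch, ch, 1)
        self.act = nn.ReLU(inplace=True)  # activation choice is an assumption

    def forward(self, x):
        proc, skip = x[:, :self.cut], x[:, self.cut:]
        y = self.conv1x1(self.act(self.conv1x3(proc)))
        return self.merge(torch.cat([self.act(y), skip], dim=1))

class ACNetSketch(nn.Module):
    """Head 3x1 conv -> eight CBBs -> two grouped 3x1 convs -> pixel shuffle."""
    def __init__(self, scale=2, ch=48, blocks=8):
        super().__init__()
        self.head = nn.Conv2d(1, ch, (3, 1), padding=(1, 0))  # 1-ch input assumed
        self.body = nn.Sequential(*[CBB(ch) for _ in range(blocks)])
        self.tail1 = nn.Conv2d(ch, ch, (3, 1), padding=(1, 0), groups=4)
        self.tail2 = nn.Conv2d(ch, scale ** 2, (3, 1), padding=(1, 0), groups=4)
        self.up = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.up(self.tail2(self.tail1(self.body(self.head(x)))))
```

As a quick usage check, `ACNetSketch(scale=2)(torch.randn(1, 1, 96, 96))` returns a `(1, 1, 192, 192)` tensor.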
The asymmetric convolution uses kernel sizes of 1xK or Kx1 instead of KxK to reduce complexity. However, the adopted asymmetric convolution, unlike the previous tightly coupled 1xK/Kx1 arrangement, is decoupled to the two ends of the model. The top and bottom layers use vertical directions (3x1), while the central layers (CBB) use horizontal directions (1x3). CBBs are our main backbone for feature extraction. However, if we use only one type of asymmetric convolution, the model cannot learn the information from the other dimension. Thus, we use vertical convolution in the first and last two stages to achieve better performance without a large buffer size. The CBB consists of a split-bypass operation that uses 1x3 and 1x1 convolutions and bypasses channels as a balance between complexity and performance. In these CBBs, we only use one direction of asymmetric convolution (1x3) instead of two directions as in other models to save boundary buffer size and ease hardware design. In CBBs, the input channels are divided into three groups with 16 channels in each group, where two groups will be processed and one group is bypassed to save computation while preserving performance. At the end of the block, a 1x1 convolution is applied to merge the features of all different channels for better reconstruction. ### _Analysis of ACNet_ Fig. 3 and Fig. 4 show details of the basic block and the whole model. In Fig. 3, due to channel bypass, the operation and parameter counts are reduced by 34 % compared to the version without channel bypass. The reason for using a normal 1x1 convolution at the end of the block is to improve the performance of the reconstruction. The last two 3x1 convolution layers adopt group convolution to reduce about 75 % of parameters and operations. In addition, asymmetric convolution can reduce 44 % of operations and 45 % of parameters. Fig. 4 shows the parameters and operations of the overall model, where the CBBs occupy 94%. Fig. 1 shows the ball chart of parameters and operation count versus performance in the Set5 test data set. The proposed ACNet achieves a performance comparable to HPAN [10], but reduces the parameters and operations to 67% and 53% of HPAN. Compared to [12], which also uses asymmetric convolution, our performance is 0.68 dB higher. Fig. 2: The proposed ACNet. The convolution notation, AxB, (a, b, c), is kernel size AxB, input channel a, output channel b, and group number c, respectively. Fig. 1: Comparison of different SR models in terms of model size, multiply-accumulate (MAC) and peak signal-to-noise ratio (PSNR). The yellow dots are software works and the blue dots are hardware models. The orange dot is our ACNet model. ### _Quantization format_ Most previous work uses the fixed point (FXP) format in their architectures for a lower cost, but the floating-point (FP) format is better for performance. For example, in [6] and HPAN [10], their FXP performance is decreased by 0.3 and 0.2 dB, respectively, compared to their FP counterpart. The situation results from the quantization error and the outlier values. [9] uses both formats in its design to handle some outliers and decrease external memory access. Instead of using hybrid processing, all features and weights in our design are quantized to 13 bits (S1E5M7) and 10 bits (S1E5M4) in the FP format, respectively, as a trade-off between performance and cost, where S: sign, E: exponent, M: mantissa. Table I shows the quantization in FP and FXP format with different bit lengths.
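The following NumPy sketch illustrates this kind of sign/exponent/mantissa rounding; the rounding mode and the handling of zeros, subnormals, and overflow are our assumptions, since the text does not pin them down.

```python
import numpy as np

def quantize_fp(x, exp_bits=5, man_bits=7):
    """Round values to a 1-sign/exp_bits-exponent/man_bits-mantissa float.
    Rounding mode, subnormal handling, and saturation are assumptions."""
    x = np.asarray(x, dtype=np.float64)
    sign, mag = np.sign(x), np.abs(x)
    bias = 2 ** (exp_bits - 1) - 1
    e = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))
    e = np.clip(e, -bias + 1, bias)            # clamp the exponent range
    step = 2.0 ** (e - man_bits)               # quantization step at exponent e
    q = np.round(mag / step) * step            # round the mantissa
    q = np.minimum(q, (2.0 - 2.0 ** -man_bits) * 2.0 ** bias)  # saturate
    return np.where(mag > 0, sign * q, 0.0)

feat = quantize_fp(np.random.randn(8), exp_bits=5, man_bits=7)  # 13-bit S1E5M7
wgt = quantize_fp(np.random.randn(8), exp_bits=5, man_bits=4)   # 10-bit S1E5M4
```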
We can see that the results of the two methods in the scaling factor \(\times\)2 are similar, but FP13 is better in the scaling factor \(\times\)4 due to a wider range of values than the other, which handles more outliers that can damage the performance of the model. \begin{table} \begin{tabular}{|c|c|c|} \hline Q-Type & Set5 & B100 \\ \hline \multicolumn{3}{|c|}{\(\times\)2} \\ \hline FP32 & 37.41 & 31.73 \\ FXP13 & 37.35 & 31.71 \\ FP13 & 37.37 & 31.71 \\ \hline \multicolumn{3}{|c|}{\(\times\)4} \\ \hline FP32 & 31.22 & 27.20 \\ FXP13 & 30.94 & 27.11 \\ FP13 & 31.20 & 27.20 \\ \hline \end{tabular} \end{table} TABLE I: The results of the two quantization formats. Fig. 3: Parameters and operations detail of the CBB. Fig. 4: Parameters and operations of the overall model. ## III System Architecture of ACNPU ### _Design Challenges and Proposed Solutions_ Due to the large input size and deep SR models, traditional SR accelerators face three design challenges: hardware cost, external memory access, and internal memory access. In this work, we markedly reduce the hardware cost associated with MAC operations through our proposed lightweight, hardware-optimized ACNet. The central issue then becomes to execute the necessary 3x1 / 1x3 / 1x1 layers cost-effectively, which we address using _PE clusters with reconfigurable input and uniform data flow_. Regarding external memory access, the compact nature of our model, sized at 17K, allows us to eliminate external memory access of intermediate feature maps. This is achieved through a _holistic model fusion_ strategy rather than a localized layer fusion as discussed in [16]. This fusion processes a non-overlapping 3\(\times\)192 tile of the image through the entire model before proceeding to the subsequent tile. This continues until the full super-resolution image is synthesized. During the process, the small model is easily stored on the chip to save frequent access. All boundary partial sums of tiles are stored in boundary SRAMs, operating in a ping-pong buffer fashion. This storage method helps reconstruct the image without manifesting checkerboard patterns and reduces additional DRAM access as compared to the approach in [7]. To reduce internal memory access, our design executes 1\(\times\)3 and 1\(\times\)1 layers in the CBB in parallel, facilitating direct data transfers between PEs without the need for intermediary storage. Additionally, the design employs a _local input stationary flow_ that temporarily stores data from local SRAMs, eliminating the need for constant accesses over several computing cycles. Fig. 5 shows the overall architecture that includes six clusters for computing and three SRAM buffers for storage. Depending on the requirement, this design can handle full HD upscaling for \(\times\)2 and \(\times\)4 and any number of CBBs within 8. The following will introduce details of this design. ### _Detailed Architecture of the cluster_ Fig. 6 illustrates the cluster's architecture. This architecture comprises 18 _PE_ and 8 _PE'_ modules. These modules enable the support of all convolution types present in the model through input reconfiguration. The cluster employs the _local input stationary flow_. This flow buffers the feature maps across several computing cycles to save recurrent access. Input and weights are broadcast to each MAC for simultaneous processing. Each module then accumulates the results. The partial sum results located at the boundary are combined at the two boundary processing blocks.
This combination uses previously stored data either from the boundary SRAMs or registers, yielding the final output. The following sections provide a detailed description of the data flow. For the two types of processing elements, _PE_ has only one mode, and _PE'_ has two modes to support different layers. For _PE_ as shown in Fig. 7 (a), eight inputs and weights will be multiplied, and then the eight results and partial sums of the other _PEs_ will be accumulated together to generate the results. _PE'_ as shown in Fig. 7 (b) is similar to _PE_ but could also accumulate values from different cycles. The following shows the four operating modes in PE clusters to support different layers with reconfigurable input and uniform data flow for easy control and low area overhead. #### III-B1 Mode 1: 3x1 convolution Fig. 8 shows the data flow and access to the input data for the vertical (3x1) convolution in the initial layer. As illustrated in Fig. 8 (b), during the first cycle _T1_, the primary 3x6 input pixels, labeled as _#0_, are fetched from the off-chip memory. These pixels are then stored within the input buffers and sent to the cluster 0 through 2. Subsequently, in the second cycle, the second input _#1_ is retrieved and reserved for the cluster 3 through 5. To minimize the input buffer access frequency, these inputs remain constant and undergo updates every 32 cycles, exemplifying our input stationary flow. Meanwhile, as shown in Fig. 8 (a), the weight buffer is refreshed in each cycle using data from the weight SRAM. The weight retrieval sequence is cyclical, beginning with output channel 0 and ending at output channel 31, continuing until the entire input tile undergoes processing. Within the flow described, Fig. 8 (a) highlights that six inputs for a single cluster are horizontally broadcast to 18 _PEs_. These are then multiplied by vertically broadcast weights. The resulting products of this multiplication are diagonally cumulated. In particular, while each PE accommodates eight inputs, only one from the input image is available. This limitation reduces the _PE_ utilization to a mere 8.6%, representing a significant under-utilization. However, the execution time of this mode is relatively short compared to that of the other three modes, as shown later. Fig. 8: The dataflow of the 3\(\times\)1 convolution layer. #### III-B2 Mode 2: parallel execution of 1\(\times\)3 and 1\(\times\)1 convolutions Fig. 9 illustrates the data flow and input data access mechanisms for the 1\(\times\)3 and 1\(\times\)1 convolutions within the CBB block. Given that the 1\(\times\)3 and 1\(\times\)1 convolutions together account for 46% of the computational complexity in this model, they are cascaded (the 1\(\times\)3 convolution on the 18 _PEs_ and the 1\(\times\)1 convolution on the 8 _PE'_s) to curtail internal memory access, leading to a reduction in power consumption. As highlighted in Fig. 9 (b), the 6\(\times\)3\(\times\)16 input features are fetched from on-chip memory every 16 cycles: channel 0\(\sim\)7 are designated for the cluster 0\(\sim\)2, while channel 8\(\sim\)15 are reserved for the cluster 3\(\sim\)5. Weight updates in this mode occur cyclically, analogous to mode 1. Both _PE_ and _PE'_ modules are actively utilized here. While the weight sequencing in _PEs_ (for the 1\(\times\)3 convolution) mirrors that of mode 1, the _PE'_s (for the 1\(\times\)1 convolution) rotate their weight order from channel 0 to 15. Following the stipulated access sequence, weights are broadcast vertically to the _PEs_ (for 1\(\times\)3 convolutions) and the _PE'_s (for 1\(\times\)1 convolutions).
Inputs are broadcast horizontally across 18 _PEs_ for the 1\(\times\)3 convolution, with the resultant outputs relayed to _PE'_s for the subsequent 1\(\times\)1 convolution. As illustrated in Fig. 9 (a), these partial sums spanning different channels are then aggregated. Finally, the partial boundary sums derived from the bottom two _PE'_s are retained in the boundary buffer, poised to be amalgamated with other boundary values. Fig. 5: The proposed system architecture. Fig. 6: Detail of the processing cluster. Fig. 7: Details of the _PE_ and the _PE'_. #### III-B3 Mode 3: 1x1 fusion convolution layer As for the 32 channels of the 1x1 convolution layer in the CBB, Fig. 10 shows the data flow and the access of the input data. As in Fig. 10 (b), the 3x6x16 features are read from on-chip memory and are updated every 32 cycles. The data _#0_ and _#1_ in the channel 0 to 15 and 16 to 31 are distributed to the cluster 0\(\sim\)2 at the _T1_ and _T2_ cycles, respectively. Similar operations are applied to cluster 3\(\sim\)5 in the next two cycles for the data _#2_ and _#3_. The data flow is as follows. In this mode, _PE_ and _PE'_ have the same operation. To increase hardware utilization, unlike other modes, the first 16 channels are given to the 12 left _PEs_, and the last 16 channels are passed to the 6 right _PEs_ and 6 _PE'_s. This is the reason why we need independent input feature buffers for every PE instead of row-wise broadcasting. Although the hardware cost is increased, we can double the utilization. The order of weights is read circularly from output channel 0 to output channel 31, which are broadcast to the _PEs_ and _PE'_s. #### III-B4 Mode 4: 3x1 group convolution This mode is similar to mode 1, except that the input is updated every 8 cycles due to its group operations. If the output channel is \(g=32\), the cycle count to update the input is given by \(g/4=8\) in our case. ### _Boundary SRAM_ Table II shows the boundary SRAM used in the three 3x1 convolution layers as in Fig. 2 by assuming \(\times\)2 scaling and FullHD@30fps output. The total buffer size is larger than the feature and weight SRAM due to the ping-pong buffer strategy to keep the hardware busy. A tradeoff to reduce buffer size is to store them in the external memory with the extra DRAM access, which requires an extra 4.2MB/s bandwidth without this boundary buffer. \begin{table} \begin{tabular}{|c|c|c|} \hline & SRAM (KB) & Off-Chip Bandwidth (MB/s) \\ \hline CONV\_3x1\_1 & 57 & 1.68 \\ CONV\_3x1\_2 & 57 & 1.68 \\ CONV\_3x1\_3 & 28 & 0.84 \\ \hline Total & 142 & 4.20 \\ \hline \end{tabular} \end{table} TABLE II: Analysis of three vertical convolutions in the ACNet. ## IV Experimental Result ### _Experimental setup_ For the data sets, we use two training datasets, DIV2K and Flickr2K, and five commonly used benchmark datasets, Set5 [17], Set14 [18], B100 [19], and Urban100 [20] with PSNR as the evaluation metric. For model training, we randomly crop 256x256 patches from the training datasets and apply data augmentation to these patches, such as rotation and flip. The batch size is 128. The L1 loss function is used with the AdamW optimizer for training. We use a step learning scheduler with gamma set to 0.5 for learning rate decay every 200 epochs. The initial learning rate is 5e-3 and the total epochs are 2000. The proposed model is implemented in the PyTorch framework with the NVIDIA DGX Station A100. ### _Model evaluations_ Table III and Table IV show the experimental results for the \(\times\)2 and \(\times\)4 scaling. The MAC is calculated assuming that the output image is 1920\(\times\)1080. These tables select lightweight models for comparison, which can be divided into lightweight models (model parameters \(>\) 100K) and ultra-lightweight ones (model parameters \(\leq\) 100K).
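As a concrete illustration of this MAC accounting, the helper below counts multiply-accumulates for an ACNet-like stack targeting a 1920\(\times\)1080 output. The per-layer shapes reuse the assumptions of the earlier PyTorch sketch, so the printed total is indicative only and does not match the paper's reported GMAC figures.

```python
def conv_macs(h, w, cin, cout, kh, kw, groups=1):
    """MACs of one convolution layer on an h x w feature map."""
    return h * w * (cin // groups) * cout * kh * kw

scale = 2
H, W = 1080 // scale, 1920 // scale            # processing happens at LR size
macs = conv_macs(H, W, 1, 48, 3, 1)            # head 3x1
for _ in range(8):                             # eight CBBs
    macs += conv_macs(H, W, 32, 32, 1, 3)      # 1x3 on the two processed groups
    macs += conv_macs(H, W, 32, 32, 1, 1)      # following 1x1
    macs += conv_macs(H, W, 48, 48, 1, 1)      # 1x1 merge
macs += conv_macs(H, W, 48, 48, 3, 1, groups=4)
macs += conv_macs(H, W, 48, scale ** 2, 3, 1, groups=4)
# NOTE: with these assumed widths the total exceeds the paper's reported
# GMAC, since the exact per-layer shapes and grouping differ.
print(f"~{macs / 1e9:.1f} GMAC per x{scale} Full-HD frame")
```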
Compared to the best lightweight model in the table, our performance in B100 is only 0.75dB lower. However, its parameters and operations are 31\(\times\) and 25\(\times\) different from ours, respectively. Among lightweight models, LWN [21] has a performance similar to ours in Set5 and Set14, but better than ours in B100 and Urban100. The reason is that the images in Urban100 usually have a regular texture, which cannot be handled well because of the limited vertical-direction information in our model. However, if we take into account the computation cost and simplicity, our simple and powerful model still has its advantages. Compared to the ultra-lightweight models, Fig. 11 shows their performance in B100 and the number of MACs. The red star symbol represents our method. Compared to HPAN, this work shows a slight loss in PSNR, but reduces almost
### _Hardware_ #### Iv-D1 Implementation results and design comparison Table VI shows our design implementation and comparison to other works. Our design in the TSMC 40nm CMOS process can attain real-time full HD SR with 2332K gate count and 198KB SRAM when operating at 270MHz. Our design needs the smallest external bandwidth due to the _holistic model fusion_, which is enabled by our small model size. Compared to other designs, the proposed design has low MAC numbers and small buffer size, but better performance due to the deeper structure, especially compared to other FSRCNN-based designs [7, 9]. Our energy efficiency is also better than others except [8]. PSNR in [8] is 0.5 dB higher due to their 583K-parameter model. Due to this large model, its throughput becomes lower. Its energy efficiency is higher than that of ours because of its large number of MACs. SRNPU [7] has a similar throughput as ours in the scaling factors \(\times\)2, but the throughput in the scaling factor \(\times\)4 is lower than ours. The main reason is that they need to support different models in one architecture. In its design, the PE utilization is 88.2 % for the large model but it drops to only 51 % for the simple model. In contrast, as shown in Fig.13, our average PE utilization in scaling factors \(\times\)2 and \(\times\)4 is almost the same. Our energy efficiency is at least 1.8x higher than that of the SRNPU. #### Iv-D2 Hardware utilization Fig. 13 shows the execution time and hardware utilization of four modes for \(\times\)2 scaling. A similar distribution is also found in \(\times\)4 scaling. As mentioned before, although the utilization of the first mode is very low, its execution time is insignificant for the whole model. Therefore, the average PE utilization is about 88.3 % and 88.0 % at \(\times\)2 and \(\times\)4, respectively. The reason why the utilization at \(\times\)4 is lower than at \(\times\)2 is that the execution time of mode 4 is 2 % more and thus decreases the average utilization. #### Iv-D3 Area analysis Fig. 14 shows the area analysis of our design. Because our ACNet is an ultra-light-weight model, the \begin{table} \begin{tabular}{|c|c|c|} \hline & Set5 & B100 & M.109 \\ \hline All vertical & 34.55 & 30.00 & 32.07 \\ All horizontal & 34.06 & 30.10 & 31.57 \\ \hline First horizontal, rest vertical & 37.21 & 31.61 & 36.59 \\ First vertical, rest horizontal & 37.19 & 31.59 & 36.66 \\ \hline \end{tabular} * This model has its own hardware design. \end{table} TABLE V: Results of different configurations of asymmetric convolution. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Model** & **Size(K)** & **GMAC** & **Set5** & **Set14** & **B100** & **U100** \\ \hline Bicubic & - & - & 28.42 & 26.00 & 25.96 & 23.14 \\ \hline FSRCNN-s & 4 & 6 & 30.11 & 27.19 & 26.84 & - \\ \hline \({}^{*}\)SRNT\({}^{*}\) & 183 & 1 & 30.41 & 27.37 & 26.86 & - \\ \hline SRCNN & 57 & 119 & 30.48 & 27.49 & 26.90 & 24.52 \\ \hline FSRCNN & 13 & 10 & 30.71 & 27.59 & 26.98 & 24.62 \\ \hline XLSR\({}^{*}\) & 28 & 4 & 30.09 & 27.70 & 26.98 & 24.58 \\ \hline **ACNet** & **18** & **2** & **30.78** & **27.62** & **27.00** & **24.63** \\ \hline HPAN & 26 & 8 & 30.88 & 27.68 & 27.03 & 24.69 \\ \hline VDSR & 665 & 1,225 & 31.35 & 28.01 & 27.29 & 25.18 \\ \hline LapSRN & 813 & 336 & 31.54 & 28.19 & 27.32 & 25.21 \\ \hline LWN & 286 & 39 & 31.70 & 28.19 & 27.32 & 25.48 \\ \hline DRRN & 297 & 15,293 & 31.68 & 28.21 & 27.38 & 25.44 \\ \hline IDN & 600 & 72 & 31.82 & 28.25 & 27.41 & 25.41 \\ \hline CARN-M & 412 & 73 & 31.92 & 28.42 & 27.44 & 25.62 \\ \hline MADNet & 1,002 & 122 & 32.01 & 28.45 & 27.47 & 25.77 \\ \hline T-FMB & 690 & - & 32.08 & 28.51 & 27.49 & 25.89 \\ \hline RPAN & 643 & 72 & 32.24 & 28.61 & 27.57 & 26.11 \\ \hline PAN & 272 & 64 & 32.13 & 28.61 & 27.59 & 26.11 \\ \hline \({}^{a}\)eCNN & 583 & 398 & 32.11 & 28.61 & 27.59 & 26.11 \\ \hline LatticeNet & 777 & 98 & 32.30 & 28.68 & 27.62 & 26.25 \\ \hline FENet & 675 & - & 32.24 & 28.61 & 27.63 & 26.20 \\ \hline MRDN & 582 & 72 & 32.36 & 28.76 & 27.67 & 26.41 \\ \hline ADBNet & 535 & 57 & 32.82 & 29.01 & 27.95 & 27.16 \\ \hline \end{tabular} * This model has its own hardware design. \end{table} TABLE IV: The quantitative results of several light weight methods(\(\times\)4). The bold one represents our method. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Model** & **Size(K)** & **GMAC** & **Set5** & **Set14** & **B100** & **U100** \\ \hline Bicubic & - & - & 33.66 & 30.24 & 29.56 & 26.88 \\ \hline * & 37.23 & 32.89 & 29.58 & 30.43 \\ \hline FSRCNN-s & 4 & 6 & 36.57 & 32.28 & 31.23 & - \\ \hline [19] & 3 & 1 & 36.66 & 32.52 & 31.32 & 29.34 \\ \hline SRCNN & 57 & 119 & 36.66 & 32.45 & 31.36 & 29.50 \\ \hline \({}^{a}\)SRNPU & 183 & 7 & 37.06 & 32.62 & 31.47 & - \\ \hline FSRCNN & 13 & 14 & 37.00 & 32.63 & 31.53 & 29.88 \\ \hline XLSR\({}^{*}\) & 17 & 9 & 37.17 & 32.78 & 31.55 & 29.89 \\ \hline **ACNet** & **17** & **9** & **37.34** & **32.78** & **31.64** & **30.21** \\ \hline *HPAN & 26 & 17 & 37.38 & 32.91 & 31.69 & 30.29 \\ \hline LapSRN & 813 & 67 & 37.52 & 33.08 & 31.80 & 30.41 \\ \hline LWN & 286 & 104 & 37.38 & 32.93 & 31.85 & 31.04 \\ \hline VDSR & 665 & 1,225 & 37.53 & 33.03 & 31.90 & 30.76 \\ \hline CARN-M & 412 & 182 & 37.53 & 33.26 & 31.92 & 31.23 \\ \hline * & 37.90 & 33.45 & 32.11 & 31.83 \\ \hline RFDN & 354 & 284 & 38.05 & 33.68 & 32.16 & 32.12 \\ \hline PAN & 261 & 160 & 38.00 & 33.59 & 32.18 & 32.01 \\ \hline FENet & 675 & - & 38.08 & 33.70 & 32.20 & 32.18 \\ \hline MRDN & 565 & 288 & 38.05 & 33.68 & 32.24 & 32.42 \\ \hline LatticeNet & 756 & 381 & 38.15 & 33.78 & 32.25 & 32.43 \\ \hline ADBNet & 535 & 225 & 38.25 & 34.10 & 32.39 & area of the weight memory is small. The area of the feature memory is smaller than the area of the boundary memory because we have to handle all partial boundary sums. The size of the boundary memory will be about \(\times\)64 larger if we do not use the asymmetric architecture. The cluster area is the largest because there are many floating-point multiplication and addition units with buffers in all clusters. 
#### IV-D4 Power analysis Despite having the second largest size, the boundary memory only uses 5% of the total power because it is accessed only in the three 3x1 convolution layers. The power of the feature memory is only 11% because the input features are kept stationary. The six clusters work at every stage and, as a result, consume most of the power. #### IV-D5 Memory bandwidth As mentioned above, the proposed architecture fuses the whole model for on-chip execution. Compared to a layer-by-layer accelerator, ACNPU can completely eliminate the additional bandwidth of off-chip feature access, as shown in Table VII. The values are measured for \(\times\)2 Full-HD output at 30 FPS. Furthermore, we execute the 1\(\times\)1 and 1\(\times\)3 layers in parallel to reduce access to on-chip feature memory, which reduces the amount of on-chip memory access by 36.4 %. ## V Conclusion This paper realizes an energy-efficient SR accelerator, ACNPU, for resource-limited edge devices to achieve low power consumption and high image quality. Compared to FSRCNN, the ACNPU uses a 27-layer model for 0.34dB higher quality, but costs 36% less complexity with a similar model size due to the _decoupled asymmetric convolution and split-bypass structure_. The model structure is also hardware-friendly, as it employs the aforementioned minimal structures and local connections rather than long connections. With this hardware-friendly structure and 17K model size, the accelerator can eliminate external memory access of intermediate feature maps by the _holistic model fusion_. Internal memory access is further reduced by parallel execution of 1x3/1x1 layers and the _local input stationary flow_. The corresponding hardware is regular and easy to control to support different layers by _PE clusters with reconfigurable input and uniform data flow_. The final implementation with the TSMC 40nm CMOS process can achieve full HD processing with 31.7 and 124.4 frames per second for scaling \(\times\)2 and \(\times\)4 at the working frequency of 270 MHz, respectively, achieving 4.75 TOPS/W energy efficiency, 1.8\(\times\) higher than the previous SR accelerator.
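As a rough cross-check of the bandwidth argument above, the back-of-envelope sketch below compares DRAM traffic for layer-by-layer execution against holistic fusion at \(\times\)2 Full-HD 30 FPS. The per-layer channel list and the 13-bit feature width follow the earlier sketches; padding-free packing of 13-bit words is an assumption, so treat the numbers as orders of magnitude only.

```python
BITS = 13                                  # feature word width from Sec. II-C
fps, H, W = 30, 540, 960                   # x2: LR feature maps are 960x540

chans = [48] * 10 + [4]                    # channels after each layer (assumed)
inter = chans[:-1]                         # intermediate maps spilled to DRAM

layerwise = sum(2 * H * W * c for c in inter) * BITS / 8 * fps  # write + read
fused = (H * W + 1920 * 1080) * BITS / 8 * fps                  # input + output
print(f"layer-by-layer ~{layerwise / 1e9:.1f} GB/s, fused ~{fused / 1e6:.0f} MB/s")
```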
2301.09925
Born approximation study of the strong disorder in magnetized surface states of topological insulator
In this study we investigate the effect of random point disorder on the surface states of a topological insulator with out-of-plane magnetization. We consider the disorder within a high order Born approximation. The Born series converges to the one branch of the self-consistent Born approximation (SCBA) solution at low disorder. As the disorder strength increases, the Born series converges to another SCBA solution with the finite density of states within the magnetization induced gap. Further increase of the disorder strength leads to a divergence of the Born series, showing the limits of the applicability of the Born approximation. We find that the convergence properties of this Born series are closely related to the properties of the logistic map, which is known as a prototypical model of chaos. We also calculate the longitudinal and Hall conductivities within the Kubo formulas at zero temperature with the vertex corrections for the velocity operator. Vertex corrections are important for describing transport properties in the strong disorder regime. In the case of strong disorder, the longitudinal conductivity is weakly dependent on the disorder strength, while the Hall conductivity decreases with increasing disorder.
R. S. Akzyanov
2023-01-24T11:15:45Z
http://arxiv.org/abs/2301.09925v1
Born approximation study of the strong disorder in magnetized surface states of topological insulator ###### Abstract In this study we investigate the effect of random point disorder on the surface states of a topological insulator with out-of-plane magnetization. We consider the disorder within a high order Born approximation. The Born series converges to the one branch of the self-consistent Born approximation (SCBA) solution at low disorder. As the disorder strength increases, the Born series converges to another SCBA solution with the finite density of states within the magnetization induced gap. Further increase of the disorder strength leads to a divergence of the Born series, showing the limits of the applicability of the Born approximation. We find that the convergence properties of this Born series are closely related to the properties of the logistic map, which is known as a prototypical model of chaos. We also calculate the longitudinal and Hall conductivities within the Kubo formulas at zero temperature with the vertex corrections for the velocity operator. Vertex corrections are important for describing transport properties in the strong disorder regime. In the case of strong disorder, the longitudinal conductivity is weakly dependent on the disorder strength, while the Hall conductivity decreases with increasing disorder. ## I Introduction The study of the electronic structure of the magnetic materials with the non-trivial topology in a momentum space attracts great interest in modern condensed matter physics [1]. One of the manifestations of the non-trivial topology is the intrinsic anomalous Hall effect (AHE), which arises due to the finite value of the Berry curvature [2]. In the insulating state at zero temperature, the Hall conductivity is quantized, leading to the quantum anomalous Hall effect [3] (QAHE). In topological insulators, the non-trivial topology of the bulk band structure leads to the formation of the robust surface states with the Dirac cone [4]. At the charge neutrality point finite out-of-plane magnetization opens a gap in the spectrum leading to the QAHE [5; 6]. If the chemical potential is outside the gap, then the system is in the AHE regime, which has been extensively studied in the literature [7; 8; 9; 10]. Experimentally, the chemical doping of topological insulators of the Bi\({}_{2}\)Se\({}_{3}\) family with transition metal elements (Fe, Cr or Mn) induces a bulk magnetisation that opens a mass gap at the Dirac point of the surface states [11; 12; 13]. In this case, QAHE is realised with the perfectly quantized Hall conductivity \(\sigma_{xy}=\sigma_{0}=e^{2}/h\). In real samples, non-uniform doping leads to spatial variations of the mass gap and the chemical potential. Spectroscopy experiments show the important role of scalar and magnetic disorder in the electronic properties of magnetic topological insulators [14]. In Refs. [15; 16] it was shown that increasing the disorder leads to the transition from the QAHE to the AHE regime. In the disordered samples the Hall conductivity is no longer quantized and finite longitudinal conductivity is observed even at the charge neutrality point. This is due to the presence of the finite density of states which has been observed by spectroscopic [14] and angle resolved photon emission spectroscopy measurements [17]. The interest in studying the effect of disorder on the magnetic properties of topological insulators arises from possible spintronic applications [6]. 
The effects of disorder are also crucial for the problem of realising the edge Majorana fermions in the topological insulator-superconductor heterostructure [18; 19]. In magnetic topological insulators the effects of disorder are usually considered in a weak scattering approximation where the real part of the self-energy is neglected [7; 9; 20]. In Refs. [21; 22] the influence of strong disorder on the electronic properties of the surface state of a topological insulator with magnetization was studied. It was shown that sufficiently strong disorder leads to the finite density of states at the charge neutrality point. Note that for the anomalous Hall effect diagrams beyond the Born approximation series also give a significant contribution [23]. In Ref. [24], the effects of strong disorder on the conductivity of the Weyl semimetal were considered within the self-consistent Born approximation (SCBA). It was shown that SCBA can analytically capture the qualitative behaviour of various observables and their universal features for any disorder strength. In this work we study the longitudinal and Hall conductivity of the surface states of the topological insulator with out-of-plane magnetization within the Bastin-Kubo-Streda formulas with vertex corrections. The influence of the random point-like magnetic and scalar perturbations is studied within the high-order Born approximation. We compare the direct summation of the Born series for the self-energy with the analytical results of the SCBA solution for arbitrary values of the disorder strength. We find that disorder reduces the magnetization induced gap and drives the system towards the AHE regime. For small disorder, the density of states vanishes at the Dirac point and the Born series for the self-energy converges monotonically to one branch of the SCBA solution. For sufficiently large disorder \(j>j_{c}\) a finite density of states is generated at the charge neutrality point even for the finite gap in the spectrum. In this case the Born series converges non-monotonically to the other branch of the SCBA solution. For sufficiently large disorder, the Born series for the self-energy loses its causality properties or diverges depending on the values of the parameters. We found that vertex corrections for both retarded-advanced and retarded-retarded vertices are important for describing the longitudinal conductivity at large values of disorder. The Hall conductivity also depends significantly on the vertex corrections. For large disorder, the Hall conductivity decreases with increasing disorder. We compare our results with experimental data. ## II Hamiltonian We consider Hamiltonian of the surface states of the topological insulator with the out-of-plane magnetization in the form [4] \[H=v(s_{x}k_{y}-s_{y}k_{x})+Bs_{z}-\mu. \tag{1}\] Here \(s_{i}\) are spin Pauli matrices, \(v\) is the Fermi velocity of the surface states, \(k_{x,y}\) is the momentum of the surface states, \(B\) is the value of the out-of-plane magnetization. We choose \((x,y)\) as in-plane directions while \(z\) is out-of-plane direction. Spectrum of this Hamiltonian is given by \(\epsilon_{\pm}=-\mu\pm\sqrt{B^{2}+v^{2}k_{x}^{2}+v^{2}k_{y}^{2}}\). QAHE regime corresponds to \(\mu<|B|\) where Hamiltonian is fully gapped and Hall conductivity is quantized. Case of \(\mu\geqslant|B|\) corresponds to the AHE state with the finite Fermi surface. 
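As a quick numerical check of this spectrum, the sketch below builds \(H(\mathbf{k})\) from Pauli matrices and compares its eigenvalues with \(\epsilon_{\pm}\); the parameter values are arbitrary illustrative choices in units where \(v=B=1\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

v, B, mu = 1.0, 1.0, 0.3                      # illustrative values, v = B = 1
for kx, ky in [(0.0, 0.0), (0.5, 0.2), (1.3, -0.7)]:
    H = v * (sx * ky - sy * kx) + B * sz - mu * np.eye(2)
    numeric = np.sort(np.linalg.eigvalsh(H))
    analytic = np.sort([-mu + s * np.hypot(B, v * np.hypot(kx, ky))
                        for s in (-1, 1)])
    assert np.allclose(numeric, analytic)
print("dispersion verified: gap of 2B opens at k = 0 around E = -mu")
```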
Bare advanced/retarded Green's functions of this Hamiltonian are given by \[G_{0}^{\pm}=\frac{\mu\pm i0+Bs_{z}+v(s_{x}k_{y}-s_{y}k_{x})}{(\mu\pm i0)^{2}-B^{2}+v^{2}k_{x}^{2}+v^{2}k_{y}^{2}}. \tag{2}\] ## III Self-energy We consider two cases of short-range random disorder. One is the density disorder corresponding to the randomly distributed charge impurities; the other is the paramagnetic one (which we will call magnetic for convenience) with the out-of-plane \(s_{z}\) magnetization, corresponding to the spatial fluctuations of the magnetization. We will describe the disorder by a local potential \(\hat{U}_{i}=u_{i}s_{i}\sum\limits_{j}\delta(\mathbf{r}-\mathbf{R}_{j})\), where \(\delta(\mathbf{r})\) is the Dirac delta function, \(\mathbf{R}_{j}\) are the positions of the randomly distributed point-like impurities with the local potential \(u_{i}\), and \(s_{i}\) is the Pauli matrix of the potential corresponding to the type of disorder. We assume that the perturbation is Gaussian and uncorrelated, i.e., \(\langle\hat{U}_{i}\rangle=0\) and \(\langle\hat{U}_{i}(\mathbf{r}_{1})\hat{U}_{j}(\mathbf{r}_{2})\rangle=n_{i}u_{0}^{2}\delta(\mathbf{r}_{1}-\mathbf{r}_{2})\delta_{jl}\), where \(\delta_{jl}\) is the Kronecker delta symbol. Scalar disorder corresponds to random local charge with potential \(\hat{U}_{0}=\hat{1}u_{0}\), while magnetic disorder describes random local magnetization \(\hat{U}_{z}=s_{z}u_{0}\). The strength of the disorder is given by the dimensionless value \(j=n_{i}u_{0}^{2}/(2\pi v^{2})\). Weak disorder corresponds to \(j\ll 1\), while the case \(j\sim 1\) corresponds to the strong disorder regime. We consider the self-energy within the Born approximation. In this case we can calculate the \(m\)-th order of the Born series through the recursion \[\begin{cases}\hat{\Sigma}^{(m+1)}=\sum\limits_{k,i}\langle\hat{U}_{i}G(\hat{\Sigma}^{(m)})\hat{U}_{i}\rangle,\\ G^{-1}(\hat{\Sigma}^{(m)})=-H-\hat{\Sigma}^{(m)}.\end{cases} \tag{3}\] If this series converges, \(\hat{\Sigma}^{(m)}\rightarrow\hat{\Sigma}\) for \(m\rightarrow+\infty\), then the sum of the Born series can be represented as the SCBA solution \(\hat{\Sigma}=\sum_{k,i}\langle U_{i}G(\hat{\Sigma})U_{i}\rangle\). We found that the self-energy for the considered types of disorder has a non-trivial spin structure \(\hat{\Sigma}=\Sigma_{0}+\Sigma_{z}s_{z}\) due to the finite magnetization. The explicit expression for the self-energy at the \(m\)-th order of the Born series is \[\Sigma_{0}^{(m+1)}=j\,\frac{\Sigma_{0}^{(m)}-\mu}{2}\,\Xi^{(m)},\qquad\Sigma_{z}^{(m+1)}=-j\,\frac{\Sigma_{z}^{(m)}+B}{2}\,\Xi^{(m)},\] \[\Xi^{(m)}=\ln\frac{v^{2}k_{c}^{2}}{(B+\Sigma_{z}^{(m)})^{2}-(\Sigma_{0}^{(m)}-\mu)^{2}}. \tag{4}\] The initial condition \(\Sigma_{0}^{(0)}=-i0\) for this self-energy series gives the advanced self-energy \(\hat{\Sigma}^{+}\) that corresponds to the advanced Green's function \(G^{+}\). Also, \(\Sigma_{0}^{(0)}=+i0\) gives the retarded self-energy \(\hat{\Sigma}^{-}\) that corresponds to the retarded Green's function \(G^{-}\). We set \(\Sigma_{z}^{(0)}=0\) in our calculations. We will discuss the consequence of other initial conditions in the following subsection III.3. The real part of the self-energy leads to the renormalization of the chemical potential \(\mu\rightarrow\bar{\mu}=\mu-\operatorname{Re}\Sigma_{0}\) and the magnetization \(B\rightarrow\bar{B}=B+\operatorname{Re}\Sigma_{z}\). The self-energy acquires a finite imaginary part \(\operatorname{Im}\hat{\Sigma}^{+}=-i\Gamma_{0}-i\Gamma_{z}s_{z}\).
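The recursion of Eq. (4) is straightforward to iterate numerically. The sketch below does so in Python for the \(\hat{\Sigma}^{+}\) branch (seed \(\Sigma_{0}^{(0)}=-i0\), realized here as a small finite \(-i\epsilon\)), with \(vk_{c}/B=10\) as used in the figures; the chosen values of \(j\) and \(\mu\) are only examples.

```python
import numpy as np

def born_series(j, mu, B=1.0, vkc=10.0, m_max=4000, eps=1e-6):
    """Iterate Eq. (4); seeding Sigma_0 = -i*eps reproduces the -i0 condition."""
    s0, sz = -1j * eps, 0.0 + 0.0j
    for _ in range(m_max):
        xi = np.log(vkc ** 2 / ((B + sz) ** 2 - (s0 - mu) ** 2))
        s0, sz = j * (s0 - mu) / 2 * xi, -j * (sz + B) / 2 * xi
    return s0, sz

for j in (0.1, 1.0):             # convergent regimes, cf. Fig. 2
    s0, sz = born_series(j, mu=0.25)
    print(f"j={j}: Sigma_0 = {s0:.4f}, Sigma_z = {sz:.4f}")
```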
The impurity-averaged Green's function \(G^{\pm}\) can be obtained from \(G_{0}\) and the self-energy using the second equation in Eq. (3). This amounts to the replacements \(\mu\rightarrow\bar{\mu}\pm i\Gamma_{0}\) and \(B\rightarrow\bar{B}\mp i\Gamma_{z}\) in the Green's function of Eq. (2), which gives
\[G^{\pm}=\frac{\bar{\mu}\pm i\Gamma_{0}+(\bar{B}\mp i\Gamma_{z})s_{z}+v(s_{x}k_{y}-s_{y}k_{x})}{(\bar{\mu}\pm i\Gamma_{0})^{2}-(\bar{B}\mp i\Gamma_{z})^{2}-v^{2}k_{x}^{2}-v^{2}k_{y}^{2}}. \tag{5}\]

### Properties of the finite-order Born series

We start our analysis of the disorder with the first Born approximation, \(m=1\). In this case the self-energy has only a real part if the chemical potential lies within the gap, \(\mu<B\). Direct calculation gives \(\Sigma_{0}^{(1)}=-(j\mu/2)\ln[v^{2}k_{c}^{2}/(B^{2}-\mu^{2})]\) and \(\Sigma_{z}^{(1)}=-(jB/2)\ln[v^{2}k_{c}^{2}/(B^{2}-\mu^{2})]\). From the Dyson equation \(G^{-1}=-H-\Sigma\) we see that disorder renormalizes the chemical potential, \(\bar{\mu}=\mu-\Sigma_{0}^{(1)}\), and the magnetization, \(\bar{B}=B+\Sigma_{z}^{(1)}\). This renormalization tends to increase the chemical potential \(\bar{\mu}\) and decrease the magnetization \(\bar{B}\). If the disorder is large enough that \(\bar{\mu}>\bar{B}\), the gap closes and a disorder-induced transition from the insulating to the metallic state occurs.

We now extend our calculations to large orders of the Born series. We plot the normalized value of the gap, \(\Delta_{g}=(\bar{B}-\bar{\mu})/(B-\mu)\), as a function of the disorder strength \(j\) in Fig. 1 for \(m=4000\). In this figure we also show the normalized density of states, \(\text{DOS}=(v/k_{c}\pi)\operatorname{Im}\text{Tr}\,G\propto\Gamma_{0}\). We see that increasing the disorder decreases the gap. Even for small deviations of the chemical potential from the Dirac point, increasing the disorder leads to a rapid decrease of the gap. At \(j>j_{c}=1/\ln(2vk_{c}/B)\) a finite density of states appears for \(\mu=0\) while the gap remains open. Increasing the chemical potential decreases the value of the disorder at which a finite density of states appears inside the gap. For \(\mu>B/2\) the density of states appears and the gap closes at similar disorder strengths; note that in this regime the value of the gap is quite sensitive to the disorder strength.

An important question for the validity of our results is the convergence of the procedure given by Eq. (4). In Fig. 2 we plot the self-energy as a function of the number of terms \(m\) in the Born series for different values of the disorder \(j\) at \(\mu=B/4\). We find three regimes of convergence. For \(j<j_{c}\) the Born series converges monotonically with increasing \(m\), starting from the first Born approximation \(m=1\). For \(j>j_{c}\) the self-energy behaves non-monotonically for small \(m\), and convergence sets in only for sufficiently large \(m\); increasing the disorder \(j\) increases the value of \(m\) at which the self-energy starts to converge. If \(j>j_{d}\), the Born series diverges to infinity. In general the values of \(j_{c}\) and \(j_{d}\) depend on the chemical potential \(\mu\) and the magnetization \(B\); for \(\mu=0\) we obtain the analytical values \(j_{c}=1/\ln(2vk_{c}/B)\) and \(j_{d}=2\).
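Reusing the `born_sigma` sketch from above, the quantities plotted in Fig. 1 can be reproduced along the following lines (again purely illustrative; returning \(\Gamma_0\) as a stand-in for the normalized DOS follows the proportionality quoted in the text, and the quoted value of \(j_c\) assumes \(vk_c/B=10\)).

```python
import numpy as np

def gap_and_dos(j, mu, B=1.0, vkc=10.0, m=4000):
    s0, sz = born_sigma(j, mu, B, vkc, m_max=m)[-1]
    mu_bar = mu - s0.real          # renormalized chemical potential
    B_bar = B + sz.real            # renormalized magnetization
    gamma0 = -s0.imag              # broadening: Im Sigma_0^+ = -Gamma_0
    delta_g = (B_bar - mu_bar) / (B - mu)   # normalized gap
    return delta_g, gamma0

for j in (0.05, 0.2, 0.4, 1.0):
    dg, g0 = gap_and_dos(j, mu=0.0)
    print(f"j={j}: gap ratio {dg:.3f}, Gamma_0 {g0:.3e}")
# For mu = 0 a finite Gamma_0 (hence a finite DOS) should appear for
# j > j_c = 1/ln(2 v k_c / B) ~ 0.33 at v k_c / B = 10.
```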
### Comparison with the analytical SCBA solution

We now compare the numerical results with the analytical SCBA solution at the Dirac point \(\mu=0\). The SCBA solution can be obtained from Eq. (4) by assuming convergence of the self-energy, \(\Sigma^{(m)}\to\Sigma\) for \(m\to+\infty\). In this case we have a system of equations that determines the SCBA self-energy:
\[\begin{cases}\Sigma_{0}=\Sigma_{0}\frac{j}{2}\ln\frac{v^{2}k_{c}^{2}}{(B+\Sigma_{z})^{2}-\Sigma_{0}^{2}},\\ \Sigma_{z}=-(\Sigma_{z}+B)\frac{j}{2}\ln\frac{v^{2}k_{c}^{2}}{(B+\Sigma_{z})^{2}-\Sigma_{0}^{2}}.\end{cases} \tag{6}\]
The first equation in Eq. (6) has two solutions. The first is the trivial one, \(\Sigma_{0}^{I}=0\). Inserting it into the second equation shows that the self-energy then has only a real part. Since the logarithm is a slowly varying function, we can first assume that the expression inside the logarithm does not depend on the self-energy and solve for \(\Sigma_{z}\). This yields the first SCBA solution,
\[\Sigma_{z}^{I}=-\frac{jB\ln\frac{vk_{c}}{B}}{1+j\ln\frac{vk_{c}}{B}},\quad\Sigma_{0}^{I}=0. \tag{7}\]
A more accurate result can be obtained by recalculating the self-energy with the renormalization \(B\to B+\Sigma_{z}^{I}\) inside the logarithm of Eq. (7).

Figure 1: Normalized value of the gap \(\Delta_{g}=(\bar{B}-\bar{\mu})/(B-\mu)\) versus disorder strength \(j\), shown as dashed lines for different values of the chemical potential \(\mu=0\), \(\mu=B/10\), \(\mu=B/2\). Solid lines represent the normalized \(\text{DOS}=(v/k_{c}\pi)\operatorname{Im}\text{Tr}\,G\) for the corresponding values of the chemical potential. The sudden changes in the plots for \(j\sim 2\) appear due to the divergence of the self-energy. We set \(vk_{c}/B=10\).

Figure 2: Self-energy components \(\Sigma/B\) as functions of the Born series order \(m\) for \(vk_{c}/B=10\), \(\mu=B/4\). The left panel corresponds to monotonic convergence (\(j=0.1\)), the central panel to non-monotonic convergence (\(j=1\)), and the right panel shows the divergence of the Born series for \(j=2.1\).

We see that the self-energy of the first SCBA solution renormalizes the real part of the magnetization, while the imaginary part of the self-energy remains infinitesimally small. The second solution of the first equation in Eq. (6) is given by \(1=\frac{j}{2}\ln\frac{v^{2}k_{c}^{2}}{(B+\Sigma_{z})^{2}-\Sigma_{0}^{2}}\). Inserting this expression into the second equation gives \(\Sigma_{z}=-(B+\Sigma_{z})\), i.e., \(\Sigma_{z}=-B/2\). Substituting back into the first equation we obtain \(\Sigma_{0}=\pm i\sqrt{v^{2}k_{c}^{2}e^{-2/j}-B^{2}/4}\). Only one sign preserves causality, which leads to the second SCBA solution
\[\Sigma_{z}^{II}=-\frac{B}{2},\quad\Sigma_{0}^{II}=-i\sqrt{v^{2}k_{c}^{2}e^{-2/j}-\frac{B^{2}}{4}}. \tag{8}\]
This solution renormalizes the real part of the magnetization and generates an imaginary part of the self-energy for \(j>j_{c}=1/\ln(2vk_{c}/B)\). Note that the imaginary part of the self-energy grows exponentially with increasing disorder strength.

We plot the Born-series self-energy at \(\mu=0\) for \(m=4000\) and compare it with the SCBA solutions in Fig. 3. We see that for \(j<j_{c}\) the self-energy is given by Eq. (7), while for \(j>j_{c}\) it is given by Eq. (8). Note that for \(j>j_{d}=2\) the Born series diverges, while the SCBA solution remains finite regardless of the value of \(j\). With increasing disorder the imaginary part of the self-energy grows very quickly: even at moderate disorder \(j\sim 0.5\) it exceeds the value of the magnetization.
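Both SCBA branches, Eqs. (7) and (8), are closed-form and easy to tabulate against the iterated series. The following sketch (illustrative; the grid of \(j\) values and the choice \(vk_c/B=10\) are assumptions) evaluates the two branches and indicates which one is physical on either side of \(j_c\).

```python
import numpy as np

def scba_branches(j, B=1.0, vkc=10.0):
    """Closed-form SCBA solutions at mu = 0, Eqs. (7) and (8)."""
    # Branch I: real self-energy, Sigma_0 = 0
    sz_I = -j * B * np.log(vkc / B) / (1 + j * np.log(vkc / B))
    # Branch II: Sigma_z = -B/2, finite broadening only for j > j_c
    s0_II_sq = vkc**2 * np.exp(-2.0 / j) - B**2 / 4
    s0_II = -1j * np.sqrt(s0_II_sq) if s0_II_sq > 0 else None
    return sz_I, s0_II

j_c = 1 / np.log(2 * 10.0 / 1.0)   # = 1/ln(2 v k_c / B) ~ 0.334
for j in (0.2, 0.5, 1.0):
    sz_I, s0_II = scba_branches(j)
    branch = "I (gapped)" if j < j_c else "II (finite DOS)"
    print(f"j={j}: physical branch {branch}; "
          f"Sigma_z^I={sz_I:.3f}, Sigma_0^II={s0_II}")
```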
Thus for strong disorder the imaginary part of the self-energy becomes the dominant energy parameter, \(\Gamma_{0}\gg\mu,B\).

### Connection with the logistic map. Lyapunov exponents

We plot the numerical solutions for the self-energy \(\Sigma^{(m)}\) for different values of \(m\) in Fig. 4. We can see that there is no convergence for \(j>2\). Consider first the case \(\mu=0\), \(B=0\). For \(2<j<2.4\) the self-energy oscillates between two branches. Further increase of \(j\) increases the number of branches, and at some point the behaviour becomes chaotic. A further increase of the disorder strength leads to a loss of causality of the Green's function (the imaginary part of the self-energy changes sign). The case of finite \(B\) shows qualitatively similar behaviour. For \(\mu>0\) the solutions quickly diverge to infinity for \(j>j_{d}\) as the Born-series order \(m\) increases. These observations suggest chaotic behaviour for large values of the disorder strength. The critical disorder \(j_{d}\) at which the Born series starts to diverge decreases from its maximum value \(j_{d}(\mu=0)=2\) with increasing chemical potential \(\mu\).

To gain insight into the behaviour of the Born series, we write it down at the Dirac point \(\mu=0\) without magnetization, \(B=0\):
\[\Sigma_{0}^{(m+1)}=j\Sigma_{0}^{(m)}\frac{1}{2}\ln\frac{v^{2}k_{c}^{2}}{-(\Sigma_{0}^{(m)})^{2}}. \tag{9}\]
Since the real part of the self-energy vanishes, we rewrite this equation as
\[g^{(m+1)}=-jg^{(m)}\ln g^{(m)}, \tag{10}\]
where we introduce the dimensionless self-energy via \(\Sigma^{(m)}=-ig^{(m)}vk_{c}\). We will call this series the logarithmic logistic map. If it converges, then for \(m\to+\infty\) we have the SCBA solution \(g_{\rm scba}=e^{-1/j}\). We see that for \(j\to+\infty\) the SCBA self-energy approaches the energy cutoff, \(g^{(m)}\to 1\). Expanding Eq. (10) in powers of \(g^{(m)}-1\), we get in the lowest order
\[g^{(m+1)}=jg^{(m)}(1-g^{(m)}). \tag{11}\]
This equation is known as the logistic map; it describes, e.g., population growth and is used as a prototypical model of chaos [25]. The logistic map shows bifurcation behaviour for large \(j\), which implies that the logarithmic logistic map given by Eq. (10) exhibits bifurcations for large \(j\) as well.

One way to quantify chaotic behaviour is to calculate local Lyapunov exponents [26]. For a one-dimensional series \(x^{(m+1)}=f(x^{(m)})\) the Lyapunov exponent is \(\lambda=\lim_{m\to\infty}\frac{1}{m}\sum_{n=0}^{m}\ln|L_{n}|\), where \(L_{n}=f^{\prime}(x^{(n)})\). For a system of equations we have a spectrum of Lyapunov exponents, \(\lambda_{i}=\lim_{m\to\infty}\frac{1}{m}\sum_{n=0}^{m}\ln|L_{i,n}|\), where \(L_{i,n}\) is the \(i\)-th eigenvalue of the Jacobian matrix \(J_{ij}=\partial f_{i}/\partial x_{j}\). Lyapunov exponents show how sensitive the solution is to the initial conditions. If all Lyapunov exponents are negative, small perturbations of the initial conditions do not affect the solution for \(m\to\infty\). A zero maximal Lyapunov exponent indicates a degeneracy of solutions: small perturbations of the initial conditions lead to different stable solutions. If the maximal Lyapunov exponent is positive, the behaviour is chaotic and the solution for \(m\to\infty\) is unstable.

Figure 3: Comparison of the analytical SCBA results (dashed lines) for the self-energy with the numerical results (solid lines) for \(vk_{c}/B=10\), \(\mu=0\).

The local Lyapunov exponents for Eq. (4) are shown in Fig. 5.
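The logarithmic logistic map (10) and its Lyapunov exponent can be explored in a few lines (a toy illustration; the initial value, burn-in length, and grid of \(j\) values are assumptions):

```python
import numpy as np

def log_logistic_orbit(j, g0=1e-3, n_iter=2000):
    """Iterate g -> -j g ln g, Eq. (10), and return the orbit."""
    g = np.empty(n_iter)
    g[0] = g0
    for m in range(n_iter - 1):
        g[m + 1] = -j * g[m] * np.log(g[m])
    return g

def lyapunov(j, g0=1e-3, n_iter=2000, burn=500):
    """Local Lyapunov exponent of the map; f'(g) = -j (ln g + 1)."""
    g = log_logistic_orbit(j, g0, n_iter)[burn:]
    return np.mean(np.log(np.abs(-j * (np.log(g) + 1))))

for j in (0.5, 2.0, 2.2, 2.6):
    print(f"j={j}: lambda = {lyapunov(j):+.3f}")
# lambda < 0: convergence to g_scba = exp(-1/j); lambda ~ 0 at j = 2
# (onset of oscillation); lambda > 0 signals the chaotic regime.
```

At the fixed point \(g_{\rm scba}=e^{-1/j}\) the multiplier is \(f'(g_{\rm scba})=1-j\), so the two-branch oscillation indeed sets in at \(j=2\), matching the divergence threshold \(j_d=2\) quoted above.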
We can see that the maximal Lyapunov exponent becomes positive when the Born series starts to diverge. In the case of \(B>0\), at \(j\sim 0.26\) the maximal Lyapunov exponent approaches zero, which indicates the transition from the solution given by Eq. (7) to the other SCBA solution given by Eq. (8). One of the Lyapunov exponents has a zero plateau, \(\lambda_{1}\simeq 0\). This is due to the sensitivity of the solution to the initial conditions. Consider the case \(\mu=0\), \(B=0\). If the imaginary parts satisfy \(\Gamma_{0}^{(0)}>\Gamma_{z}^{(0)}\), then Eq. (4) flows to the solution \(\Gamma_{0}>0\), \(\Gamma_{z}=0\) for \(m\rightarrow\infty\). However, for \(\Gamma_{0}^{(0)}<\Gamma_{z}^{(0)}\) we obtain the other solution, \(\Gamma_{0}=0\), \(\Gamma_{z}>0\). Thus there are two stable solutions for two different types of initial conditions, either \(\hat{\Sigma}^{(0)}=-i0\) or \(\hat{\Sigma}^{(0)}=-i0s_{z}\), where \(i0\) is a numerically small value. Usually in condensed matter physics the initial condition for the self-energy is taken to be \(\hat{\Sigma}^{(0)}=-i0\). Note that Eq. (11) does not have such a zero Lyapunov exponent, since we have already assumed \(\Gamma_{z}^{(0)}=0\) in that equation, and consequently there is only one Lyapunov exponent.

## IV Vertex corrections

The introduction of the self-energy into the current-current correlation function violates gauge invariance. To restore gauge invariance we have to introduce vertex corrections to the velocity operator [9]. We write the corrections to the velocity operator in the form \(\delta v_{I(II)\alpha}=V_{I(II)\alpha}-v_{\alpha}\) and find the \(n\)-th order of the vertex corrections as \(\delta v_{I(II)\alpha}^{(n)}=\sum\limits_{k,i}\langle\hat{U}_{i}G^{+}(v_{\alpha}+\delta v_{I(II)\alpha}^{(n-1)})G^{-(+)}\hat{U}_{i}\rangle\). Here \(I\) corresponds to the retarded-advanced vertex correction \(\langle G^{+}...G^{-}\rangle\), while \(II\) corresponds to the retarded-retarded vertex correction \(\langle G^{+}...G^{+}\rangle\). We find that in our model the vertex corrections always converge when the Born series for the self-energy converges, and there is always a single solution for the SCBA vertex corrections. Since we consider point-like disorder, the vertex corrections have no \(k\) dependence.

Calculating the vertex correction for the retarded-advanced pair of Green's functions in the first Born approximation, \(V_{Ix}^{(1)}=v_{x}+n_{i}u_{0}^{2}\sum_{k}G_{0}^{+}v_{x}G_{0}^{-}\), shows that the vertex corrections renormalize the value of the bare velocity operator and introduce a new \(s_{x}\) term. We therefore seek the renormalization of the current operator in the form \(\delta v_{Ix}=V_{Ix}-v_{x}\). In this case the vertex corrections can be found as [9]
\[\delta v_{Ix}=\frac{n_{i}u_{0}^{2}}{(2\pi)^{2}}\int d^{2}k\,G^{+}(v_{x}+\delta v_{Ix})G^{-}, \tag{12}\]
where the Green's functions are given by Eq. (5). Taking \(\text{Tr}[s_{i}\ldots]\) of both sides of this equation, we obtain the following system of equations for the vertex corrections:
\[\delta v_{Ix}=v\delta_{x}s_{x}+v\delta_{y}s_{y},\]
\[\delta_{x}=j(-M_{xy}+\delta_{x}M_{xx}+\delta_{y}M_{xy}),\]
\[\delta_{y}=j(-M_{yy}+\delta_{x}M_{yx}+\delta_{y}M_{yy}). \tag{13}\]
Here \(M_{\alpha\beta}=v^{2}/(4\pi)\int d^{2}k\,\text{Tr}[s_{\alpha}G^{+}s_{\beta}G^{-}]\). Since this integral converges, we take the integration limits to infinity, \(k_{c}\rightarrow+\infty\), to obtain analytical results.
Explicit integration gives
\[M_{xx}=\frac{\bar{\mu}^{2}-\bar{B}^{2}+\Gamma_{0}^{2}-\Gamma_{z}^{2}}{4(\bar{B}\Gamma_{z}+\bar{\mu}\Gamma_{0})}(\pi/2-\theta),\]
\[M_{yx}=\frac{\bar{B}\Gamma_{0}+\bar{\mu}\Gamma_{z}}{2(\bar{B}\Gamma_{z}+\bar{\mu}\Gamma_{0})}(\pi/2-\theta),\]
\[\theta=\arctan\frac{\bar{B}^{2}+\Gamma_{0}^{2}-\bar{\mu}^{2}-\Gamma_{z}^{2}}{2(\bar{B}\Gamma_{z}+\bar{\mu}\Gamma_{0})}. \tag{14}\]

Figure 4: Self-energy components \(\Sigma_{0}^{(m)}/vk_{c}\) and \(\Sigma_{z}^{(m)}/vk_{c}\) for \(m=1000,1001,\ldots,1005\) as functions of the disorder strength \(j\) for different values of the magnetization \(B\) and chemical potential \(\mu\). The left column corresponds to \(\mu=0\), the right column to \(\mu=0.01vk_{c}\). The upper row corresponds to \(B=0\), the lower row to \(B=0.1vk_{c}\).

Figure 5: Local Lyapunov exponents for Eq. (4) as functions of the disorder strength \(j\) for different values of the chemical potential \(\mu\) and magnetization \(B\). We set \(m=1000\). The left column corresponds to \(\mu=0\), the right column to \(\mu=0.01vk_{c}\). The upper row corresponds to \(B=0\), the lower row to \(B=0.1vk_{c}\).

Using \(M_{xx}=M_{yy}\) and \(M_{yx}=-M_{xy}\), we obtain the vertex corrections in the form
\[\delta_{x}=\frac{jM_{yx}}{(jM_{xx}-1)^{2}+j^{2}M_{yx}^{2}}, \tag{15}\]
\[\delta_{y}=1+\frac{jM_{xx}-1}{(jM_{xx}-1)^{2}+j^{2}M_{yx}^{2}}. \tag{16}\]

For the vertex correction with a retarded-retarded (or advanced-advanced) pair of Green's functions, we calculate the first Born approximation \(V_{IIx}^{(1)}=v_{x}+n_{i}u_{0}^{2}\sum_{k}G^{+}v_{x}G^{+}\) and find that the vertex corrections only renormalize the value of the bare vertex \(v_{x}\). In this case we seek the renormalization of the current operator in the form \(\delta v_{IIx}=V_{IIx}-v_{x}\),
\[\delta v_{IIx}=\frac{n_{i}u_{0}^{2}}{(2\pi)^{2}}\int d^{2}k\,G^{+}(v_{x}+\delta v_{IIx})G^{+}. \tag{17}\]
Using \(N_{yy}=v^{2}/(4\pi)\int d^{2}k\,\mathrm{Tr}[G^{+}s_{y}G^{+}s_{y}]=-1/2\), we get
\[\delta v_{IIx}=v\frac{j}{j+2}s_{y}. \tag{18}\]
These results can be easily generalized to the other type of disorder: for magnetic disorder the vertex corrections are given by the same equations with \(j\to-j\) in Eqs. (15) and (18).

We start with the case of weak disorder, \(j<j_{c}\), in the gapped phase, \(\Gamma_{0}\to 0\). In this case \(M_{xx}=-1/2\) and \(M_{yx}=-\bar{B}\Gamma_{0}/(\bar{\mu}^{2}-\bar{B}^{2})\to 0\). Thus the vertex correction \(\delta_{y}=j/(j+2)\) is finite inside the gapped region. Comparing this result with Eq. (18), we see that the vertex corrections for the \(I\) and \(II\) contributions are equal, \(\delta v_{Ix}=\delta v_{IIx}\), in the QAHE regime. We show later that this results in the vanishing of the total longitudinal conductivity. Now consider the case of strong disorder, \(j>j_{c}\), with \(\Gamma_{0}\gg\mu,B\) and \(\Gamma_{z}=0\). This gives \(M_{xx}=1/2\), \(M_{yx}=\bar{B}/\Gamma_{0}\), \(\delta_{y}=-j/(2-j)\), and \(\delta_{x}=4j\bar{B}/(\Gamma_{0}(2-j)^{2})\). We see that the vertex corrections change sign for strong disorder. The vertex corrections diverge for \(j>j_{d}=2\), at the same value of the disorder at which the self-energy diverges.
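Equations (14)–(16) and (18) reduce the vertex corrections to elementary functions of the renormalized parameters. A direct transcription (illustrative only; the sample values of \(\bar\mu\), \(\bar B\), and \(\Gamma_0\) are assumptions chosen to probe the strong-disorder limit):

```python
import numpy as np

def vertex_corrections(j, mu_bar, B_bar, g0, gz=0.0):
    """delta_x, delta_y of Eqs. (14)-(16) and the retarded-retarded
    factor of Eq. (18), for scalar disorder."""
    d = B_bar * gz + mu_bar * g0                     # common denominator
    theta = np.arctan((B_bar**2 + g0**2 - mu_bar**2 - gz**2) / (2 * d))
    Mxx = (mu_bar**2 - B_bar**2 + g0**2 - gz**2) / (4 * d) * (np.pi/2 - theta)
    Myx = (B_bar * g0 + mu_bar * gz) / (2 * d) * (np.pi/2 - theta)
    den = (j * Mxx - 1) ** 2 + (j * Myx) ** 2
    delta_x = j * Myx / den                          # Eq. (15)
    delta_y = 1 + (j * Mxx - 1) / den                # Eq. (16)
    delta_II = j / (j + 2)                           # Eq. (18)
    return delta_x, delta_y, delta_II

# Strong-disorder check: Gamma_0 >> mu, B gives delta_y -> -j/(2-j)
j = 1.0
dx, dy, dII = vertex_corrections(j, mu_bar=0.01, B_bar=0.01, g0=5.0)
print(dy, -j / (2 - j))   # the two numbers should nearly coincide
```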
## V Longitudinal conductivity

At zero temperature the longitudinal conductivity has two contributions [20]:
\[\sigma^{I}_{\alpha\alpha}=\frac{e^{2}}{2\pi}\int\frac{d^{2}k}{(2\pi)^{2}}\mathrm{Tr}[V_{I\alpha}\,G^{+}\,v_{\alpha}\,G^{-}], \tag{19}\]
\[\sigma^{II}_{\alpha\alpha}=-\frac{e^{2}}{4\pi}\int\frac{d^{2}k}{(2\pi)^{2}}\mathrm{Tr}[V_{II\alpha}\,G^{+}\,v_{\alpha}\,G^{+}+\mathrm{c.c.}]. \tag{20}\]
Here we set \(\hbar=1\), the bare velocity operator is \(v_{\alpha}=\partial H/\partial k_{\alpha}\), \(V_{I(II)\alpha}\) is the velocity operator with vertex corrections for the \(I(II)\) conductivity terms, and c.c. stands for complex conjugation. We decompose the conductivity, \(\sigma^{I(II)}_{\alpha\alpha}=\sigma^{I(II)bv}_{\alpha\alpha}+\sigma^{I(II)vc}_{\alpha\alpha}\), into the contributions from the bare bubble and from the vertex corrections. These terms are
\[\sigma^{Ibv}_{xx}=\frac{\sigma_{0}}{\pi}M_{yy},\quad\sigma^{Ivc}_{xx}=-\frac{\sigma_{0}}{\pi}(\delta_{y}M_{yy}-\delta_{x}M_{yx}),\]
\[\sigma^{IIbv}_{xx}=-\frac{\sigma_{0}}{\pi}N_{yy},\quad\sigma^{IIvc}_{xx}=\frac{\sigma_{0}}{\pi}\frac{j}{j+2}N_{yy}. \tag{21}\]
Here \(\sigma_{0}=e^{2}/(2\pi)=e^{2}/h\) is the conductivity quantum. Note that \(M_{xx}=M_{yy}\) and \(M_{xy}=-M_{yx}\).

We start with the analysis of the gapped state for weak disorder, \(j<j_{c}\). In this case \(\Gamma_{0}\to 0\) and \(\Gamma_{z}=0\), which results in a cancellation between the \(\sigma^{I}_{xx}\) and \(\sigma^{II}_{xx}\) terms. This means that the total conductivity vanishes exactly, \(\sigma_{xx}=0\), in the QAHE regime. For strong disorder, \(j>j_{c}\), with \(\Gamma_{0}\gg\mu,B\) and \(\Gamma_{z}=0\), we obtain
\[\sigma_{xx}=\frac{4\sigma_{0}}{\pi(4-j^{2})}. \tag{22}\]
We see that for strong disorder the longitudinal conductivity depends only weakly on the values of the chemical potential and magnetization. We plot the longitudinal conductivity for different values of the disorder strength \(j\) in Fig. 6. In the metallic state the longitudinal conductivity approaches the value given by Eq. (22) quite quickly. The difference between scalar and magnetic disorder in the value of the conductivity is small.

Figure 6: Longitudinal conductivity \(\sigma_{xx}\) as a function of the disorder strength for \(vk_{c}/B=10\), \(\mu=B/4\). Solid lines correspond to the numerical calculations; the dashed line is the analytical result given by Eq. (22).
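Combining the pieces, the total longitudinal conductivity of Eq. (21) can be assembled from the `vertex_corrections` sketch above and checked against the strong-disorder limit, Eq. (22). Again this is an illustration with assumed sample parameters, written for the \(\Gamma_z=0\) case.

```python
import numpy as np

def sigma_xx(j, mu_bar, B_bar, g0, sigma0=1.0):
    """Total sigma_xx from Eq. (21); N_yy = -1/2 for point-like disorder."""
    dx, dy, dII = vertex_corrections(j, mu_bar, B_bar, g0)
    d = mu_bar * g0
    theta = np.arctan((B_bar**2 + g0**2 - mu_bar**2) / (2 * d))
    Myy = (mu_bar**2 - B_bar**2 + g0**2) / (4 * d) * (np.pi / 2 - theta)
    Myx = (B_bar * g0) / (2 * d) * (np.pi / 2 - theta)
    N_yy = -0.5
    s_I = sigma0 / np.pi * (Myy - (dy * Myy - dx * Myx))   # bare + vc
    s_II = -sigma0 / np.pi * N_yy * (1 - j / (j + 2))      # bare + vc
    return s_I + s_II

# Strong-disorder limit, Eq. (22): sigma_xx -> 4 sigma_0 / (pi (4 - j^2))
j = 1.0
print(sigma_xx(j, mu_bar=0.01, B_bar=0.01, g0=5.0),
      4 / (np.pi * (4 - j**2)))
```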
## VI Hall conductivity

In general, the Hall conductivity has three terms [20]:
\[\sigma^{I}_{\alpha\beta}=\frac{e^{2}}{2\pi}\int\frac{d^{2}k}{(2\pi)^{2}}\text{Tr}[V_{I\alpha}\,G^{+}\,v_{\beta}\,G^{-}], \tag{23}\]
\[\sigma^{II}_{\alpha\beta}=-\frac{e^{2}}{4\pi}\int\frac{d^{2}k}{(2\pi)^{2}}\text{Tr}[V_{II\alpha}\,G^{+}\,v_{\beta}\,G^{+}+\mathrm{c.c.}], \tag{24}\]
\[\sigma^{III}_{\alpha\beta}=\frac{e^{2}}{4\pi}\int\frac{d^{2}k}{(2\pi)^{2}}\int\limits_{-\infty}^{\mu}f(E)dE\,\text{Tr}\left[v_{\alpha}\,G^{+}\,v_{\beta}\,\frac{dG^{+}}{dE}-v_{\alpha}\,\frac{dG^{+}}{dE}\,v_{\beta}\,G^{+}+\mathrm{c.c.}\right]. \tag{25}\]
Here \(\partial G^{\pm}/\partial E=-G^{\pm 2}\) is the derivative of the Green's function with respect to energy, and \(f(E)\) is the Fermi distribution function (a Heaviside step at zero temperature). Direct calculation shows that \(\sigma^{II}_{xy}=0\). We decompose the first term into the contributions from the bare vertex and from the vertex corrections, \(\sigma^{I}_{xy}=\sigma^{Ibv}_{xy}+\sigma^{Ivc}_{xy}\), which gives
\[\sigma^{Ibv}_{xy}=-\frac{\sigma_{0}}{\pi}M_{yx},\quad\sigma^{Ivc}_{xy}=\frac{\sigma_{0}}{\pi}(\delta_{y}M_{yx}+\delta_{x}M_{xx}). \tag{26}\]
For \(\sigma^{III}_{xy}\) there is no compact expression, so we present analytical results for the total Hall conductivity only in limiting cases. For \(\mu=0\) and \(\Gamma_{z}=0\) the expression for the contribution from the filled states becomes especially simple:
\[\sigma^{III}_{xy}=\frac{\sigma_{0}}{2}\left(\frac{2}{\pi}\arctan\frac{\Gamma_{0}}{\bar{B}}-1\right). \tag{27}\]

We now analyze these expressions for the gapped state at weak disorder, \(j<j_{c}\), where \(\Gamma_{0}\to 0\) and \(\Gamma_{z}=0\). In this case the contribution from the Fermi surface vanishes, \(\sigma^{I}_{xy}=0\), and the only contribution comes from the filled states, \(\sigma_{xy}=\sigma^{III}_{xy}=-\sigma_{0}/2\). For strong disorder, \(j>j_{c}\), we have \(\Gamma_{0}\gg\mu,B\), which results in
\[\sigma_{xy}=-\frac{\sigma_{0}}{\pi}\frac{\bar{B}}{\Gamma_{0}}\frac{8-8j+j^{2}}{(j-2)^{2}}. \tag{28}\]
We plot the numerical and analytical results for the Hall conductivity in Fig. 7. We see that the Hall conductivity decreases with increasing disorder \(j\). For density disorder, at \(j\sim 1.17\) the Hall conductivity changes sign due to the large contribution from the vertex corrections; for magnetic disorder the Hall conductivity does not change sign.

Figure 7: Hall conductivity \(\sigma_{xy}\) as a function of the disorder strength for \(vk_{c}/B=10\), \(\mu=B/4\). Solid lines correspond to the numerical calculations; the dashed line is the analytical result given by Eq. (28).

## VII Conclusions

In this work we have analysed in detail the properties of the disordered surface states of a topological insulator with out-of-plane magnetization. We find that the linear spectrum of the surface states leads to interesting behaviour of the disorder-induced self-energy. Increasing the disorder changes the monotonic convergence of the Born series to non-monotonic, and a further increase of the disorder leads to divergence of the Born series. The Born series is essentially a many-body perturbation series for the single-particle Green's function, and its divergence signals a failure of the perturbative approach: more advanced techniques beyond perturbation theory are needed to account for disorder effects in this regime. The divergence of perturbation series is not uncommon in condensed matter physics; in Refs. [27; 28] it was shown that studying a perturbation series beyond its radius of convergence leads to unphysical results.

We find that in the metallic state the density of states increases exponentially with the disorder, \(\rho\propto\exp(-1/j)\), see Eq. (8). The longitudinal conductivity does not follow the Drude formula \(\sigma_{xx}\propto\rho\) for strong disorder; instead, it depends only weakly on the disorder strength, \(\sigma_{xx}\sim\sigma_{0}/\pi\). The Hall conductivity is suppressed but finite. Thus a large density of states \(\rho\), together with a small longitudinal conductivity \(\sigma_{xx}\sim\sigma_{0}/\pi\) and a partially suppressed Hall conductivity \(\sigma_{xy}\lesssim\sigma_{0}/2\), are the hallmarks of the high-disorder regime for the magnetized surface state of a topological insulator. In the recent work [29], ARPES shows a large density of states at the surface of a ferromagnetic topological insulator, while transport measurements show a small longitudinal current and a Hall conductivity suppressed compared to the quantized value. These observations suggest a regime of strong disorder in such samples.

## Acknowledgments

This work is supported by the Russian Science Foundation (project No. 22-72-10074).
2306.13702
Magenta Green Screen: Spectrally Multiplexed Alpha Matting with Deep Colorization
We introduce Magenta Green Screen, a novel machine learning--enabled matting technique for recording the color image of a foreground actor and a simultaneous high-quality alpha channel without requiring a special camera or manual keying techniques. We record the actor on a green background but light them with only red and blue foreground lighting. In this configuration, the green channel shows the actor silhouetted against a bright, even background, which can be used directly as a holdout matte, the inverse of the actor's alpha channel. We then restore the green channel of the foreground using a machine learning colorization technique. We train the colorization model with an example sequence of the actor lit by white lighting, yielding convincing and temporally stable colorization results. We further show that time-multiplexing the lighting between Magenta Green Screen and Green Magenta Screen allows the technique to be practiced under what appears to be mostly normal lighting. We demonstrate that our technique yields high-quality compositing results when implemented on a modern LED virtual production stage. The alpha channel data obtainable with our technique can provide significantly higher quality training data for natural image matting algorithms to support future ML matting research.
Dmitriy Smirnov, Chloe LeGendre, Xueming Yu, Paul Debevec
2023-06-23T16:22:33Z
http://arxiv.org/abs/2306.13702v1
# Magenta Green Screen: Spectrally Multiplexed Alpha Matting with Deep Colorization

###### Abstract

We introduce _Magenta Green Screen_, a novel machine learning-enabled matting technique for recording the color image of a foreground actor and a simultaneous high-quality alpha channel without requiring a special camera or manual keying techniques. We record the actor on a green background but light them with only red and blue foreground lighting. In this configuration, the green channel shows the actor silhouetted against a bright, even background, which can be used directly as a holdout matte, the inverse of the actor's _alpha channel_. We then restore the green channel of the foreground using a machine learning colorization technique. We train the colorization model with an example sequence of the actor lit by white lighting, yielding convincing and temporally stable colorization results. We further show that time-multiplexing the lighting between Magenta Green Screen and Green Magenta Screen allows the technique to be practiced under what appears to be mostly normal lighting. We demonstrate that our technique yields high-quality compositing results when implemented on a modern LED virtual production stage. The alpha channel data obtainable with our technique can provide significantly higher quality training data for natural image matting algorithms to support future ML matting research.

CCS Concepts: Computing methodologies \(\rightarrow\) Computational photography; Image processing.

Keywords: Matting, compositing, spectral imaging

## 1. Introduction

Separating actors from a background image to composite them into a new scene is a fundamental problem in visual effects, and one which still poses challenges in the digital era (Wright, 2013). The problem is challenging since each pixel of an image can belong to both the foreground and the background: partial coverage at edges, wispy and transparent structures, defocused and motion-blurred areas all exhibit partial transparency. Determining the RGB color of the foreground element at a pixel, as well as the pixel's transparency \(\alpha\), is both tricky and underdetermined by a single RGB image. Even when the actor is filmed in front of a green screen, it can be challenging to obtain a high-quality foreground element and \(\alpha\) channel. As stated by computer graphics pioneer Alvy Ray Smith, "The history of digital image compositing is essentially the history of the _alpha channel_." (Smith, 1995)

Notably, while visual effects practitioners in both the film and digital cinematography eras have long relied on _chroma-keying_, or filming an actor in front of a blue or green screen and then using color space manipulations to derive a foreground/background separation (Beyer, 1965; Fielding, 2013; Sawicki, 2007; Vlahos, 1964; Vlahos and Taylor, 1993), these techniques rely on heuristics and approximations, without analytically solving the underconstrained matting equations, as detailed by Smith and Blinn (1996). As such, the contemporary green-screen keying algorithms in the modern compositor's toolkit, often part of proprietary commercial tools (Image-Based Keyer, Primatte, etc.), require substantial manual parameter tuning to work effectively and only provide an approximation of the per-pixel transparency (Aksoy et al., 2016). To complement their analysis of the matting equations, Smith and Blinn (1996) introduced _triangulation matting_, whereby a stationary subject is filmed in front of two known backgrounds.
Given this additional constraint--two known backgrounds instead of one--an accurate alpha channel can be derived from the matting equations. This technique was leveraged by Rhemann et al. (2009) to develop the ground truth alpha channel imagery for the first public benchmark designed for fairly evaluating the myriad alpha matting algorithms proposed by the computer vision research community. Although limited to imagery of about thirty different relatively small, static objects, this benchmark1 was the first of its kind to include analytically-derived ground truth alpha mattes recovered from photographs.

Footnote 1: [http://www.alphamatting.com/datasets.php](http://www.alphamatting.com/datasets.php)

Since the introduction of this benchmark, matting algorithms have received a great deal of attention from the visual computing research community, particularly algorithms aiming to solve the highly ill-posed _natural image matting problem_ where the background content is unknown and non-uniform (Levin et al., 2007). Unfortunately, while several new public datasets and benchmarks have been released in the subsequent years, including those with imagery of people and those with motion imagery, the static and object-limited dataset of Rhemann et al. (2009) remains, to the best of our knowledge, the only dataset which provides ground truth alpha mattes. The datasets published in the era of deep learning-based matting algorithms (Erofeev et al., 2015; Lin et al., 2021; Xu et al., 2017) leverage labels derived from chroma-key approximations or manual rotoscoping. This is an understandable limitation of these datasets, since methodologies used to record ground truth alpha mattes at video rates have remained difficult to practice, requiring sophisticated hardware configurations.

We therefore introduce _Magenta Green Screen_ with the goal of recording ground truth alpha mattes at video rates, such that we can film scenes containing realistic human motion with appropriate motion blur along with diverse materials, including challenging-to-matte moving hair strands and transparent objects. In our proposed technique, we use a three-channel RGB camera to record the red, blue and _alpha_ channels of the desired four-channel RGBA image, instead of the typical red, green, and blue channels. As our recorded imagery is thus missing its typical green channel, we infer this green channel given the corresponding red and blue channels using a machine learning network trained on full-color exemplar images. We demonstrate that our proposed technique can be used directly as a film production methodology and argue that it can be used to capture realistic alpha channel data at scale to train deep learning-based video matting algorithms. While we record data to demonstrate our technique in an LED volume virtual production stage, this hardware setup is not strictly necessary: the only strict requirement is a set of LED-based light sources capable of producing fields of red, green, and blue light independently.

## 2. Related Work

The process of deriving a matte (i.e., alpha channel) for foreground elements to composite them onto new backgrounds has a rich history with a long line of contributions from both industry and academia. While a complete discussion is outside the scope of this paper, in this section we review the literature most relevant to our project.
Where possible, actors are filmed in front of a green screen, and both the actors' foreground appearance and alpha channel are derived from the green screen image. The central difficulty is that while the green channel is bright everywhere the background should be, it is not always dark where the foreground should be. Thus, the matting algorithm somehow needs to remove the foreground appearance from the green channel based on the red and blue channels. As described by Smith and Blinn (1996), this can be done if the color space of the foreground object is limited to at most two dimensions, such as when it is known to be neutral in color, or flesh-toned, or lacking in the color present in the background. Both film-based (Vlahos, 1964) and digital compositing techniques used in commercial products such as _Ultimatte_ have assumed a limited foreground object color space, called the Vlahos Assumption.

### Recording a Separate Alpha Channel

Another technique which has been explored is to photograph the alpha channel as a fourth color channel, simultaneous to the RGB color channels, using a reserved part of the spectrum. _Infrared matting_ places the actor in front of a visibly black screen reflecting infrared (IR) light, and uses a beamsplitter to direct the IR toward a separate strip of film (Pickley, 1946) or digital video camera, e.g., (Debevec et al., 2002). The infrared image sees the actor silhouetted against a bright background, forming a holdout matte, or the inverse of the alpha channel. Infrared matting is challenged by the fact that infrared light tends to focus differently through optics than visible light (Vidor, 1960). The sodium vapor matting process (Vlahos, 1958) moves the matting channel into the visible spectrum by filming the actor in front of a field of monochromatic yellow-orange light from a low-pressure sodium vapor lamp. A beamsplitter and bandpass filter allow the yellow background to be filmed on one strip of film, forming a holdout matte, and the actor's full-color appearance to be recorded through a notch reject filter on a second strip of film, blocking the sodium vapor light to show the actor normally lit on a black background. Our work takes some inspiration from sodium vapor matting, but we eliminate the need for a specialized camera by using the green channel (rather than infrared or yellow) to record the matte, and we use a deep learning colorization algorithm to restore the missing green channel of the actor. Numerous other matting techniques employing specialized optical properties have been proposed and used, including polarization (Ben-Ezra, 2000) and retroreflectivity (Jenkins, 1952).

### Natural Image Matting

Natural image matting is the process of separating a foreground element from a complex background, where the background color is variable and sometimes unknown across the image. Numerous techniques from Bayesian Matting (Chuang et al., 2001) onwards (Levin et al., 2007, 2008; Sun et al., 2004; Wang and Cohen, 2007) have proposed automated and semi-automated techniques for natural image matting. Typically, these algorithms begin with a manually drawn segmentation into known foreground (\(\alpha=1\)), known background (\(\alpha=0\)), and unknown transition (\(0<\alpha<1\)) regions, and the algorithm estimates \(\alpha\) and the foreground/background colors \(F\) and \(B\) at each pixel in the transition region.
These algorithms have made their way into software tools available for production use, but still typically require manual input to achieve good results.

### Matting with Deep Learning

Recently, deep learning techniques have been applied to the natural image matting problem with significant success, as in (Cai et al., 2019; Chen et al., 2018; Forte and Pitie, 2020; Hou and Liu, 2019; Li and Lu, 2020; Lin et al., 2021; Lutz et al., 2018; Sengupta et al., 2020; Xu et al., 2017; Zhang et al., 2021). However, an impediment to achieving production-quality alpha channels with these techniques has been the lack of training datasets with accurate ground truth alpha channels. Many such datasets used in research use roughly estimated alpha channels from actors in front of a green screen, with inaccurate alpha variation in transition regions and foreground elements which have not been well-separated from the background color. Our work is not in the category of natural image matting algorithms, as we film our actors in front of a green screen, but a major motivation of our technique is to generate accurate ground truth alpha channel data for training natural image matting algorithms.

### Image Colorization

Image colorization has a similarly rich history to image matting and is typically framed as converting a single-channel grayscale image to RGB color, inferring three color channels from just one. While early colorization techniques required manual digital painting, recent techniques surveyed by Anwar et al. (2020) and Zeger et al. (2021) leverage machine learning to make better guesses as to the colors in the original imagery. Techniques from before the era of deep learning use statistical analysis or user-supplied color examples to colorize an image (Chia et al., 2011; Levin et al., 2004; Liu et al., 2008; Reinhard et al., 2001; Welsh et al., 2002). Deep learning-based colorization techniques (Cheng et al., 2015; Deshpande et al., 2017; He et al., 2018; Huang et al., 2022; Iizuka et al., 2016; Isola et al., 2017; Kumar et al., 2021; Saharia et al., 2022; Su et al., 2020; Yoo et al., 2019; Zhang et al., 2016, 2017) have leveraged vast image collections as training examples for supervised learning, as any color RGB image is easily converted to monochrome to form a training pair. Nonetheless, Su et al. (2020) note that "image colorization is inherently an ill-posed problem with multi-modal uncertainty." If a monochrome image shows a person wearing a grey shirt, it's rarely clear what color the shirt should actually be. Limmer and Lensch (2016) notably infer visible color RGB images from infrared imagery, using a spectral channel disjoint from the visible spectrum. Deep colorization techniques have also been applied to video, emphasizing the need to achieve temporal stability (Lei and Chen, 2019; Vondrick et al., 2018; Zhang et al., 2019). Informed by these works, we employ a colorization technique to restore the green channel to imagery containing only the red and blue channels of the scene. As this requires inferring one channel from two, we have a simpler problem than monochrome-to-color inference, and we find that an example-based approach works well.

## 3. Studio Lighting Setup

We filmed our actors in an LED volume (Bluff et al., 2020; Hamon et al., 2014) 18 meters wide and 9 meters deep, as shown in Figure 2. The volume surrounds the actors over 270 degrees with ROE Black Pearl 2v2 LED panels, each 50cm on a side with 176\(\times\)176 pixels for a 2.8mm pixel pitch.
The panels consist of red, green, and blue LEDs. The actors stood on a platform in the middle of the stage facing toward one curved side wall, with the other curved side wall behind them. We used the walls in front and to the side of the actors for lighting and an area of the wall just behind the actors for the background, cropped tightly around the camera frustum to minimize spill light. In the canonical configuration, we drove the lighting with a magenta color consisting of only the red and blue LEDs and the background with a green color consisting of only green LEDs. To validate the technique, various foreground and background images were also placed on the lighting walls and inside the background camera frustum.

We filmed our subjects with a RED Komodo digital cinema camera2 commonly used for digital filmmaking. We filmed with a Canon 50mm EF lens set to an f/2.8 aperture to avoid the appearance of moiré in the background. For the non-time-multiplexed recordings, we set the frame rate to 24fps and the shutter angle to 180 degrees to yield the typical frame rate and amount of motion blur commonly seen in movies. For the time-multiplexed recordings, we shot at 48fps with a significantly narrower 105 degree shutter angle, with successive frames recording alternating lighting conditions, as described in Section 4.3.3.

Footnote 2: www.red.com/komodo

Our subjects were chosen to have differing skin tone, hair color, and hair length and were costumed colorfully to challenge the colorization algorithm. Each subject was handed a glass bottle, one red, one green, to test the algorithm's ability to record semitransparent alpha values. They were directed to perform actions which showed the costumes from different angles and produced regions with significant motion blur. A mirrored sphere and color chart were placed in the scene to document the lighting. The reflection in the mirrored sphere gives an indication of the foreground lighting. Lighting the subjects from the front and the left side of the image, but not the right, produced different matting challenges on their left and right sides. While any cinema camera delivering linear pixel values should work with the basic magenta-green technique, we used the Komodo, since it is a global shutter camera, capable of synchronizing to the changing lighting conditions of the time-multiplexed matting techniques described in Section 4.3.3. Our matting technique can also be practiced without an LED volume. For example, our initial experiments were performed using inexpensive RGB LED light wands, lighting the actor with two wands set to a magenta color and lighting a white background with two wands set to a green color.

## 4. Basic Method

Our method honors Alvy Ray Smith's assertion that the "transparency of an image is as fundamental as its color" (Smith, 1995) and uses one of the camera's three color channels (usually green) to measure the alpha channel. We then use example-based image colorization to restore the green channel of the foreground element. We can explain this process in terms of the matting equations (Porter and Duff, 1984). We refer to a pixel's background color using the RGB triple \([B_{R},B_{G},B_{B}]\), its foreground subject as \([F_{R},F_{G},F_{B}]\), and its composited appearance as \([C_{R},C_{G},C_{B}]\).
Assuming a single alpha transparency \(\alpha\) for all color channels, the matting equations are simply
\[C_{R} =\alpha F_{R}+(1-\alpha)B_{R} \tag{1}\] \[C_{G} =\alpha F_{G}+(1-\alpha)B_{G}\] \[C_{B} =\alpha F_{B}+(1-\alpha)B_{B}.\]
These equations include seven total unknowns for a given photograph: \(B_{R}\), \(B_{G}\), \(B_{B}\), \(F_{R}\), \(F_{G}\), \(F_{B}\), \(\alpha\), as the photograph's pixel values comprise \(C_{R}\), \(C_{G}\), \(C_{B}\). In the case that the background color \(B_{R}\), \(B_{G}\), \(B_{B}\) can be measured (e.g., by photographing a clean plate without the foreground subject) there are four unknowns: \(F_{R}\), \(F_{G}\), \(F_{B}\), \(\alpha\). While many alpha matting algorithms focus on inferring \(\alpha\), it should be clear from these equations that to form a successful image composite over a new background, the foreground color \(F_{R}\), \(F_{G}\), \(F_{B}\) must also be recovered, leaving the problem ill-posed without additional constraints.

Smith and Blinn (1996) noted that if the subject reflects no blue light, then \(F_{B}=0\), and the blue channel of the subject in front of a blue screen gives a direct measurement of \(1-\alpha\), which allows \(F_{R}\), \(F_{G}\), and \(\alpha\) to be determined easily. We leverage color-controllable RGB LED lighting to a similar end: we turn off the green LEDs lighting an arbitrary subject to force \(F_{G}=0\) and illuminate them from behind with a field of green light. In this way,
\[C_{R} =\alpha F_{R}+(1-\alpha)B_{R} \tag{2}\] \[C_{G} =(1-\alpha)B_{G}\] \[C_{B} =\alpha F_{B}+(1-\alpha)B_{B}.\]
Rearranging to solve for the three remaining unknowns yields
\[\alpha =\frac{B_{G}-C_{G}}{B_{G}} \tag{3}\] \[F_{R} =\frac{C_{R}-(1-\alpha)B_{R}}{\alpha}\] \[F_{B} =\frac{C_{B}-(1-\alpha)B_{B}}{\alpha}.\]
Note that these equations are solvable only if \(B_{G}>0\), as otherwise the first equation is undefined. Furthermore, if \(\alpha\) is zero, the foreground colors \(F_{R}\) and \(F_{B}\) are undefined. The intuition behind these equations is that the green channel is guaranteed _only_ to be nonzero in the background, and so it is now essentially just a silhouette image of the subject, with pixel values of zero everywhere in the foreground. This is the inverse of the desired alpha channel, up to a scale factor. Given this more intuitive interpretation, it is clear why the main additional constraint is that the background must contain green; otherwise no silhouette image remains. In this designed scenario, the foreground is guaranteed to have \(F_{G}=0\). However, we could have alternatively required \(F_{R}=0\) or \(F_{B}=0\), implying no red or blue pixels in the foreground, respectively (see Section 5.2).

Figure 2. A diagram of our principal LED volume filming setup with magenta foreground lighting and a green screen background within the camera frustum (top), and a photo of the setup (bottom).
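In code, the per-pixel solve of Eq. (3) is a handful of array operations. The sketch below is illustrative only: it assumes linear, color-calibrated images as float arrays, and the small epsilon guarding the division is our addition.

```python
import numpy as np

def magenta_green_matte(C, B, eps=1e-6):
    """Solve Eq. (3) per pixel.

    C -- HxWx3 linear camera frame (foreground lit magenta, green bg)
    B -- HxWx3 linear clean plate of the green background (no subject)
    Returns alpha (HxW) and the premultiplied foreground (HxWx3, with
    the green channel left at zero for later colorization).
    """
    alpha = (B[..., 1] - C[..., 1]) / np.maximum(B[..., 1], eps)
    alpha = np.clip(alpha, 0.0, 1.0)
    F = np.zeros_like(C)
    # Premultiplied foreground alpha*F_R and alpha*F_B; working
    # premultiplied avoids dividing by alpha, which Eq. (3) leaves
    # undefined where alpha = 0.
    F[..., 0] = np.clip(C[..., 0] - (1 - alpha) * B[..., 0], 0, None)
    F[..., 2] = np.clip(C[..., 2] - (1 - alpha) * B[..., 2], 0, None)
    return alpha, F
```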
### Color Calibration

Digital cinema cameras sense color by placing a color filter array--often a Bayer pattern--over a set of photosites sensitive to the entirety of the visible spectrum. The color filters, by design, usually have a significant degree of overlap in spectral transmission. As a result, a given wavelength of red light may register on both the red and green color channels of an image, a phenomenon known as _crosstalk_. Since our method as outlined in (3) requires that the green pixel values of the foreground content are all zero and, ideally, that the red and blue channels show the subject against black, we need to remove this color crosstalk. This can be done effectively with a \(3\times 3\) color transformation matrix \(\mathbf{M}\). To determine \(\mathbf{M}\), we first record the appearance of each of the LED spectra to the cinema camera by placing a color chart in the scene and illuminating it consecutively by red, green, and blue light. We calculate the average RGB color of the chart's white square under each lighting condition and place these RGB values as column vectors into a measurement matrix \(\mathbf{W}\). The matrix records how much of each LED color affects each color channel. Since \(\mathbf{W}\) transforms the individual LED colors to camera observations, \(\mathbf{M}=\mathbf{W}^{-1}\) transforms camera observations back to the individual LED colors, removing the crosstalk. We thus apply this color calibration matrix \(\mathbf{M}\) to all imagery prior to applying the solution of (3).

This calibration process allows us to pretend that we captured our imagery using a camera with "sharp" spectral sensitivities without color channel crosstalk. This solution is only guaranteed for materials of spectrally neutral reflectance, such as the white square of the color chart, since having a light spectrum reflect off of a material reflectance spectrum transforms the spectral content of the observed light, and a somewhat different crosstalk elimination matrix could be required for different materials. However, since the spectral output of the LEDs in a virtual production stage is narrow (Figure 3), the crosstalk ratios tend to remain similar for the majority of the spectral content of each LED. In practice, we are able to eliminate the majority of crosstalk even for strongly hued materials with a single \(\mathbf{M}\) applied to the whole image.

Figure 3. The spectral sensitivity curves of a variety of measured camera sensors (Jiang et al., 2013) overlaid with the spectral output of the red, green, and blue LEDs of the LED panels in an LED volume, showing crosstalk.
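A minimal sketch of this calibration step follows (our illustration; the white-square RGB vectors passed in are assumed to have been measured from the chart frames described above):

```python
import numpy as np

def crosstalk_matrix(white_under_r, white_under_g, white_under_b):
    """Build W from the chart white square's mean RGB under each LED
    primary, and return M = W^{-1}, which removes camera crosstalk."""
    W = np.stack([white_under_r, white_under_g, white_under_b], axis=1)
    return np.linalg.inv(W)

def apply_calibration(image, M):
    """Apply the 3x3 calibration matrix to a linear HxWx3 image."""
    return np.einsum('ij,hwj->hwi', M, image)
```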
### Bounce Light Subtraction

A final pre-processing step is to correct for the presence of bounce light within the LED volume. Because LED panels are not perfectly black and reflect some of the light falling upon them, the background panels behind the actors will typically include some bounced light from the foreground. This means that the foreground element will not be seen against a perfect field of black, as is required for the element to be self-matting with a premultiplied alpha, but against a field of dim reflected foreground light, as seen in Figure 4(a). We can measure this bounced light by turning off the background LED panels while they are illuminated by foreground lighting as in Figure 4(b). After color correction, we can then subtract this bounced light from just the background around the actors by first multiplying it with the holdout matte as in Figure 4(c), yielding the result in Figure 4(d).

Figure 4. Correcting for bounce light. Actors are captured in front of an unlit background exhibiting bounce light (a). We also capture a clean plate showing the bounce light on its own (b). We subtract the clean plate multiplied by the holdout matte (c) from (a) to remove bounce light on just the background, not the actors (d).

### Colorizing the Missing Green Channel

While the _Magenta Green Screen_ process records an accurate alpha matte, the resulting foreground elements have the serious deficiency of missing their green channel. To address this, we design an image colorization technique to restore the green channel based on the observed red and blue channels.

#### 4.3.1. Naive Colorization

Real-time colorization can be performed in a simple but naive way by setting the green channel to be a linear combination of the red and blue channels: \(g=\rho r+(1-\rho)b\). The value \(\rho\) can be 0.5 to average the channels or can be chosen to be a value which optimizes the appearance of skin tones, closer to \(\rho=0\). Such naively-colorized images have a limited and inaccurate range of colors, but they can get skin tones, neutral tones, blue skies, and dusty plains all to look approximately correct, which explains why the two-strip Technicolor process could be effectively used for Westerns. However, greens, magentas, and many other colors will be represented inaccurately. Figure 1c shows a naively-colorized image from the magenta green matting process.

#### 4.3.2. Colorization with Deep Learning

To recover the green channel more accurately, we can train a deep neural network to infer the green channel from the red and blue channels based on full-color training examples. For this we follow previous works which perform full colorization of grayscale images, e.g., (Zhang et al., 2016), emboldened by the knowledge that our problem is a significantly easier one, restoring one channel of information from two rather than two from one. In this work, we record scene-specific training data in the form of an alternate "rehearsal" take of the scene shot under white RGB lighting seen on a black background, as seen in Figure 5. Each frame from this sequence yields a colorization training pair showing the proper green channel for given red and blue channels. During training, we take random \(512\times 512\) crops of the full \(1920\times 1080\) high definition frames and perform data augmentation by randomly perturbing the image luminance and color balance. Since our network is fully-convolutional, we can apply it to the full-resolution image at test time despite training on patches. We found that tone mapping the images with a gamma value of 2.2 improved the results compared to training on linear data, since the latter leads to poor optimization in darker areas of the images.

We use a standard image-to-image translation U-Net architecture with skip connections (Ronneberger et al., 2015). We first pass the input through two \(3\times 3\) convolutional layers followed by five downsampling blocks, starting with 32 channels and doubling the number at each layer, five upsampling blocks with corresponding numbers of channels, two additional \(3\times 3\) convolutions, and a final \(1\times 1\) convolution and \(\tanh\) non-linearity to constrain the output pixel values to a reasonable range. We use a Leaky ReLU non-linearity (Xu et al., 2020) and Batch Normalization (Ioffe and Szegedy, 2015) after each convolutional layer except for the first two. Each downsampling and upsampling block contains two \(3\times 3\) convolutions and uses blur-pooling (Zhang, 2019). We train our network for 100,000 iterations using Adam (Kingma and Ba, 2014) as the optimizer and a learning rate of 0.0001 with batch size 16. This takes approximately 2.5 hours on four NVIDIA A10G GPUs.
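The following is a much-reduced PyTorch sketch of the kind of network described: two resolution levels instead of five, max-pooling standing in for blur-pooling, an L1 objective as our assumption (the loss is not specified here), and random tensors standing in for real training crops.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    # 3x3 conv pair with BatchNorm + LeakyReLU, as described in the text
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.LeakyReLU(0.2),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

class GreenNet(nn.Module):
    """Predict the green channel from red+blue (2 -> 1 channels)."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(2, 32), block(32, 64)
        self.mid = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, rb):
        e1 = self.enc1(rb)
        e2 = self.enc2(self.pool(e1))
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.tanh(self.out(d1))

model = GreenNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
rb = torch.rand(4, 2, 512, 512)           # red+blue crops (placeholder)
g = torch.rand(4, 1, 512, 512) * 2 - 1    # green targets scaled to tanh range
loss = nn.functional.l1_loss(model(rb), g)
opt.zero_grad(); loss.backward(); opt.step()
```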
Color inference is much faster, taking less than one second per frame.

#### 4.3.3. Time-Multiplexing

A significant drawback of Magenta Green Screen is that the actors need to perform their scene under the unnatural illumination condition of magenta light. We can disguise the appearance of the magenta illumination by rapidly alternating it with green illumination, with the camera synchronized to the illumination changes so that it records only the magenta-green conditions. This is similar to the effect demonstrated by McDowall et al. (2004), where black-and-white artwork was hidden within imagery projected by a high-speed video projector, with each artwork frame quickly followed by its black-and-white inverse. Related time-multiplexing techniques have been shown for virtual production matting applications by Wenger et al. (2005) and _GhostFrame_3.

Footnote 3: [https://www.ghostframe.com/](https://www.ghostframe.com/)

Unfortunately, alternating between lighting patterns at 24Hz is uncomfortably stroboscopic and could even be dangerous for a person with a sensitivity to flashing light. According to Fisher et al. (2022), for such photosensitive individuals, "images with flashes brighter than 20 candelas/\(m^{2}\) at 3-60 (particularly 15-20) Hz occupying at least 10 to 25 degrees of the visual field are a risk." We address this by increasing the repeat rate of the two lighting conditions to 72Hz, so that the lighting changes from one color to the next every 1/144th of a second. The lighting then appears nearly constant, with a remaining effect being that rapidly moving objects leave a trail of magenta/green outlines when seen against the screen, as in Figure 6 (bottom). We can then synchronize our cinema camera to record the first of every six lighting changes, requiring a shutter angle of at most 60 degrees, yielding Magenta Green Screen images at 24fps, as seen in Figure 6 (top). The magenta-green frames can be colorized as before from a separate pass lit by full-spectrum lighting.

The remaining drawback of time-multiplexing in this manner is that the shorter shutter angle reduces the amount of motion blur, which is considered desirable for cinema. Wenger et al. (2005) faced a similar problem in their time-multiplexed relighting work, with relit images formed as the linear combination of images taken with very short exposure times. Like this work, we can address the problem in our Time-Multiplexed Magenta Green Screen technique by computing optical flow and using the flow not only to temporally align adjacent frames, but also to add simulated 180 degree motion blur to the images. We show an example of this in Figure 7 and in the accompanying video.

Figure 5. An example frame of training data used for our colorization model. Given frames of a subject captured with white lighting in front of a black background, we train a model to predict the green channel from the red and blue.

### Time-Multiplexed Magenta Green Screen

If we set the cinema camera to record at 48fps, it will record every third lighting condition in the time-multiplexed magenta-green sequence. As seen in Figure 6, this yields alternating frames lit by magenta and green light, in front of a background of green then magenta light, as seen in Figures 8a and 8b. Notably, the second of each pair of frames contains the appearance of the actor under green light. In the absence of motion, this green channel could complete the red and blue channels of the previous frame and eliminate the need for colorizing the foreground element.
However, in the presence of motion, there will be a frame misalignment, as seen in Figure 8c. We can attempt to align the frames by computing optical flow from one magenta-green frame to the next, and displacing the interposed green-magenta frame by half the estimated flow vectors, similar to the use of tracking frames to align intermediate lighting frames in (Wenger et al., 2005). We use the _Kronos_ node in the Nuke compositing software for this purpose. After applying motion compensation, we are able to recover the full RGB foreground element (Figure 8d). However, when the subject moves quickly, the optical flow algorithm fails to track the motion, resulting in colorization errors.

Figure 8. In Time-Multiplexed Magenta Green Screen, a magenta-green frame (a) is quickly followed by a green-magenta frame (b) to neutralize its appearance. We reconstruct the foreground naively (c) and using optical flow (d).

Figure 6. Time-multiplexed version of Magenta Green Screen, repeating the pattern and its inverse at 72Hz (top). While the lighting appears neutral in color and does not visibly pulsate, fast-moving objects show color fringing to the eye, as approximated in this 360 degree shutter exposure (bottom).

Figure 7. We use optical flow to add simulated 180 degree motion blur to our Time-Multiplexed Magenta Green Screen composited footage.

### Classic Time-Multiplexed Matting

Wenger et al. (2005) performed time-multiplexed matting by alternating frames lit by white light against a dark background with frames of the actor in silhouette against an illuminated background. The technique required optical flow, and produced good results in standard definition video. For comparison, we implemented this classical time-multiplexed matting approach by alternating between the actor lit by RGB white lighting against black and then unlit against an RGB white background, as in Figure 9. Applying optical flow between the illuminated frames to align the matte can produce a good composite and has the benefit of yielding a full-color matte frame, able to record color-dependent transparency. However, as can be seen in the accompanying video, the optical flow can fail in the presence of fast subject motion, creating matte edges that are misaligned with the foreground element. Our technique of shooting the matte in the same frame as the foreground element (which becomes colorized) does not require optical flow for alignment.

### Time-Multiplexed Triangulation Matting

Smith and Blinn (1996) proposed triangulation matting, where the alpha channel of a foreground subject is derived by seeing the foreground in front of two differently colored background images. The technique was proposed for static scenes, but in this section we apply it to dynamic scenes using our time-multiplexing setup. We keep the white LED lighting on the actor static and alternate the background LED panels in the camera frustum between green and blue, as in Figures 10a and 10b. We note that after color matrixing to eliminate crosstalk, the red channel shows the actor lit by red light against a dark background in both frames, as seen in Figures 10c and 10d. We can thus perform a more robust optical flow solve between the red channels of consecutive green and blue background frames, which are exposed just 1/48th of a second apart instead of 1/24th of a second. Displacing the blue background frame to the position of the previous green background frame supplies the imagery needed for triangulation matting.
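Both the flow-based alignment step and the triangulation solve are easy to prototype. The sketch below is illustrative only: the paper uses Nuke's Kronos node, whereas here OpenCV's Farnebäck flow stands in for it, and the half-vector warp and per-pixel least-squares solve are our reading of the math described above.

```python
import cv2
import numpy as np

def align_interposed_frame(frame_a, frame_b, frame_mid):
    """Warp the interposed frame halfway along the flow from frame_a
    to frame_b (both single-channel uint8), cf. the tracking-frame
    alignment described above."""
    flow = cv2.calcOpticalFlowFarneback(
        frame_a, frame_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = frame_a.shape[:2]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (gy + 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_mid, map_x, map_y, cv2.INTER_LINEAR)

def triangulation_matte(C1, C2, B1, B2, eps=1e-6):
    """Per-pixel triangulation matting from two known backgrounds:
    C_k = alpha*F + (1 - alpha)*B_k; subtracting the two equations
    eliminates F and yields (1 - alpha) by least squares over RGB."""
    dB = B1 - B2
    one_minus_a = (np.sum((C1 - C2) * dB, axis=-1)
                   / np.maximum(np.sum(dB * dB, axis=-1), eps))
    alpha = np.clip(1.0 - one_minus_a, 0.0, 1.0)
    # Premultiplied foreground alpha*F, averaged over both observations
    aF = 0.5 * ((C1 - (1 - alpha)[..., None] * B1)
                + (C2 - (1 - alpha)[..., None] * B2))
    return alpha, np.clip(aF, 0.0, None)
```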
However, even with this improved optical flow technique, some temporal misalignment can remain, as in Figure 9(f). ## 5. Results and Discussion ### Basic Magenta Green Screen Results Figure 1 shows the main steps of the Magenta Green Screen process. Figure 0(a) is a frame of a clip after the crosstalk elimination of Section 4.1. Figure 0(b) shows the matte derived by dividing the green channel by its appearance in the clean plate and inverting. Figure 0(c) shows a naively colorized foreground element, where the green channel is replaced with a simple linear combination of the red and blue channels. The bounce light subtraction, as in Section 4.2, has been applied to achieve a black background. Figure 0(d) shows the foreground element colorized with machine learning, as described in Section 4.3, based on a color reference performance under white RGB light. Figure 0(e) shows the colorized foreground element composited onto a background image, exhibiting good matte edges and transparency for the bottles. Figure 0(f) shows a real ground truth comparison image, where the actors were illuminated by white RGB light and the background image was displayed on the LED panels as an in-camera visual effect. Aside from slightly different actor poses, the composited image and real ground truth image are nearly indistinguishable, with believable matte edges and accurate image colorization. The sequence in Figure 1 is shown in motion in the accompanying video, with the actors shaking the semitransparent bottles to generate varying degrees of motion blur in the footage, showing believable alpha transparency throughout. ### Which Color to Use for the Matte? We can alternatively choose to use the red or blue channel for recording the matte instead of green: this would result in Yellow Blue Screen matting or Cyan Red Screen matting. We chose to use green for matting, since green screen is the most common traditional matting process and records the matte with the highest resolution channel, as Bayer pattern sensors have twice as many green pixels as they do red or blue. We also imagined that inferring a green channel prompted by the red and blue channels might the Figure 9. We use consecutive frames from Classic Time-Multiplexed Matting (a, b) to reconstruct the foreground without optical flow (c) and with optical flow. easiest colorization process, since the algorithm needs to infer a color channel which is spectrally between two observed channels, rather than outside the spectral area which has been recorded. To test this theory, we implemented the Yellow Blue Screen process as in Figure 12, training a different colorization process to predict the actors' blue channel from the red and green channels and the same RGB-white-lit reference sequence. This technique also worked well, and the yellow light was somewhat more pleasing to look at than the magenta. One artifact occurred on the side of the blonde hair, which appeared too yellow in some frames. However, we believe this could be due to image sensor saturation in this area, since the blonde hair reflected more yellow light than it did magenta, and we left the exposures the same. ### Recovering a Full-Color Alpha Channel The alpha channel recovered from Magenta Green Screen is monochromatic, making the usual assumption that a foreground element's transparency in the green channel is the same for its red and blue channels. 
If parts of the scene exhibit colorful transparency, such as the red and green glass bottles in Figure 1, then a monochrome matte would have the objects transmit incorrectly neutral light, as in Figure 0(b). In this case, we can colorize the matte image from a reference recording of the actors performing while silhouetted in front of a white background. Although this requires synthesizing two channels from one, and even though there is much less visual detail in the silhouetted imagery, our colorization framework is able to colorize the holdout matte image as well as in Figure 0(a). To aid our model in matte colorization, we provide RGB channels of the frame prior to the input monochrome matte as additional signal--this training data can be obtained by multiplexing the silhouetted lighting with white lighting over a black background. A composited result from this process is shown in Figure 0(b) and in the accompanying video. This yields a subtle improvement in the appearance of the bottles compared to the basic Magenta Green Screen technique of Figure 1. ### Time-Multiplexed Results and Comparisons The time-multiplexing technique results Section 4.3.3 are shown in Figure 8 for Time-Multiplexed Magenta Green Screen, Figure 9 for Classic Time-Multiplexed Mating, and Figure 10 for Time-Multiplexed Triangulation Matting. A sequence processed from each technique is included in the accompanying video. In each case, the technique works well except when there is significant subject motion, and optical flow is relied upon to align the channels of the foreground element and/or the matte from neighboring frames recorded at 48fps. The ML colorization technique to reconstruct the green channel of Time-Multiplexed Magenta Green Screen has the advantage that no optical flow is required to align channels, and it can be applied to Time-Multiplexed Magenta Green Screen footage just as it can be to non-Time-Multiplexed Magenta Green Screen footage. ### Comparison to Traditional Green Screen Figure 14 shows a matting comparison with a traditional green screen approach. In this example, we use a subject wearing a green dress with long blonde hair blown by a fan. The basic Magenta Figure 10. For Time-Multiplexed Triangulation Matting, we capture a white-lit subject behind alternating differently-colored frames (a, b). The red channels of the frames (c, d) are similarly lit and can be used for computing optical flow between the frames. We reconstruct the foreground without optical flow (e) and with optical flow (f). Green Screen approach yields a high-quality alpha channel and resulting composite, while an automated keying technique in Nuke applied to the footage struggles to key out the dress, rendering it significantly transparent; Magenta Green Screen also recovers somewhat more detail in the wispy hair. While manual keying techniques could certainly succeed in keying this green screen shot, we are interested in an automated, reliable matting technique which makes no restrictions on what the actor wears or how they move. ### Discussion The main findings of this work are that the Magenta Green Screen approach appears to work well in generating high-quality alpha Figure 11. Four frames of the performance sequence from the accompanying video, all processed and composited onto two different backgrounds using the basic Magenta Green Screen process. _GT1_ and _GT2_ show ground truth comparisons to the composited shots _Composite 1_ and _Composite 2_. 
channels, and that our implementation of the colorization technique appears to be accurate, effective, and temporally stable. Furthermore, our technique appears to outperform the matting that which can be obtained with either traditional chromakey green screen or time-multiplexed techniques which rely on optical flow to temporally align differently illuminated frames. And while we did not compare to optical techniques to record a matte simultaneously with the foreground element (e.g., infrared matting and the sodium vapor process), we do not require a custom optical setup and do not need to align images from different sensors or cameras. ## 6. Future Work Our method suggests a number of avenues for future work. One desirable improvement would be to eliminate the need for recording a color reference clip of the actors in addition to the performance clip. Figure 14. Comparison to traditional green screen. We record an actor lit with Magenta Green Screen lighting (a) as well as with white light against a traditional green screen (b). The matte generated from the Magenta Green Screen process (c) does not exhibit the artifacts of that generated from the traditional green screen using automated chroma keying (d). We show the corresponding Magenta Green Screen composite (e) and chroma keyed composite (f). Figure 12. Yellow Blue Screen variant. Actors are lit by red and green LED channels in front of a blue background (a). A matte is derived from the Blue channel (b). We colorize the foreground element (c) and produce a composite (d). Figure 13. A full color matte (a) obtained by colorizing the monochrome holdout matte (c) from the green channel from Magenta Green Screen, showing colorful transparency of the bottles. A composite (b) made using the full-color matte yields better color rendition of the bottle than done using the monochrome matte (d).
2304.06210
Aretakis Hair for Extreme Kerr Black Holes with Axisymmetric Scalar Perturbations
We study the evolution of axially-symmetric scalar field perturbations on an extreme Kerr spacetime for initial data with multipole moments $\ell^{\prime}$ higher than the least radiative mode, and we measure modes $\ell$ -- and for the first time also horizon charges -- that are excited by mode coupling interactions. We then find the Ori-Sela prefactors, a certain quantity that can be evaluated at finite distances and the Aretakis constant along the event horizon of the extreme Kerr black hole for a sequence of initial data preparations that differ only by their distance from the event horizon. We find that for initial data in the near field there is a linear relationship of the Aretakis constant and the Ori-Sela prefactor. For initial data farther than these the linear relationship is not universal, and we propose that stronger numerical simulations would be needed to regain linearity. The linear relationship suggests that the Aretakis charge along the event horizon can be measured at a finite distance, thereby extending this type of violation of the no-hair theorems from the least radiative axisymmetric mode also to situations that involve mode coupling.
Lior M. Burko, Gaurav Khanna, Subir Sabharwal
2023-04-13T01:28:39Z
http://arxiv.org/abs/2304.06210v1
# Aretakis Hair for Extreme Kerr Black Holes with Axisymmetric Scalar Perturbations ###### Abstract We study the evolution of axially-symmetric scalar field perturbations on an extreme Kerr spacetime for initial data with multipole moments \(\ell^{\prime}\) higher than the least radiative mode, and we measure modes \(\ell\) - and for the first time also horizon charges - that are excited by mode coupling interactions. We then find the Ori-Sela prefactors, a certain quantity that can be evaluated at finite distances and the Aretakis constant along the event horizon of the extreme Kerr black hole for a sequence of initial data preparations that differ only by their distance from the event horizon. We find that for initial data in the near field there is a linear relationship of the Aretakis constant and the Ori-Sela prefactor. For initial data farther than these the linear relationship is not universal, and we propose that stronger numerical simulations would be needed to regain linearity. The linear relationship suggests that the Aretakis charge along the event horizon can be measured at a finite distance, thereby extending this type of violation of the no-hair theorems from the least radiative axisymmetric mode also to situations that involve mode coupling. ## I Introduction Extreme Reissner-Nordstrom (ERN) black hole (BH) spacetimes exhibit a conformal symmetry [1] that relates the Newman-Penrose constants at future null infinity (\(\mathscr{I}^{+}\)) with the Aretakis constants at the future event horizon (EH, \(\mathscr{H}^{+}\)) [2; 3; 4; 5]. This relationship suggests that at least for ERN one could at least in principle violate the no-hair theorems [6] with measurements of Newman-Penrose constants at \(\mathscr{I}^{+}\). Later, it was shown that one can indeed measure the Aretakis constants for ERN along \(\mathscr{H}^{+}\) with measurements made at \(\mathscr{I}^{+}\)[7; 8] and at finite distances [9]. (Note, that in [7] no use of the conformal symmetry was made.) In fact, [8; 9] also extended this result for extreme Kerr (EK) BHs, specifically for axisymmetric scalar and gravitational perturbations. The proposed external measurement of BH hair with Aretakis charges for EK is perhaps surprising, because the conformal symmetry of ERN does not extend to EK [1]. However, it was pointed out in [2] that axially symmetric scalar fields propagating on EK spacetimes do have such a conformal symmetry, a result closely related to the symmetry of the radial equation for such perturbations [1]. Therefore, one may expect that at least in the axially symmetric case, although possibly not in general, one could still measure at finite distances Aretakis charges on \(\mathscr{H}^{+}\), and thereby violate the no-hair theorems in this sense. In [9] we considered the case of the lowest radiative mode of a scalar field propagating on a fixed EK spacetime, specifically the axisymmetric monopole mode. That is, we excited in [9] the monopole mode, and then measured the Ori-Sela prefactor \(e[\psi]\)[11; 12] and the Aretakis constant for a set of initial data preparations differing only by their distance from the EK EH. We showed in [9] that there was a linear relationship between the two, such that measurement of the Ori-Sela prefactor at a finite distance could allow us to infer the Aretakis constant. We interpreted this measurement of the Aretakis constant from measurements made at a finite distance as a violation of the no-hair theorem. 
The Kerr spacetime, and specifically EK, exhibit an intricate mode coupling mechanism [10]. We therefore pose the question of whether the behavior shown in [9] for the lowest radiative mode persists also for modes that are excited by mode-coupling excitations. We study here the Aretakis charges and their measurements at finite distances for an initial \(\ell^{\prime}\) multipole mode of an axially symmetric massless scalar field that excites an \(\ell\) multipole mode, \(\,{}_{\ell}\,\psi_{\ell}\). The latter gives rise to an Aretakis charge of degree \(k\), \(\,{}_{\ell}\,{}_{k},\ell\), of an EK, and we study its relationship to the generalized Ori-Sela prefactor, \(\,{}_{\ell}\,{}_{k},\ell[\psi]\). To the best of the knowledge of the present authors, this is the first time that horizon charges are calculated for mode that are created by mode coupling. By showing a linear relationship of the two we propose following [9] that one could at least in principle measure BH hair beyond those discussed in [6] also for multipole modes beyond the least radiative mode. The BH hair we propose are a consequence of linear perturbation theory, and result from a (linear approximation) of dynamical processes. It remains an open question whether similar hairs can be found in the fully nonlinear theory. Numerical approach We solve the scalar wave equation for perturbations in EK black hole backgrounds, focusing on axisymmetric modes (\(m=0\)). We modify the equation to work in compactified hyperboloidal coordinates \((\tau,\rho,\theta,\phi)\) that allow for time evolution on hypersurfaces which bring \(\mathscr{I}^{+}\) to a finite radial coordinate \(\rho(\mathscr{I}^{+})=S<\infty\). The relationship between these new coordinates \((\tau,\rho)\) and the spherical Boyer-Lindquist coordinates \((t,r)\) is \[\Omega = 1-\frac{\rho}{S}\] \[r = \frac{\rho}{\Omega(\rho)} \tag{1}\] \[v:=t+r_{*}-r = \tau+\frac{\rho}{\Omega(\rho)}-\rho-4M\log\Omega(\rho)\] where \(S\) denotes the location of \(\mathscr{I}^{+}\) in hyperboloidal coordinates, \(r_{*}\) is the usual 'tortoise' coordinate and \(v\) is the modified advanced time. Note that the angular variables are the same in both coordinate systems. Our numerical implementation scheme entails re-writing the second order partial differential equation (PDE) in terms of two coupled first-order differential equations. We solve this system using a high-order weighted essentially non-oscillatory (WENO) finite-difference scheme with explicit Shu-Osher time-stepping. Details may be found in our previous work [13]. We choose \(S=19.0\) and the location of \(\mathscr{H}^{+}\) such that \(\rho(\mathscr{H}^{+})=0.95\). The initial data are a truncated Gaussian centered at \(\rho=(1.0,1.1,1.2,1.3,1.4,1.5)\) with a width of \(0.22\) and non-zero for \(\rho\in[0.95,8]\). This choice ensures compactly supported initial data but with non-zero support on the \(\mathscr{H}^{+}\) surface. Finally, to complete these long duration, high-accuracy and high-precision computations in a reasonable time-frame we make extensive use GPGPU-based parallel computing. For additional details on implementation of such intensive computations on a parallel GPU architecture, we refer the reader to our earlier work on the subject [13]. 
## III Fall off rates at \(\mathscr{I}^{+}\), \(\mathscr{H}^{+}\), and at \(r=\mathrm{const}\) We found before the fall-off rates for scalar perturbations (\(s=0\)) along \(r=\mathrm{const}\), along \(\mathscr{I}^{+}\), and along \(\mathscr{H}^{+}\) for the case of no initial data supported on \(\mathscr{H}^{+}\), and we add in Table 1 the corresponding decay rates when the initial data are supported on \(\mathscr{H}^{+}\). We have extensive numerical support for the asymptotic decay rates that appear in Table 1. The results in Table 1 allow us to predict the triplets \(\ell^{\prime},\ell;k\), where \(k\) is the order of the Aretakis charge (which is related to the order of the transverse derivative operator along \(\mathscr{H}^{+}\)), that would produce Aretakis constants. The results in Table 1 were obtained from results such as Figs. 1 and 2. Along \(\mathscr{I}^{+}\) horizon data do not change the decay rate, and the latter is the same as without horizon data. This conclusion is consistent with Table 2 in [14]. The reason that with or without horizon data the decay rates along \(\mathscr{I}^{+}\) are the same is that the initial data break the Couch-Torrence symmetry [1]. We can use the results from Table 1 to find the power-law indices for transverse derivatives along \(\mathscr{H}^{+}\). Specifically, without horizon data the \(p^{\mathrm{th}}\) transverse derivative along \(\mathscr{H}^{+},\ \partial_{u}^{p}\psi(v)\sim v^{n}\) at late advanced times \(v\gg M\), where \[-n_{\mathrm{no\ horizon\ data}}=\begin{cases}\ell^{\prime}-p&,\ell\leq\ell^{ \prime}-2\\ \ell+2-p&,\ell\geq\ell^{\prime}\end{cases} \tag{2}\] and with horizon data \[-n_{\text{horizon data}}=\begin{cases}\ell^{\prime}-1-p&,\ell\leq\ell^{\prime}-2 \\ \ell+1-p&,\ell\geq\ell^{\prime}\end{cases}\,. \tag{2}\] We can use the results in Eqs. (1) and (2) to predict at what value of \(k\) we expect an Aretakis constant \({}_{\ell^{\prime}}H_{k,\ell}[\psi]\) given \(\ell^{\prime},\ell\). Specifically, setting \(n=0\), we can solve Eq. (1) for the derivative order \(p\). Then the required \(k\) is just \(p-1\). Therefore, with no horizon data we expect \[k_{\text{no horizon data}}=\begin{cases}\ell^{\prime}-1&,\ell\leq\ell^{\prime}-2 \\ \ell+1&,\ell\geq\ell^{\prime}\end{cases} \tag{3}\] and with horizon data \[k_{\text{horizon data}}=\begin{cases}\ell^{\prime}-2&,\ell\leq\ell^{\prime}-2 \\ \ell&,\ell\geq\ell^{\prime}\end{cases}\,. \tag{4}\] Specific examples for the power law indices for different \(\ell^{\prime},\ell\) values and finding the \(k\) corresponding to Aretakis charges are listed in Appendix A. We find empirically that for \(r>M\), outside the EH, the radial profile for the dominant \(\ell\)-mode can be modeled by \[{}_{\ell^{\prime}}\psi_{\ell}\sim\,_{\ell^{\prime}}e_{\ell}\,r^{a}\,(r-M)^{b} \,t^{n}\,\Theta_{\ell}(\theta)\,, \tag{5}\] where \[\begin{array}{ll}a=1,b=-1&\ell^{\prime}\ \mbox{is even}\\ a=1,b=-2&\ell^{\prime}\ \mbox{is odd}\end{array} \tag{6}\] and where \({}_{\ell^{\prime}}e_{\ell}\) is the generalized Ori-Sela pre-factor. ## IV Linear relationship of \(e\) and \(H\) We label the Aretakis constant \({}_{\ell^{\prime}}H_{k,\ell}\) where \(k\) is related to the order of the differential operator, \(\ell^{\prime}\) is the multipole order of the perturbation field, and \(\ell\) is the multipole order of the field of interest. 
Specifically, \[{}_{\ell^{\prime}}H_{k,\ell}[\psi]=\,\partial_{r}^{k+1}\left[r\,\partial_{r} \left(r\,{}_{\ell^{\prime}}\psi_{\ell}\right)\right] \tag{7}\] In practice, we approximate the Aretakis constant \({}_{\ell^{\prime}}H_{k,\ell}[\psi]\) with \({}_{\ell^{\prime}}h_{k,\ell}[\psi]\), where \[{}_{\ell^{\prime}}h_{k,\ell}[\psi]\sim M^{2}\,\partial_{r}^{k+1}\,{}_{\ell^{ \prime}}\psi_{\ell} \tag{8}\] as is shown in Fig. 3. Table 2 shows the values of \(\ell^{\prime}\) and \(\ell\) for which we studied the relationship of the Aretakis charge \({}_{\ell^{\prime}}H_{k,\ell}\) and the Ori-Sela prefactor \({}_{\ell^{\prime}}e_{\ell}\). We find linear relationships \({}_{\ell^{\prime}}H_{k,\ell}=\beta\,_{\ell^{\prime}}e_{\ell^{\prime}}+\alpha\) (see Figs. 4, 5, and 6). See Appendix B for detail. In two of the cases studied we find deviations from linearity. Specifically, for \(\ell^{\prime}=0,\ell=2\) and for \(\ell^{\prime}=2,\ell=2\). These deviations from linearity occur when the initial data are far from the EH, but for near initial data the linear behavior is still observed. In three of the cases studied (see Appendix B) we find that at the 95% confidence level one cannot reject the claim that the intercept \(\alpha=0\) with. We propose that more robust investigation may find this result to be a general rule. Fully explaining these deviations from linearity is as yet an open question. We propose that more powerful numerical simulations would find linearity also for distant initial data: When plotting different \(\ell\) projections as functions of \(\rho\) for different sets of initial data (distinguished by the location of the peak) we find that up excitations behave differently for different initial data sets (and also for \(\ell^{\prime}=\ell\) when \(\ell\) is not the lowest radiative mode), but the behavior is the same for \(\ell=\ell^{\prime}\) (when \(\ell\) is the lowest radiative mode). This conclusion suggests that higher excitations may take longer to \begin{table} \begin{tabular}{|l||c|c|c|} \hline \(\ell/\ell^{\prime}\) & \(\ell^{\prime}=0\) & \(\ell^{\prime}=2\) & \(\ell^{\prime}=4\) \\ \hline \hline \(\ell=0\) & 0 & 0 & 2 \\ \hline \(\ell=2\) & 2 & 2 & 2 \\ \hline \end{tabular} \end{table} Table 2: The value of the order \(k\) of the Aretakis charge \({}_{\ell^{\prime}}H_{k,\ell}\) for which a linear relationship to the Ori-Sela prefactor \({}_{\ell^{\prime}}e_{\ell}\) is found. In boldface we show the cases for which deviations from linearity are found. settle for far out initial data sets. This idea is strengthened by noticing that all deviations from linearity occur with \(|_{\ell^{\prime}}H_{k,\ell}|\) being under-valued, never over-valued. We cautiously propose that the dominant mode has saturated, but subdominant modes have not saturated yet, and therefore their contributions to \({}_{\ell^{\prime}}H_{k,\ell}\) are not full. To test this idea we compare the contribution of subdominant modes to \({}_{2}H_{2,2}\) (nonlinear deviations) and to \({}_{2}H_{00}\) (no deviations from linearity). In the former case we take the subdominant mode \(\ell^{\prime}=2\), \(\ell=4\) (up excitation), and in the latter case we take the subdominant mode \(\ell^{\prime}=2\), \(\ell=2\) (up excitation). We find the results in Fig. 7. The deviations from power law behavior for \({}_{2}\psi_{4}^{(3)}\) at late time suggest that we do not get an accurate determination of \({}_{2}H_{2,2}\) which could explain the deviations shown in Fig. 5(b). 
We comment that before the deviation from linearity in Fig. 5(b) starts, asymptotic behavior is observed. Perhaps we need to evaluate the Aretakis constant in that domain, before presumably numerical effects change the behavior. If this is right, it is possible that one could still read the Aretakis constant from measurements made at finite distances via the Ori-Sela pre-factor. ## V Concluding remarks We show that Aretakis charges on \(\mathscr{H}^{+}\) for extreme Kerr BH with axisymmetric scalar field perturbations are associated with generalized Ori-Sela prefactors that are measured at finite distances. For all cases studied we find a linear relationship of the two quantities when the initial data sets are in the near field. This relationship suggests that one could at least in principle measure the generalized Ori-Sela prefactor at a finite distance, and infer on the associated Aretakis charge on \(\mathscr{H}^{+}\). If robust, this procedure would violate the no-hair theorems [6] in this sense. The cases that lead to deviation from linearity for initial data sets that are farther away from the EH warrant further investigation, possibly using stronger computational resources than those currently available to us. Our proposal regarding the role played by subdominant modes can be investigated with the case \(\ell^{\prime}=0,\ell=4\) which is a subdominant mode for \({}_{0}H_{2,2}\). It is currently not known whether the linearity found for the relationship of the Aretakis charges and the generalized Ori-Sela prefactors are specific for axisymmetric modes of a linearized scalar field, or whether they extend also to non-axisymmetric modes. The question of extending our work to gravitational perturbations of extreme Kerr spacetimes is of much interest, and awaits further study, as of the question of the fully nonlinear theory, where analogous results may be of transient nature. ## Appendix A Specific examples for the value of \(n\) are given in Tables 3 and 4. Figure 7: Comparison of the behavior of subdominant modes. Top panel: \({}_{2}\psi_{2}^{(1)}\) for close initial data (upper curve at late times) and for far initial data (lower curve at late times). Bottom panel: \({}_{2}\psi_{4}^{(3)}\) for close initial data (upper curve at late times) and for far initial data (lower curve at late times). ## Appendix B We calculate the slope and intercept of the least squares regression lines \(\,{}_{\ell^{\prime}}H_{k,\ell}=\beta\,{}_{\ell^{\prime}}e_{\ell^{\prime}}+ \alpha+\epsilon_{i}\) with \(t\)-confidence intervals for 95% confidence level. Here, \(\epsilon_{i}\) are the regression residuals of the \(n\) data points. We first find the standard error for the slope, \[s_{\hat{\beta}}=\sqrt{\frac{\sum_{i=1}^{n}\epsilon_{i}^{2}}{(n-2)\sum_{i=1}^{ n}(e_{i}-\bar{e}_{i})^{2}}}\,,\] where \(e_{i}\) are short notation for the Ori-Sela prefactors for the \(n\) data points. We then find the standard error for the intercept, \(s_{\hat{\alpha}}=s_{\hat{\beta}}\,\sqrt{\frac{1}{n}\,\sum_{i}e_{i}^{2}}\) Then, the margins of error for the slope and the intercept are respectively given by \[\delta_{\beta}=s_{\hat{\beta}}\,t_{n-2}^{*}\] and \[\delta_{\alpha}=s_{\hat{\alpha}}\,t_{n-2}^{*}\,,\] were \(t^{*}\) is the critical value for \(n-2\) degrees of freedom. In Table 5 we show the slope and intercept coefficients for the six cases we study. 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\ell^{\prime},\ell,k\) & \(\beta\) & \(\alpha\) & dof \\ \hline \hline \(0,0,0\) & \(-0.07075\pm 0.00045\) & \(-0.0034\pm 0.0045\) & 4 \\ \hline \(0,2,2\) & \(-0.647\pm 0.058\) & \(3.29\pm 0.50\) & 3 \\ \hline \(2,0,0\) & \(-0.07259\pm 0.00017\) & \(-0.00023\pm 0.00011\) & 3 \\ \hline \(2,2,2\) & \(-218.\pm 31\). & \(90.\pm 17\). & 2 \\ \hline \(4,0,2\) & \(-0.01195\pm 0.00034\) & \(0.00015\pm 0.00016\) & 6 \\ \hline \(4,2,2\) & \(-0.03297\pm 0.00064\) & \(-0.038\pm 0.085\) & 6 \\ \hline \end{tabular} \end{table} Table 5: The 95% \(t-\)confidence intervals for the coefficient \(\beta\) (slope) and \(\alpha\) (intercept) for the regression expression \({}_{\ell^{\prime}}H_{k,\ell}=\beta\,{}_{\ell^{\prime}}e_{\ell^{\prime}}+\alpha\). Here, dof is the number of \(t-\)statistics degrees of freedom. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\ell^{\prime}=0\) & \(\ell=0\) & \(\ell=2\) & \(\ell=4\) \\ \hline \hline \(\psi\) & 3 & 3 & 5 \\ \hline \(\partial_{u}\psi\) & 2 & 2 & 4 \\ \hline \(\partial_{u}^{*}\psi\) & 0 & 0 & 2 \\ \hline \(\partial_{u}^{*}\psi\) & -1 & -1 & 1 \\ \hline \end{tabular} \end{table} Table 4: The value of the power-law indices \(n\) for the field \(\psi\) and its transverse derivatives \(\,\partial_{u}^{m}\psi\sim v^{n}\) for \(m=0,1,2,3,4\) (\(m=0\) corresponds to the field \(\psi\) itself.) Here, \(\ell^{\prime}=4\), and there are horizon data. The boldfaced values correspond to Aretakis constants: \({}_{4}H_{2,0}\) and \({}_{4}H_{2,2}\). ## Acknowledgements The authors acknowledge support from NSF Grants No. PHY-2106755 and DMS-1912716 (G.K). Simulations were performed on the UMass-URI UNITY supercomputer and MIT's SuperCloud supported by the Massachusetts Green High Performance Computing Center (MGHPCC).
2306.07777
Simultaneous extreme values of zeta and L-functions
We show that distinct primitive L-functions can achieve extreme values simultaneously on the critical line. Our proof uses a modification of the resonance method and can be applied to establish simultaneous extreme central values of L-functions in families.
Winston Heap, Junxian Li
2023-06-13T13:59:32Z
http://arxiv.org/abs/2306.07777v1
# Simultaneous extreme values of zeta and \(L\)-functions ###### Abstract. We show that distinct primitive \(L\)-functions can achieve extreme values _simultaneously_ on the critical line. Our proof uses a modification of the resonance method and can be applied to establish simultaneous extreme central values of \(L\)-functions in families. ## 1. Introduction Extreme values of \(L\)-functions have attracted considerable attention in recent years. This uptake in activity is largely due to the introduction of the resonance method of Soundararajan [43] and its subsequent developments. This is a versatile method which allows one to show that for various families of \(L\)-functions \(\mathcal{F}\) with analytic conductor \(C\), \[\max_{\begin{subarray}{c}\pi\in\mathcal{F}\\ \operatorname{cond}\pi\asymp C\end{subarray}}|L(\tfrac{1}{2},\pi)|\geqslant \exp\Big{(}c\sqrt{\frac{\log C}{\log\log C}}\Big{)} \tag{1.1}\] for some constant \(c=c(\mathcal{F})>0\). In the case of the Riemann zeta function, Bondarenko-Seip [8] recently made the significant improvement \[\max_{t\in[0,T]}|\zeta(\tfrac{1}{2}+it)|\geqslant\exp\Big{(}c^{\prime}\sqrt{ \frac{\log T\log\log\log T}{\log\log T}}\Big{)}. \tag{1.2}\] Their modification of the resonance method can be extended to other families [9], although a severe restriction is that the \(L\)-functions must have non-negative coefficients1. Footnote 1: This requirement can be weakened slightly; the method applies as long as the resultant mean values contain non-negative terms. See the proof of Theorem 1.5 of [9] for example. Extreme values of products of \(L\)-functions are also possible and in particular the bound (1.1) holds when \(L(s,\pi)\) factorises. For example, we know [5, Theorem 1.12] that for fixed cusp forms \(f,g\) there exists a non-trivial primitive character \(\chi\) modulo a prime \(q\) such that \[|L(1/2,f\otimes\chi)L(1/2,g\otimes\chi)|\geqslant\exp\Big{(}c_{f,g}\sqrt{ \frac{\log q}{\log\log q}}\Big{)}. \tag{1.3}\] In \(t\)-aspect, Aistleitner-Pankowski [1] showed that (1.1) holds for non-primitive \(L\)-functions in the Selberg class, whereas bounds of the strength (1.2) can be demonstrated for Dedekind zeta functions [7]. These results contain an interesting feature: when \(L(s,\pi)\) is a product of \(L\)-functions one can achieve a larger constant \(c\). Precisely, the \(t\)-aspect results show ###### Abstract We consider the \(L\)-functions of the \(L **Remark**.: The constant depends on the degree of the \(L\)-functions in question. If \(L_{1},L_{2}\) are both Dirichlet \(L\)-functions, then we can take \(c=\sqrt{17/66}+o(1)\) and if at least one of \(L_{1}\) and \(L_{2}\) is a \(GL(2)\)\(L\)-function, we can take \(c=\sqrt{(1-2\theta)/12}+o(1)\) where \(\theta\) is an admissible bound towards the Ramanujan conjecture for \(GL(2)\) over \(\mathbb{Q}\). By work of Kim-Sarnak [28, Appendix 2], we can take \(\theta=7/64\). For comparison, we mention that one can take \(c=\sqrt{2}+o(1)\) for the product of \(L_{1}L_{2}(1/2+it)\) (see [1]). Our method extends to other families and we shall describe a general principle below. For now, we illustrate this by demonstrating simultaneous large central values of twists of \(GL(2)\) cusp forms, refining (1.3). **Theorem 2**.: _Let \(f,g\) be fixed primitive cusp forms of level \(r,r^{\prime}\) and trivial central character. 
There exists a positive constant \(c\) depending only on \(f,g\) such that for all primes \(q\) sufficiently large in terms of \(f,g\), there exists a non-trivial character \(\chi\bmod q\) such that_ \[\min\Big{(}|L(1/2,f\otimes\chi)|,|L(1/2,g\otimes\chi)|\Big{)}\geqslant\exp \Big{(}c\sqrt{\frac{\log q}{\log\log q}}\Big{)}.\] **Remark**.: We give an explicit constant \(c\) in terms of \(f,g\) in the proof. In a generic situation, we can take \(c=1/12\sqrt{10}+o(1)\), which is half of the constant \(c_{f,g}=1/6\sqrt{10}+o(1)\) for the product in (1.3) [5, Remarks 7.3, 7.20], as one would expect. We now describe our method. The idea is to use a resonator that picks out a large value of the product of the \(L\)-functions that, at the same time, is significantly bigger than their sum, thus giving simultaneous large values. We detail this in the \(t\)-aspect, although the principle extends more generally. Our aim is to find a Dirichlet polynomial \(R(t)\) such that for large \(V\), \[\int_{T}^{2T}\Big{(}|L_{1}(\tfrac{1}{2}+it)L_{2}(\tfrac{1}{2}+it)|^{2}-V|L_{1} (\tfrac{1}{2}+it)|^{2}-V|L_{2}(\tfrac{1}{2}+it)|^{2}\Big{)}|R(t)|^{2}dt>0. \tag{1.5}\] If this holds then there exists a \(t\in[T,2T]\) such that \[|L_{1}(\tfrac{1}{2}+it)L_{2}(\tfrac{1}{2}+it)|^{2}-V|L_{1}(\tfrac{1}{2}+it)|^{ 2}-V|L_{2}(\tfrac{1}{2}+it)|^{2}>0\] which implies that both \(|L_{1}(1/2+it)|^{2},|L_{2}(1/2+it)|^{2}>V\). We choose \(R(t)\) to pick out large values of the product and once the asymptotic formulae for twisted second moments in (1.5) have been established, we can find the desired size for \(V\). This approach uses the larger values of the product in a key way, but also includes the required upper bound information (as is necessary to rule out excessively large values of individual \(L\)-functions). For multiple \(L\)-functions we can aim to find a value of \(t\) for which \[\prod_{j=1}^{m}|L_{j}(\tfrac{1}{2}+it)|^{2}-V\sum_{1\leqslant i\leqslant m} \prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{m}|L_{j}(\tfrac{1}{2}+it)|^{2}>0. \tag{1.6}\] Unfortunately, asymptotic formulae for the twisted second moments of multiple or higher degree \(L\)-functions are currently out of reach. However, asymptotics are not strictly necessary since reasonably sharp bounds would suffice. A lower bound for the first term on the left of (1.6) can be achieved fairly easily through the Cauchy-Schwarz inequality. This allows one to replace the second moment with a first moment which is more tractable. In fact, this first moment can be computed for any number of \(L\)-functions. To obtain upper bounds for the second term in (1.6) we note that there exist several instances in the literature [12, 13, 32] where, if one doesn't have access to asymptotics for twisted second moments, a sharp upper bound can still be achieved on the Riemann Hypothesis by applying the methods of Harper [17]. As it stands, Harper's method is designed for a fixed \(2k\)-th moment of an \(L\)-function and thus sensitive to values of size \((\log T)^{k}\). However, with some modifications, in particular by focusing on very small primes, these methods can be made suitable for extreme values. With this in hand we can exhibit simultaneous large values for many \(L\)-functions in higher degrees under the Riemann Hypothesis for these \(L\)-functions. **Theorem 3**.: _Let \(\pi_{j}\), \(j=1,\cdots,m\) be irreducible unitary cuspidal automorphic representations of \(GL(d_{j})\) over \(\mathbb{Q}\) such that \(\pi_{i}\not\cong\pi_{j}\) for \(i\neq j\). 
Assume the generalised Riemann Hypothesis for all \(L(s,\pi_{j})\)\(j=1,\ldots,m\) and assume the generalised Ramanujan conjecture for \(L(s,\pi_{j})\) if \(d_{j}\geqslant 3\). Then for sufficiently large \(T\) we have_ \[\max_{t\in[T,2T]}\min(|L(\tfrac{1}{2}+it,\pi_{1})|,\ldots,|L(\tfrac{1}{2}+it, \pi_{m})|)\geqslant\exp\left(c\sqrt{\frac{\log T}{\log\log T}}\right)\] _for any positive constant \(c<\frac{1}{\sqrt{2m}}\)._ Additional work is required to remove the generalised Ramanujan conjecture in the case of Maass forms. Throughout most of the proof we can work under weaker assumptions, in particular when computing the mean values. However in the final step when establishing extreme values more strict control on the size of \(a_{\pi}(p)\) is required. One sufficient condition is a Mertens' type estimate for the fourth power moment: \[\sum_{p\leqslant X}\frac{|a_{\pi}(p)|^{4}}{p}\ll\log\log X.\] Therefore, the assumption of the generalised Ramanujan conjecture can be avoided for \(GL(2)\) Maass forms by the functionality of symmetric powers established by Kim [27] (and also for self-dual \(GL(3)\)\(L\)-functions using Gelbart-Jacquet [15] and Kim [27], although we do not state this in Theorem 3 for concision). We also remark that Theorem 3 can be proved unconditionally for three distinct Dirichlet \(L\)-functions. Here, the upper bounds for the twisted second moments of \(L(1/2+it,\chi_{i})L(1/2+it,\chi_{j})\) are (essentially) available being generalisations of the twisted fourth moment of the Riemann zeta function [4, 18, 21]. The versatility of the resonance method in families of \(L\)-functions extends to simultaneous values, both large and small. Our methods allow for a general principle which we now describe. Let \(\Pi\) be some family and let \(\{L(s,f_{j})\}_{j=1}^{m}\) be fixed \(L\)-functions. Suppose we can lower bound \[\sum_{\pi\in\Pi}\prod_{j=1}^{m}L(1/2,f_{j}\otimes\pi)|R(\pi)|^{2} \tag{1.7}\] and upper bound \[\sum_{\pi\in\Pi}\prod_{j\neq i}|L(1/2,f_{j}\otimes\pi)|^{2}|R(\pi)|^{2} \tag{1.8}\] in a reasonably sharp way. Then via the inequality (1.6) one can show the existence of \(\pi\in\Pi\) such that \(|L(1/2,f_{j}\otimes\pi)|\) are large simultaneously. Here, one uses the Cauchy-Schwarz inequality to get a lower bound for the mixed absolute second moment using (1.7) together with an estimate for \(\sum_{\pi\in\mathcal{F}}|R(\pi)|^{2}\). If upper bounds for (1.8) are not immediately accessible, then one can apply our adaption of Harper's methods (Section 6) to give conditional results. The situation for simultaneous small values is somewhat simpler. Here, we just require upper bounds for the sum \[\sum_{\pi\in\Pi}\sum_{j=1}^{m}|L(1/2,f_{j}\otimes\pi)|^{2}|R(\pi)|^{2} \tag{1.9}\] along with the simple fact that for non-negative \(a,b\), the inequality \(a+b\leqslant V\) implies \(a,b\leqslant V\). In both the large and small value cases, the resonator should be chosen to pick out extreme values of the full product of \(L\)-functions. We illustrate this principle in two families. As we have seen already in Theorem 2, it can be applied to give simultaneous large central values of \(L\)-functions of \(GL(2)\) cusp forms twisted by Dirichlet characters modulo \(q\) unconditionally, since the second moment theory has been well developed [5]. For simultaneous small values, we consider the family of holomorphic cusp forms twisted by quadratic characters \(\chi_{d}(\cdot)=(\frac{d}{\cdot})\). 
Here, the mixed first moment \[\sum_{0<d\leqslant X}L(1/2,f\otimes\chi_{d})L(1/2,g\otimes\chi_{d})\] is still unknown (though significant progress has been made recently by X. Li [33]). Consequently, the simultaneous non-vanishing of quadratic twists of cusp forms remains an open question. Nevertheless, we can show that there are infinitely many \(d\) such that \(L(1/2,f\otimes\chi_{d})\) and \(L(1/2,g\otimes\chi_{d})\) get very small simultaneously. **Theorem 4**.: _Let \(f,g\) be holomorphic cusp forms of weight \(\kappa\equiv 0\bmod 4\) for \(SL_{2}(\mathbb{Z})\) and let \(\chi_{d}(\cdot)=(\frac{d}{\cdot})\) be the Kronecker symbol. Then for large \(X\) there exists \(X\leqslant d\leqslant 2X\) and some \(c>0\) such that_ \[\max(L(1/2,f\otimes\chi_{d}),L(1/2,g\otimes\chi_{d}))\ll\exp\bigg{(}-c\sqrt{ \frac{\log X}{\log\log X}}\bigg{)}.\] We remark that this result is unconditional since we can avoid the absolute second moments in (1.9) (which are currently out of reach) and work directly with \(L(1/2,f\otimes\chi_{d})\) as we already have non-negativity: \(L(1/2,f\otimes\chi_{d})\geqslant 0\). This surprising fact is known unconditionally from the formula of Waldspurger [45] (see also [29]). In generic situations, one can take \(c=1/\sqrt{5}+o(1)\). There are several other possibilities for simultaneous extreme values in families of \(L\)-functions. Examples of significant arithmetic interest are given by the families \[\{L(1/2,f),L(1/2,f\otimes\chi_{D}):f\in\mathcal{F}\}\] where \(\chi_{D}\) is a fixed quadratic character and \(\mathcal{F}\) is either the family of Hecke eigencuspforms of even weight \(k\) for the full modular group with \(k\) tending to infinity, or the family of holomorphic newforms of fixed even weight \(k\) for the congruence subgroup \(\Gamma_{0}(N)\) with \(N\) tending to infinity. In both the weight and (squarefree) level aspects, the required second moment formulae can be computed using Petersson's formula (see [14, 23, 30, 43] for example). We also mention that for a large prime \(q\), Dirichlet characters \(\omega_{1},\omega_{2}\) modulo \(q\) and \(f\) a Hecke eigenform for \(SL_{2}(\mathbb{Z})\) (holomorphic or Maass), the simultaneous extreme values for the families \[\{L(1/2,\chi),L(1/2,\omega_{1}\chi),L(1/2,\omega_{2}\chi) :\chi\bmod q\}\] \[\{L(1/2,f\otimes\chi),L(1/2,\chi) :\chi\bmod q\}\] could be established using work of Zacharias [47] under GRH. We close this introduction with a few remarks on similarities with previous works and on the difficulties in extending our results to the strength of (1.2). We note that our proof utilises some control on both upper and lower bounds for \(L\)-functions. A similar idea has appeared in the recent work of Gun-Kohnen-Soundararajan [16] where they demonstrated large central values of linear combinations of \(L\)-functions by making one \(L\)-function large and at the same time keeping all other \(L\)-functions smaller. In contrast, we exhibit simultaneous extreme central values in families of \(L\)-functions where all of the \(L\)-functions attain large or small values. A very natural question is whether one can attain simultaneous values of the strength of Bondarenko-Seip [8] given in (1.2). A key source of their improvement was the use of a resonator with support in numbers that are much bigger than \(T\), although this results in considerable difficulties. 
First, to lower bound \[\int_{1}^{T}L_{1}(\tfrac{1}{2}+it)L_{2}(\tfrac{1}{2}+it)|R(t)|^{2}dt\] with such a resonator we can require that the coefficients of \(L_{1}(s)L_{2}(s)\) be positive after inserting a smooth weight with positive transform, as in [8]. This could be resolved with Dedekind zeta functions, for example. However, finding an upper bound for \[\int_{1}^{T}|L_{j}(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt\] when \(R\) is a long resonator seems a more substantial obstacle. In [8], to get an upper bound for \(\int|R(t)|^{2}dt\), the resonator was chosen to have well-spaced phases but unfortunately it is not clear how such an \(R\) interacts with \(|L_{j}(1/2+it)|^{2}\). If this issue could be overcome then one could deduce bounds of the form (1.2) for Dirichlet \(L\)-functions, or indeed any other \(L\)-function with negative coefficients provided they appear as a factor in a Dedekind zeta function. **Acknowledgments.** We would like to thank Edgar Assing and Peter Humphries for helpful discussions. We also thank Jesse Thorner for valuable remarks on a preliminary version of this paper. ## 2. Background on automorphic \(L\)-functions In this section we collect some basic facts about the class of \(L\)-functions used in our \(t\)-aspect results. These can be found in many places, see for example [39, 40]. Let \(L(s,\pi)\) be the \(L\)-function attached to an irreducible cuspidal automorphic representation \(\pi\) of \(\operatorname{GL}(d)\) over \(\mathbb{Q}\) normalised such that \(\pi\) has unitary central character. In the region \(\sigma>1\) we have \[L(s,\pi)=\sum_{n=1}^{\infty}\frac{A_{\pi}(n)}{n^{s}}=\prod_{p}\prod_{j=1}^{d} \left(1-\frac{\alpha_{\pi,j}(p)}{p^{s}}\right)^{-1} \tag{2.1}\] for some complex coefficients \(A_{\pi}(n)\) and \(\alpha_{j}(p)\). If \(d=1\) and \(\pi\) is the trivial representation then \(L(s,\pi)\) is given by the Riemann zeta function. Otherwise, it extends to an entire function satisfying the functional equation \[\Phi(s,\pi):=N^{s/2}\gamma(s,\pi)L(s,\pi)=\epsilon_{\pi}\overline{\Phi}(1-s,\pi)\] where \(N\in\mathbb{N}\), \(|\epsilon_{\pi}|=1\), \(\overline{\Phi}(s,\pi)=\overline{\Phi(\overline{s},\pi)}\) and \(\gamma(s,\pi)=\pi^{-ds/2}\prod_{j=1}^{d}\Gamma\Big{(}\frac{s+\mu_{\pi,j}}{2} \Big{)}\) for some complex numbers \(\mu_{\pi,j}\) satisfying \(\Re\mu_{\pi,j}>-1\). Applying Stirling's formula to \(\gamma(s,\pi)\) along with the Phragmen-Lindelof principle, we see that \[L(\sigma+it,\pi)\ll(N|t|^{d})^{(1-\sigma)/2+\epsilon} \tag{2.2}\] in the strip \(-\delta\leqslant\sigma\leqslant 1+\delta\) for large \(|t|\) (see [22] for example). In our conditional results we make key use of Euler products, so we collect some further practical bounds here. For \(\sigma>1\) on differentiating the Euler product we see \[-\frac{L^{\prime}}{L}(s,\pi)=\sum_{p^{\ell}\geqslant 2}\frac{\log p\sum_{j=1}^{ d}\alpha_{\pi,j}(p)^{\ell}}{p^{\ell s}}=:\sum_{n\geqslant 2}\frac{\Lambda(n)a_{ \pi}(n)}{n^{s}} \tag{2.3}\] where \(\Lambda(n)\) is the von-Mangoldt function. The generalised Ramanujan conjecture asserts that \(|\alpha_{\pi,j}(p)|=1\) for all but a finite number of primes and satisfies \(|\alpha_{\pi,j}(p)|\leqslant 1\) elsewhere, although this remains open in general. Rudnick-Sarnak [40] have shown that \[|\alpha_{\pi,j}(p)|\leqslant p^{1/2-1/(d^{2}+1)}\] for all primes \(p\). 
This bound implies that \[|a_{\pi}(p^{\ell})|=|\sum_{j=1}^{d}\alpha_{\pi,j}(p)^{\ell}|\leqslant dp^{\ell(1/2 -1/(d^{2}+1))} \tag{2.4}\] and, by (2.1), that \[|A_{\pi}(n)|\leqslant\tau_{d}(n)n^{1/2-1/(d^{2}+1)} \tag{2.5}\] where \(\tau_{d}\) is the generalised divisor function. Another useful bound is that when the degree satsifies \(d\leqslant 3\), \[|a_{j}(p^{\ell})|\ll 1+|a_{j}(p)|^{\ell},\qquad d\leqslant 3 \tag{2.6}\] for fixed prime powers \(\ell\). This is given in the proof of Proposition 2.4 of [40]. In many situations, the assumption of the generalised Ramanujan conjecture can be replaced by Hypothesis H introduced by Rudnick-Sarnak [40], which states that for fixed \(\ell\geqslant 2\), \[\sum_{p}\frac{|a_{\pi}(p^{\ell})|^{2}(\log p)^{2}}{p^{\ell}}<\infty. \tag{2.7}\] Clearly, this follows from the the generalised Ramanujan conjecture. Unconditionally, this was shown to hold for \(d\leqslant 3\) by Rudnick-Sarnak [40] using (2.6) and for \(d=4\) by Kim [27]. As a replacement for the generalised Ramanujan conjecture, Hypothesis H has previously been used in mean value results [39]. In our case it could be used to weaken the assumptions of Proposition 4 below, but at the cost of considerable extra technicalities. We settle for using (2.6), which restricts our unconditional results to \(d\leqslant 3\) rather than \(d\leqslant 4\), since in the end we require stronger coefficient bounds to deduce our final results. The following three results are the key average properties we require for \(a_{\pi}(p)\), all of which hold unconditionally when \(d\leqslant 2\). The first is the orthogonality conjecture of Selberg, as shown in full generality by Liu-Ye [36] building on previous works [34, 35, 37]. **Theorem** (Selberg's orthogonality conjectures).: _Let \(\pi\), \(\pi^{\prime}\) be two irreducible unitary cuspidal automorphic representations of \(\operatorname{GL}(d)\), \(\operatorname{GL}(d^{\prime})\) over \(\mathbb{Q}\), respectively. If (2.7) holds, then_ \[\sum_{p\leqslant x}\frac{a_{\pi}(p)\overline{a_{\pi^{\prime}}(p)}}{p}=\begin{cases} \log\log x+O(1)&\text{if }\ \pi\cong\pi^{\prime},\\ O(1)&\text{otherwise.}\end{cases} \tag{2.8}\] _In particular, (2.8) holds if \(\max(d,d^{\prime})\leqslant 4\) or on assuming the generalised Ramanujan conjecture._ This essentially implies the following bounds for our resonator sums. **Proposition 1**.: _Let \(\pi\), \(\pi^{\prime}\) be two irreducible unitary cuspidal automorphic representations of \(\operatorname{GL}(d)\), \(\operatorname{GL}(d^{\prime})\) over \(\mathbb{Q}\), respectively. 
If (2.7) holds then for large \(x,y\),_ \[\sum_{x<p\leqslant y}\frac{a_{\pi}(p)\overline{a_{\pi^{\prime}}(p)}}{p\log p}= \begin{cases}\frac{1}{\log x}-\frac{1}{\log y}+O\Big{(}\frac{1}{(\log x)^{2}} \Big{)}&\text{if}\ \ \pi\cong\pi^{\prime},\\ O\Big{(}\frac{1}{(\log x)^{2}}\Big{)}&\text{otherwise}.\end{cases} \tag{2.9}\] _In particular, this holds if \(\max(d,d^{\prime})\leqslant 4\) or on assuming the generalised Ramanujan conjecture._ Proof.: The sum is given by \[\sum_{x<p\leqslant y}\frac{a_{\pi}(p)\overline{a_{\pi^{\prime}}(p)}}{p\log p} =\sum_{x<n\leqslant y}\frac{\Lambda(n)a_{\pi}(n)\overline{a_{\pi^{\prime}}(n)} }{n(\log n)^{2}}-\sum_{x<p^{\ell}\leqslant y,\ell\geqslant 2}\frac{a_{\pi}(p^{ \ell})\overline{a_{\pi^{\prime}}(p^{\ell})}}{\ell^{2}p^{\ell}\log p}.\] To estimate the second sum we note that the contribution from \(\ell\geqslant d^{2}+1\) can be bounded using (2.5) by \[\ll\frac{1}{(\log x)^{2}}\sum_{p}\sum_{\ell\geqslant d^{2}+1}\frac{\log p}{p ^{2\ell/d^{2}+1}}\ll\frac{1}{(\log x)^{2}}.\] For \(\ell\leqslant d^{2}+1\) we use Cauchy-Schwarz and (2.7) to obtain \[\ll\frac{1}{(\log x)^{2}}\sum_{\begin{subarray}{c}x<p^{\ell}\leqslant y\\ 2\leqslant\ell\leqslant d^{2}+1\end{subarray}}\frac{\log p|a_{\pi}(p^{\ell}) \overline{a_{\pi^{\prime}}(p^{\ell})|}}{p^{\ell}}\ll\frac{1}{(\log x)^{2}}.\] To compute the first sum we apply partial summation along with the bounds \[S(x):=\sum_{n\leqslant x}\frac{(\log n)\Lambda(n)a_{\pi}(n)\overline{a_{\pi^{ \prime}}(n)}}{n}=\begin{cases}\frac{1}{2}(\log x)^{2}+O(\log x)&\text{if}\ \ \pi\cong\pi^{\prime},\\ O(\log x)&\text{otherwise}.\end{cases}\] of [36]. We shall use one more type of bound, which can be used to avoid the assumption of the generalised Ramanujan conjecture in some cases. **Theorem** (Fourth moment bounds).: _Suppose \(d\leqslant 2\) or that \(d=3\) and \(\pi\) is self-dual. Then_ \[\sum_{p\leqslant x}\frac{|a_{\pi}(p)|^{4}}{p}\ll\log\log x. \tag{2.10}\] Proof.: For \(d=1\) the result is clear. For \(d=2\) this follows from the fact that \[a_{\pi}(p)^{4}=2+a_{\operatorname{Sym}^{2}\pi}(p)+a_{\operatorname{Sym}^{4} \pi}(p),\] (see the proof of Corollary 2.15 of [5] for example) along with the bounds \[\sum_{p\leqslant x}a_{\operatorname{Sym}^{k}\pi}(p)/p\ll\log\log x,\ k=2,4.\] When \(d=3\) and \(\pi\) is self-dual it is known (see Section 3.2 of [26]) that \(L(s,\pi\times\pi\times\pi\times\pi)\) has a pole of order \(3\) at \(s=1\). The result in this case therefore follows by Tauberian theorems. ## 3. Simultaneous large values in \(t\)-aspect: set-up and proofs of Theorems 1 and 3 In this section we give the set-up for proving simultaneous large values in the \(t\)-aspect, state the required moment bounds and then complete the proofs of Theorems 1 and 3. Let \(\pi_{i}\), \(1\leqslant i\leqslant m\) be irreducible unitary cuspidal automorphic representations of \(\operatorname{GL}(d_{i})\) over \(\mathbb{Q}\) such that \(\pi_{i}\not\cong\pi_{j}\) for \(1\leq i\neq j\leq m\), respectively, and let \[L_{i}(s)=L(s,\pi_{i})=\sum_{n=1}^{\infty}\frac{A_{\pi_{i}}(n)}{n^{s}}\] be the associated \(L\)-functions. For brevity we denote \(a_{i}(p)=a_{\pi_{i}}(p)=A_{\pi_{i}}(p)\). To pick out simultaneous values we recall that if there exists a \(t\) such that \[\prod_{i=1}^{m}|L_{i}(1/2+it)|^{2}-V\sum_{1\leqslant i\leqslant m}\prod_{ \begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{m}|L_{j}(1/2+it)|^{2}>0,\] then we must have \(|L_{i}(1/2+it)|^{2}>V\) for all \(1\leqslant i\leqslant m\). 
Write \[L(s)=\prod_{i=1}^{m}L_{i}(s)=\sum_{n\geqslant 1}a(n)n^{-s}\] so that \[a(p)=\sum_{i=1}^{m}A_{\pi_{i}}(p)=\sum_{i=1}^{m}a_{i}(p).\] We choose our resonator to pick out large values of \(L(s)\). Let \(X=T^{\Delta}\) for some \(\Delta<1\) to be chosen and denote \[\mathcal{L}=\sqrt{\frac{1}{m}\log X\log\log X}.\] For small \(\epsilon>0\) let \[\mathcal{P}=\bigg{\{}\mathcal{L}^{2}<p\leqslant\exp((\log\mathcal{L})^{2}):| a_{i}(p)|\leqslant(\log p)^{1-\epsilon}\text{ for all }1\leqslant i\leqslant m\bigg{\}}.\] We then define \(r(n)\) to be the multiplicative function supported on squarefree numbers for which \[r(p)=\begin{cases}a(p)\frac{\mathcal{L}}{\sqrt{p}\log p},&\text{ for }p\in \mathcal{P},\\ 0,&\text{ otherwise}.\end{cases}\] and let \[R(t)=\sum_{n\leqslant X}r(n)n^{-it}. \tag{3.1}\] By construction of \(\mathcal{P}\) we note the important bounds \[r(p)a_{i}(p)/p^{1/2},|r(p)|^{2}=o(1) \tag{3.2}\] for any \(1\leqslant i\leqslant m\) and \(p\in\mathcal{P}\). With these bounds the computations for the Euler products acquired from the resonance method can proceed in the usual simple way. This is the reason for the restriction \(|a_{i}(p)|\leqslant(\log p)^{1-\epsilon}\) in \(\mathcal{P}\); without it such computations are much more involved e.g. see [5]. With this set-up and the above notation we have the following propositions. **Proposition 2**.: _For large \(T\) and \(X=T^{\Delta}\) with \(\Delta<1\), we have_ \[\frac{1}{T}\int_{T}^{2T}|L(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt\gg\prod_{p}\bigg{(} 1+|r(p)|^{2}+2(1+o(1))\frac{r(p)a(p)}{\sqrt{p}}\bigg{)}.\] **Proposition 3**.: _Let \(L_{i}(s)\) be a primitive Dirichlet \(L\)-function or the \(L\)-function of a (holomorphic or MaaS) cuspidal newform. Then for large \(T\) and \(X=T^{\Delta}\) with \(\Delta\) sufficiently small,_ \[\frac{1}{T}\int_{T}^{2T}|L_{i}(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt\ll\prod_{p} \bigg{(}1+|r(p)|^{2}+2(1+o(1))\frac{\Re(r(p)\overline{a_{i}(p)})}{\sqrt{p}} \bigg{)}.\] _If \(L_{i}\) is a Dirichlet \(L\)-function, one can take \(\Delta<\frac{17}{33}\) and if \(L_{i}\) is a \(GL(2)\)\(L\)-function, we can take \(\Delta<\frac{1/2-\theta}{3+\theta}\) where \(\theta\) is the bounds towards the Ramanujan conjecture for \(GL(2)\) Maass Forms._ **Proposition 4**.: _Let \(\pi_{i}\) be irreducible unitary cuspidal automorphic representations of \(GL(d_{i})\) over \(\mathbb{Q}\) such that \(\pi_{i}\not\cong\pi_{j}\) for \(1\leq i\neq j\leq m\). Assume GRH for each \(L_{j}(s)=L(s,\pi_{j}),j=1,\ldots,m\) and if \(d_{j}\geqslant 4\) assume the generalised Ramanujan conjecture for \(L_{j}(s)\). Then for large \(T\) and \(X=T^{\Delta}\) with \(\Delta<1/2\), we have_ \[\frac{1}{T}\int_{T}^{2T}\prod_{\begin{subarray}{c}1\leqslant j \leqslant m\\ j\neq i\end{subarray}}|L_{j}(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt\] \[\ll \exp\bigg{(}\sqrt{\frac{\log T}{\log_{2}T\log_{3}T}}\bigg{)}\prod _{p}\bigg{(}1+|r(p)|^{2}+2(1+o(1))\frac{\Re(r(p)\overline{b_{i}(p)})}{\sqrt{p} }\bigg{)}\] _where_ \[b_{i}(p)=\sum_{j\neq i}a_{j}(p)=a(p)-a_{i}(p).\] **Remark**.: The assumption of the generalised Ramanujan conjecture for \(d_{j}\geqslant 4\) arises from Lemma 9 below. The remainder of the proof of Proposition 4 only requires the pointwise bounds on the coefficients given in (2.4) and (2.6). The generalised Ramanujan conjecture could be replaced with Hypothesis H of Rudnick-Sarnak [40], inequality (2.7), but at the cost of a more technical proof. With these propositions in hand we can complete the proofs of Theorems 1 and 3. 
Proof of Theorems 1 and 3.: Define \(V\) as \[V=\frac{\int_{T}^{2T}|L(\frac{1}{2}+it)|^{2}|R(t)|^{2}dt}{2\sum_{i=1}^{m}\int_{T} ^{2T}\prod_{j\neq i}|L_{j}(\frac{1}{2}+it)|^{2}|R(t)|^{2}dt}.\] Then there exists \(t\in[T,2T]\) satisfying (1.6) so that \[|L_{j}(\frac{1}{2}+it)|^{2}>V\text{ for all }1\leqslant j\leqslant m.\] It remains to give a lower bound for \(V\). From Propositions 2-4 and (3.2), we have that \[V\gg\exp\Big{(}-c\sqrt{\frac{\log T}{\log_{2}T\log_{3}T}}\Big{)}\exp\bigg{(}2 (1+o(1))\min_{1\leqslant i\leqslant m}\sum_{p\in\mathcal{P}}\frac{\mathcal{L}(| a_{i}(p)|^{2}+\Re\overline{a_{i}(p)}\sum_{j\neq i}a_{j}(p))}{p\log p}\bigg{)}.\] Now let us remove the restriction on the size of the \(|a_{i}(p)|\). For \(1\leq i\neq j\leqslant m\), \[\mathcal{L}\sum_{\begin{subarray}{c}\mathcal{L}^{2}<p\leqslant\exp((\log \mathcal{L})^{2})\\ |a_{k}(p)|>(\log p)^{1-\epsilon}\text{for some }k\end{subarray}}\frac{a_{i}(p) \overline{a_{j}(p)}}{p\log p}\ll\frac{\mathcal{L}}{(\log\mathcal{L})^{2- \epsilon}}\sum_{\mathcal{L}^{2}<p\leqslant\exp((\log\mathcal{L})^{2})}\frac{| a_{i}(p)a_{j}(p)a_{k}(p)|}{p}\] which by Holder's inequality and (2.10) or the generalised Ramanujan conjecture is \[\ll\frac{\mathcal{L}}{(\log\mathcal{L})^{2-\epsilon}}\log\log\mathcal{L}=o \Big{(}\frac{\mathcal{L}}{\log\mathcal{L}}\Big{)}.\] Thus we can extend the sum over \(p\in\mathcal{P}\) to all \(\mathcal{L}^{2}<p\leqslant\exp((\log\mathcal{L})^{2})\) with an acceptable error. Applying Proposition 1 gives \[V\gg\exp\bigg{(}2(1+o(1))\sqrt{\frac{\frac{1}{m}\log X}{\log\log X}}\bigg{)}.\] ## 4. Lower bounds in \(t\)-aspect: Proof of Proposition 2 Let \[\mathcal{I}:=\frac{1}{T}\int_{T}^{2T}|L(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt.\] We aim to prove the lower bound \[\mathcal{I}\gg\prod_{p}\bigg{(}1+|r(p)|^{2}+2(1+o(1))\frac{\overline{r(p)}a(p )}{\sqrt{p}}\bigg{)}.\] Let \(\Phi\) be a smooth function supported on \([1,2]\) satisfying \(0\leqslant\Phi(x)\leqslant 1\) so that \[\mathcal{I}\geqslant\frac{1}{T}\int_{\mathbb{R}}|L(\tfrac{1}{2}+it)|^{2}|R(t)| ^{2}\Phi(t/T)dt.\] Thus, by Cauchy-Schwarz \[\mathcal{I}\geqslant\left|\frac{1}{T}\int_{\mathbb{R}}L(\tfrac{1}{2}+it)|R(t)|^{2 }\Phi(t/T)dt\right|^{2}/\frac{1}{T}\int_{\mathbb{R}}|R(t)|^{2}\Phi(t/T)dt. \tag{4.1}\] As usual, since \(X=T^{\Delta}\) with \(\Delta<1\), we find \[\frac{1}{T}\int_{\mathbb{R}}|R(t)|^{2}\Phi(t/T)dt\sim\hat{\Phi}(0)\sum_{n \leqslant X}|r(n)|^{2}\leqslant\hat{\Phi}(0)\prod_{p}(1+|r(p)|^{2}) \tag{4.2}\] and so it remains to compute \[\frac{1}{T}\int_{\mathbb{R}}L(\tfrac{1}{2}+it)|R(t)|^{2}\Phi(t/T)dt.\] This has essentially been done by Aistleitner-Pankowski [1], and so we only present the main details. The only difference is that they worked with the Selberg class where the generalised Ramanujan conjecture is assumed, although this has little effect on the arguments. Applying the Mellin inversion formula \[e^{-x}=\frac{1}{2\pi i}\int_{(c)}\Gamma(s)x^{-s}ds,\qquad c>0\] and shifting contours to the left along with the convexity bounds (2.2), we find \[L(\tfrac{1}{2}+it)=\sum_{n=1}^{\infty}\frac{a(n)}{n^{1/2+it}}e^{-n/Y}+O(1)\] where \(Y=T^{d_{L}+\epsilon}\) and \(d_{L}=\sum_{i=1}^{m}d_{i}\) is the degree of \(L(s)\). For \(n>3Y\log Y\) we have \(e^{-n/Y}\leqslant n^{-2}\) and hence \[\sum_{n\geqslant 3Y\log Y}\frac{|a(n)|e^{-n/Y}}{n^{1/2}}\ll\sum_{n\geqslant 1} \frac{\tau_{d_{L}}(n)}{n^{2+1/(d^{2}+1)}}\ll 1\] where we have used the coefficient bound for \(a(n)\) in (2.5). 
Thus we find \[L(\tfrac{1}{2}+it)=\sum_{n\leqslant T^{d_{L}+2\epsilon}}\frac{a(n)}{n^{1/2+it} }e^{-n/Y}+O(1).\] From the rapid decay of \(\hat{\Phi}\) and (4.2) we now have \[\frac{1}{T}\int_{\mathbb{R}}L(\tfrac{1}{2}+it)|R(t)|^{2}\Phi(t/T)dt=\hat{\Phi} (0)\sum_{lm=n\leqslant X}\frac{a(l)r(m)\overline{r(n)}}{\sqrt{l}}e^{-l/Y}+O \big{(}\prod_{p}(1+|r(p)|^{2})\big{)}. \tag{4.3}\] Since \(r(n)\) is supported on squarefree integers and \(r(p)=a(p)\mathcal{L}/p^{1/2}\log p\) where it is non-zero, the summand of the main term is positive and hence we can bound it from below by \[\frac{1}{2}\hat{\Phi}(0)\sum_{lm\leqslant X}\frac{a(l)r(m)\overline{r(lm)}}{\sqrt{ l}} \tag{4.4}\] since \(e^{-l/Y}\geqslant 1/2\) for \(l\leqslant X\). Next we extend the sum \(lm\leq X\) to all \(l,m\). By Rankin's trick \[\begin{split}&\frac{\sum_{lm>X}a(l)r(m)\overline{r(lm)}/\sqrt{l}}{ \sum_{l,m}a(l)r(m)\overline{r(lm)}/\sqrt{l}}\ll X^{-\alpha}\prod_{p}\frac{1+|r( p)|^{2}p^{\alpha}+|r(p)a(p)|p^{-1/2+\alpha}}{|1+|r(p)|^{2}+a(p)\overline{r(p)}p^{-1/2}| }\\ \ll&\exp\bigg{(}-\alpha\log X+\sum_{L^{2}<p\leqslant \exp((\log L)^{2})}|a(p)|^{2}\Big{(}\frac{\mathcal{L}^{2}}{p\log^{2}p}+\frac{ \mathcal{L}}{p\log p}\Big{)}(p^{\alpha}-1)\bigg{)}\end{split} \tag{4.5}\] for any \(\alpha>0\). Applying this along with the bound (2.9) and choosing \(\alpha=1/(\log\mathcal{L})^{3}\), we find that (4.5) can be bounded by \[\ll\exp\Big{(}-\alpha\frac{\log X\log_{3}X}{\log_{2}X}+O(\alpha\frac{ \mathcal{L}}{(\log\mathcal{L})^{2}}+\alpha^{2}\mathcal{L}^{2}\log\log \mathcal{L})\Big{)}\ll\exp\Big{(}-\frac{\log X}{(\log_{2}X)^{3}}\Big{)}. \tag{4.6}\] Combining (4.3), (4.4), (4.5) and (4.6), we find that \[\frac{1}{T}\int_{\mathbb{R}}L(\tfrac{1}{2}+it)|R(t)|^{2}\Phi(t/T)dt\gg\prod_{ p}\bigg{(}1+|r(p)|^{2}+\frac{a(p)\overline{r(p)}}{\sqrt{p}}\bigg{)}.\] Applying this in (4.1) together with (3.2) we find that, \[\mathcal{I}\gg\prod_{p}\frac{\Big{(}1+|r(p)|^{2}+\frac{a(p)\overline{r(p)}}{ p^{1/2}}\Big{)}^{2}}{1+|r(p)|^{2}}\gg\prod_{p}\bigg{(}1+|r(p)|^{2}+2(1+o(1)) \frac{a(p)\overline{r(p)}}{\sqrt{p}}\bigg{)}\] as desired. ## 5. Unconditional upper bounds in \(t\)-aspect: Proof of Proposition 3 In this section we give the proof of Proposition 3 depending on whether \(L_{i}\) is a Dirichlet \(L\)-function or a \(GL(2)\)\(L\)-function. For this we utilise the requisite twisted moment formulas which are known in these cases. ### Dirichlet \(L\)-functions In the case when \(L_{i}(s)=L(s,\chi)\) where \(\chi\) is a primitive Dirichlet character modulo \(q\) we have the following. **Lemma 5**.: _Let \(\chi\) be a primitive Dirichlet character modulo \(q\). Let \(R(t)=\sum_{n\leq X}r(n)n^{-it}\) be as in \((\ref{eq:2.1})\). Let \(\alpha,\beta\) be complex numbers such that \(\alpha,\beta\ll 1/T\). Then for \(X=T^{\Delta}\) with \(\Delta<\frac{17}{33}\) there exists \(\epsilon_{\Delta}>0\) such that_ \[\int_{T}^{2T}L(\tfrac{1}{2}+\alpha+it,\chi)L(\tfrac{1}{2}+\beta-it,\bar{\chi})|R (t)|^{2}dt+O(T^{1-\epsilon_{\Delta}})\] \[=\!\!\!\sum_{(hk,q)=1}\frac{(h,k)^{1+\alpha+\beta}r(h)\overline{\chi(h)}r(k) \chi(k)}{h^{1/2+\beta}k^{1/2+\alpha}}\Big{(}L(1+\alpha+\beta,\chi_{0})+\Big{(} \frac{qt(h,k)^{2}}{2\pi hk}\Big{)}^{-\alpha-\beta}L(1-\alpha-\beta,\chi_{0}) \Big{)},\] _where \(\chi_{0}\) is the principal Dirichlet character modulo \(q\)._ Proof.: The proof is similar to that [46, Theorem 1.1], but certain modifications are needed. 
First note that the condition \(a(h)\ll h^{\epsilon}\) in the assumption [46, Theorem 1.1] does not hold in our case, however, we can modify the proof so that the conclusion still holds. More specifically, we still have [46, eq (5.28)] since \(\sum_{u\sim U}\frac{\sqrt{u}r(u)}{u}\ll U\prod_{p}(1+r(p)\sqrt{p})\ll U\exp \Big{(}O\Big{(}\frac{\mathcal{L}}{\log\mathcal{L}^{2}}\Big{)}\Big{)}\ll UT^{\epsilon}\) and this causes an extra factor of \(T^{\epsilon}\) in [46, eq (5.29)] which is acceptable. In the estimate [46, eq (5.37)], we use \(\sum_{u\sim U}|\frac{|\sqrt{u}r(u)|}{u}|^{2}\ll\prod_{p}\Big{(}1+\frac{r(p)^{ 2}}{p}\Big{)}\ll\exp\Big{(}O\Big{(}\frac{\mathcal{L}^{2}}{(\log\mathcal{L}^{2} )^{2}}\Big{)}\Big{)}\ll T^{\epsilon}\), which is again acceptable and leads to [46, eq (5.38)]. The main term can be derived the same way in the proof of [46, Theorem 1.1] using [46, Proposition 3.1] (after correcting the typo in the exponent of \((h,k)\)) or [11, Lemma 1 and Section 5]. Let \(X=T^{\Delta}\) with \(\Delta<\frac{17}{33}\). We write \[\int_{T}^{2T}|L(\tfrac{1}{2}+it,\chi)|^{2}|R(t)|^{2}dt=\lim_{\alpha,\beta\to 0 }\int_{T}^{2T}L(\tfrac{1}{2}+\alpha+it,\chi)L(\tfrac{1}{2}+\beta-it,\overline{ \chi})|R(t)|^{2}dt.\] By a residue calculation, we see that \[L(1+\alpha+\beta,\chi_{0})+\left(\frac{qt}{2\pi HK}\right)^{-\alpha-\beta}L( 1-\alpha-\beta,\chi_{0})\] \[=-\frac{\big{(}\frac{qt}{2\pi HK}\big{)}^{-(\alpha+\beta)/2}}{(2\pi i)^{2}} \int_{|z_{j}|=\frac{2^{j}}{\log T}}L(1+z_{1}-z_{2},\chi_{0})(z_{1}-z_{2})^{2} \bigg{(}\frac{qt}{2\pi HK}\bigg{)}^{\frac{z_{1}-z_{2}}{2}}\prod_{j=1}^{2} \frac{dz_{j}}{(z_{j}-\alpha)(z_{j}+\beta)}.\] Applying this in Lemma 5 with \(HK=hk/(h,k)^{2}\), we obtain \[\int_{T}^{2T}|L(\tfrac{1}{2}+it,\chi)|^{2}|R(t)|^{2}dt+O(T^{1-\epsilon_{\Delta }}) \tag{5.1}\] \[=\frac{1}{(2\pi i)^{2}}\int_{|z_{j}|=\frac{2^{j}}{\log T}}L(1+z_{1}+z_{2},\chi _{0})G_{X}(z_{1},z_{2})(z_{1}+z_{2})^{2}\bigg{(}\int_{T}^{2T}\left(\frac{qt}{ 2\pi}\right)^{\frac{z_{1}+z_{2}}{2}}\!\!\!\!\!dt\bigg{)}\prod_{j=1}^{2}\frac{ dz_{j}}{z_{j}^{2}},\] where \[G_{X}(z_{1},z_{2})=\sum_{h,k\leqslant X}\frac{\overline{\chi}(h/(h,k))\chi(k/(h,k ))r(h)\overline{r(k)}}{(hk)^{1/2+(z_{1}+z_{2})/2}}(h,k)^{1+z_{1}+z_{2}}.\] By Rankin's trick we see that \[G_{X}(\underline{z})=N_{X}(\underline{z})+O(\mathcal{E}_{X}(\underline{z})) \tag{5.2}\] where \[N_{X}(\underline{z})=\sum_{h,k}\frac{\overline{\chi}(h/(h,k))\chi(k/(h,k))r(h )\overline{r(k)}}{(hk)^{(1+z_{1}+z_{2})/2}}(h,k)^{1+z_{1}+z_{2}}=\prod_{p} \left(1+|r(p)|^{2}+2\frac{\Re r(p)\overline{\chi}(p)}{p^{(1+z_{1}+z_{2})/2}}\right)\] and \[\mathcal{E}_{X}(\underline{z})=X^{-\alpha}\prod_{p}\left(1+|r(p)|^{2}p^{ \alpha}+\frac{|r(p)|}{p^{(1+\Re z_{1}+\Re z_{2})/2}}(p^{\alpha}+1)\right)\] for any \(\alpha>0\). Using the approximation (5.2) in (5.1) and calculating the integral of the main term via residues at \(z_{j}=0\), we find that the leading term in (5.1) is of size \[T(\log T)N_{X}(\underline{0})\] with the lower order terms involving partial derivatives of \(N_{X}(\underline{z})\). To estimate these we note \[\begin{split}\frac{\partial}{\partial z_{1}}N_{X}(\underline{z}) \bigg{|}_{\underline{z}=\underline{0}}\ll& N_{X}(\underline{0})\sum_{p}\frac{|r(p)|\log p /p^{1/2}}{|1+|r(p)|^{2}+2\Re\frac{r(p)\overline{\chi}(p)}{p^{1/2}}|}\\ \ll& N_{X}(\underline{0})\sum_{\mathcal{L}^{2}<p \leqslant\exp((\log\mathcal{L})^{2})}\frac{\mathcal{L}|a(p)|}{p}\ll N_{X}( \underline{0})(\log X)^{1/2+\epsilon}\end{split} \tag{5.3}\] by Cauchy-Schwarz, (3.2) and (2.8). 
Note that the integrand of (5.1) is \(\ll(\log T)^{3}\) and so trivial estimation of the contribution from \(\mathcal{E}_{X}(\underline{z})\) to this integral gives \[\mathcal{J}_{i}\leqslant C(\log T)N_{X}(\underline{0})+O((\log T)\mathcal{E}( X))+O(T^{-\epsilon_{\Delta}}),\] where \(\mathcal{E}(X)=\mathcal{E}_{X}(-2/\log T,-4/\log T)\). Now \[\begin{split}\frac{\mathcal{E}(X)}{N_{X}(\underline{0})}\ll& \exp\Big{(}-a\log X+\sum_{p}|r(p)|^{2}(p^{\alpha}-1)+O(\sum_{p}|r(p)|p^{-1/2}) \Big{)}\\ \leqslant&\exp\Big{(}-a\log X+\sum_{\mathcal{L}^{2} <p\leqslant\exp((\log\mathcal{L})^{2})}(p^{\alpha}-1)\frac{\mathcal{L}^{2}|a(p )|^{2}}{p\log^{2}p}+O(\frac{\mathcal{L}}{\log\mathcal{L}})\Big{)}\end{split}\] which, similarly to (4.5), is \(o(1)\) on choosing \(\alpha=1/(\log\mathcal{L})^{3}\). Thus, we find that when \(L_{i}(s)=L(s,\chi)\), \[\mathcal{J}_{i}\ll(\log T)N_{X}(\underline{0})\ll\log T\prod_{p}\bigg{(}1+|r( p)|^{2}+2\Re\frac{r(p)\overline{a_{i}}(p)}{p^{1/2}}\bigg{)},\] which completes the proof of Proposition 3 for the case for Dirichlet \(L\)-functions after noting that the factor of \(\log T\) can be absorbed into the \(o(1)\) term in the product. ### \(Gl(2)\)\(L\)-functions Here we are in the case where \(L_{i}=L(s,f)\) is the \(L\)-function of a primitive cusp form \(f\). Let \(\Phi:\mathbb{R}\to\mathbb{R}\) be a smooth function supported on \([1/4,2]\) satisfying \(\Phi(x)\geqslant 1\) on \(x\in[1,2]\) along with the bounds \(\Phi^{(j)}(x)\ll(\log T)^{j}\) for each \(j\geq 0\). **Lemma 6**.: _Let \(L(s,f)=\sum_{n\geqslant 1}\lambda_{f}(n)n^{-s}\) be the \(L\)-function of a Hecke newform (holomorphic or Maass) of level \(N\). Let \(\alpha,\beta\) be complex numbers satisfying \(\alpha,\beta\ll 1/\log T\). Let \((h,k)=1\) and \(\Phi\) be as above. Then we have_ \[\int_{\mathbb{R}}L(\tfrac{1}{2}+\alpha+it,f)L(\tfrac{1}{2}+\beta -it,f)(h/k)^{-it}\Phi\big{(}\frac{t}{T}\big{)}dt\] \[= \frac{1}{h^{1/2+\beta}k^{1/2+\alpha}}\int_{\mathbb{R}}\Big{(}L^{ *}(1+\alpha+\beta,f\otimes f)Z_{\alpha,\beta}(h,k)\] \[+\bigg{(}\frac{t\sqrt{N}}{2\pi\sqrt{hk}}\bigg{)}^{-2(\alpha+ \beta)}L^{*}(1-\alpha-\beta,f\otimes f)Z_{-\beta,-\alpha}(h,k)\Big{)}\Phi\big{(} \frac{t}{T}\big{)}dt+O((hk)^{1/2+\epsilon}T^{1/2+\theta+\epsilon})\] _where \(L^{*}(s,f\otimes f)=\sum_{n}\lambda_{f}(n)^{2}n^{-s}\) and_ \[Z_{\alpha,\beta}(h,k)=\prod_{p|hk}(1-p^{-2(1+\alpha+\beta)})^{-1}\Big{(} \lambda_{f}(p)-\frac{\lambda_{f}(p)}{p^{1+\alpha+\beta}}\Big{)}. \tag{5.4}\] Proof.: This follows from [2], which improves earlier results for the holomorphic case in [3, 31]. 
Using [2, Proposition 3.4], we have for \(|\alpha+\beta|\gg 1/\log T\) \[\int_{\mathbb{R}}L(\tfrac{1}{2}+\alpha+it,f)L(\tfrac{1}{2}+\beta- it,f)(h/k)^{-it}\Phi(t/T)dt+O((hk)^{1/2}T^{1/2+\theta+\epsilon})\] \[= \sum_{hm=kn}\frac{\lambda_{f}(m)\lambda_{f}(n)}{m^{1/2+\alpha}n^{ 1/2+\beta}}\int_{\mathbb{R}}V_{\alpha,\beta}(mn,t)\Phi(t/T)dt\] \[+\sum_{hm=kn}\frac{\lambda_{f}(m)\lambda_{f}(n)}{m^{1/2-\beta}n^{ 1/2-\alpha}}\int_{\mathbb{R}}X_{\alpha,\beta,t}V_{-\beta,-\alpha}(mn,t)\Phi(t/ T)dt\] where \[V_{\alpha,\beta}(x)=\frac{1}{2\pi i}\int_{1-i\infty}^{1+i\infty}\frac{G(s)}{s }x^{-s}g_{\alpha,\beta}(s,t)ds\] with \[G(s)=e^{s^{2}}\frac{(\alpha+\beta)^{2}-(2s)^{2}}{(\alpha+\beta)^{2}},\] and where \(g_{\alpha,\beta}(s,t)\) and \(X_{\alpha,\beta,t}\) are ratios of gamma factors satisfying \[g_{\alpha,\beta}(s,t)=\bigg{(}\frac{t\sqrt{N}}{2\pi}\bigg{)}^{2s}\Big{(}1+O \big{(}\frac{|s|^{2}}{t}\big{)}\Big{)},\quad X_{\alpha,\beta,t}=\bigg{(}\frac{ t\sqrt{N}}{2\pi}\bigg{)}^{-2(\alpha+\beta)}\Big{(}1+O\big{(}\frac{|\alpha^{2}- \beta^{2}|}{t}\big{)}\Big{)} \tag{5.5}\] (see e.g. Lemma 2 of [3]). Using the definition of \(V_{\alpha,\beta}(x)\) and moving the \(m,n\)-sum inside, we encounter the Dirichlet series \[\sum_{hm=kn}\frac{\lambda_{f}(m)\lambda_{f}(n)}{m^{1/2+\alpha+s}n^{1/2+\beta+s}}= \frac{1}{k^{1/2+\alpha+s}h^{1/2+\beta+s}}\sum_{l\geqslant 1}\frac{ \lambda_{f}(kl)\lambda_{f}(hl)}{l^{1+\alpha+\beta+2s}}\] since \((h,k)=1\). Using multiplicativity and Hecke relations (see e.g. [5, Proof of Lemma 7.9]), we see that \[D(s;h,k):=\sum_{l\geq 1}\frac{\lambda_{f}(kl)\lambda_{f}(hk)}{l^{s}}=L^{*}(s, f\otimes f)\prod_{p|hk}(1-p^{-2s})^{-1}\Big{(}\lambda_{f}(p)-\frac{\lambda_{f}(p)}{p ^{s}}\Big{)}\] where \(L^{*}(s,f\otimes f)=\sum_{n\geq 1}\lambda_{f}(n)^{2}n^{-s}\). Shifting the contour to \(\Re(s)=-1/4+\epsilon\) we encounter a simple pole at \(s=0\) which gives the main term. The contribution from the remaining contour is seen to be \(\ll T^{1/2}(hk)^{-1/4+\theta+\epsilon}\) by (5.5), the rapid decay of \(G(s)\), the convexity bound \(L^{*}(1/2+\epsilon+iy,f\otimes f)\ll(1+|y|)^{1+\epsilon}\), and the bound \[\prod_{p|hk}\Big{(}\lambda_{f}(p)+O(\frac{\lambda_{f}(p)}{p^{1/2+2\epsilon}}) \Big{)}\ll(hk)^{\theta+\epsilon}.\] By analytic continuation, the result hold for \(\alpha,\beta\ll 1/\log T\). To complete the proof of Proposition 3 for the case of \(GL(2)\)\(L\)-functions, we follow the same argument as before in the case for Dirichlet \(L\)-functions after replacing Lemma 5 by Lemma 6 and \(G_{X}(z_{1},z_{2})\) by \[H_{X}(z_{1},z_{2})=\sum_{h,k\leqslant X}\frac{r(h)\overline{r(k)}Z_{z_{1},z_{2 }}(h/(h,k),k/(h,k))}{(hk)^{(1+z_{1}+z_{2})/2}}(h,k)^{1+z_{1}+z_{2}}\] where \(Z_{z_{1},z_{2}}\) is defined in (5.4). Note that we have \[Z_{\underline{0}}(h,k)=\prod_{p|hk}(1-p^{-2})^{-1}\Big{(}\lambda_{f}(p)-\frac {\lambda_{f}(p)}{p}\Big{)}\] and that \(\lambda_{f}(p)(1-1/p)=a_{i}(p)(1+o(1))\) for large \(p\). Thus, the main contribution to \(\mathcal{J}_{i}\) in this case, aside from some factors of \(\log T\) which can be absorbed into the \(o(1)\), is \[\sum_{h,k}\frac{r(h)\overline{r(k)}Z_{\underline{0}}(h/(h,k),k/(h,k))}{(hk)^{ 1/2}}(h,k)=\prod_{p}\bigg{(}1+|r(p)|^{2}+2(1+o(1))\Re\frac{r(p)\overline{a_{i }(p)}}{p^{1/2}}\bigg{)}\] as required. Again, the lower order terms coming from partial derivatives can be bounded similarly to (5.3) using Proposition 1 whilst the error from the tail sums \(h>X\), \(k>X\) are also of a lower order by similar arguments to before (again using Proposition 1). ## 6. 
Conditional upper bounds in \(t\)-aspect: Proof of Proposition 4 ### Upper bounds for the logarithm of the product of \(L\)-functions Let \(\pi_{i}\) be irreducible unitary cuspidal automorphic representations of \(GL(d_{i})\) over \(\mathbb{Q}\) such that \(\pi_{i}\not\cong\pi_{j}\) for \(1\leq i\neq j\leq m\). For a given \(1\leqslant i\leqslant m\) write \[M(s)=M_{i}(s)=\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{m}L_{j}(s)=\sum_{n=1}^{\infty}\frac{B_{i}(n)}{n^{s}}\] so that \[\frac{M^{\prime}(s)}{M(s)}=\sum_{n}\frac{\Lambda(n)b(n)}{n^{s}}\] where \[b(n)=b_{i}(n)=\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{m}a_{\pi_{j}}(n)\] where \(a_{\pi}(n)\) are defined as in (2.3). The goal is to show that \[\frac{1}{T}\int_{T}^{2T}|M(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt\ll\exp\bigg{(} \sqrt{\frac{\log T}{\log_{2}T\log_{3}T}}\bigg{)}\prod_{p}\bigg{(}1+|r(p)|^{2} +2\Re\frac{r(p)\overline{b(p)}}{p^{1/2}}\bigg{)}.\] We plan to compute the integral over \(t\) by using Harper's method [17]. A key estimate will be \[\sum_{p\leqslant x}\frac{|b(p)|^{2}}{p}=(m-1)\log\log x+O(1) \tag{6.1}\] which follows by (2.8) and the assumption on \(\pi_{j}\). To measure the size of exceptional sets where Dirichlet polynomials obtain large values we need the following standard lemma. **Lemma 7** ([44], Lemma 3).: _Let \(T\) be large and let \(2\leqslant x\leqslant T\). Let \(k\) be a natural number such that \(x^{k}\leqslant T\). Then for any complex numbers \(c(p)\) we have_ \[\frac{1}{T}\int_{T}^{2T}\bigg{|}\sum_{p\leqslant x}\frac{c(p)}{p^{1/2+it}} \bigg{|}^{2k}dt\ll k!\bigg{(}\sum_{p\leqslant x}\frac{|c(p)|^{2}}{p}\bigg{)}^ {k}.\] To obtain an upper bound for \(|M|^{2}\) we apply the following generalisation of Soundararajan's result for the Riemann zeta function [44, Proposition] due to Chandee [10]. **Lemma 8** ([10], Theorem 2.1).: _Assume GRH holds for \(L(s,\pi_{j})\), \(1\leqslant j\leqslant m\), \(j\neq i\). Then for \(2\leqslant Z\leqslant T^{2}\) and \(t\in[T,2T]\) we have_ \[\log|M(1/2+it)|\leqslant\Re\sum_{n\leqslant Z}\frac{\Lambda(n)b(n)w_{Z}(n)}{ n^{1/2+it}\log n}+C_{M}\frac{\log T}{\log Z}+O(1)\] _for some positive constant \(C_{M}\) where_ \[w_{Z}(n)=n^{-1/2\log Z}\Big{(}1-\frac{\log n}{\log Z}\Big{)}\leqslant 1.\] Since we are interested in extreme values, powers of \(\log T\) will not meaningfully affect our final bound and so we first trivially bound the sum over prime powers. **Lemma 9**.: _Assume GRH holds for \(L(s,\pi_{j})\), \(1\leqslant j\leqslant m\), \(j\neq i\) and that the generalised Ramanujan conjecture holds for \(L(s,\pi_{j})\) if \(d_{j}\geqslant 4\). Then for \(2\leqslant Z\leqslant T^{2}\) and \(t\in[T,2T]\) we have_ \[\log|M(1/2+it)|\leqslant\Re\sum_{p\leqslant Z}\frac{b(p)w_{Z}(p)}{p^{1/2+it}}+ C_{M}\frac{\log T}{\log Z}+O(\log\log Z). \tag{6.2}\] Proof.: The terms in Lemma 8 with powers \(\ell>d^{2}+1\) with \(d=\max_{j}d_{j}\) are \[\ll\sum_{\begin{subarray}{c}p^{\ell}\leqslant Z\\ \ell>d^{2}+1\end{subarray}}\frac{|b(p^{\ell})|}{p^{\ell/2}}\ll\sum_{p}\frac{1} {p^{\ell/(d^{2}+1)}}\ll 1\] which follows from (2.4) and the fact that \(b(p^{\ell})=\sum_{j\neq i}a_{j}(p^{\ell})\). For \(2\leqslant\ell\leqslant d^{2}+1\) we note that if \(d_{j}\leqslant 3\) then (2.6) gives \[|a_{j}(p^{\ell})|\ll 1+|a_{j}(p)|^{\ell}.\] If \(d_{j}\geqslant 4\) then the generalised Ramanujan conjecture implies \(|a_{j}(p^{\ell})|\leqslant d_{j}\). 
Thus the prime squares contribute \[\ll\sum_{p\leqslant Z^{1/2}}\frac{|b(p^{2})|}{p}\ll\log\log Z\] by (2.8) whilst by (2.4) we have \[\sum_{p}\frac{|b(p)|^{\ell}}{p^{\ell/2}}\ll\sum_{j\neq i}\sum_{p}\frac{|a_{j} (p)|^{2}}{p^{1+(\ell-2)/(d_{j}^{2}+1)}}.\] Since the Rankin-Selberg \(L\)-function is convergent for \(\sigma>1\), this last sum is bounded for \(\ell\geqslant 3\). ### Initial splitting and the exceptional set Following [17], the aim is to take a reasonably large \(Z\) in (6.2) and split the sum over primes into pieces with small variance so that for typical \(t\) their exponential can be approximated by a short truncated Taylor series. For us, the choice at where to begin this splitting is dictated by the support of the resonator coefficients, namely \(p\leqslant\exp((\log\mathcal{L})^{2})\) - the main interaction between \(M\) and the resonator will come from this piece. This gives a large chunk of primes in the first sum but as the following lemma shows, this is just about affordable and the exceptional set of large values of this sum is sufficiently small in measure. **Lemma 10**.: _For \(Z\leqslant X\) let_ \[E=\Big{\{}t\in[T,2T]:\Big{|}\sum_{p\leqslant\exp((\log\mathcal{L})^{2})}\frac{b( p)w_{Z}(p)}{p^{1/2+it}}\Big{|}\geqslant\frac{\log T}{100(\log\mathcal{L})^{2}} \Big{\}}.\] _Then_ \[\mu(E)\ll T\exp\Big{(}-4(1-o(1))\frac{\log T}{\log_{2}T}\Big{)}.\] _In particular, under the assumptions in Proposition 4 for \(X=T^{\Delta}\) with \(\Delta<1/2\), we have_ \[\int_{E}|M(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt=o(T).\] Proof.: By Lemma 7 along with (6.1) we have \[\frac{1}{T}\mu(E)\ll k!\bigg{(}\sum_{p\leqslant\exp((\log\mathcal{L})^{2})}\frac{|b (p)|^{2}}{p}\bigg{)}^{k}\bigg{(}\frac{\log T}{100(\log\mathcal{L})^{2}}\bigg{)} ^{-2k}\] \[\ll k^{1/2}\bigg{(}\frac{Ck\log_{3}T}{(\log T/(\log\mathcal{L})^{2})^ {2}}\bigg{)}^{k}\] for some \(C\) provided \(k\leqslant\frac{\log T}{(\log\mathcal{L})^{2}}\). Choosing \(k=\frac{\log T}{(\log\mathcal{L})^{2}}=4(1+o(1))\frac{\log T}{(\log_{2}T)^{2}}\) this is \[\ll\bigg{(}\frac{C^{\prime}(\log_{2}T)^{2}\log_{3}T}{\log T} \bigg{)}^{4(1+o(1))\log T/(\log_{2}T)^{2}}\leqslant\exp\big{(}-4(1-o(1))\frac {\log T}{\log_{2}T}\big{)}\] giving the first part of the lemma. By Holder's inequality we have \[\int_{E}|M(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt\leqslant \mu(E)^{1/4}\bigg{(}\int_{T}^{2T}|M(\tfrac{1}{2}+it)|^{8}dt\bigg{)} ^{1/4}\bigg{(}\int_{T}^{2T}|R(t)|^{4}dt\bigg{)}^{1/2}\] \[\ll T^{1/2}\exp\bigg{(}-(1+o(1))\frac{\log T}{\log_{2}T}\bigg{)} \bigg{(}\int_{T}^{2T}|R(t)|^{4}dt\bigg{)}^{1/2}\] on applying the conditional bound \(\int_{T}^{2T}|M(1/2+it)|^{8}dt\ll T(\log T)^{O(1)}\) which follows from [39]. By the mean value theorems for Dirichlet polynomials, for \(X\leqslant T^{1/2-\epsilon}\) we have \[\int_{T}^{2T}|R(t)|^{4}dt\ll T\sum_{\begin{subarray}{c}n_{1}n_{2}=n_{3}n_{4}\\ n_{j}\leqslant X\end{subarray}}|r(n_{1})r(n_{2})r(n_{3})r(n_{4})|\leqslant T \prod_{p}\big{(}1+4|r(p)|^{2}+|r(p)|^{4}\big{)}\] \[\leqslant T\exp\bigg{(}4\sum_{p}|r(p)|^{2}\bigg{)}\ll T\exp\bigg{(}4\sum_{ \mathcal{L}^{2}<p}\frac{\mathcal{L}^{2}|a(p)|^{2}}{p\log^{2}p}\bigg{)}\] \[\ll T\exp\bigg{(}4(1+o(1))\frac{\log X}{\log_{2}X}\bigg{)}\ll T\exp \bigg{(}(2-\epsilon+o(1))\frac{\log T}{\log_{2}T}\bigg{)}\] by (2.9). 
### Remaining splittings and an inequality for \(|M|^{2}\) A key point is that on the set \([T,2T]\backslash E\) the exponential of the sum in the above lemma can be approximated by a short truncated Taylor series to give a Dirichlet polynomial of length \(\leqslant T^{1/10}\). Our choice of parameters throughout will be dictated by the need to have short Dirichlet polynomials whilst also having small exceptional sets. For integer \(\mathfrak{i}\geqslant 0\) let \[Z_{\mathfrak{i}}=\exp(e^{\mathfrak{i}}(\log\mathcal{L})^{2}),\qquad\qquad Z_{ -1}=1.\] Let \(J\) be the minimal integer such that \(Z_{J}\geqslant\exp(2C_{M}\sqrt{\log T\log_{2}T\log_{3}T})\), so that \(J=(\frac{1}{2}+o(1))\log\log T\) and note \[C_{M}\frac{\log T}{\log Z_{J}}\leqslant\frac{1}{2}\sqrt{\log T/\log_{2}T\log_{ 3}T}.\] By a slight abuse of notation we write \(w_{Z_{\mathfrak{i}}}(p)\) as \(w_{\mathfrak{i}}(p)\). Let \[P_{\mathfrak{i},\mathfrak{j}}(t)=\sum_{Z_{\mathfrak{i}-1}<p\leqslant Z_{ \mathfrak{i}}}\frac{b(p)w_{\mathfrak{j}}(p)}{p^{1/2+it}}.\] so that \[\sum_{p\leqslant Z_{\mathfrak{j}}}\frac{b(p)w_{\mathfrak{j}}(p)}{p^{1/2+it}}= \sum_{i=0}^{\mathfrak{j}}P_{\mathfrak{i},\mathfrak{j}}(t).\] Set \[\ell_{\mathfrak{i}}=\frac{\log T}{100(\log\mathcal{L})^{2}}e^{-5\mathfrak{i}/ 4},\qquad r_{\mathfrak{i}}=\frac{\log T}{e^{\mathfrak{i}}(\log\mathcal{L})^{2 +\epsilon}}.\] We remark that \(P_{\mathfrak{i},\mathfrak{j}}(t)^{10\ell_{\mathfrak{i}}}\) is a Dirichlet polynomial of length \(Z_{\mathfrak{i}}^{10\ell_{\mathfrak{i}}}=T^{e^{-\mathfrak{i}/4}/10}\). By Lemma 7 and (6.1) the measure of the set where \(|P_{\mathfrak{i},\mathfrak{j}}(t)|\geqslant\ell_{\mathfrak{i}}\) is, for any \(k\leqslant\log T/(e^{\mathfrak{i}}(\log\mathcal{L})^{2})\), \[\ll k^{1/2}\bigg{(}\frac{k\sum_{Z_{\mathfrak{i}-1}<p\leqslant Z_{ \mathfrak{i}}}|b(p)|^{2}/p}{e\ell_{\mathfrak{i}}^{2}}\bigg{)}^{k}\ll\left( \frac{ck}{\ell_{\mathfrak{i}}^{2}}\right)^{k}\ll\exp\Big{(}-\frac{c^{\prime} \log T}{e^{\mathfrak{i}}(\log_{2}T)^{1+\epsilon}}\Big{)} \tag{6.3}\] for some constants \(c,c^{\prime}\) on choosing \(k=r_{\mathfrak{i}}\) (we don't choose \(k\) as large as possible because we will need shorter Dirichlet polynomials later and this bound is sufficient). Note this bound kills \[\exp\Big{(}C_{M}\frac{\log T}{\log Z_{\mathfrak{i}-1}}\Big{)}=\exp\Big{(}(1+o( 1))\frac{4C_{M}\log T}{e^{\mathfrak{i}-1}(\log_{2}T)^{2}}\Big{)} \tag{6.4}\] which will be the extra term acquired from applying the inequality (6.2) at the \(\mathfrak{i}\)th step. As a final remark on our parameter choices: the reason for factor \(e^{-5\mathfrak{i}/4}\) in \(\ell_{\mathfrak{i}}\) is, first of all, so that we have a polynomial of length \(T^{e^{-\mathfrak{i}/4}/10}\) which, after taking the product over all \(\mathfrak{i}\), is still short (see (6.8) below). A factor of \(e^{-c\mathfrak{i}}\), \(c>1\), is required to have the decay in the exponent of \(T\) however if \(c>3/2\) then \(\ell_{J}\) would not be large enough to guarantee (6.3). The reason for the factor of \(e^{-\mathfrak{i}}\) in \(r_{\mathfrak{i}}\) is so that (6.3) is comparable with (6.4) for all \(\mathfrak{i}\). 
Now, by Stirling's formula for \(|z|\leqslant L\) we have \[e^{z}=(1+O(e^{-9L}))\sum_{m\leqslant 10L}\frac{z^{m}}{m!}.\] Therefore, if \(|P_{\rm i,j}(t)|\leqslant\ell_{i}\) we have \[\exp(P_{\rm i,j}(t))=(1+O(e^{-9\ell_{\rm i}}))\sum_{m\leqslant 10\ell_{\rm i}} \frac{P_{\rm i,j}(t)^{m}}{m!}.\] The multinomial theorem gives \[P_{\rm i,j}(t)^{m}=m!\sum_{\begin{subarray}{c}\Omega(n)=m\\ p|n\,\Longrightarrow\,Z_{\rm i-1}<p\leqslant Z_{\rm i}\end{subarray}}\frac{c(n )W_{\rm j}(n)\mathfrak{g}(n)}{n^{1/2+it}}\] where \[c(n)=\prod_{p^{\alpha_{p}}||n}b(p)^{\alpha_{p}},\qquad W_{\rm j}(n)=\prod_{p^ {\alpha_{p}}||n}w_{\rm j}(p)^{\alpha_{p}} \tag{6.5}\] are the completely multiplicative extensions of \(b(p)\) and \(w_{\rm j}(p)\) to the integers and \(\mathfrak{g}\) is the multiplicative function for which \[\mathfrak{g}(p^{\alpha})=\frac{1}{\alpha!}. \tag{6.6}\] Thus, if we denote \[\mathcal{N}_{\rm i,j}(t)=\sum_{\begin{subarray}{c}\Omega(n)\leqslant 10\ell_{ \rm i}\\ p|n\,\Longrightarrow\,Z_{\rm i-1}<p\leqslant Z_{\rm i}\end{subarray}}\frac{c(n )W_{\rm j}(n)\mathfrak{g}(n)}{n^{1/2+it}}\] then on such a set of \(t\) we have \[\exp\left(2\Re P_{\rm i,j}(t)\right)=(1+O(e^{-9\ell_{\rm i}}))|\mathcal{N}_{ \rm i,j}(t)|^{2}.\] Accordingly, if \(t\) is such that \(|P_{\rm i,j}(t)|\leqslant\ell_{\rm i}\) for all \(0\leqslant\rm i\leqslant\rm j\) then \[\exp\left(2\Re\sum_{p\leqslant Z_{\rm j}}\frac{w_{\rm j}(p)}{p^{1/2+it}} \right)=(1+o(1))\prod_{\rm i=0}^{\rm j}\left|\mathcal{N}_{\rm i,j}(t)\right|^{2} \tag{6.7}\] since \(\sum_{\rm i=0}^{\rm j}e^{-9\ell_{\rm i}}=o(1)\). We note that the right hand side is a Dirichlet polynomial of length \[\leqslant\prod_{\rm i=0}^{J}Z_{\rm i}^{10\ell_{\rm i}}=T^{\frac{1}{10}\sum_{ \rm i=0}^{J}e^{-\rm i/4}}\leqslant T^{1/2} \tag{6.8}\] We can now state an upper bound for the \(|M(\frac{1}{2}+it)|\) in terms of these short Dirichlet polynomials. **Lemma 11**.: _Assume GRH for \(L(s,\pi_{j})\) for \(1\leqslant j\leqslant m\) and let \(t\in[T,2T]\). Then either_ \[|P_{0,\mathrm{j}}(t)|>\ell_{0}\] _for some \(0\leqslant\mathrm{j}\leqslant J\) or_ \[|M(\tfrac{1}{2}+it)|^{2}\ll\exp\bigg{(}\frac{1}{2}\sqrt{\frac{ \log T}{\log_{2}T\log_{3}T}}\bigg{)}\prod_{\mathrm{i}=0}^{J}\big{|}\mathcal{N }_{\mathrm{i},J}(t)\big{|}^{2}\\ +(\log T)^{O(1)}\sum_{\begin{subarray}{c}0\leqslant\mathrm{j} \leqslant J-1\\ \mathrm{j}+1\leqslant l\leqslant J\end{subarray}}\exp\Big{(}\frac{C_{M}\log T }{\log Z_{\mathrm{j}}}\Big{)}\bigg{(}\frac{|P_{\mathrm{j}+1,l}(t)|}{\ell_{ \mathrm{j}+1}}\bigg{)}^{2r_{\mathrm{j}}}\prod_{\mathrm{i}=0}^{\mathrm{j}} \big{|}\mathcal{N}_{\mathrm{i},\mathrm{j}}(t)\big{|}^{2}.\] Proof.: Suppose \(|P_{0,\mathrm{j}}(t)|<\ell_{0}\). 
For \(0\leqslant\mathrm{j}\leqslant J-1\) let \[S(j)=\bigg{\{}t\in[T,2T]:\qquad\begin{array}{c}|P_{\mathrm{i},l}(t)| \leqslant\ell_{\mathrm{i}}\qquad\forall 1\leqslant\mathrm{i}\leqslant \mathrm{j},\ \forall\mathrm{j}\leqslant l\leqslant J;\\ |P_{\mathrm{j}+1,l}(t)|>\ell_{\mathrm{j}+1}\text{ for some }\mathrm{j}+1\leqslant l \leqslant J\end{array}\bigg{\}}\] and \[S(J)=\bigg{\{}t\in[T,2T]:|P_{\mathrm{i},J}(t)|\leqslant\ell_{ \mathrm{i}}\quad\forall 1\leqslant\mathrm{i}\leqslant J\bigg{\}}.\] Then since \([T,2T]=\cup_{\mathrm{j}=0}^{J}S(\mathrm{j})\), for \(t\in[T,2T]\) we have \[|M(\tfrac{1}{2}+it)|^{2}\leqslant\mathds{1}_{t\in S(J)}\cdot|M( \tfrac{1}{2}+it)|^{2}+\sum_{\begin{subarray}{c}0\leqslant\mathrm{j} \leqslant J-1\\ \mathrm{j}+1\leqslant l\leqslant J\end{subarray}}\mathds{1}_{t\in S_{l}( \mathrm{j})}\cdot|M(\tfrac{1}{2}+it)|^{2} \tag{6.9}\] where \[S_{l}(\mathrm{j})=\bigg{\{}t\in[T,2T]:\ \begin{array}{c}|P_{\mathrm{i},l}(t)| \leqslant\ell_{\mathrm{i}}\\ |P_{\mathrm{j}+1,l}(t)|>\ell_{\mathrm{j}+1}\end{array}\qquad\qquad\forall 1 \leqslant\mathrm{i}\leqslant j,\ \forall\mathrm{j}\leqslant l \leqslant J;\\ \left.\right\}.\] We apply Corollary 9 to each \(M\) on the right hand side of (6.9). If \(t\in S_{l}(\mathrm{j})\) then we take \(Z=Z_{\mathrm{j}}\) to give \[|M(\tfrac{1}{2}+it)|^{2}\ll\exp\bigg{(}2\Re\sum_{p\leqslant Z_{ \mathrm{j}}}\frac{b(p)w_{\mathrm{j}}(p)}{p^{1/2+it}}+\frac{C_{M}\log T}{\log Z_ {\mathrm{j}}}+O(\log_{2}T)\bigg{)}.\] For the first sum over primes in the exponential we apply (6.7). To capture the small size of the set, we multiply by \[\bigg{(}\frac{|P_{\mathrm{j}+1,l}(t)|}{\ell_{\mathrm{j}+1}}\bigg{)}^{2r_{ \mathrm{j}}}>1.\] If \(t\in S(J)\) then we omit this last step. ### Applying the inequality We apply Lemma 11 to compute \[\int_{T}^{2T}|M(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt\] By Lemma 10 we may disregard the \(t\) for which \(|P_{0,\mathrm{j}}(t)|>\ell_{0}\) since this gives a contribution \(o(T)\). By Lemma 11 the integral over the remaining set is then \[\ll\int_{T}^{2T}\bigg{(}\exp\bigg{(}\frac{1}{2}\sqrt{\frac{\log T }{\log_{2}T\log_{3}T}}\bigg{)}\prod_{\mathrm{i}=0}^{J}\big{|}\mathcal{N}_{ \mathrm{i},J}(t)\big{|}^{2}+(\log T)^{O(1)}\\ \times\sum_{\begin{subarray}{c}0\leqslant\mathrm{j}\leqslant J-1 \\ \mathrm{j}+1\leqslant\mathrm{j}\end{subarray}}\exp\Big{(}\frac{C_{M}\log T}{ \log Z_{\mathrm{j}}}\Big{)}\bigg{(}\frac{|P_{\mathrm{j}+1,t}(t)|}{\ell_{ \mathrm{j}+1}}\bigg{)}^{2r_{\mathrm{j}}}\prod_{\mathrm{i}=0}^{\mathrm{j}} \big{|}\mathcal{N}_{\mathrm{i},\mathrm{j}}(t)\big{|}^{2}\bigg{)}|R(t)|^{2}dt. \tag{6.10}\] To facilitate computations we note the following general observations. Suppose we are given \(R\) sets \(\mathcal{S}_{j}\subset\mathbb{N}\) and Dirichlet polynomials \[A_{j}(s)=\sum_{n\in\mathcal{S}_{j}}a_{j}(n)n^{-s},\] where the \(\prod_{j=1}^{R}n_{j}\leqslant M=o(T)\) for all \(n_{j}\in\mathcal{S}_{j}\). 
Then by the mean value theorems for Dirichlet polynomials we have \[\frac{1}{T}\int_{T}^{2T}\prod_{j=1}^{R}\big{|}A_{j}(it)\big{|}^{2}dt\sim\sum_{ n\leqslant M}\Big{|}\sum_{\begin{subarray}{c}n=n_{1}\cdots n_{R}\\ n_{j}\in\mathcal{S}_{j}\end{subarray}}a_{1}(n_{1})\cdots a_{R}(n_{R})\Big{|}^ {2}\] If for any \(j_{1},j_{2}\) with \(j_{1}\neq j_{2}\) the elements of \(\mathcal{S}_{j_{1}}\) are all coprime to the elements of \(\mathcal{S}_{j_{2}}\) then there is at most one way to write \(n=\prod_{j=1}^{R}n_{j}\) with \(n_{j}\in\mathcal{S}_{j}\) and so \[\frac{1}{T}\int_{T}^{2T}\prod_{j=1}^{R}|A_{j}(it)|^{2}dt =(1+O(NT^{-1}))\sum_{n\leq N}\Big{|}\sum_{\begin{subarray}{c}n=n _{1}\cdots n_{R}\\ n_{j}\in\mathcal{S}_{j}\end{subarray}}\prod_{j=1}^{R}a_{j}(n_{j})\Big{|}^{2}\] \[=(1+O(NT^{-1}))\prod_{j=1}^{R}\Big{(}\sum_{n_{j}\in\mathcal{S}_{j }}|a_{j}(n_{j})|^{2}\Big{)}\] \[=(1+O(NT^{-1}))^{1-R}\prod_{j=1}^{R}\Big{(}\frac{1}{T}\int_{T}^{2 T}|A_{j}(it)|^{2}dt\Big{)}.\] Since \(\prod_{\mathrm{i}=0}^{J}N_{\mathrm{i},\mathrm{j}}(t)\) is a Dirichlet polynomial of length \(\leqslant T^{1/2}\) by (6.8) and \(R(t)\) is a Dirichlet polynomial of length \(X=T^{\Delta}\) with \(\Delta<1/2\), we can apply the above observations so that (6.10) becomes \[\ll T\exp\left(\frac{1}{2}\sqrt{\frac{\log T}{\log_{2}T\log_{3}T}} \right)\int|\mathcal{N}_{0,J}(t)|^{2}|R(t)|^{2}dt\cdot\prod_{\mathrm{i}=1}^{J} \int\big{|}\mathcal{N}_{\mathrm{i},J}(t)\big{|}^{2}dt\\ +T(\log T)^{O(1)}\sum_{\begin{subarray}{c}0\leq\mathrm{i}\leq J-1 \\ \mathrm{j}+1\leq\mathrm{j}\leq\mathrm{j}\end{subarray}}\exp\left(\frac{C_{M} \log T}{\log Z_{\mathrm{j}}}\right)\int|\mathcal{N}_{0,\mathrm{j}}(t)|^{2}|R(t )|^{2}dt\\ \times\prod_{\mathrm{i}=1}^{\mathrm{j}}\int\big{|}\mathcal{N}_{ \mathrm{i},\mathrm{j}}(t)\big{|}^{2}dt\int\left(\frac{|P_{\mathrm{j}+1,l}(t)|}{ \ell_{\mathrm{j}+1}}\right)^{2r_{\mathrm{j}}}dt\] where \(\int\) denotes \(\frac{1}{T}\int_{T}^{2T}\) for short. Note that, by (6.3) and (6.4) we have \[\exp\Big{(}\frac{C_{M}\log T}{\log Z_{\mathrm{j}}}\Big{)}\cdot \frac{1}{T}\int_{T}^{2T}\left(\frac{|P_{\mathrm{j}+1,l}(t)|}{\ell_{\mathrm{j} +1}}\right)^{2r_{\mathrm{j}}}dt\ll \exp\Big{(}-\frac{C\log T}{e^{\mathrm{j}+1}(\log_{2}T)^{1+\epsilon }}\Big{)}\] \[\ll \exp\big{(}-(\log T)^{1/2+o(1)}\big{)}.\] Since the number of terms in the sum over \(\mathsf{j},l\) along with the \((\log T)^{O(1)}\) term can be absorbed into this exponential, we arrive at \[\frac{1}{T}\int_{T}^{2T}|M(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt\] \[\ll \exp\left(\sqrt{\frac{\log T}{\log_{2}T\log_{3}T}}\right)\max_{ \mathrm{j}\leq\mathrm{j}}\frac{1}{T}\int_{T}^{2T}|\mathcal{N}_{0,\mathrm{j}}(t )|^{2}|R(t)|^{2}dt\prod_{\mathrm{i}=1}^{\mathrm{j}}\frac{1}{T}\int_{T}^{2T} \big{|}\mathcal{N}_{\mathrm{i},\mathrm{j}}(t)\big{|}^{2}dt. 
\tag{6.11}\] ### Computing the mean values It remains to compute \[\frac{1}{T}\int_{T}^{2T}|\mathcal{N}_{0,\mathrm{j}}(t)|^{2}|R(t)|^{2}dt\cdot \prod_{\mathrm{i}=1}^{\mathrm{j}}\frac{1}{T}\int_{T}^{2T}\big{|}\mathcal{N}_{ \mathrm{i},\mathrm{j}}(t)\big{|}^{2}dt.\] Applying the mean value theorem for Dirichlet polynomials we have \[\frac{1}{T}\int_{T}^{2T}|\mathcal{N}_{0,\mathrm{j}}(t)|^{2}|R(t)|^{2}dt=(1+O(T ^{-\epsilon}))\sum_{\begin{subarray}{c}m_{1}n_{2}=m_{2}n_{2}\\ \Omega(m_{j})\leq\ell_{0}\\ p|m_{j}\underset{n_{j}\leq\mathrm{}X}{\Longrightarrow}p\leq Z_{0}\end{subarray}} \frac{\mathsf{c}_{\mathrm{j}}(m_{1})\overline{\mathsf{c}_{\mathrm{j}}(m_{2} )}r(n_{1})\overline{r(n_{2})}}{(m_{1}m_{2})^{1/2}}\] and \[\prod_{\mathrm{i}=1}^{\mathrm{j}}\frac{1}{T}\int_{T}^{2T}|\mathcal{N}_{ \mathrm{i},\mathrm{j}}(t)|^{2}dt=(1+O(T^{-\epsilon}))\prod_{\mathrm{i}=1}^{ \mathrm{j}}\sum_{\begin{subarray}{c}\Omega(m)\leq\ell_{\mathrm{i}}\\ p|m\underset{Z_{\mathrm{i}-1}<p\leq Z_{\mathrm{i}}}{\Longrightarrow}Z_{ \mathrm{i}-1}\end{subarray}}\frac{|\mathsf{c}_{\mathrm{j}}(m)|^{2}}{m}\] with \[\mathsf{c}_{\mathrm{j}}(m)=c(m)\mathsf{g}(m)W_{\mathrm{j}}(m)\] where we recall the definition of these coefficients from (6.5), (6.6). Now, \[\prod_{\mathfrak{i}=1}^{\mathfrak{j}}\sum_{p|n}\sum_{Z_{\mathfrak{i}-1}<p\leqslant Z _{\mathfrak{i}}}\frac{|\mathfrak{c}_{\mathfrak{i}}(m)|^{2}}{m}\leqslant\exp \bigg{(}\sum_{Z_{0}<p\leqslant Z_{\mathfrak{j}}}\frac{|b(p)|^{2}}{p}\bigg{)} \ll\bigg{(}\frac{\log Z_{J}}{\log Z_{0}}\bigg{)}^{m-1} \tag{6.12}\] by (6.1). It thus suffices to show \[\sum_{\begin{subarray}{c}m_{1}n_{2}=m_{2}n_{2}\\ \Omega(m_{j})\leqslant\ell_{0}\\ p|m_{j}\Longrightarrow p\leqslant Z_{0}\\ n_{j}\leqslant X\end{subarray}}\frac{\mathfrak{c}_{\mathfrak{i}}(m_{1}) \overline{\mathfrak{c}_{\mathfrak{i}}(m_{2})}r(n_{1})\overline{r(n_{2})}}{( m_{1}m_{2})^{1/2}}\ll\prod_{p}\bigg{(}1+|r(p)|^{2}+2(1+o(1))\Re\frac{r(p)\overline{b(p)} }{p^{1/2}}\bigg{)}. \tag{6.13}\] Assuming this for the moment, plugging (6.12) and (6.13) into (6.11) gives \[\frac{1}{T}\int_{T}^{2T}|M(\tfrac{1}{2}+it)|^{2}|R(t)|^{2}dt\\ \ll\exp\bigg{(}\sqrt{\frac{\log T}{\log_{2}T\log_{3}T}}\bigg{)} \prod_{p}\bigg{(}1+|r(p)|^{2}+2(1+o(1))\Re\frac{r(p)\overline{b(p)}}{p^{1/2}} \bigg{)}.\] and Proposition 4 follows. To prove (6.13) we apply Rankin's trick to find that the sum on the left there is for any \(\alpha>0\), \[\sum_{\begin{subarray}{c}m_{1}n_{2}=m_{2}n_{2}\\ p|n\Longrightarrow p\leqslant Z_{0}\end{subarray}}\frac{\mathfrak{c}_{ \mathfrak{j}}(m_{1})\overline{\mathfrak{c}_{\mathfrak{j}}(m_{2})}r(n_{1}) \overline{r(n_{2})}}{(m_{1}m_{2})^{1/2}}\\ +O\bigg{(}e^{-\ell_{0}}\sum_{\begin{subarray}{c}m_{1}n_{2}=m_{2 }n_{2}\\ p|m_{j}\Longrightarrow p\leqslant Z_{0}\end{subarray}}\frac{|\mathfrak{c}_{ \mathfrak{j}}(m_{1})\mathfrak{c}_{\mathfrak{j}}(m_{2})r(n_{1})r(n_{2})|e^{ \Omega(m_{1})}}{(m_{1}m_{2})^{1/2}}\bigg{)}\\ +O\bigg{(}X^{-\alpha}\sum_{\begin{subarray}{c}m_{1}n_{2}=m_{2 }n_{2}\\ p|m_{j}\Longrightarrow p\leqslant Z_{0}\end{subarray}}\frac{|\mathfrak{c}_{ \mathfrak{j}}(m_{1})\mathfrak{c}_{\mathfrak{j}}(m_{2})r(n_{1})r(n_{2})|n_{1}^ {\alpha}}{(m_{1}m_{2})^{1/2}}\bigg{)} \tag{6.14}\] by symmetry. 
The main term here is \[\prod_{p\leqslant Z_{0}}\sum_{m_{1}+n_{1}=m_{2}+n_{2}}\frac{b(p)^{m_ {1}}\overline{b(p)^{m_{2}}}w_{\mathfrak{j}}(p)^{m_{1}}w_{\mathfrak{j}}(p)^{m_{2} }r(p^{n_{1}})\overline{r(p^{n_{2}})}}{m_{1}!m_{2}!p^{(m_{1}+m_{2})/2}}\] \[= \prod_{p\leqslant Z_{0}}\biggl{(}(1+|r(p)|^{2})\sum_{m\geqslant 0} \frac{|b(p)|^{2m}w_{\mathfrak{j}}(p)^{2m}}{m!^{2}p^{m}}+2\Re\frac{r(p) \overline{b(p)}}{p^{1/2}}\sum_{m\geqslant 0}\frac{|b(p)|^{2m}w_{\mathfrak{j}}(p) ^{2m}}{m!(m+1)!p^{m}}\biggr{)}\] \[= \mathcal{K}(X)\prod_{p}\left(1+|r(p)|^{2}+2\Re\frac{r(p)\overline {b(p)}}{p^{1/2}}B(p)\right)\] where \[\mathcal{K}(X)=\prod_{p\leqslant Z_{0}}\sum_{m\geqslant 0}\frac{|b(p)|^{2m}w_{ \mathfrak{j}}(p)^{2m}}{m!^{2}p^{m}}\ll\exp\bigg{(}\sum_{p\leqslant Z_{0}}\frac {|b(p)|^{2}}{p}\bigg{)}\ll(\log Z_{0})^{m-1},\] by (6.1) and \[B(p)=\frac{\sum_{m\geqslant 0}|b(p)|^{2m}w_{\mathfrak{j}}(p)^{2m}/m!^{2}p^{m }}{\sum_{m\geqslant 0}|b(p)|^{2m}w_{\mathfrak{j}}(p)^{2m}/m!(m+1)!p^{m}}=1+O \Bigl{(}\frac{1}{p^{2/(\max d_{i}^{2}+1)}}\Bigr{)}\] by (2.4). Since \(B(p)=1+o(1)\) for \(p\) in the support of \(r\) and \((\log Z_{0})^{m-1}\) can be absorbed into this \(o(1)\) term of the exponential, it remains to show that the error terms of (6.14) are of a lower order than this. With a similar calculation the first error term there is \[\ll e^{-\ell_{0}}(\log Z_{0})^{e(m-1)}\prod_{p}\bigg{(}1+|r(p)|^{2}+2e(1+o(1)) \frac{|r(p)b(p)|}{p^{1/2}}\bigg{)}.\] Since \(|r(p)|^{2},r(p)b(p)=o(1)\) in the support of \(r(\cdot)\) as in (3.2), the ratio of this to the main term is then \[\ll e^{-\ell_{0}}(\log Z_{0})^{e(m-1)}\exp\bigg{(}4e\sum_{p\leqslant Z_{0}} \frac{|r(p)b(p)|}{p^{1/2}}\bigg{)}=o(1)\] on recalling that \(\ell_{0}=\log T/100(\log\mathcal{L})^{2}\asymp\log T/(\log_{2}T)^{2}\) and noting that the sum in the exponential is \(\ll\sqrt{\log T/\log_{2}T}\). The second error term is \[\ll X^{-\alpha}(\log Z_{0})^{m-1}\prod_{p}\bigg{(}1+|r(p)|^{2}p^{\alpha}+2(1+o (1))\frac{|r(p)b(p)|}{p^{1/2-\alpha}}\bigg{)}.\] The ratio of this to the main term is \[\ll\exp\bigg{(}-\alpha\log X+\sum_{\mathcal{L}^{2}<p\leqslant\exp((\log \mathcal{L})^{2})}|a(p)|^{2}\frac{\mathcal{L}^{2}}{p\log^{2}p}(p^{\alpha}-1)+O \Bigl{(}\sqrt{\frac{\log T}{\log_{2}T}}\Bigr{)}\bigg{)}\] for \(\alpha=1/(\log\mathcal{L})^{3}\). The usual computations, as in (4.6), show this is \(o(1)\). ## 7. Simultaneous extreme values of twists of \(Gl(2)\) cusp forms: Proof of Theorem 2 Let \(f,g\) be a fixed primitive (holomorphic or Maass) cusp forms with respect to \(\Gamma_{0}(r)\) and \(\Gamma_{0}(r^{\prime})\) with trivial central character. As before, if we can find \(R(\chi)\) and \(V\) such that \[\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}\,|L(1/2,f\otimes \chi)\overline{L(1/2,g\otimes\chi)}R(\chi)|^{2}\] \[\geqslant\frac{V}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}\,\Big{(}| L(1/2,f\otimes\chi)R(\chi)|^{2}+|L(1/2,g\otimes\bar{\chi})R(\chi)|^{2}\Big{)} \tag{7.1}\] with \(*\) meaning the sum is over primitive characters modulo \(q\) and \(\phi^{*}(q)=q-2\), then we must have \[\max_{\chi\bmod q}\,\min_{f,g}\Big{(}|L(1/2,f\otimes\chi)|,L(1/2,g\otimes\chi )|\Big{)}\geqslant\sqrt{V}.\] To estimate these mean values we follow [5] quite closely and so retain some of their methods, notation and set-up for ease of comparison, although it may differ from our previous sections slightly. We consider the sum on the left hand side of (7.1) first. 
From Cauchy's inequality we have \[\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}\,|L(1/2,f\otimes\chi) \overline{L(1/2,g\otimes\chi)}R(\chi)|^{2}\] \[\geqslant\Big{(}\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}\,| R(\chi)|^{2}\Big{)}^{-1}\Big{(}\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}\,L(1/2,f \otimes\chi)\overline{L(1/2,g\otimes\chi)}|R(\chi)|^{2}\Big{)}^{2}\] Thus it make sense to choose \(R(\chi)\) such that \(|L(1/2,f\otimes\chi)L(1/2,g\otimes\chi)|\) is large and we follow the choice of \(R(\chi)\) in [5, section 7.5.1]. Let \(\lambda_{f}^{*},\lambda_{g}^{*}\) be multiplicative functions supported on squarefree positive integers defined by \(\lambda_{f}^{*}(p)=(1-1/p)^{-1}(\lambda_{f}(p)-\lambda_{g}(p)/p)\) and \(\lambda_{g}^{*}(p)=(1-1/p)^{-1}(\lambda_{g}(p)-\lambda_{f}(p)/p)\). For \(u\geqslant 1\) some parameter depending only on \(f\) and \(g\), let \[\mathcal{G}:=\{n\geqslant 1:(n,urr^{\prime})=1,\lambda_{f}^{*}(n)\lambda_{g}^{* }(n)\neq 0,\operatorname{sgn}(\lambda_{f}^{*}(n))=\operatorname{sgn}(\lambda_{g}^{* }(n))\}. \tag{7.2}\] Define \[\varpi(p)=\begin{cases}\lambda_{f}^{*}(p)\lambda_{g}^{*}(p)(\lambda_{f}^{*}(p) +\lambda_{g}^{*}(p)),&p\in\mathcal{G},\\ 0,&p\not\in\mathcal{G}.\end{cases}\] Let \[\omega(n)=|\varpi(n)|^{2},\ \omega_{1}^{\prime}(n)=\varpi(n)\lambda_{f}^{*}(n),\ \omega_{2}^{\prime}(n)=\varpi(n)\lambda_{g}^{*}(n),\ \omega^{\prime}(p)=\omega_{1}^{\prime}(p)+\omega_{2}^{ \prime}(p)\] and \[R(\chi)=\sum_{n\leqslant N}r(n)\varpi(n)\chi(n)\] where \[r(p)=\begin{cases}\frac{\mathcal{L}}{p^{1/2}\log p}&\text{ for }\mathcal{L}^{2} \leqslant p\leqslant\exp((\log\mathcal{L})^{2})\\ 0&\text{ otherwise}\end{cases} \tag{7.3}\] and \[\mathcal{L}=\sqrt{a_{\omega}^{-1}\log N\log\log N}\] for some constant \(a_{\omega}\) as in [5, eq. (7.55)] Fixing an arbitrary \(\delta>0\) and \(N\leqslant q^{1/360-\delta}\) we have that using [5, Lemma 7.19, Lemma 7.10] \[\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}|R(\chi)|^{2}\sim\prod_{p}\Big{(} 1+r(p)^{2}\omega(p)\Big{)}\] and using [5, Lemma 7.19, Lemma 7.12, Lemma 7.14] there exists a squarefree integer \(u\geq 1\) coprime to \(rr^{\prime}\) such that \[\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}|R(\chi)|^{2}L(1/2,f \otimes\chi)L(1/2,g\otimes\bar{\chi})\chi(u)\] \[= L^{*}(1,f\otimes g)(\nu+o(1))\prod_{p}\Big{(}1+r(p)^{2}\omega(p )+\frac{r(p)\omega^{\prime}(p)}{\sqrt{p}}\Big{)}+O\Big{(}q^{-\delta}\prod_{p} \big{(}1+r(p)^{2}\omega(p)\big{)}\Big{)},\] where \(L^{*}(s,f\otimes g)\) is as in [5, eq. (2.7)], \(\nu\neq 0\) is a constant depending on \(f,g\) only. As in [5], the \(\chi(u)\) is introduced to break the symmetry of \(\chi\) and \(\overline{\chi}\). Since \(|\chi(u)|\leqslant 1\) and \(L^{*}(f\otimes g,1)\neq 0\) ([5, Lemma 2.6]) we see that \[\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}|L(1/2,f\otimes\chi) \overline{L(1/2,g\otimes\chi)}R(\chi)|^{2}\] \[\gg_{f,g}\prod_{p}\Big{(}1+r(p)^{2}\omega(p)+\frac{r(p)w^{\prime }(p)}{\sqrt{p}}\Big{)}^{2}\Big{(}1+r(p)^{2}\omega(p)\Big{)}^{-1}. \tag{7.4}\] We now turn to getting an upper bound for the mean squares on the right of (7.1). 
Similarly to the proof of [5, Lemma 7.9], we have for \((\ell,\ell^{\prime})=(\ell\ell^{\prime},qrr^{\prime})=1\), \(\ell,\ell^{\prime}\leqslant L\), \[\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}|L(1/2,f\otimes\chi)|^{2}\chi( \ell)\chi(\ell^{\prime})=\frac{1}{2}\operatorname{MT}^{+}(f,f;\ell,\ell^{ \prime})+\frac{1}{2}\operatorname{MT}^{-}(f,f;\ell,\ell^{\prime})+O(L^{3/2}q^ {-1/144+\epsilon}),\] where \[\operatorname{MT}^{\pm}(f,f;\ell,\ell^{\prime}) =\frac{1}{2\pi i}\int_{(2)}\frac{L_{\infty}^{\pm}(f,\pm,\frac{1}{ 2}+u)}{L_{\infty}^{2}(f,\pm,\frac{1}{2})}\frac{D(1+2u;\ell,\ell^{\prime})}{( \ell\ell^{\prime})^{1/2+u}}G(u)(q^{2}|rr^{\prime}|)^{u}\frac{du}{u},\] \[D(s;\ell,\ell^{\prime}) =\sum_{n}\frac{\lambda_{f}(\ell n)\lambda(\ell^{\prime}n)}{n^{s} }=L^{*}(f\otimes f,s)\prod_{p|\ell\ell^{\prime}}(1-p^{-2s})^{-1}\prod_{p|\ell \ell^{\prime}}(\lambda_{f}(p)-\frac{\lambda_{f}(p)}{p^{s}}).\] Here \(G(u)=\cos(\frac{\pi u}{4A})^{-16A}\) for some \(A\geq 2\) and \(L_{\infty}(f,\pm,s)=L_{\infty}(f\otimes\chi,s)\) for \(\chi(-1)=\pm 1\) (see [5, Lemma 2.1] for definitions of \(L_{\infty}(f\otimes\chi,s)\)). Shifting the contour of integration to \(\Re(u)=-\frac{1}{4}+\epsilon\), we encounter a double pole at \(u=0\) so that \[\mathrm{MT}^{\pm}=\frac{\lambda_{f}^{*}(\ell\ell^{\prime})}{\sqrt{\ell\ell^{ \prime}}}\Big{(}\frac{L^{*}(\mathrm{Sym}^{2}\,f,1)}{\prod_{p|r}(1+p^{-1})\zeta (2)}+2\log(|r|q)-\log(\ell\ell^{\prime})+C_{f}+\sum_{p|\ell\ell^{\prime}}\frac{ 2\log p}{p+1}\Big{)}\] using [5, eq. (2.9)] for \(\mathrm{Res}_{s=1}\,L^{*}(f\otimes f,1)\) and \(C_{f}=\frac{d}{du}\frac{L_{\infty}^{\pm}(f,\pm,\frac{1}{2}+u)}{L_{\infty}^{2} (f,\pm,1/2)}\Big{|}_{u=0}\). Therefore, we have that \[\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}|L(1/2,f\otimes\chi)R( \chi)|^{2}=\sum_{d}|r(d)|^{2}\omega(d)\sum_{\begin{subarray}{c}\ell,\ell^{ \prime}\leq N/d\\ (\ell\ell^{\prime},d)=1\end{subarray}}\frac{r(\ell\ell^{\prime})\varpi(\ell \ell^{\prime})}{\sqrt{\ell\ell^{\prime}}}\lambda_{f}^{*}(\ell\ell^{\prime})\\ \times\Big{(}A_{f}+2\log q-\log(\ell\ell^{\prime})+\sum_{p|\ell \ell^{\prime}}\frac{2\log p}{p+1}+O(N^{3/2}q^{-1/144})\Big{)}\] where \(A_{f}=\frac{L^{*}(\mathrm{Sym}^{2}\,f,1)}{\prod_{p|r}(1+p^{-1})\zeta(2)}+C_{f}+ 2\log|r|\). 
It follows that \[\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}|L(1/2,f\otimes\chi)R(\chi)|^{2} +O(N^{7/2}q^{-144})\] \[=\sum_{d}|r(d)|^{2}\omega(d)\sum_{\begin{subarray}{c}\ell,\ell^{\prime}\leq N/ d\\ (\ell\ell^{\prime},d)=1\end{subarray}}\frac{r(\ell\ell^{\prime})\varpi(\ell \ell^{\prime})}{\sqrt{\ell\ell^{\prime}}}\Big{(}A_{f}\lambda_{f}^{*}(\ell\ell ^{\prime})+\lambda_{f}^{*}(\ell\ell^{\prime})\big{(}2\log q-\log(\ell\ell^{ \prime})+2\sum_{p|\ell\ell^{\prime}}\frac{\log p}{p+1}\big{)}\Big{)}.\] Note from the support of \(\varpi\) in (7.2), we have \(0\leqslant\varpi(p)\lambda_{f}^{*}(p)=\omega_{1}^{\prime}(p)\) and thus \[\sum_{\begin{subarray}{c}\ell,\ell^{\prime}\leq N/d\\ (\ell\ell^{\prime},d)=1\end{subarray}}\frac{r(\ell\ell^{\prime})\varpi(\ell \ell^{\prime})\lambda_{f}^{*}(\ell\ell^{\prime})}{\sqrt{\ell\ell^{\prime}}} \leq\prod_{\begin{subarray}{c}\ell,\ell^{\prime}\\ (\ell\ell^{\prime},d)=1\end{subarray}}\frac{r(\ell\ell^{\prime})\varpi(\ell \ell^{\prime})\lambda_{f}^{*}(\ell\ell^{\prime})}{\sqrt{\ell\ell^{\prime}}}= \prod_{\begin{subarray}{c}\ell,\ell^{\prime}\\ (\ell\ell^{\prime},d)=1\end{subarray}}\frac{r(\ell\ell^{\prime})\omega_{1}^{ \prime}(\ell\ell^{\prime})}{\sqrt{\ell\ell^{\prime}}}.\] Therefore \[\sum_{d}|r(d)|^{2}\omega(d)\sum_{\begin{subarray}{c}\ell,\ell^{\prime}\leq N/d \\ (\ell\ell^{\prime},d)=1\end{subarray}}\frac{r(\ell\ell^{\prime})\varpi(\ell \ell^{\prime})}{\sqrt{\ell\ell^{\prime}}}A_{f}\lambda_{f}^{*}(\ell\ell^{ \prime})\ll_{f}\prod_{p}\Big{(}1+r(p)^{2}\omega(p)+\frac{2r(p)\omega_{1}^{ \prime}(p)}{\sqrt{p}}\Big{)}.\] With the current choice of \(\varpi,N\), we also have that \(2\log q-\log(\ell\ell^{\prime})+\sum_{p\mid\ell\ell^{\prime}}\frac{2\log p}{p+1}\geqslant 0\), which together with \(\varpi(\ell)\lambda_{f}^{*}(\ell)\geq 0\) gives \[\sum_{d}|r(d)|^{2}\omega(d)\sum_{\begin{subarray}{c}\ell,\ell^{ \prime}\leq N/d\\ (\ell\ell^{\prime},d)=1\end{subarray}}\frac{r(\ell\ell^{\prime})\varpi(\ell \ell^{\prime})}{\sqrt{\ell\ell^{\prime}}}\Big{(}2\log q-\log(\ell\ell^{ \prime})+\sum_{p\mid\ell\ell^{\prime}}\frac{2\log p}{p+1}\Big{)}\lambda_{f}^{* }(\ell\ell^{\prime})\] \[\ll\log q\sum_{d}|r(d)|^{2}\omega(d)\sum_{\begin{subarray}{c}\ell,\ell^{\prime}\\ (\ell\ell^{\prime},d)=1\end{subarray}}\frac{r(\ell\ell^{\prime})\varpi(\ell \ell^{\prime})}{\sqrt{\ell\ell^{\prime}}}\lambda_{f}^{*}(\ell\ell^{\prime})\] \[\ll\ \log q\prod_{p}\Big{(}1+r(p)^{2}\omega(p)+\frac{2r(p) \omega_{1}^{\prime}(p)}{\sqrt{p}}\Big{)}.\] Therefore, we have \[\frac{1}{\phi^{*}(q)}{\sum_{\chi\bmod q}}^{*}\Big{(}\big{|}L(1/2,f \otimes\chi)R(\chi)\big{|}^{2}+\big{|}L(1/2,g\otimes\chi)R(\chi)\big{|}^{2} \Big{)}\] \[\ll \max_{i=1,2}\ \log q\prod_{p}\Big{(}1+r(p)^{2}\omega(p)+\frac{2r(p) \omega_{i}^{\prime}(p)}{\sqrt{p}}\Big{)}. 
\tag{7.5}\] Combining (7.4) and (7.5), we can choose \[V=\min_{i=1,2}\frac{1}{\log q}\prod_{p}\Big{(}1+r(p)^{2}\omega(p)+\frac{r(p) \omega^{\prime}(p)}{\sqrt{p}}\Big{)}^{2}(1+r(p)^{2}\omega(p))^{-1}\Big{(}1+r( p)^{2}\omega(p)+\frac{2r(p)\omega_{i}^{\prime}(p)}{\sqrt{p}}\Big{)}^{-1}.\] We have from [5, Proof of Lemma 7.5] that \[\log\prod_{p}\Big{(}1+r(p)^{2}\omega(p)+\frac{r(p)\omega^{\prime} (p)}{\sqrt{p}}\Big{)}^{2}(1+r(p)^{2}\omega(p))^{-1}\Big{(}1+r(p)^{2}\omega(p) +\frac{2r(p)\omega_{i}^{\prime}(p)}{\sqrt{p}}\Big{)}^{-1}\] \[=\log\prod_{p}\Big{(}1+\frac{r(p)\omega^{\prime}(p)}{\sqrt{p}(1+r (p)^{2}\omega(p))}\Big{)}^{2}\Big{(}1+\frac{2r(p)\omega_{i}^{\prime}(p)}{ \sqrt{p}(1+r(p)^{2}\omega(p))}\Big{)}^{-1}\] \[=\mathcal{L}\sum_{L^{2}\leq p\leq\exp(\log^{2}L)}\frac{2\omega^{ \prime}(p)-2\omega_{i}^{\prime}(p)}{p\log p}+O_{\omega,\omega^{\prime},\delta }\Big{(}\frac{L}{(\log L)^{1+\delta}}\Big{)}\] Since \(\lambda_{f}^{*}(p)\lambda_{g}^{*}(p)(\lambda_{f}^{*}(p)+\lambda_{g}^{*}(p))\lambda_ {f}^{*}(p),\lambda_{f}^{*}(p)\lambda_{g}^{*}(p)(\lambda_{f}^{*}(p)+\lambda_{g}^{* }(p))\lambda_{g}^{*}(p)\leqslant 0\) when \(p\not\in\mathcal{G}\), we have \[\sum_{L^{2}\leq p\leq\exp(\log^{2}L)}\frac{2\omega^{\prime}(p)-2 \omega_{1}^{\prime}(p)}{p\log p} \geqslant\sum_{\begin{subarray}{c}\mathcal{L}^{2}\leqslant p\leqslant \exp(\log^{2}\mathcal{L})\\ p\in\mathcal{G}\end{subarray}}\frac{2\lambda_{f}^{*}(p)\lambda_{g}^{*}(p)( \lambda_{f}^{*}(p)+\lambda_{g}^{*}(p))\lambda_{g}^{*}(p)}{p\log p}\] \[\geqslant\sum_{\mathcal{L}^{2}\leq p\leq\exp(\log^{2}\mathcal{L}) }\frac{2\lambda_{f}^{*}(p)\lambda_{g}^{*}(p)(\lambda_{f}^{*}(p)+\lambda_{g}^{* }(p))\lambda_{g}^{*}(p)}{p\log p}\] \[=\frac{c}{\log L}+O(\frac{1}{(\log L)^{2}}).\] for some positive \(c=2n_{2,2}+2n_{1,3}\) using the notation for \(n_{i,j}\) in [5, Corollary 2.17]. Thus we see that for every prime \(q\) sufficiently large depending on \(f,g\), there exists a non-trivial character \(\chi\bmod q\) such that \[\min_{f,g}(|L(1/2,f\otimes\chi)|,|L(1/2,g\otimes\chi)|)\geqslant\sqrt{V} \geqslant\exp\left(c_{f,g}\sqrt{\frac{\log q}{\log\log q}}\right)\] for some positive \(c_{f,g}\). With \(N=q^{1/360-\delta}\), we can take the constant \[c_{f,g}=\frac{1}{2}(\frac{1}{6\sqrt{10}}\sqrt{1-360\delta}+o(1))\frac{n_{2,2}+ \min(n_{1,3},n_{3,1})}{(n_{4,2}+2n_{3,3}+n_{2,4})^{1/2}}.\] In a generic situation, as in [5, Remark 7.20], i.e. where neither \(f\) nor \(g\) are of polyhedral type (in particular \(\operatorname{Sym}^{\mathrm{k}}f,\operatorname{Sym}^{k}g\) are cuspidal for all \(k\leq 4\)) and if \(\operatorname{Sym}^{k}\pi_{f}\not\cong\operatorname{Sym}^{k}\pi_{g}\) for \(k\leqslant 4\), we see that \[c_{f,g}=\frac{1}{12\sqrt{10}}+o(1).\] ## 8. Simultaneous small values of quadratic twists: Proof of Theorem 4 Let \(f,g\) be holomorphic cusp forms of weight \(\kappa\equiv 0\bmod 4\) for \(SL_{2}(\mathbb{Z})\) and let \(\chi_{d}(n)=\left(\frac{d}{n}\right)\) be the Kronecker symbol. Due to the non-negativity of \(L(1/2,f\otimes\chi_{d})\), small values of \(L(1/2,f\otimes\chi_{d})+L(1/2,g\otimes\chi_{d})\) implies simultaneous small values of \(L(1/2,f\otimes\chi_{d})\) and \(L(1/2,g\otimes\chi_{d})\). Let \(\Phi(x):(0,\infty)\to\mathbb{C}\) be a smooth, compactly supported function. 
From [42, Theorem 1.4], we have for square-free \(\ell\) \[\sideset{}{{}^{*}}{\sum}_{(d,2)=1}^{*}\chi_{8d}(\ell)L(1/2,f\otimes\chi_{8d}) \Phi(\frac{d}{X})=\frac{8X\tilde{\Phi}(1)}{\pi^{2}\sqrt{\ell}}L(1,\operatorname {Sym}^{2}f)Z(1/2,\ell)+O(\ell^{1/2+\epsilon}X^{1/2+\epsilon})\] where the \(*\) now denotes a sum over squarefree integers and \[L(1,\operatorname{Sym}^{2}f)Z(1/2,\ell)= \prod_{p|\ell}\frac{p^{3/2}}{2(p+1)}\Big{(}(1-\frac{\lambda_{f}(p) }{\sqrt{p}}+\frac{1}{p})^{-1}-(1+\frac{\lambda_{f}(p)}{\sqrt{p}}+\frac{1}{p})^{ -1}\Big{)}\] \[\times\prod_{p|2\ell}\Big{(}1+\frac{p}{2(p+1)}\big{(}(1-\frac{ \lambda_{f}(p)}{\sqrt{p}}+\frac{1}{p})^{-1}+(1+\frac{\lambda_{f}(p)}{\sqrt{p}} +\frac{1}{p})^{-1}-2\big{)}\Big{)}.\] We write \(L(1,\operatorname{Sym}^{2}f)Z(1/2,\ell)=:L(1,\operatorname{Sym}^{2}f)Z(1/2,1 )\prod_{p|\ell}h_{f}(p)\) so that \[h_{f}(p) =\frac{p^{3/2}}{2(p+1)}\Big{(}(1-\frac{\lambda_{f}(p)}{\sqrt{p}} +\frac{1}{p})^{-1}-(1+\frac{\lambda_{f}(p)}{\sqrt{p}}+\frac{1}{p})^{-1}\Big{)}\] \[\times\Big{(}1+\frac{p}{2(p+1)}\big{(}(1-\frac{\lambda_{f}(p)}{ \sqrt{p}}+\frac{1}{p})^{-1}+(1+\frac{\lambda_{f}(p)}{\sqrt{p}}+\frac{1}{p})^{ -1}-2\big{)}\Big{)}^{-1}\] \[=\lambda_{f}(p)+O\big{(}\frac{|\lambda_{f}(p)|}{p}\big{)}.\] Thus for \(R(\chi_{8d})=\sum_{n\leq N}\mu(n)r(n)\varpi(n)\chi_{8d}(n)\) with \(r,\varpi\) real multiplicative functions supported on squarefree integers \[\sum_{(d,2)=1}^{*}L(1/2,f\otimes\chi_{8d})|R(\chi_{8d})|^{2}\Phi( \frac{d}{X})\] \[\quad=\frac{8X\tilde{\Phi}(1)}{\pi^{2}}L(1,\operatorname{Sym}^{2 }f)Z(1/2,1)\sum_{d\leq N}r(d)^{2}\omega(d)\sum_{\begin{subarray}{c}n_{1},n_{2 }\leq N/d\\ (n_{1}n_{2},d)=1\end{subarray}}\frac{\mu(n_{1}n_{2})r(n_{1}n_{2})\omega_{1}^{ \prime}(n_{1}n_{2})}{\sqrt{n_{1}n_{2}}}\] \[\quad\quad+O(N^{5/2+\epsilon}X^{1/2+\epsilon}) \tag{8.1}\] where \(\omega(n)=|\varpi(n)|^{2}\) and \(\omega_{1}^{\prime}(n)=\varpi(n)h_{f}(n)\). A similar expression holds when \(f\) is replaced by \(g\). Now it remains to find \(r(n)\) and \(\varpi(n)\). As usual, let \(r(n)\) be multiplicative supported on squarefree and satisfying (7.3) with \(\mathcal{L}=\sqrt{a_{\omega}\log N\log\log N}\) where \(a_{\omega}\) is defined as in [5, eq (7.2)]. Let \(\tilde{\mathcal{G}}:=\{n\geq 1:h_{f}(n)h_{g}(n)\neq 0,\operatorname{sgn}(h_{f}(n))= \operatorname{sgn}(h_{g}(n))\}\) and define \(\varpi\) as \[\varpi(p)=\begin{cases}h_{f}(p)h_{g}(p)(h_{f}(p)+h_{g}(p)),&p\in\tilde{ \mathcal{G}},\\ 0,&p\not\in\tilde{\mathcal{G}}.\end{cases}\] Then similarly as before, we can evaluate the \(d,n_{1},n_{2}\)-sum in (8.1) as \[(1+o(1))\prod_{p}\Big{(}1+r(p)^{2}\omega(p)-\frac{2r(p)\omega_{1}^{\prime}(p)} {\sqrt{p}}\Big{)}.\] With \(N=X^{1/5-\delta}\) we see that (8.1) becomes \[\frac{8X\tilde{\Phi}(1)}{\pi^{2}}L(\operatorname{Sym}^{2}f,1/2)Z(1/2,1)(1+o(1)) \prod_{p}\Big{(}1+r(p)^{2}\omega(p)-\frac{2r(p)\omega_{1}^{\prime}(p)}{\sqrt{p }}\Big{)}+O(X^{1-5\delta/2+\epsilon}).\] By standard computations (e.g. see [43]), we have \[\sideset{}{{}^{*}}{\sum}{}_{(d,2)=1}|R(\chi_{8d})|^{2}\Phi(\tfrac{d}{X})\sim cX \prod_{p}\big{(}1+r(p)^{2}\omega(p))\] for some positive constant \(c\). Thus, since \(h_{f}(p)=\lambda_{f}(p)+O(p^{-1+\theta})\) with \(\theta=7/64\), we have from [5, Corollary 2.17, Lemma 7.19, proof of Lemma 7.5] that our ratio of mean values is \[\ll\prod_{p}\Big{(}1-\frac{2r(p)\omega_{1}^{\prime}(p)}{\sqrt{p}(1+r(p)^{2} \omega(p))}\Big{)}\leqslant\exp\Big{(}-(n_{3,1}+n_{2,2}+o(1))\frac{\mathcal{L }}{2\log\mathcal{L}}\Big{)}\] with \(n_{i,j}\) defined as in [5, Corollary 2.17]. 
Therefore, we see that there exists \(d\) such that \[\max(L(1/2,f\otimes\chi_{8d}),L(1/2,g\otimes\chi_{8d}))\leqslant\exp\Big{(}- \tilde{c}_{f,g}(\sqrt{\frac{1}{5}-\delta})\sqrt{\frac{\log X}{\log\log X}} \Big{)}.\] where positive \(\tilde{c}_{f,g}=\frac{\min\big{(}n_{3,1},n_{1,3}\big{)}+n_{2,2}}{\sqrt{a_{ \omega}}}+o(1)>0\). In a generic situation, where neither \(f\) nor \(g\) are of polyhedral type (in particular \(\operatorname{Sym}^{k}f,\operatorname{Sym}^{k}g\) are cuspidal for all \(k\leq 4\)) and if \(\operatorname{Sym}^{k}\pi_{f}\not\cong\operatorname{Sym}^{k}\pi_{g}\) for \(k\leq 4\), then \(\tilde{c}_{f,g}=1+o(1)\) using [5, eq. (7.55)] for \(a_{\omega}\).
2305.01656
Probabilistic Formal Modelling to Uncover and Interpret Interaction Styles
We present a study using new computational methods, based on a novel combination of machine learning for inferring admixture hidden Markov models and probabilistic model checking, to uncover interaction styles in a mobile app. These styles are then used to inform a redesign, which is implemented, deployed, and then analysed using the same methods. The data sets are logged user traces, collected over two six-month deployments of each version, involving thousands of users and segmented into different time intervals. The methods do not assume tasks or absolute metrics such as measures of engagement, but uncover the styles through unsupervised inference of clusters and analysis with probabilistic temporal logic. For both versions there was a clear distinction between the styles adopted by users during the first day/week/month of usage, and during the second and third months, a result we had not anticipated.
Oana Andrei, Muffy Calder, Matthew Chalmers, Alistair Morrison
2023-05-01T21:17:01Z
http://arxiv.org/abs/2305.01656v1
# Probabilistic Formal Modelling to Uncover and Interpret ###### Abstract We present a study using new computational methods, based on a novel combination of machine learning for inferring admixture hidden Markov models and probabilistic model checking, to uncover interaction styles in a mobile app. These styles are then used to inform a redesign, which is implemented, deployed, and then analysed using the same methods. The data sets are logged user traces, collected over two six-month deployments of each version, involving thousands of users and segmented into different time intervals. The methods do not assume tasks or absolute metrics such as measures of engagement, but uncover the styles through unsupervised inference of clusters and analysis with probabilistic temporal logic. For both versions there was a clear distinction between the styles adopted by users during the first day/week/month of usage, and during the second and third months, a result we had not anticipated. ## 1 Introduction Menu driven interfaces are often designed to fulfill perceived or expected end user needs and interaction styles, yet in practice users may adopt numerous styles, some of them unanticipated by designers. This may be due to many factors, including software appropriation [18, 19, 44], the time since they started using the software (e.g. the first week or after six months usage), intentions concerning that particular use (e.g. a quick use, or a long and thorough use), or state of mind (e.g. distracted or intentional), etc. The result is interaction styles can vary both from user to user [52] and, over time, for each individual user, within and between interaction sessions. Consequently, designers may wish to redesign an interface in the light of how users have been using their system over time. Current tools for studying interaction include qualitative methods, such as interviews, think-alouds, and direct observations, can help uncover users' behaviours and preferences, but these procedures are expensive to carry out with large user populations. Restricting to smaller sample sizes might miss some forms of activity, and might also be biased culturally, e.g., if based on users local to the designers. Quantitative methods can include many more users -- maybe literally _all_ the users -- but most existing methods rely on modelling assumptions made in advance, e.g. assumed tasks that users carry out, or on relatively simple metrics. For example, they may focus on the statistics of occurrence of basic features such as time in app and screens visited, or on task-based measures [21]. We describe a study of user-centred redesign of a mobile app, that uses a new quantitative approach for studying interaction. The data are logged user interaction traces, extracted from a population of app users (in our case, all users). We segmented the traces into several data sets according the time intervals: 1st day, 1st week, 1st, 2nd, and 3rd month of usage, so we could observe if styles correlate with length of engagement. We did not pre-suppose tasks, but logged all interactions that change user views. From these data sets we inferred computational models using machine learning (ML) unsupervised clustering methods. A novelty of our approach is that we used probabilistic temporal logic properties to interrogate the inferred models, and then interpreted the results, using inductive coding, to evaluate how well the design supports the interaction styles we uncovered. 
This informed a redesign of the interface, which we then deployed and studied in the same way. The study involved two design iterations of the hierarchical menu interface for AppTracker [37], which allows its users to keep track of the usage of their device, and can be thought of as an instrument for _personal informatics_ [51, 32]. Each design, which we call AppTracker1 and AppTracker2, was deployed for at least 6 months, involving thousands of users. The work presented here represents a rare and long-term collaboration over several years, between researchers in human-computer interaction evaluation, mobile app design, machine learning, and model checking. The paper is organised as follows. The next section contains an overview of AppTracker and the study design. In Sect. 3 we give the main results, which include the interaction styles uncovered in AppTracker1, the redesigned interface for AppTracker2, the interaction styles uncovered in AppTracker2, and a comparison of the interaction styles uncovered in AppTracker1 and AppTracker2. We reflect on our findings and the role of ML in Sect. 4, related work is in Sect. 5, and conclusions and future work are in Sect. 6.

## 2 Study Design
AppTracker allows users to keep track of and view the usage of all mobile apps on their iPhone or iPad device. It runs in the background, monitoring the opening and closing of all apps installed on the device, as well as every time the device is locked or unlocked. It collects this data and can then display a series of charts and statistics, offering users insight into their behaviour, such as time spent per day on their device, or their most used apps over time. The user interface is based on hierarchical menus, allowing navigation through the menu structure to access charts and summary statistics. AppTracker generates two distinct forms of data to study: i) a user's use of their device, with records of every app they launch, and ii) user interactions _within the AppTracker application itself_, such as button clicks and screen changes. The former type of data has been analysed in previous work [37]; this study is based on the latter type of data.

### Overview
Figure 1 outlines the flow of the study. AppTracker1 user interactions were logged across a large population of users, traces were segmented into time series data sets, and admixture Markov models were inferred from those data sets using an ML clustering algorithm. A key concept for each Markov model is the inferred _activity patterns_ and the probabilities to transition between them; together these encapsulate common observed temporal behaviours shared across a set of logged user traces. Each activity pattern is a discrete-time Markov chain, in which observed variables label the AppTracker states, and each pattern corresponds to a latent state in the admixture Markov model of all interaction behaviour. We analysed the models' activity patterns using a set of (parameterised) probabilistic temporal logic properties and model checking [5]. We used inductive coding [46] to categorise the results, e.g. numbers of steps to reach a state, session lengths, predominant states, and then interpreted the results to identify the activity patterns. We used another set of temporal logic properties to produce the long run likelihoods and probabilities to transition between activity patterns, all of which contributes to a description of the interaction styles.
We considered how well AppTracker1 supported users' interaction styles and addressed deficiencies in a redesign to feed into the next version, AppTracker2, which was then deployed. We iterated the analytics again, based on AppTracker2 user traces, and concluded the study by comparing the interaction styles in AppTracker1 and AppTracker2.

Figure 1: Study design

### Time Series Data
Each logged user trace is a sequence of event labels. Each trace consists of many user interaction sessions, which start when the application is launched or brought to the foreground (denoted by event startS) and end when the application is closed or put in the background (denoted by event stopS). User traces are segmented by time intervals \([t_{1},t_{2}]\) such that the first session of the segment occurs on or after time-stamp \(t_{1}\) and the last session of the segment occurs before time-stamp \(t_{2}\); the whole trace may extend beyond time-stamp \(t_{2}\).

### Interface for AppTracker1
On launching AppTracker1, the main menu screen offers four main options (Fig. 2(a)); from top to bottom they are:
* _Overall Usage_ contains summaries of all the data recorded since AppTracker1 was installed and opens the views OverallUsage and Stats (Fig. 2(b)).
* _Last 7 Days_ opens the view Last7Days and displays a chart of activity of the user's 5 most used apps during the last 7 days.
* _Select by Period_ opens the view SelectPeriod and shows statistics for a selected period of time, e.g. which apps were used most since Friday, the daily time spent on Facebook over the last month, or hourly device usage on Monday (Fig. 2(c)).
* _Settings_ allows a user to start and stop the tracker, or to reset their recorded data.
There are 16 user-initiated events that switch between views (the name of the event is the resulting view); see the state diagram in Fig. 3. States are grouped into Summary states for viewing summary or overall usage data; Specific states for viewing drilled-down, specific data; Session-related states marking the start and the end of a session; and remaining states. Note that the state Last7Days is a summary state or a specific state depending on the context of use. Logged interaction data are stored in a MySQL database using the SGLog framework [25] and processed using JavaScript to obtain user traces in JSON format. AppTracker1 was first released in August 2013 and downloaded over 35,000 times. Our data sets are taken from a sample of 322 user traces during 2013 and 2014. The maximum session count over all the traces is 129; the minimum was limited to 5.

Figure 2: Screenshots from AppTracker1: (a) The main menu view corresponds to the state Main; (b) The stats view corresponds to the state Stats; (c) The daily device usage view corresponds to the state ChartOverall when the selected period is Monday 4 November.
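The segmentation rule and the five-session cut-off are simple enough to state as code. Below is a minimal sketch in Python; the in-memory `Session`/`Trace` representation is our own illustration (the study stored traces in MySQL and processed them into JSON), but the selection rule is the one stated above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Session:
    start: float          # time-stamp of the startS event
    events: List[str]     # event labels up to and including stopS

@dataclass
class Trace:
    user_id: str
    sessions: List[Session]

def segment(trace: Trace, t1: float, t2: float) -> Trace:
    """Keep the sessions whose start time-stamp falls in [t1, t2);
    the trace itself may extend beyond t2, as noted in the text."""
    return Trace(trace.user_id,
                 [s for s in trace.sessions if t1 <= s.start < t2])

def segment_population(traces: List[Trace], t1: float, t2: float,
                       min_sessions: int = 5) -> List[Trace]:
    """Segment every trace and drop those with fewer than five
    sessions, the empirical cut-off used in the study."""
    segmented = (segment(t, t1, t2) for t in traces)
    return [t for t in segmented if len(t.sessions) >= min_sessions]
```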
### Computational Methods
We used ML clustering methods to infer admixture Markov models, first defined in [1], based on first-order auto-regressive hidden Markov models (AR-HMMs) [38]. Admixture models permit interleaved variation, so we can model users that may switch interaction styles both _within_ and _between_ sessions. We include some definitions here for completeness.

A _discrete-time Markov chain_ (DTMC) is a tuple \(\mathcal{D}=(S,s_{0},\mathtt{P},\mathcal{L})\) where: \(S\) is a set of states; \(s_{0}\in S\) is the initial state; \(\mathtt{P}:S\times S\rightarrow[0,1]\) is the transition probability function such that for all states \(s\in S\) we have \(\sum_{s^{\prime}\in S}\mathtt{P}(s,s^{\prime})=1\); and \(\mathcal{L}:S\to 2^{\mathcal{A}}\) is a labelling function associating to each state \(s\) in \(S\) a set of valid atomic propositions from a set \(\mathcal{A}\). A _path_ (or execution) of a DTMC is a non-empty sequence \(s_{0}s_{1}s_{2}\ldots\) where \(s_{i}\in S\) and \(\mathtt{P}(s_{i},s_{i+1})>0\) for all \(i\geq 0\). A transition is also called a _time-step_.

A _first-order auto-regressive hidden Markov model_ (AR-HMM) [38] is a tuple \((\mathcal{X},\mathcal{Y},\pi,A,B)\) where: \(\mathcal{X}\) is the set of hidden (or latent) states \(\mathcal{X}=\{1,\ldots,K\}\); \(\mathcal{Y}\) is the set of observed states generated by hidden states; \(\pi:\mathcal{X}\rightarrow[0,1]\) is an initial distribution with \(\sum_{x\in\mathcal{X}}\pi(x)=1\); \(A:\mathcal{X}\times\mathcal{X}\rightarrow[0,1]\) is the transition probability matrix, such that for all \(x\in\mathcal{X}\) we have \(\sum_{x^{\prime}\in\mathcal{X}}A(x,x^{\prime})=1\); and \(B:\mathcal{X}\times\mathcal{Y}\times\mathcal{Y}\rightarrow[0,1]\) is the observation probability matrix, such that for all \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\) we have \(\sum_{y^{\prime}\in\mathcal{Y}}B(x,y,y^{\prime})=1\).

Now let \(\mathcal{P}\) be a population of \(M\) user traces over \(n\) different types of event labels, \(\mathcal{A}\) the set of the labels of all events occurring in \(\mathcal{P}\), and \(K\) a positive integer. A **generalised population admixture model with \(K\) components**, or **GPAM(\(K\))**, for the user trace population \(\mathcal{P}\) is a tuple \((\mathcal{X},\mathcal{Y},\pi,A,B,\mathcal{L})\) where \((\mathcal{X},\mathcal{Y},\pi,A,B)\) is an AR-HMM with \(|\mathcal{X}|=K\) and \(|\mathcal{Y}|=n\), and \(\mathcal{L}:\mathcal{Y}\rightarrow\mathcal{A}\) is the labelling function mapping a unique event name to each observed state in \(\mathcal{Y}\). A pictorial representation of a GPAM(2) is given in Fig. 4. For any GPAM \((\mathcal{X},\mathcal{Y},\pi,A,B,\mathcal{L})\) and a latent state \(x\in\mathcal{X}\), the tuple \((\mathcal{Y},\mathcal{L}^{-1}(\mathtt{startS}),B(x),\mathcal{L})\) is a discrete-time Markov chain (DTMC) called an **activity pattern**.

For each input data set \(\mathcal{P}\) of user traces, we compute \(n\times n\) transition-occurrence matrices for each trace \(\alpha\), such that matrix position \((i,j)\) is the number of times the subsequence \(\alpha_{i}\alpha_{j}\) (i.e. two adjacent event labels) occurs in \(\alpha\). The resulting transition-occurrence matrices are the input data for ML; we employ the Baum--Welch clustering algorithm [49], which uses the local non-linear optimisation Expectation-Maximisation (EM) algorithm [17] for finding maximum likelihood parameters of observing each trace, restarting the algorithm since the log-likelihood may have multiple local maxima.
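The transition-occurrence matrices that feed the clustering algorithm are plain bigram counts over adjacent event labels. A minimal sketch, with an illustrative toy vocabulary (the real input is the set of 16-18 AppTracker event labels):

```python
import numpy as np

def occurrence_matrix(trace_events, vocab):
    """n x n matrix whose (i, j) entry counts how often label i is
    immediately followed by label j in the trace."""
    idx = {label: k for k, label in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)), dtype=int)
    for a, b in zip(trace_events, trace_events[1:]):
        counts[idx[a], idx[b]] += 1
    return counts

# Toy example; these labels are illustrative only:
vocab = ["startS", "Main", "Stats", "stopS"]
trace = ["startS", "Main", "Stats", "Main", "stopS",
         "startS", "Main", "stopS"]
print(occurrence_matrix(trace, vocab))
```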
We used the probabilistic model checker PRISM [31] to analyse temporal properties expressed in rPCTL, an extension of Probabilistic Computation Tree Logic (PCTL*) [31, 5] with rewards; an overview can be found in Appendix B. The set of parameterised properties we defined and used is contained in Table 1. These temporal properties express behaviour _within_ an activity pattern or behaviour that involves _several_ activity patterns. The properties of the latter type are StateToPattern and LongRunPattern, and we used them for model checking on GPAMs only. The rest of the properties were used on activity patterns as DTMCs.

First we model-check the temporal properties VisitProbInit, StepCountInit, VisitCountInit, SessionLength, and SessionCount for different, incremental values of \(K\) for all states. Optionally, we use the properties VisitProbBtw and StepCountBtw when it is too difficult to interpret the results from the other properties on activity patterns. Then the properties StateToPattern and LongRunPattern help us identify more nuanced characteristics, by considering whether an activity pattern changes within a session and the long run probability of each activity pattern, respectively. Other PCTL* properties considered, but not included in this paper, helped us analyse whether some particular states lead to the end of a session in fewer steps than other states, and whether there are correlations between state formulae (over states and/or patterns).

Figure 3: AppTracker1 state diagram. The vertical layout of the menus corresponds to a left-to-right traversal of the states in the dotted boxes.

Figure 4: Pictorial representation of a GPAM(2) with the latent states \(x_{1}\) and \(x_{2}\). The two activity patterns are the DTMCs in each box; there are four observed states \(y_{0},y_{1},y_{2},y_{3}\). Transition probabilities are indicated by the thickness of transitions.

Observed states are grouped according to their expected function and purpose within the application, for example Summary and Specific for AppTracker. This grouping is done iteratively: an initial grouping is proposed by the designers; then, after analysing the results of the temporal properties within each activity pattern, the analysts and designers may, together, revise the grouping. Note that such state groups may overlap. The source code for the quantitative analysis is available online at https://github.com/oanaandrei/temporalanalytics.

### Inductive Coding
While the PRISM model checking results are quantitative, interpretation of those results is subjective, and therefore similar to the interpretation of qualitative data. We adopt the general inductive approach for analysing qualitative evaluation data [46], which aims to allow research findings to emerge from the data, rather than a deductive approach that tests a hypothesis. Coding was carried out independently by two evaluators (authors Andrei and Calder), with several revisions and refinements, then checked for clarity (all four authors), with stakeholder checks (Morrison and Chalmers as designers). We note that many hours were spent choosing meaningful labels. Since the labels themselves have inherent meaning, the labels were changed several times throughout this work, as we gained a deeper understanding of all the nuances of the activity patterns. We suggest this is both a strength and a weakness of our approach.

## 3 Study Results
We give an overview of results, following the study flow in Fig. 1. For brevity we report results only for GPAM(2) models; details for GPAM(3) models are available in Appendix C. We refer to results for models from data sets for the 1st day, 1st week, and 1st month as results for _early days usage_, and for the 2nd and 3rd month as _experienced usage_. For inferring the models, the Baum-Welch algorithm was implemented in Java and run on a 2.8GHz Intel Xeon (single thread, one core); it was restarted 200 times with a maximum of 100 iterations per restart.
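The restart strategy guards against EM stopping in a poor local maximum. The sketch below illustrates it on a deliberate simplification: a plain _mixture_ of \(K\) Markov chains fitted to the per-trace occurrence matrices, whereas the GPAM is an _admixture_ in which the latent component may change within a trace. The function names are ours and the authors' implementation is in Java; this is only a sketch of the restart idea, not the study's tool chain.

```python
import numpy as np
from scipy.special import logsumexp

def em_fit(counts, K, n_iter=100, rng=None, eps=1e-6):
    """One EM run fitting a mixture of K Markov chains to the per-trace
    transition-occurrence matrices `counts` (shape M x n x n)."""
    if rng is None:
        rng = np.random.default_rng()
    M, n, _ = counts.shape
    log_pi = np.full(K, -np.log(K))                 # mixing weights
    T = rng.dirichlet(np.ones(n), size=(K, n))      # K row-stochastic chains
    for _ in range(n_iter):
        # E-step: log-responsibility of component k for trace m
        log_p = log_pi + np.einsum('mij,kij->mk', counts, np.log(T + eps))
        norm = logsumexp(log_p, axis=1, keepdims=True)
        r = np.exp(log_p - norm)
        # M-step: update mixing weights and transition matrices
        log_pi = np.log(r.mean(axis=0) + eps)
        T = np.einsum('mk,mij->kij', r, counts) + eps
        T /= T.sum(axis=2, keepdims=True)
    return norm.sum(), log_pi, T                    # log-likelihood at last E-step

def fit_with_restarts(counts, K, restarts=200, n_iter=100, seed=0):
    """Keep the best of `restarts` random initialisations, mirroring
    the 200 restarts x 100 iterations schedule used in the study."""
    rng = np.random.default_rng(seed)
    runs = (em_fit(counts, K, n_iter, rng) for _ in range(restarts))
    return max(runs, key=lambda run: run[0])
```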
As example performance, for the data set consisting of the first month of usage, the algorithm took 3.2 min for \(K=2\), 4.1 min for \(K=3\), and 5.3 min for \(K=4\). We started by model checking the properties VisitProbInit, StepCountInit, VisitCountInit, SessionLength, and SessionCount for \(K=2\) for all states. We selected a single value for \(N\), typically somewhere between 10 and 150; in our experience 50 is a good starting value. Note that the result from the model checker may not be a number (either a probability or a positive real), due to state unreachability, a filter satisfying no states, or the iterative method not converging within 100,000 iterations (the limit chosen for the PRISM model checker). In this case the result is given as "--".

\begin{table}
\begin{tabular}{l l}
\hline\hline
**Name** & **Formula and informal description** \\
\hline
VisitProbInit & \(\mathtt{P}_{=?}[\,\mathrm{true}\ \mathtt{U}^{\leq N}\,(y=j)\,]\): probability to reach \(j\) from the initial state within \(N\) steps \\
StepCountInit & \(\mathtt{R}^{\mathrm{steps}}_{=?}[\,\mathtt{F}\,(y=j)\,]\): expected number of steps to reach \(j\) from the initial state \\
VisitCountInit & \(\mathtt{R}^{\mathrm{states}_{j}}_{=?}[\,\mathtt{C}^{\leq N}\,]\): expected number of visits to \(j\) within \(N\) steps \\
\hline
SessionLength & \(\mathtt{R}^{\mathrm{steps}}_{=?}[\,\mathtt{F}\,(y=\mathtt{stopS})\,]\): expected number of steps until the end of the session \\
SessionCount & \(\mathtt{R}^{\mathrm{sessions}}_{=?}[\,\mathtt{C}^{\leq N}\,]\): expected number of sessions within \(N\) steps \\
\hline
VisitProbBtw & \(\mathit{filter}(\mathtt{state},\ \mathtt{P}_{=?}[\,(\neg\mathtt{stopS})\ \mathtt{U}^{\leq N}\,(y=j_{2})\,],\ (y=j_{1}))\): probability of reaching observed state \(j_{2}\) from \(j_{1}\) within the same session \\
StepCountBtw & \(\mathit{filter}(\mathtt{state},\ \mathtt{R}^{\mathrm{steps}}_{=?}[\,\mathtt{F}\,(y=j_{2})\,],\ (y=j_{1}))\): expected number of steps to reach state \(j_{2}\) from \(j_{1}\) \\
\hline
StateToPattern & \(\mathtt{P}_{\geq 1}[\,\mathtt{F}\,(x=i_{1}\wedge y=j)\,]\wedge\mathtt{P}_{\geq 1}[\,\mathtt{G}\,((x=i_{1}\wedge y=j)\Rightarrow\mathtt{P}_{\geq p}[\,(x=i_{1}\wedge\neg\mathtt{stopS})\ \mathtt{U}\,(x=i_{2})\,])\,]\): likelihood of observed state \(j\) in activity pattern \(i_{1}\) leading to a change to activity pattern \(i_{2}\) within the same session \\
StateToStop & \(\mathtt{P}_{\geq 1}[\,\mathtt{F}\,(x=i\wedge y=j)\,]\wedge\mathtt{P}_{\geq 1}[\,\mathtt{G}\,((x=i\wedge y=j)\Rightarrow\mathtt{P}_{\geq p}[\,(x=i)\ \mathtt{U}\ \mathtt{stopS}\,])\,]\): likelihood of observed state \(j\) in activity pattern \(i\) leading to the end of a session, without changing the activity pattern \\
LongRunPattern & \(\mathtt{S}_{=?}[\,x=i\,]\): probability of being in activity pattern \(i\) in the long run \\
\hline\hline
\end{tabular}
\end{table}
Table 1: Reward-based PCTL (rPCTL) properties, parameterised by: \(N\), a positive integer value for the number of time steps; \(j,j_{1},j_{2}\), positive integer values denoting observed state identifiers; \(i,i_{1},i_{2}\), integer values between 1 and \(K\) denoting the hidden state identifiers (activity patterns); and \(p\in[0,1]\), a probability.
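PRISM computes such properties automatically; purely to make the semantics concrete, here is a minimal sketch of the numerics behind three of them on a generic DTMC transition matrix \(P\) (rows summing to one). The code is illustrative only and not part of the study's tool chain. Note that the expected step count is finite only when the target is reached with probability 1; when that fails the value is infinite, which mirrors one way the model checker can fail to return a number.

```python
import numpy as np

def visit_prob_init(P, s0, target, N):
    """P=? [ true U<=N (y=j) ]: probability of reaching `target`
    from `s0` within N steps."""
    p = np.zeros(len(P))
    p[target] = 1.0
    for _ in range(N):
        p = P @ p
        p[target] = 1.0          # once reached, stay "reached"
    return p[s0]

def step_count_init(P, s0, target):
    """R{steps}=? [ F (y=j) ]: expected number of steps to reach
    `target`, assuming it is reached with probability 1."""
    n = len(P)
    others = [s for s in range(n) if s != target]
    Q = P[np.ix_(others, others)]
    h = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    full = np.zeros(n)
    full[others] = h
    return full[s0]

def long_run(P):
    """S=? [ x=i ]: stationary distribution, assuming an ergodic
    chain (solve pi P = pi with sum(pi) = 1 by least squares)."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]
```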
To aid interpretation, results are ordered "best" to "worst" as follows: greatest to least value for VisitProbInit, VisitProbBtw, and VisitCountInit, and least to greatest value for StepCountInit and StepCountBtw. This ordering reflects the following judgments: a higher probability to visit a state, a higher number of state visits, and fewer steps to reach a state are all indicators of greater (user) interest in a state or a pair of states. We encode this ordering visually using the colour blue for the "best" results and purple for the "worst" results.

### AppTracker1 Interaction Styles
Tables 2 and 3 include activity pattern results for the temporal properties VisitProbInit, VisitCountInit, and StepCountInit analysed on GPAM(2) models inferred from different time intervals of usage. We say that a state is _predominant_ within an activity pattern if: (i) the probability to reach it computed using VisitProbInit is greater than 0.5, the visit count computed by VisitCountInit is greater than 1, and the number of time steps to reach it computed by StepCountInit is lower than the time bound \(N=50\) chosen for the temporal properties; and (ii) there is no other activity pattern in the same model where the VisitCountInit and StepCountInit results score at least three times better. An activity pattern is _centred_ on a set of states (or the state grouping they belong to) when those states are predominant within the activity pattern. Activity patterns are labelled according to two dimensions: _usage intensity_ and _predominant states_. Values in the usage intensity type are based on abstractions of the frequency and length of sessions, as well as on the results returned by the properties VisitProbInit, VisitCountInit, StepCountInit, VisitProbBtw, and StepCountBtw for the predominant states in relation to the usage intensity type. Table 4 shows our workings through the inductive coding process:
* First, we consider session characteristics and use the Jenks natural breaks optimisation method [28] to determine the best arrangement of session count values into three categories, and similarly for the session lengths.
* Second, we give an initial categorisation of the activity patterns based on session characteristics and predominant states. We can see immediately that there is a correlation between fewer/longer sessions, and between more numerous/shorter sessions. We note that many/long and few/short combinations did not occur. The state identifiers upon which each activity pattern is centred are listed in decreasing order of their results: the better the result for a state in a pattern (the higher the probability to reach it, the higher the visit count, the fewer the steps to reach it), the higher the predominance of that particular state compared to other states. We refer to a subset of states as the TopLevelMenu; these are states that are reached in one or two button presses on average (corresponding to time steps in the respective DTMC).
* Third, we conclude that the usage intensity type consists of three values that we call Browsing, Glancing, and Focussing.
* Finally, we combine usage intensity and predominant state group type to assign four activity pattern labels. Note that two possible combinations are not present: GlancingSummary and BrowsingSpecific.
We can see a clear split between the activity patterns for early days usage and experienced usage. After labelling the activity patterns, a further analysis based on model checking is carried out to investigate the relationships between them using the temporal properties LongRunPattern and StateToPattern, and hence find out more about the interaction styles.
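The predominance test just defined is mechanical, so it can be stated as code. In this sketch the data layout (per-pattern dictionaries of `(visit_prob, visit_count, step_count)` triples) is our own, we read condition (ii) as requiring both measures to be three times better, and the example values are the Stats column of Table 2 for the \([0,1]\) interval.

```python
def predominant(state, pattern, results, N=50):
    """Condition (i): reached with probability > 0.5, visited more
    than once, reached in fewer than N steps.  Condition (ii): no
    other pattern scores at least three times better on both
    VisitCountInit (higher) and StepCountInit (lower)."""
    prob, visits, steps = results[pattern][state]
    if not (prob > 0.5 and visits > 1 and steps < N):
        return False
    for other, per_state in results.items():
        if other != pattern:
            _, o_visits, o_steps = per_state[state]
            if o_visits >= 3 * visits and 3 * o_steps <= steps:
                return False
    return True

# Stats in the [0,1] GPAM(2) model (values from Table 2):
results = {"AP1": {"Stats": (0.81, 1.74, 29.82)},
           "AP2": {"Stats": (0.99, 5.77, 12.01)}}
print(predominant("Stats", "AP2", results))  # True
print(predominant("Stats", "AP1", results))  # also True: AP2 is better,
# but not three times better on StepCountInit (3 * 12.01 > 29.82)
```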
The long run probabilities for activity patterns are shown in Fig. 5(a), and the probabilities that, for a given activity pattern, a state leads to an activity pattern change (within a session) are given in Fig. 6(a). The latter shows that for the early days usage, all states are highly likely (close to probability 1) to transition from GlancingSpecific to BrowsingSummary behaviour, and less likely the other way around (probability between 0.5 and 0.75), except for Stats, for which the probability is around 0.2, and for ChartOverall and ChartAppsInPeriod, with even lower probability. These two exceptions correlate with the results highlighted in the StateToStop analysis (not shown here), which indicate that once in BrowsingSummary in any of the states Stats, ChartOverall, and ChartAppsInPeriod, it is unlikely to move to another activity pattern before the end of the session. In the experienced user models, it is more likely to move from FocussingSummary to FocussingSpecific than the other way around, but overall, it is unlikely to move between these two patterns within a session.

In conclusion, the interaction styles of AppTracker1 based on GPAM(2) models are summarised as follows. Early days usage contained Glancing and Browsing patterns, the latter possibly because users explore all the screens and features offered by the app. For experienced users, usage is _focussed_, with FocussingSpecific being a little more likely. Glancing patterns involve the shortest sessions in terms of screen view counts over all patterns, and also appear as the most numerous sessions. We note similarity in these Glancing patterns to the micro-usages discussed in [22], defined as "brief bursts of interaction with applications".

An important question to ask is: _does the latent structure we uncover simply reflect the top level menu structure?_ To answer it we looked at GPAM(3) models (see Appendix C), since there are three main menu options (excluding Settings). If the activity patterns simply reflect the menu structure, then we would expect each of the patterns in GPAM(3) to be centred on one of the above states, with very low correlations between pairs of those states. This was not the case. In all five GPAM(3) models we found activity patterns with either Focussing or Glancing usage intensity, centred around Last7Days and either SelectPeriod or AppsInPeriod; there was no model with one pattern centred on either SelectPeriod or AppsInPeriod but not Last7Days, and another pattern centred on Last7Days but not on SelectPeriod or AppsInPeriod.

\begin{table}
\begin{tabular}{|c|c|c c|c c|c c|c c|c c|}
\hline
 & \multirow{2}{*}{**Time interval**} & \multicolumn{2}{c|}{**OverallUsage**} & \multicolumn{2}{c|}{**Last7Days**} & \multicolumn{2}{c|}{**SelectPeriod**} & \multicolumn{2}{c|}{**Stats**} & \multicolumn{2}{c|}{**AppsInPeriod**} \\
\cline{3-12} & & **AP1** & **AP2** & **AP1** & **AP2** & **AP1** & **AP2** & **AP1** & **AP2** & **AP1** & **AP2** \\
\hline
\multirow{5}{*}{VisitProbInit} & [0,1] & 0.94 & 0.99 & 0.80 & 0.89 & 0.80 & 0.42 & 0.81 & 0.99 & 0.45 & 0.13 \\
 & [0,7] & 0.67 & 0.99 & 0.89 & 0.88 & 0.88 & 0.48 & 0.55 & 0.99 & 0.67 & 0.21 \\
 & [0,30] & 0.59 & 0.99 & 0.91 & 0.92 & 0.90 & 0.56 & 0.46 & 0.98 & 0.76 & 0.29 \\
 & [30,60] & 0.87 & 0.99 & 0.98 & 0.31 & 0.93 & 0.00 & 0.45 & 0.96 & 0.77 & 0.00 \\
 & [60,90] & 0.91 & 0.99 & 0.97 & 0.02 & 0.96 & 0.10 & 0.56 & 0.91 & 0.83 & 0.09 \\
\hline
\multirow{5}{*}{VisitCountInit} & [0,1] & 3.54 & 14.58 & 1.63 & 2.24 & 1.92 & 0.72 & 1.74 & 5.77 & 0.95 & 0.28 \\
 & [0,7] & 1.19 & 15.25 & 2.21 & 2.09 & 2.75 & 0.87 & 0.84 & 5.27 & 2.11 & 0.48 \\
 & [0,30] & 0.89 & 15.55 & 2.39 & 2.52 & 2.62 & 1.19 & 0.65 & 4.75 & 1.95 & 0.69 \\
 & [30,60] & 2.28 & 14.27 & 5.06 & 0.40 & 4.29 & 0.01 & 0.80 & 4.04 & 4.39 & 0.01 \\
 & [60,90] & 3.00 & 14.73 & 4.48 & 0.02 & 4.61 & 0.10 & 1.28 & 3.63 & 5.64 & 0.82 \\
\hline
\multirow{5}{*}{StepCountInit} & [0,1] & 16.55 & 4.53 & 30.27 & 22.75 & 30.45 & 90.36 & 29.82 & 12.01 & 83.40 & 332.40 \\
 & [0,7] & 44.55 & 3.63 & 22.14 & 23.24 & 23.27 & 75.96 & 63.24 & 12.58 & 45.55 & 210.12 \\
 & [0,30] & 56.55 & 3.44 & 20.07 & 19.28 & 21.53 & 59.94 & 18.13 & 13.68 & 35.54 & 145.35 \\
 & [30,60] & 23.41 & 2.15 & 9.02 & 137.09 & 18.90 & 5483.99 & 85.45 & 15.97 & 34.45 & 25915.18 \\
 & [60,90] & 19.55 & 2.23 & 10.46 & 2269.78 & 15.49 & 483.09 & 61.19 & 21.01 & 28.65 & 532.74 \\
\hline
\end{tabular}
\end{table}
Table 2: AppTracker1 activity pattern analytics. Results for the properties VisitProbInit, VisitCountInit, and StepCountInit instantiated with \(N=50\), the states OverallUsage, Last7Days, SelectPeriod, Stats, AppsInPeriod, and the two activity patterns, which we call AP1 and AP2, on GPAM(2) models for five time intervals. Each row corresponds to a GPAM(2) model learned from the segmented data set for the respective time interval. Blue indicates the best result and purple the worst for a particular state across the activity patterns in a GPAM(2) model for a particular time interval.

Figure 5: AppTracker1 and AppTracker2. Visualisation of probabilities of being in each activity pattern in the long run in GPAM(2) models for different time intervals; the entries are ordered decreasingly for each time interval. These results are based on model checking the temporal property LongRunPattern for each activity pattern.

### Redesigning the Interface for Experienced Usage
It became evident that AppTracker's top-level menu was not a good fit for users' interaction styles. For example, we uncovered Glancing behaviours, where users would quickly consult the app to view their usage behaviour, but this did not fit well with the menu structure: a user could glance at all-time most-used apps (the OverallUsage state) within the first menu item, but if they had used AppTracker1 for a long time, it becomes increasingly likely that this list would be static. Conversely, while a user could find recent (e.g. today's) usage, this would involve several steps through the menu of more detailed information. We chose to concentrate on redesign for experienced usage; these were users who had voluntarily continued with the app over a period of time, so we also expected to see more stable interaction styles, where initial user learning processes have subsided. An equally valid alternative would have been early stage users, for example to work on issues that may improve retention of users beyond the initial experiences. Our aim was not to change the overall purpose of AppTracker, or to add new features, but to reconfigure the menu structure, by adding, removing or moving states, to increase the support for more efficient Glancing on Summary states and Focussing on Specific states. These behaviours were not well aligned with the existing menu layout. For example, we see AppsInPeriod (from the Specific sub-menu) appear in Glancing activity patterns, because users might drill down into the specific menus to see the current day's activity.
Table 4 also shows activity clustering around Summary and Specific states during experienced usage, but not in an efficient way; users are seen to be Focussing on Summary states, when such information could be presented in such a way as to allow glancing to acquire the desired information more quickly. As a consequence we changed the top-level menu structure to offer two options (plus _Settings_) instead of three. The two options correspond to Glancing and Browsing usage, and to Focussing usage, and we call them _My Top Apps_ and _Explore Data_, respectively. The corresponding two state groups are Summary states and Specific states (as the union of the SpecificMenu, SpecificByPeriod, SpecificByApp, and SpecificStats state groups). Hence, several states further down the hierarchy are moved or split, i.e. moving states upon which an activity pattern is centred to be close to the option associated with that activity pattern.

\begin{table}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{**Categories of session counts**} \\
\hline
**few** & **mid** & **many** \\
\hline
\(\{0.37,0.47,0.54\}\) & \(\{5.43,6.17,7.35,7.55\}\) & \(\{10.13,10.82\}\) \\
\hline
\multicolumn{3}{|c|}{**Categories of session lengths**} \\
\hline
**short** & **mid** & **long** \\
\hline
\(\{3.51,3.81,3.86\}\) & \(\{5.36,5.56,7.09,8.28\}\) & \(\{87.76,102.07,130.96\}\) \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{**Time interval**} & \multicolumn{2}{c|}{**AP1**} & \multicolumn{2}{c|}{**AP2**} \\
 & **Sessions characteristics** & **Predominant states** & **Sessions characteristics** & **Predominant states** \\
\hline\hline
\([0,1]\) & many/short & 5 & 9 & 4 \\
\hline
\([0,7]\) & many/short & 5 & 4 & 6 \\
\hline
\([0,30]\) & many/short & 5 & 4 & 6 \\
\hline
\([30,60]\) & mid/mid & 4 & 5 & 6 \\
\hline
\([60,90]\) & mid/mid & 6 & 5 & 4 \\
\hline
\end{tabular}
\begin{tabular}{|l|l|}
\hline
**Usage intensity** & **Usage intensity characteristics** \\
\hline
Browsing & few/long sessions centred on at least three states in the TopLevelMenu, sometimes from different state groups \\
Glancing & many/short sessions centred on one or two states \\
Focussing & mid/mid sessions centred on states from the same group \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|}
\hline\hline
**Time interval** & **AP1 label** & **AP2 label** \\
\hline
\([0,1]\) & GlancingSpecific & BrowsingSummary \\
\([0,7]\) & GlancingSpecific & BrowsingSummary \\
\([0,30]\) & GlancingSpecific & BrowsingSummary \\
\([30,60]\) & FocussingSpecific & FocussingSummary \\
\([60,90]\) & FocussingSpecific & FocussingSummary \\
\hline\hline
\end{tabular}
\end{table}
Table 4: AppTracker1, GPAM(2). Inductive coding: categorisation of the average session counts and lengths, subsequently used for the initial categorisation of activity patterns alongside the predominant states, and description of the usage intensities. State identifiers (ids) are: 3 is OverallUsage, 4 is Last7Days, 5 is SelectPeriod, 6 is AppsInPeriod, 9 is Stats, 10 is ChartOverall, 12 is ChartStats, and 14 is ChartAppsInPeriod. State identifiers are highlighted as Summary states and Specific states. State 4 has a dual role as Summary or Specific depending on the context.
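The few/mid/many categories in Table 4 come from Jenks natural breaks. Jenks is usually computed by dynamic programming, but for a handful of values an exhaustive search over contiguous splits suffices; this sketch (not the authors' implementation) recovers exactly the session-count categories shown above.

```python
from itertools import combinations

def jenks_breaks(values, n_classes):
    """Split the sorted values into contiguous classes minimising
    the within-class sum of squared deviations from class means."""
    xs = sorted(values)

    def ssd(chunk):
        m = sum(chunk) / len(chunk)
        return sum((x - m) ** 2 for x in chunk)

    best = None
    for cuts in combinations(range(1, len(xs)), n_classes - 1):
        bounds = [0, *cuts, len(xs)]
        classes = [xs[a:b] for a, b in zip(bounds, bounds[1:])]
        cost = sum(ssd(c) for c in classes)
        if best is None or cost < best[0]:
            best = (cost, classes)
    return best[1]

# Session counts from Table 4 recover the few/mid/many categories:
counts = [0.37, 0.47, 0.54, 5.43, 6.17, 7.35, 7.55, 10.13, 10.82]
print(jenks_breaks(counts, 3))
# [[0.37, 0.47, 0.54], [5.43, 6.17, 7.35, 7.55], [10.13, 10.82]]
```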
To support Glancing and Browsing usage behaviour, _My Top Apps_ contains only tables showing: (i) the user's most-used apps since installation of AppTracker, and (ii) the most-used apps on the current day. This way the redesigned menu aims to make today's usage much more easily accessible from the top-level menu. The changes are illustrated in Fig. 8 and summarised as:
* The app's menu is restructured to specifically support two main styles of use, Glancing and Focussing, and therefore to only have two main menu options, to make these behaviours more distinct.
* The screen view OverallUsage was replaced by a new, more glancing-like view OverallAll that does not allow for drilling down into detailed usage.
* A new glancing-like screen view AppsToday is included in the Summary part of the menu.
* The screen view Last7Days was removed.
* The Specific states group of the menu was broken up into more subtle sub-components: SpecificByPeriod, SpecificByApp, SpecificStats.
* The screen view OverallUsage was moved into the Specific part and renamed as OverallbyApp, alongside ChartOverallbyApp; these two new states are grouped into SpecificByApp.
* The screen view Stats was moved into the Specific part of the menu, alongside ChartStats; these two new states are grouped into SpecificStats.
The main menu screen of AppTracker2 offers three main options (Fig. 7(a)), and there are 18 user-initiated events. New states are:
* OverallAll: summary statistics about the overall device usage since installing AppTracker2,
* ExploreData: three options for more in-depth exploration of all recorded data,
* AppsToday: summaries of the current day's usage of apps,
* AppsbyPeriod: usage statistics for various apps by selected time period,
* ChartAppsInPeriod: detailed app usage, when selected from AppsbyPeriod,
In particular the new state AppsToday (state 8) features in all the activity patterns except FocussingSpecificByPeriod. The main result is again there are four distinct activity patterns and a clear split between styles for early days usage and experienced usage. Long run probabilities are in Fig. 10(b) and probabilities that for a given activity pattern, a state leads to activity pattern change (within a session), are given in Fig. 6(b). The interaction styles for GPAM(2) models of AppTracker2 usage are summarised as follows. Overall, there are fewer and shorter sessions, there are no short sessions and no mid/mid combinations. There is much more Focussing behaviour. There is very little Browsing in the early days models, which could indicate that AppTracker2 is easy to use, or that most of the users are already experts in using this second version of the app. The probabilities to transition are low across all models. ### Comparing Interaction Styles in AppTracker1 and AppTracker2 Comparing the interaction styles in AppTracker2 with AppTracker1, there are many similarities between activity patterns, and these are reflected in the label names. This is not surprising, given the redesign involved menu re-organisation, not a full-scale application redesign. The major differences between the styles can be seen in Fig. 5 and Fig. 6. We summarise the _effects_ of the redesign on interaction styles as: AppTracker2 has no short sessions, more Focussing in all models, with Focussing on specific states more likely for experienced users, very little Browsing, and with low likelihood, and only in early days usage models, extensive occurrences of the new summary state AppsToday, and a lower likelihood to move out of an activity pattern. The last may indicate more alignment with user intention and in particular with experienced user, or that most of the users are already experts in using this second version of the app. One could argue that the FocussingTopLevelMenu observed in the early days usage is a form of longer, but focussed Glancing. It is interesting to note that while the redesign was targeted mainly at experienced users, it had a significant, positive effect on early days users. Figure 7: Screenshots from AppTracker2: (a) The main menu view corresponds to the state Main; the _My Top Apps_ option takes the user to the state Overallll, _Explore Data_ to the state ExploreData, and _Settings_ to the state Settings. (b) The _My Top Apps_ view (corresponding to the state Overallll) shows the overall usage of the device since installing the app and offers the possibility to view today’s usage – the bottom-right option _Today_. (c) The _Explore Data_ view offers the options of exploring the data _By Period_ (state SelectPeriod), _By App_ (state OverallbyApp), and viewing the _Usage Stats_ (state Stats). ### Insights from Models with at Least Three Clusters Increasing the number of clusters \(K\) to 3 revealed more fine-grained interest in selected states or sets of states, and also the possibility of very short sessions in AppTracker2. For example, in GPAM(3) we uncovered Glancing again for experienced usage, but it was entirely focussed on the Main menu or on the (new) Summary states. A brief analysis of GPAM(3) models for both AppTracker1 and AppTracker2 can be found in the Appendix C. We observed that as we increase the number of model components, we see some finer-grained interest in selected states or groups of states (e.g. 
in FocussingSpecific in GPAM(4) and GPAM(5) models) that was less prominent for smaller values of \(K\). For \(K\geq 4\) we see more activity patterns that are centred around each of the states. There is no optimal value for \(K\), in general. It is an exploratory tool whose usefulness depends on the number of state groups, the state diagram, and the granularity that is helpful for redesign. In this study we found 2-3 clusters most useful. Note, we needed to study at least \(K=3\) to investigate whether activity patterns align to top level menu choices. It was also important to analyse GPAM(4) and GPAM(5) models to confirm they did not reveal significantly different or more useful (for redesign) activity patterns; details of GPAM(4) and GPAM(5) analyses are not included here.

Figure 8: AppTracker1 and AppTracker2 interface designs. We use the same colours between the two app versions to show correspondence.

## 4 Discussion
We reflect on some implications of working with data-driven ML models and computational methods.

### Data Reliability and Segmentation
We identified and subsequently fixed several errors in the logs, such as missing starts or ends of sessions, missing events, and unexpected timestamps, all due to the interactions between the logging framework SGLog and iOS, such as apps crashing before data were written to file, or network failures in transmitting data to our servers for analysis. We discarded the user traces with fewer than five sessions because our focus was on studying long term engagement and such short user traces would only add more noise to the data set. The choice of five sessions as the cut-off for the minimum user trace length was empirical and we cannot exclude that a different minimum would produce different results. We selected different time intervals that covered experiences from initial engagement (first day) to extended engagement (over several months). These intervals were of interest, and made sense to us, for this application. More generally, designers might choose to focus on different intervals, at different times. For example, shortly after release of an initial design, they may focus on models inferred from first day usage (or even shorter, e.g. the first five minutes), but in subsequent designs they may choose to consider only models inferred from usage data from users that engage for long periods of time.

### Bias in Data Sets
AppTracker2 was released as an update to AppTracker1 (both available only on the Cydia app store for jailbroken iOS devices). As such, our set of AppTracker2 users could be either existing users who installed the update, or new users coming to the app afresh.
This might have an effect on interaction styles, because existing users would be more familiar with the system, and would also have existing logged information recorded, so would be looking at charts and tables already populated with data, whereas new users would see empty screens during the early days. More generally, it can be a challenge to distinguish new users, because users can reinstall an app or purchase new hardware. For example, Apple altered the type of unique device identifiers that apps can access, to track users across installations. More recently Apple have prohibited this entirely, such that if a user uninstalls all apps from a developer and then re-installs, an entirely new identifier is generated, and nothing is provided that can link the user of the second installation to the first one (and vice versa) [9, 40]. These are issues inherent in performing real world deployments of apps rather than conducting more constrained lab-style research. We would argue that great benefits are gained from the large number of users we were able to recruit, the external validity gained through users 'self-selecting' to download our app rather than having been explicitly recruited as participants in a trial, and our subsequent study of people's use of our apps on their own devices embedded in their everyday lives.

\begin{table}
\begin{tabular}{|c|c|}
\hline
\multicolumn{2}{|c|}{**Categories of session counts**} \\
\hline
**few** & **many** \\
\hline
\(\{1.40,2.10,3.04\}\) & \(\{5.35,5.86,5.87,6.23,6.39,6.52,7.02\}\) \\
\hline
\multicolumn{2}{|c|}{**Categories of session lengths**} \\
\hline
**mid** & **long** \\
\hline
\(\{6.14,6.57,6.68,6.88,7.52,7.84,8.31\}\) & \(\{17.61,25.08,38.84\}\) \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{**Time int.**} & \multicolumn{2}{c|}{**AP1**} & \multicolumn{2}{c|}{**AP2**} \\
 & **Sessions** & **State ids** & **Sessions** & **State ids** \\
\hline\hline
\([0,1]\) & few/long & 5, 8 & many/mid & 5, 8 \\
\hline
\([0,7]\) & few/long & 5, 8 & many/mid & 5, 8 \\
\hline
\([0,30]\) & few/long & 5, 6, 8 & many/mid & 5, 8 \\
\hline
\([30,60]\) & many/mid & 9, 6, 11, 12 & many/mid & 5, 8, 6 \\
\hline
\([60,90]\) & many/mid & 6, 9, 11, 12 & many/mid & 5, 8, 6 \\
\hline
\end{tabular}
\end{table}
Table 5: AppTracker2, GPAM(2) models. Categorisation of the average session counts and lengths, subsequently used for the initial categorisation of activity patterns alongside the predominant states, and description of the usage intensity labels. State identifiers (ids) are: 5 is OverallAll, 6 is ExploreData, 8 is AppsToday, 9 is SelectPeriod, 10 is Stats, 11 is AppsbyPeriod, 12 is ChartAppsInPeriod, 13 is OverallbyApp, 14 is ChartStats, 15 is ChartOverallbyApp. State identifiers are highlighted as Summary states and Specific states.

### Selection of States, Volume of Data, and Scalability
Our study gained from having a significant volume of log data to work on, for reliable application of ML methods. It is important to note that neither the volume of log data nor the number of user traces is relevant to probabilistic model checking; only the number of observed states determines its complexity. The AppTracker study involved a vocabulary of 16-18 observed states, which was comfortably manageable, but model checking would have difficulty with an order of magnitude larger.
This could arise in an app with much richer menu options, or further inclusion of categorical variables such as location, demographic information, duration of the event, time of day or week, etc.

### User Proficiency and Focus over Time
Simpler forms of analytics may chase absolute metrics such as spending longer in the app or launching the app more often as measures of engagement [4]. We did not conflate (subjective) intensity of usage with usage expertise and/or proficiency. We aimed to support the styles that were observed, and note that both Focussing and Glancing styles were present in experienced usage models.

### Further Redesign Possibilities
**New users.** If there is a difference between new and longer-term users (e.g. first day/week/month users vs. second/third month users), then add new functionality that supports application 'onboarding' and a smooth transition into another style of interaction.
**Micro-usages.** Look for activity carried out in very brief micro-usages that might be better served by widgets rather than navigating the full application. By widget we mean a simple additional element of a device's graphical interface, which is usually complementary to an application running on that device.
**Shortcuts.** Identify the most popular initial states in an activity pattern and implement shortcuts to these states.
**Split the application.** If there is (nearly) always only one activity pattern per session and patterns do not overlap, then split the application into two (or more) separate applications.

## 5 Related Work
### Our Previous Work
We developed our analytics over a number of years: refining the Markovian models, the temporal properties, and the segmentation of data sets. Our first study [3] involved the iOS multi-player game Hungry Yoshi [35], one data set, a simple inferred model, simple temporal logic properties, and \(K=2\). The app was different to AppTracker in that it had clear (user) goal states and (in addition to user interactions) external events when the device picked up scanned Wi-Fi access points. We uncovered two interaction styles representing different strategies for playing the game, but could not implement a redesign because Apple's iOS changes meant we could no longer scan for Wi-Fi access points. The app is no longer under development. Our initial analysis of AppTracker [2] used a different inferred admixture model, a restricted set of generic temporal properties, and segmented data sets. There was no inductive coding nor menu redesign. The GPAM model was defined in [1], where we also introduced the possibility of logic formulae over the latent variables, though we did not consider the probability to transition between activity patterns. Significantly, none of our earlier work involved specific design recommendations, the implementation and deployment of a new design, nor analysis of interaction styles in the new design.

### Other Related Work
Our approach to modelling was motivated initially by an empirical study of simplicial mixtures for modelling webpage browsing and telephone usage [24]. We note that in the same year, Bowring et al. [11] referred to Girolami et al.'s work [24] when suggesting that a hidden Markov model for automatic classification of software was possible future work.
To our knowledge, no one else has investigated admixture models for modelling interaction behaviour; however, the existence of different user populations or user types among the larger mobile app user population was also recently highlighted in [15, 29, 52], and mixture models proposed. Markov chains as models for software usage were proposed nearly 30 years ago by Whittaker and Poore [50], where transition probabilities are estimated from the frequency counts of all bigrams occurring in the execution traces. We can consider this model as a GPAM with only one component, i.e. GPAM(1). Markov chain models of software behaviour have also been employed in [16, 45] and more recently in [23, 12, 34, 30, 48, 20]. Usage styles are uncovered using statistical methods in [30], though not as latent states in a hidden Markov model variant. The analysis techniques used in [50] or in [45] are classic mathematical operations on the transition matrix, such as computing the long run probability of being in one state or the expected number of state transitions to first reach a state (mean first passage time), whereas we use probabilistic temporal logic properties, which allow more expressive properties to be formulated and then analysed automatically in a probabilistic model checking tool. First-order Markov models have been used for modelling in many related problems, such as: human navigation on the Web, where states correspond to visited webpages [10, 14, 42, 41, 23], more specifically clickstream data [36, 33, 8]; usability analysis, where states correspond to button presses in general [45]; mobile applications, where states correspond to device screen events [3, 30]; and human interactions with a search engine [47]. Research on mobile app usage by Banovic et al. [6] presents evidence for (and characterises sessions based on) duration and interaction types such as glance, review and engage. We also identified three types of usage intensity (Glancing, Browsing and Focussing); however, they are characterised by the number of in-session interactions and the frequency of sessions. We note that both our Glancing and the glance of [6] involve micro-usages. We suggest our characterisation of glancing behaviour is closer to that of a checking habit [39]: brief, repetitive inspection of dynamic content quickly accessible on the device. The BEAR tool [23] is similar in that it infers discrete-time Markov chains from logs, and probabilistic temporal logic properties and PRISM are used to query the models; the key difference is that users are classified according to static attributes, e.g. by time zone or operating system, which the designer has pre-defined, and behaviours are assumed to be homogeneous, i.e. there is no in-class variation and no ability to express or detect hybrid behaviours. This issue is also raised in [12], arguing that the filters for partitioning the log data can have a dramatic effect on the resulting model and the subsequent analyses. We also note that DTMCs are models for usage patterns in [43], where mHealth apps are analysed based on visualisation of the interactive graphical representation of DTMCs and clustered sequences of various lengths. As mentioned above, probabilistic temporal logic provides an additional analytical tool for analysing DTMCs beyond insights gained from graphical representations.
Other computational interaction approaches to modelling human behaviour, leveraging logs of user-initiated events, include Markov decision processes (MDPs) [7] (identifying routines across individuals and populations) and partially observable Markov decision processes (POMDPs) [13, 26]. We could analyse properties of such Markovian models with probabilistic model checking, and thus bring into play the power of temporal logic and interpretation of results following our approach.

## 6 Conclusions
Interaction redesign and data go hand in glove, but previously we did not have the quantitative tools to uncover styles that are more nuanced than tasks, especially at a large scale. In this paper we have shown how new computational methods, based on unsupervised ML clustering and probabilistic temporal logic, provide such a new quantitative tool for studying and interpreting interaction styles. The admixture Markov models we inferred embody the ways users wanted to, and actually _did_, use AppTracker. The study results were a revelation to us: we had no preconceptions about possible differences between early days and experienced usage, nor about what kind of activity patterns we would find in both AppTracker1 and AppTracker2. We found that AppTracker1's top-level menu was not a good fit for the ways that users interacted with the app. For example, we identified experienced usage styles consisting mainly of Glancing and Focussing patterns that are centred on AppTracker1's Summary and Specific states respectively. These styles were not aligned with the existing menu layout. For example, AppsInPeriod (from the Specific sub-menu) appears in Glancing activity patterns, because users drill down into the specific menus to see the current day's activity. And conversely, users are seen to be Focussing on Summary states, when the information could be presented in such a way as to allow it to be acquired in a Glancing behaviour. Consequently we re-designed the interface, offering only two main options instead of three, and moving states upon which an activity pattern is centred to be close to the corresponding option. The interaction styles we uncovered in AppTracker2 showed that it supports the purpose of the redesign: there were no short sessions; there was more Focussing in all models; and Browsing, which was prevalent for early days usage in AppTracker1, almost disappeared, having only a low likelihood in early days usage models. Overall the likelihood to transition between activity patterns was reduced, indicating that users more quickly found a suitable style. These insights were only possible through the use of admixture (as opposed to mixture) Markov models and the StateToPattern property that includes latent variables. Many, if not all, of the general concerns about the use of ML and computational methods apply to our study of interaction design: bias in the data, data reliability, and the temporal validity of models. The first two are a consequence of real-world deployment, rather than lab-based studies. The first included a bias we could not eliminate: the possible effects of existing users on the second design. The inability to distinguish existing and new users with absolute certainty is an aspect of real-world app deployments, and consequently of the use of computational methods. The second is also an aspect of real-world deployments, as interactions between the different systems, including those for communications and data collection, affect data reliability.
The last is pertinent to redesign: how often to infer models from new data sets depends on how quickly and to what degree the underlying data is changing. It is sometimes tempting, when applying computational methods, to refer to "the data set" as if it were one monolithic entity. This study has highlighted the impact of data segmentation (in our case, temporal segmentation) on the models and the subsequent longitudinal analysis and decisions. We emphasise that data never actually speaks for itself; it is up to the analyst to pose meaningful questions and visualisations. A crucial tool for the analyst in posing these questions is probabilistic temporal logic formulae over the latent variables: the activity patterns uncovered in the ML inference. We require a _temporal_ logic to reason about computation paths, to express relationships between observed states and behaviours within a session. We found the properties concerning the likelihood of changing activity patterns (StateToPattern) and long run behaviour (LongRunPattern) to be the most powerful and useful aspects of our analytics, as these allowed us to see which activity patterns were more popular or transient, for given time intervals, hence gaining additional insight into different interaction styles to inform redesign. As future work, we can add categories of dwelling time before each event and static user attributes. We could also introduce qualitative methods such as user interviews, to gain insight into the intent behind an activity pattern. A novel, on-line approach to this would be to pop up a questionnaire when a user employs a specific activity pattern for a length of time, or starts to employ a specific pattern; this would extend the context-triggered experience sampling methods proposed in [27]. A complementary, more passive, on-line approach would be to fire up, selectively, more frequent, fine-grained and/or different tracking and sensing so as to gain a more in-depth picture, in a temporary way that is mindful of the fact that there may be costs to users such as increased battery drain and data transmission charges. We could also experiment with data sets from different sub-populations, e.g. to compare activity patterns of users that engage for a long time with those who disengage after a short period of time. Finally, we could automate the visualisations and allow interactions with the analytics, e.g. clicking on an individual bar in the long run probability charts, which would show the corresponding predominant states.

###### Acknowledgements.
This research was supported by UKRI-EPSRC programme grants EP/J007617/1 _A Population Approach to Ubicomp System Design_ and EP/N007565 _Science of Sensor System Software_.
2310.05686
The potential of large language models for improving probability learning: A study on ChatGPT3.5 and first-year computer engineering students
In this paper, we assess the efficacy of ChatGPT (version Feb 2023), a large-scale language model, in solving probability problems typically presented in introductory computer engineering exams. Our study comprised a set of 23 probability exercises administered to students at Rey Juan Carlos University (URJC) in Madrid. The responses produced by ChatGPT were evaluated by a group of five statistics professors, who assessed them qualitatively and assigned grades based on the same criteria used for students. Our results indicate that ChatGPT surpasses the average student in terms of phrasing, organization, and logical reasoning. The model's performance remained consistent for both the Spanish and English versions of the exercises. However, ChatGPT encountered difficulties in executing basic numerical operations. Our experiments demonstrate that requesting ChatGPT to provide the solution in the form of an R script proved to be an effective approach for overcoming these limitations. In summary, our results indicate that ChatGPT surpasses the average student in solving probability problems commonly presented in introductory computer engineering exams. Nonetheless, the model exhibits limitations in reasoning around certain probability concepts. The model's ability to deliver high-quality explanations and illustrate solutions in any programming language, coupled with its performance in solving probability exercises, suggests that large language models have the potential to serve as learning assistants.
Angel Udias, Antonio Alonso-Ayuso, Ignacio Sanchez, Sonia Hernandez, Maria Eugenia Castellanos, Raquel Montes Diez, Emilio Lopez Cano
2023-10-09T12:54:58Z
http://arxiv.org/abs/2310.05686v1
The Potential of Large Language Models for Improving Probability Learning: A Study on ChatGPT3.5 and First-Year Computer Engineering Students

## Abstract

In this paper, we assess the efficacy of ChatGPT (version Feb 2023), a large-scale language model, in solving probability problems typically presented in introductory computer engineering exams. Our study comprised a set of 23 probability exercises administered to students at Rey Juan Carlos University (URJC) in Madrid. The responses produced by ChatGPT were evaluated by a group of five statistics professors, who assessed them qualitatively and assigned grades based on the same criteria used for students. Our results indicate that ChatGPT surpasses the average student in terms of phrasing, organization, and logical reasoning. The model's performance remained consistent for both the Spanish and English versions of the exercises. However, ChatGPT encountered difficulties in executing basic numerical operations. Our experiments demonstrate that requesting ChatGPT to provide the solution in the form of an R script proved to be an effective approach for overcoming these limitations. In summary, our results indicate that ChatGPT surpasses the average student in solving probability problems commonly presented in introductory computer engineering exams. Nonetheless, the model exhibits limitations in reasoning around certain probability concepts. The model's ability to deliver high-quality explanations and illustrate solutions in any programming language, coupled with its performance in solving probability exercises, suggests that large language models have the potential to serve as learning assistants.

## Background & Summary

Teaching has undergone significant changes in recent years, with a particular emphasis on digitalization and the use of computer tools (Hofer et al., 2021). This shift, prompted by the pandemic, has accelerated the adoption of technology in education, with the development of audio-visual materials and automatically graded exercises, in addition to videoconferencing. These tools have enabled educators to provide more interactive and engaging learning experiences, allowing students to explore concepts more deeply and at their own pace (Giesbers et al., 2013). However, it is important to note that while these developments have certainly improved the learning experience, the importance of knowledge consolidation exercises, particularly in mathematics and statistics, cannot be overstated. These exercises are crucial in ensuring that students not only understand the concepts but also know how to use them to solve real-world problems (Freeman et al., 2014). In fact, research has shown that students who engage in active learning and practice display better retention of information and perform more effectively on exams (Carini et al., 2006). Furthermore, the utilization of technology in education has presented new challenges. For instance, students may encounter difficulties staying engaged and motivated in online or hybrid learning environments, resulting in higher dropout rates. Additionally, there may be concerns regarding the quality of online resources and the dependability of automated grading systems. Despite these obstacles, the integration of technology in education is here to stay, and educators must continue to adapt to these changes to ensure that students are adequately prepared for the future.
This includes integrating innovative tools such as artificial intelligence (Guan et al., 2020) or chatbots, which can provide students with immediate feedback and support (Wollny et al., 2021). Translating real-world problems presented in natural language into abstract problems expressed in variables, parameters, functions, and equations is one of the most significant challenges faced by statistics students (Garfield et al., 2008). For example, many students may find it difficult to identify a binomial distribution when inspecting a batch of parts and discarding it if the number of defective parts exceeds a specific threshold. Additionally, statistics is a cross-disciplinary field relevant to various disciplines, including economics, computer science, natural sciences, medicine, and linguistics. Therefore, it is crucial to provide students with a diverse array of exercises that challenge them to identify the most suitable statistical or probability model to represent a given situation and to apply the appropriate theoretical tools accordingly.

Large language models (LLMs) have the potential to bring about a revolution in the field of education (Floridi and Chiriatti, 2020). These models rely on transformer architectures (Vaswani et al., 2017) that enable them to process natural language input and follow instructions through Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022). LLMs can also be fine-tuned to specific contexts and aligned to generate human-like responses with impressive accuracy (Brown et al., 2020) while avoiding the production of irrelevant or harmful content. Generative Pre-trained Transformers (GPT) are a family of LLMs introduced by OpenAI in 2018 (Radford et al., 2018). Over the various generations of GPT models, their size (i.e., number of parameters) and complexity have substantially increased. The third-generation models have 175 billion parameters and are fine-tuned to follow instructions via RLHF (Ouyang et al., 2022). ChatGPT, which was released in November 2022, is based on the GPT 3.5 model, with additional fine-tuning through supervised learning and RLHF to enable interaction with users via a text dialog. ChatGPT is available to users through a web interface, and the underlying model is regularly updated. The latest version available as of the writing of this paper is from February 13, 2023.

LLMs have a wide range of potential applications across various domains. In the field of education, their possible uses are diverse and extensive, including but not limited to automated essay grading, personalized learning, intelligent tutoring systems, and conversational agents (Holmes and Tuomi, 2022). Despite the potential benefits of using large language models in education, their integration also presents significant challenges due to their current limitations. One of the most pressing issues is the potential for inaccuracies, which can lead to the models presenting compelling yet false information to the user. Additionally, biases in the output of these models can perpetuate and even amplify existing societal inequalities, posing a significant concern (Shahriar and Hayashi, 2022; Barikeri et al., 2021). To effectively address these challenges, it is essential for teachers and learners to develop a set of competencies and literacies that enable them to understand the technology and its limitations.
This includes critical thinking skills and strategies for fact-checking, as well as an understanding of the potential biases and risks associated with large language models (Bender and Friedman, 2020). In conclusion, while the potential benefits of large language models in education are significant, it is crucial to approach their integration with a clear pedagogical strategy and a strong focus on responsible and ethical use (Kasneci et al., 2023; Holmes et al., 2022).

The objective of this research is two-fold. First, we evaluate the capability of ChatGPT (version Feb 2023) to solve probability exercises commonly administered in first-year computer engineering exams, and we compare the answers generated by the model, as evaluated by expert professors, with those of the students. Second, we provide a qualitative assessment of the results obtained and discuss the potential application of Large Language Models in education at university level. The paper is structured as follows: Section 2 provides a detailed description of the experiment, including the methodology used. Section 3 is dedicated to the analysis of the results, and Section 4 presents the conclusions and suggestions for future research.

## Methods

To conduct the experiment, the first step is to gather a representative set of probability exercises. We compiled 23 exercises that were initially proposed to first-year Computer Engineering students enrolled in introductory statistics courses at Rey Juan Carlos University in Madrid, Spain. The cut-off mark of each URJC degree, as at other Spanish universities, is determined annually by the mark of the last student admitted to the degree, and varies based on applicants' access marks and the number of places offered. Specifically, for the Computer Engineering degree at URJC, the cut-off marks were 7.371, 7.529, and 7.543 out of 10 points for the years 2020, 2021, and 2022, respectively. The exercises were originally created by six lecturers specifically for their respective examinations and have not been shared on the internet or made publicly available. Therefore, it is unlikely, though not impossible, that they have been included in an AI training database. It is possible that similar but not identical exercises exist. The exercises have been systematically categorized into one or two of nine distinct categories based on the attributes of the questions posed (see Table 1). Additionally, each exercise may consist of one or more questions. The wording (in Spanish) of each exercise can be found in the additional materials. We also gathered data on the students' grades for each group in which the exercises were administered, as shown in Table 1. For each exercise, the following information is provided: category, number of questions, number of students with available results, average score (out of 10), standard deviation, and the percentage of students who received the minimum score (0) and the maximum score (10).

\begin{table}
\begin{tabular}{|r|l|r|r|r|r|r|r|}
\hline
ID & Category & Number of questions & Number of students & Mean & Std. dev. & \% of Zeros & \% of Tens \\
\hline
1 & CB & 1 & 65 & 4.41 & 4.87 & 53.8 & 40.0 \\
\hline
2 & CB, BY & 1 & 32 & 4.42 & 4.45 & 43.8 & 31.3 \\
\hline
\end{tabular}
\end{table}
Table 1: Summary information for each exercise. Categorisation of exercises: CB: Combinatorial; CD: Conditional; BY: Bayes; IT: Intersection; DF: Distribution functions; BN: Binomial; NO: Normal; PS: Poisson; GE: Geometric.
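For concreteness, the per-exercise summaries reported in Table 1 can be derived from a vector of marks with a few lines of R; the grades below are invented for illustration and are not actual student data.

```r
# Hypothetical illustration of the Table 1 summaries for one exercise:
# number of students, mean, standard deviation, and the percentage of
# minimum (0) and maximum (10) marks, from an invented grade vector.
grades <- c(0, 10, 4.5, 0, 7, 10, 2.5, 0, 10, 6)   # marks out of 10
c(n        = length(grades),
  mean     = mean(grades),
  sd       = sd(grades),
  pct_zero = 100 * mean(grades == 0),
  pct_ten  = 100 * mean(grades == 10))
```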
In the second step, the 23 exercises were presented to ChatGPT3.5 (Feb 13 version) via the web interface ([https://chat.openai.com/chat](https://chat.openai.com/chat)), and the responses were recorded. The exercises were provided in their original form, as written by the lecturers in the Spanish language, without any modifications or clarifications. In cases where an exercise had multiple questions, each question was presented in a new chat along with the main statement. The 23 exercises included a total of 69 questions. It is worth noting that ChatGPT does not always generate the same response to a given exercise. As a result, all exercises were presented to ChatGPT3.5 at least three times on different days between February 13 and March 5, 2023, always by the same person using the same account. The prompts utilized with ChatGPT (Kojima et al., 2022) were created either by directly using the original exercise description or by appending one of the following sentences to it:

* "Give me a solution with a brief justification".
* "Give me a solution being concise in the answer".
* "Solve the following exercise by being concise in your explanations".

These three slightly different prompt variations allowed us to explore how ChatGPT can be guided to make the style of its answers similar to those of a university student answering an exam. After collecting the different chat responses to each exercise, the analysis process began. It was noticed that ChatGPT generated distinct responses for the three prompt variations employed, although with consistent reasoning. Specifically, for 16 of the exercises, all three responses were deemed equivalent in terms of their reasoning. However, for the remaining 7 exercises, ChatGPT used similar reasoning in two of the responses and different reasoning in the third. In such cases, both alternative answers were chosen for further evaluation, irrespective of the correctness of the reasoning or of the numerical outcome of the exercise; the selection was based solely on the difference between the lines of reasoning applied. This process resulted in a total of 30 answers for the 23 exercises: sixteen exercises had one answer, and the remaining seven had two. It is worth noting that the task of inputting the questions into the chat and selecting the answers was performed by a single lecturer, who was not involved in the grading phase.

The next phase of the work involved the evaluation of the ChatGPT responses to each of the 23 exercises. Five lecturers, all of whom had extensive experience in teaching the subject, participated in this phase, and each exercise was evaluated by at least three of them. The lecturers were asked to assess each exercise on a scale of 0 to 10, using the same evaluation criteria they applied to their own students, even if the exercise they were evaluating was not one they had proposed. Along with assigning a final score for the exercise, they were also asked to provide additional comments. These comments included the reason for the main penalty they assigned to the answer (e.g. identification of the problem, explanation of the reasoning, calculations), the level of difficulty of each exercise (low, medium, high), their perception of whether the answer was given by a human or not, and, in the case of exercises with multiple questions, which one was the most penalised.

Figure 1: Screenshot showing exercise number 1 posed to the GPT and the answer it returns.
It is becoming increasingly common to evaluate students' knowledge through self-correcting exercises found on platforms such as Moodle (Fatmi et al., 2021; Simanullang et al., 2020). These exercises are typically based on numerical results or the selection of alternatives (Blanco and Ginovart, 2012; Handayanto et al., 2018), and the reasoning used to obtain the result is not assessed, only the final answer. To address this limitation, an additional experiment was conducted by presenting each exercise to ChatGPT again, with the prompt "Write a code in R language to calculate the probability" added before the section of each exercise that asks for a probability calculation. This allowed ChatGPT to generate R code to express the solution and to overcome its documented limitations in performing numerical operations. The R code generated by ChatGPT was executed, and the obtained result was compared with the correct value.

Figure 2: Illustration capturing the first exercise posed to GPT, where R code is requested, and the corresponding response it generates.

## Results

The number of students who participated in each exercise varied depending on the group or groups assigned the exercise, with the lowest and highest numbers of students answering an exercise being 14 and 126, respectively (as shown in Table 1). Ideally, the results of each probability problem should reflect the difficulty of the exercise and the level of the classes to which the exercise was given. However, since the entrance grades of students were similar across all groups and the statistics subject encompassing these exam exercises is part of the first-year curriculum, the levels of the students can be considered comparable across all groups. As mentioned previously, all exercises were assessed by several lecturers (at least three), who were aware that they were evaluating non-human responses. The lecturers' evaluations showed a maximum difference of less than one point in all ratings, except for exercise number 12, where the two most divergent scores differed by two points. It should be noted that the lecturers exhibited a high level of agreement on the sections where errors were made and on the types of errors made. The differences in scores were primarily due to the varying penalty criteria applied by each lecturer in relation to numerical calculation errors. Table 2 summarizes the average scores obtained by the students and the average evaluation of the ChatGPT responses by the lecturers. For the seven exercises in which ChatGPT applied different resolution methods, each answer was evaluated independently, and the results are presented in two rows in Table 2. In addition, among the seven exercises in which ChatGPT applied two different resolution methodologies at different times, two of the alternative answers (exercises 1 and 19) were entirely incorrect and scored zero in all lecturers' evaluations. It is worth noting that both exercises only have one question. Among the remaining five exercises with two alternative answers, score discrepancies between the two alternatives ranged from 3 points (exercise 11) to a mere 1.2 points (exercise 6). As these exercises have multiple sections, smaller score differences typically indicate errors in some but not all sections.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{**Id**} & \multirow{2}{*}{**(1)**} & \multirow{2}{*}{**(2)**} & \multicolumn{5}{c|}{**Questions penalization**} \\
\cline{4-8}
 & & & **a** & **b** & **c** & **d** & **e** \\
\hline
**1-1** & 0.00 & \multirow{2}{*}{4.41} & WR & & & & \\
**1-2** & 7.67 & & NE & & & & \\
\hline
**2-1** & 5.17 & \multirow{2}{*}{4.42} & NE & & & & \\
**2-2** & 7.67 & & NE & & & & \\
\hline
**3** & 4.58 & 6.52 & & II & & & \\
\hline
**4-1** & 8.33 & \multirow{2}{*}{5.25} & NE & NE & & & \\
**4-2** & 9.17 & & & & & NE & \\
\hline
**5** & 7.67 & 6.17 & NE & NE & NE & & \\
\hline
**6-1** & 7.33 & \multirow{2}{*}{5.40} & WR & & & & \\
**6-2** & 8.54 & & & & & NE & NE \\
\hline
\end{tabular}
\end{table}
Table 2: Mean score of GPT responses (1) and mean score of student responses (2) for each exercise. When there are two answers to the same question, they are displayed on consecutive lines, identified by -1 or -2 after the exercise number. Columns a) to e) correspond to each of the questions in an exercise and show the type of error that lecturers detected in the GPT responses (MA: Meaningless answer; II: Incorrect identification of the problem type; WR: Wrong reasoning; NE: Numerical error).

Figure 3 displays the distribution of students' marks for each exercise in the form of a violin plot (with a box plot inside). Violin plots offer a more detailed representation than traditional box plots of the distribution of students' marks when the distributions are multimodal and have many observations at the extremes (Hintze & Nelson, 1998). In this case, the colour coding indicates the level of difficulty of each exercise, as assessed by the lecturers. Most of the lecturers (although not all) identified exercises 5, 6, 7, 8, 9, 11, 12, 18, and 20 (green in Figure 3) as easy, and exercises 1, 2, 17, and 19 (pink in Figure 3) as difficult. Additionally, Figure 3 reveals that the exercises with the highest scores, in decreasing order, were exercises 9, 8, 21, 11, 17, 3, and 20, whereas exercises 18, 23, 10, and 1 received the lowest scores. These results demonstrate a reasonable level of agreement with the lecturers' assessments, except for exercise 17, which yielded satisfactory marks despite being classified as difficult, and exercise 18, where students received low scores despite it being labelled as easy according to the lecturers' assessment. In exercises with fewer sections or questions, scores tend to be either very high or very low, leading to a more bimodal distribution. For instance, exercises 1, 2, and 19 only have one question, and their distributions are evidently bimodal. The lecturers' average score for evaluating the ChatGPT responses is shown in Figure 3 as a red diamond. In cases where ChatGPT provided two responses for the same exercise, the response with the lowest average score was represented with a red diamond, and the one with the highest score with a blue diamond. Figure 4 compares the mean scores obtained by the students with those generated by ChatGPT for each exercise. In cases where ChatGPT provided two responses, the lower score was selected for analysis. The plot shows data points above and below the diagonal line, indicating exercises where ChatGPT scores were above or below the average of the students, respectively.
The results reveal that ChatGPT outperformed the students in 16 out of 23 exercises, which accounts for approximately 70% of the exercises analysed. When considering only the highest score among the two potential ChatGPT responses (Table 2), this percentage increased to 78%. Out of the 23 exercises analysed, there were six exercises (9, 10, 12, 13, 14, 20) with minimal differences between the evaluations of ChatGPT and the students' marks. These exercises required the application of the total probability theorem or Bayes' theorem, and while ChatGPT's reasoning was correct, its computations were flawed. On the other hand, in six exercises ChatGPT's average ratings were lower than the students' mean scores, especially in exercises 1 and 19, where ChatGPT received a score of zero from the lecturers. However, in these two exercises, ChatGPT provided alternate responses (as shown in Figure 3) that scored higher than the students' median grades. In the remaining eleven exercises, ChatGPT's response scores were significantly higher than those of the students (as demonstrated in Figure 4). Once again, taking into account the best score in cases with two potential answers, the differences between the students' mean scores and ChatGPT's ratings are even more significant.

Figure 3: Violin plots with the distribution of student marks sorted by median value. The white dot is the median of the distribution, and the black rectangle represents the interquartile range. The mean of the GPT answer evaluations is shown as a red dot, and a blue dot represents the highest evaluation when there are multiple answers.

Despite knowing that they were assessing an AI, lecturers were asked to indicate whether they felt the answers appeared human-like. They could select from the following options: a) Yes, the entire answer; b) Yes, the answer justifies/explains too well; c) Yes, for several reasons; or d) No, not at all. For 80% of the exercises, the lecturers felt that the answers did not appear human-like because they "justified/explained too well". Despite the fact that in multi-question exercises each question was posed to GPT independently, the lecturers observed that the answers provided by GPT were exceptionally lengthy, surpassing even the most detailed responses typically produced by the students themselves. Lecturers were also asked to indicate whether the quality of the explanations was higher than is usual in students' answers, to which the overwhelming majority replied "yes". The lecturers were also asked to identify the specific types of errors present in each question of every exercise. They were provided with four options: i) Meaningless answer, ii) Incorrect identification of the problem type, iii) Faulty reasoning, or iv) Calculation errors. Table 2 shows the specific errors identified by the lecturers and the corresponding questions where they were found. The evaluation of the 85 questions revealed that 15 of them showed faulty reasoning or incorrect problem identification, while 34 questions contained calculation errors (as shown in Table 2). However, it should be noted that numerical errors were not evaluated in cases where the GPT-generated answers contained faulty reasoning. After the evaluation conducted by the lecturers, we performed a meticulous analysis of all answers, both correct and incorrect, in search of numerical errors.

Figure 4: Scatter plot comparing the means of students' and GPT scores for each exercise. The red line indicates the 1:1 relationship. The numbers indicate the Id of each exercise.
Our findings revealed that, out of the 23 exercise answers and 7 alternate exercise answers, only the response to exercise number 8 exhibited a complete absence of numerical errors across all sections (as shown in Table 2). This did not significantly impact the results, because the lecturers' evaluation mainly penalized conceptual errors and, to a lesser extent, calculation errors. It is important to note that it is unusual for a student to make numerous calculation errors during an exam, whereas the GPT model tends to produce inaccurate answers in this area. In order to conduct a separate assessment of ChatGPT's reasoning abilities, independent of the limitations it faces in performing numerical operations (Borji, 2023; Frieder et al., 2023), the questions were modified so as to avoid the GPT having to perform numerical calculations. Instead, the questions were rephrased (always in Spanish) to ask the GPT to generate R code (R Core Team, 2022) with which to answer the question. The entire set of questions was presented to the GPT 10 times, in a non-consecutive manner, by the same person. Subsequently, all the R code generated by the GPT was executed, and the numerical output was compared to the correct numerical value for each section question.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
 & \multicolumn{5}{c|}{questions} & \\
\hline
QId & a & b & c & d & e & score \\
\hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table 4: Frequency of accurate numerical responses obtained with the R-code answers generated by GPT after ten attempts for each section of the 23 exam exercises. The rightmost column displays the final score, calculated as the average of the ten answers provided for each question.

Table 4 presents a summary of the findings from this experiment. Among the 69 questions, ChatGPT was able to consistently provide the accurate numerical answer for 31 questions, whereas for 12 questions it never produced the precise numerical answer. For the remaining 26 questions, the GPT generated the correct answer with varying frequency, as illustrated in Table 4. The final mark for each exercise was calculated by weighting the scores of the ten answers provided for each question by the relative value of that question within the exercise (see additional material). Figure 5 shows a comparison of the distributions of the marks for the 23 exercises between the students' answers and the marks obtained by ChatGPT for both natural language and R-code answers. The results indicate that ChatGPT's responses have a superior mark distribution compared to that of the students, but with greater variability. Specifically, in the natural language response domain, ChatGPT's median mark exceeds the students' third quartile. It is important to note that the majority of the exercises were not designed specifically for automated grading and involve a scenario where the first section's numerical result carries over to the subsequent sections, which creates an inequitable situation for automated assessment of the exercises. This aspect would undoubtedly penalize not only students but also ChatGPT's R-code assessment.

Figure 5: Box plot with the distribution of marks for all 23 exercises obtained by the students and by the ChatGPT responses in natural language and in R code.

Figure 6 presents a comparative analysis between the scores obtained by ChatGPT for natural language responses, as evaluated by the lecturers, and for R-code responses, as presented in Table 4.
In cases where two natural language response alternatives were available, the lower-performing alternative was used for the figure. Upon examining the graph, there is not enough evidence to suggest that one problem-solving approach is significantly superior to the other. In the R-coded responses, the problems with numerical calculations disappeared, and the errors were mainly due to conceptual misunderstandings. Notably, for exercises 1, 5, 13, and 18, GPT consistently produced precise results in all ten attempts, indicating the consistent use of the correct methodology. In natural language responses to exercise 1, GPT used the correct methodology in two out of three attempts, with an incorrect methodology applied in one attempt (resulting in a null value in Figure 6). In exercises 5, 13, and 18, lecturers penalised answers only for numerical errors (Table 2), which are now no longer present. More striking are the significant improvements observed in the scores obtained for R-code responses to exercises 3, 10, and 11, where lecturers had penalized issues in the natural language responses other than numerical errors (as shown in Table 2). Specifically, in questions b and c of exercise 10, lecturers identified incorrect reasoning in the natural language responses (as shown in Table 2); this issue was resolved in R-code format for question b, while it persisted for question c (Table 4). For question a of exercise 11, reasoning problems occurred in only three out of ten attempts when the response was generated in R-code format. Additionally, in question b of exercise 3, the incorrect identification of the problem that appeared in natural language responses was observed in only two out of ten attempts when answered in R-code format (Table 4).

Figure 6: Scatter plot comparing the mean scores obtained by ChatGPT in natural language and in R-code responses. For the natural language exercises in which two alternative responses were evaluated, only the one with the worst score has been taken into account.

The situation was reversed in exercises 16 and 17, where the evaluations of natural language answers were notably higher than those of R-code responses. In both exercises, the numerical results of all questions depended on the numerical result of the first question; therefore, R-code responses were heavily penalized if there was any error in the first question. Of particular interest is the observation that, for certain types of problems, ChatGPT proposes solution methods through R-coded responses that were not presented when the answers were in natural language. Notably, ChatGPT employs techniques such as probability calculation by simulating thousands of repetitions, the use of R functions to calculate integrals, and the use of R packages or specific, self-written functions to estimate probabilities for the normal or geometric distributions (additional material).
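As a concrete illustration of this simulation style, the sketch below is a hypothetical R answer of the kind described; the batch-inspection scenario, sample size, and defect rate are invented for the example and are not taken from the actual exam set.

```r
# Hypothetical R answer in the simulation style described above:
# estimate P(X > 2) for X ~ Binomial(20, 0.05), i.e. the probability that
# more than two of twenty inspected parts are defective, and compare the
# Monte Carlo estimate with the exact closed-form value.
set.seed(123)
n_rep   <- 100000                             # number of simulated batches
defects <- rbinom(n_rep, size = 20, prob = 0.05)
mean(defects > 2)                             # simulation estimate
1 - pbinom(2, size = 20, prob = 0.05)         # exact value, approx. 0.0755
```

Answers of this kind make the numerical step trivial to check automatically, since the printed value can be compared directly with the correct result.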
Considering both natural language and R-coded answers, we can conclude that ChatGPT exhibits conceptual or interpretative errors in fewer than 15 of the 85 questions. We considered the possibility that the difficulties in solving these questions properly could be due to a poorer ability to interpret the Spanish language. To verify this, we retested these questions 10 more times in their original Spanish version and in their corresponding English translations (additional material) and did not find any difference in accuracy between the two languages. This seems to indicate that ChatGPT has an equal ability to understand both languages, and that the poor performance on some items is not due to difficulty in interpreting natural language in Spanish. The additional material describes some of these incorrect answers, and it is worth noting that ChatGPT seems to face difficulties with conditional probability exercises that are not presented in a straightforward manner. While it is able to identify the conditional problem and describe the necessary operations (in both natural language and R-coded answers), it often struggles when performing them. This issue is especially noticeable in the third section of exercises 10, 16, and 20.

## Discussion and Conclusions

This study compares the responses to probability exercises generated by ChatGPT with those provided by university students. To achieve this, a representative set of 23 exercises was analyzed. These exercises were previously proposed by five lecturers to computer engineering students taking their first year of statistics courses at a public university in Spain. Each question was presented to ChatGPT3.5 several times, using exactly the same phrasing in Spanish. Additionally, ChatGPT was asked to generate R code to calculate the numerical value. The written answers were evaluated by the lecturers and compared with those obtained by the students and with the scores of the purely numerical answers of ChatGPT. ChatGPT demonstrated superior skills in formulation, organization, and reasoning compared to the average student in natural language responses, but made frequent errors in numerical operations. When correct reasoning was applied but numerical errors were made, lecturers applied a penalty of approximately 25%. Even so, and considering only the worst answers to each question, ChatGPT outperformed the students in 16 of the 23 exercises. The results indicate that when ChatGPT provides R-coded responses, the answer is always correct for 45% of the questions, always incorrect for 17%, and correct or incorrect with varying ratios for the remaining 38%. Issues with numerical operations disappear when ChatGPT is asked to generate R-coded answers, and it sometimes employs solution methods (such as simulation of all possibilities and numerical integration) that it did not consider in the answers written in natural language. ChatGPT consistently provides unique answers and often uses complex reasoning that is well articulated and draws on prior knowledge. While ChatGPT is capable of generating well-founded answers and detailed explanations, there is still room for improvement in its understanding of certain probability concepts. However, when providing answers in R code, the explanations are particularly noteworthy and are a valuable resource for computer engineering students to enhance their coding skills while learning about probability. Furthermore, the analysis indicates that ChatGPT's performance in answering questions in both Spanish and English is similar, regardless of whether the answers are provided in natural language or R code. It is highly likely that future generations of LLMs will effectively overcome the limitations demonstrated by ChatGPT in mathematical operations.
Augmented Large Language Models, such as those proposed by Schick et al. (2023), Yao et al. (2022), and Mialon et al. (2023), can rely on specialized tools like calculators, code execution environments, and specific mathematical tools to carry out such operations seamlessly. Furthermore, according to the Chinchilla scaling law (Hoffmann et al., 2023), LLMs trained with more data will exhibit increased performance on a wide range of tasks, including those required to solve probability exercises like the ones in question. Thus, we can expect an improved generation of large language models to significantly enhance their performance in solving such exercises in the near future.

The availability of intelligent assistants such as ChatGPT has revolutionized the way we approach teaching and learning. With the help of AI, students now have access to a vast collection of resources and receive personalized support that was previously impossible. As a result, traditional teaching methods such as completing collections of exercises or web quizzes for solving numerical problems are becoming less relevant and less effective. However, LLMs present new possibilities: since students do not know a priori whether the answers provided by the chat are correct or incorrect, they must critically analyse and understand the answers, which improves their knowledge and enables them to work independently. Students can be asked to solve a series of statistical exercises with the help of ChatGPT and to explain why an answer is correct or incorrect, which can force them to engage more critically with the subject content. Moreover, LLMs designed to interact in the form of dialog offer interesting opportunities to be used as tutors. Students can not only request to solve and explain an exercise, but also engage in a dialog with the LLM to request further explanations and interactively explore elements they do not fully understand. In addition, LLMs can create new exercises of the same kind for students to practice and prepare better for exams, and they can correct the answers in a personalized way. This shift towards personalized, AI-based learning has the potential to revolutionize education, making it more accessible and effective for students around the world. While traditional teaching methods may still have their place, it is clear that LLM-powered intelligent assistants like ChatGPT are changing the way we learn and teach, paving the way for a more efficient and effective educational future. The versatility of LLMs also opens the door to fine-tuning models for specific use-cases in education, either through model fine-tuning or through prompt engineering and in-context learning. However, it is important to be mindful of the existing limitations and risks associated with these models, including biases, inaccuracies, and the documented tendency to stray harmfully from the intended context of operation.
2301.02870
Sublinear Time Algorithms for Several Geometric Optimization (With Outliers) Problems In Machine Learning
In this paper, we study several important geometric optimization problems arising in machine learning. First, we revisit the Minimum Enclosing Ball (MEB) problem in Euclidean space $\mathbb{R}^d$. The problem has been extensively studied before, but real-world machine learning tasks often need to handle large-scale datasets so that we cannot even afford linear time algorithms. Motivated by the recent studies on {\em beyond worst-case analysis}, we introduce the notion of stability for MEB, which is natural and easy to understand. Roughly speaking, an instance of MEB is stable, if the radius of the resulting ball cannot be significantly reduced by removing a small fraction of the input points. Under the stability assumption, we present two sampling algorithms for computing radius-approximate MEB with sample complexities independent of the number of input points $n$. In particular, the second algorithm has the sample complexity even independent of the dimensionality $d$. We also consider the general case without the stability assumption. We present a hybrid algorithm that can output either a radius-approximate MEB or a covering-approximate MEB. Our algorithm improves the running time and the number of passes for the previous sublinear MEB algorithms. Our method relies on two novel techniques, the Uniform-Adaptive Sampling method and Sandwich Lemma. Furthermore, we observe that these two techniques can be generalized to design sublinear time algorithms for a broader range of geometric optimization problems with outliers in high dimensions, including MEB with outliers, one-class and two-class linear SVMs with outliers, $k$-center clustering with outliers, and flat fitting with outliers. Our proposed algorithms also work fine for kernels.
Hu Ding
2023-01-07T15:03:45Z
http://arxiv.org/abs/2301.02870v1
Sublinear Time Algorithms for Several Geometric Optimization (With Outliers) Problems In Machine Learning

###### Abstract

In this paper, we study several important geometric optimization problems arising in machine learning. First, we revisit the Minimum Enclosing Ball (MEB) problem in Euclidean space \(\mathbb{R}^{d}\). The problem has been extensively studied before, but real-world machine learning tasks often need to handle large-scale datasets so that we cannot even afford linear time algorithms. Motivated by the recent studies on _beyond worst-case analysis_, we introduce the notion of stability for MEB, which is natural and easy to understand. Roughly speaking, an instance of MEB is stable if the radius of the resulting ball cannot be significantly reduced by removing a small fraction of the input points. Under the stability assumption, we present two sampling algorithms for computing radius-approximate MEB with sample complexities independent of the number of input points \(n\). In particular, the second algorithm has the sample complexity even independent of the dimensionality \(d\). We also consider the general case without the stability assumption. We present a hybrid algorithm that can output either a radius-approximate MEB or a covering-approximate MEB. Our algorithm improves the running time and the number of passes for the previous sublinear MEB algorithms. Our method relies on two novel techniques, the Uniform-Adaptive Sampling method and the Sandwich Lemma. Furthermore, we observe that these two techniques can be generalized to design sublinear time algorithms for a broader range of geometric optimization problems with outliers in high dimensions, including MEB with outliers, one-class and two-class linear SVMs with outliers, \(k\)-center clustering with outliers, and flat fitting with outliers. Our proposed algorithms also work fine for kernels.

## 1 Introduction

Many real-world machine learning tasks can be formulated as geometric optimization problems in Euclidean space. We start with a fundamental geometric optimization problem, _Minimum Enclosing Ball (MEB)_, which has attracted a lot of attention in the past years. Given a set \(P\) of \(n\) points in Euclidean space \(\mathbb{R}^{d}\), where \(d\) could be quite high, the problem of MEB is to find a ball with minimum radius to cover all the points in \(P\) [20, 45, 69]. MEB finds several important applications in machine learning [76]. For example, the popular classification model _Support Vector Machine (SVM)_ can be formulated as an MEB problem in high dimensional space if all the mapped points have the same norm under the kernel method, _e.g.,_ the popular radial basis function kernel; this SVM is called the "Core Vector Machine (CVM)", which has been one of the most important SVM training methods for large-scale data sets since it was proposed in 2005 [90]. Hence fast MEB algorithms can be used to speed up its training procedure [29, 30, 90]. Recently, MEB has also been studied for preserving privacy [44, 77] and for quantum cryptography [53]. Usually, we consider approximate solutions of MEB. If a ball covers all the \(n\) points but has a radius larger than the optimal one, we call it a **"radius-approximate solution"**; if a ball has a radius no larger than the optimal one but covers fewer than \(n\) points, we call it a **"covering-approximate solution"** instead (the formal definitions are shown in Section 2). In the era of big data, the dataset could be so large that we cannot even afford linear time algorithms.
This motivates us to ask the following questions: _Is it possible to develop approximation algorithms for MEB that run in sublinear time in the input size? And how about other high-dimensional geometric optimization problems arising in machine learning?_ It is common to assume that the input data is represented by an \(n\times d\) matrix, and any algorithm having complexity \(o(nd)\) can be considered a sublinear time algorithm. In practice, data items are usually represented as sparse vectors in \(\mathbb{R}^{d}\), so operations like distance computation can be fast even though the dimensionality \(d\) is high (_e.g.,_ if each vector has \(s\ll d\) non-zero entries, the time for computing the distance is \(O(s)\) rather than \(O(d)\); see the concluding remarks of [30]). Moreover, the number of input points \(n\) is often much larger than the dimensionality \(d\) in many real-world scenarios. **Therefore, we are interested in designing algorithms that have complexities sublinear in \(n\) (or linear in \(n\) but with a small factor before it).**

### Our Main Ideas and Results

Our idea for designing sublinear time MEB algorithms is inspired by the recent studies on optimization with respect to stable instances, under the umbrella of _beyond worst-case analysis_ [82]. For example, several recent works introduced the notion of stability for problems like clustering and max-cut [8, 13, 18]. In this paper, we give the notion of **"stability"** for MEB. Roughly speaking, an instance of MEB is stable if the radius of the resulting ball cannot be significantly reduced by removing a small fraction of the input points (_e.g.,_ the radius cannot be reduced by \(10\%\) if only \(1\%\) of the points are removed). The rationale behind this notion is quite natural: if the given instance is not stable, the small fraction of points causing a significant reduction in the radius should be viewed as outliers (or we may need multiple balls to cover the input points, as in the \(k\)-center clustering problem [51, 61]). To the best of our knowledge, this is the first study on MEB from the perspective of stability. We prove an important implication of the stability assumption: informally speaking, if an instance of MEB is stable, its center should reveal a certain extent of robustness in the space (Section 3). Using this implication, we propose two sampling algorithms for computing a \((1+\epsilon)\)-radius approximate MEB with sublinear time complexities (Section 4); in particular, our second algorithm has a sample size (_i.e.,_ the number of sampled points) independent of the number of input points \(n\) and the dimensionality \(d\) (to the best of our knowledge, this is the first algorithm achieving \((1+\epsilon)\)-radius approximation with such a sublinear complexity). Moreover, we have an interesting observation: the ideas developed under the stability assumption can even help us to solve general instances without the stability assumption, if we relax the requirement slightly. We introduce **a hybrid approach** that can output either a radius-approximate MEB or a covering-approximate MEB, depending upon whether the input instance is sufficiently stable1 (Section 5). Also, a byproduct is that we can infer the stability degree of the given instance from the output.
It is worth noting that the simple uniform sampling idea based on VC-dimension [58, 92] can only yield a "bi-criteria" approximation, which has errors on both the radius and the number of covered points (see the discussion of our first sampling algorithm in Section 4.1).

Compared with the sublinear time MEB algorithm proposed by Clarkson _et al._ [30], we reduce the total running time from \(\tilde{O}(\epsilon^{-2}n+\epsilon^{-1}d+M)\) to \(O(n+h(\epsilon,\delta)\cdot d+M)\), where \(M\) is the number of non-zero entries in the input \(n\times d\) matrix and \(h(\epsilon,\delta)\) is a factor depending on the pre-specified radius error bound \(\epsilon\) and covering error bound \(\delta\). Thus, our improvement is significant if \(n\gg d\). The only tradeoff is that we allow a covering approximation for unstable instances (given the lower bound proved by [30], it is quite unlikely that the term \(\epsilon^{-2}n\) can be reduced if we keep restricting the output to be a \((1+\epsilon)\)-radius approximation). Moreover, our algorithm only needs **uniform sampling and a single pass over the data**; on the other hand, the algorithm of [30] needs \(\tilde{O}(\epsilon^{-1})\) passes (the details are shown in Table 1). In addition to the stability idea, our method also relies on two key techniques, the novel "**Uniform-Adaptive Sampling**" method and the "**Sandwich Lemma**". Roughly speaking, the Uniform-Adaptive Sampling method helps us to bound the error induced in each "randomized greedy selection" step, while the Sandwich Lemma enables us to estimate the objective value of each candidate and select the best one in sublinear time.

Finally, we present several extensions of our result. In practice, we may assume the presence of outliers in given datasets. In particular, with the rapid development of machine learning, the field of _adversarial machine learning_ has attracted a great amount of attention [17, 52]. A small set of outliers could be added by an adversarial attacker to make the model severely deviate and cause unexpected errors (the seminal paper [16] on poisoning attacks against SVM has just received the _ICML2022 Test of Time award_). To defend against such poisoning attacks, we often design robust algorithms that are resilient against outliers [65]. However, the presence of outliers makes the problem not only non-convex but also highly combinatorial in high dimensions; for example, if \(m\) of the \(n\) input data items are outliers (\(m<n\)), we have to consider an exponentially large number \(\binom{n}{m}\) of different possible cases when optimizing the objective function. So we consider designing sublinear time algorithms for the following problems.

**MEB with outliers.** MEB with outliers is a natural generalization of the MEB problem, where the goal is to find the minimum ball covering at least a certain fraction of the input points. We can apply MEB with outliers to solve many practical problems (_e.g.,_ outlier recognition) in data mining and data analysis [89]. We define the stability for MEB with outliers and propose a sublinear time approximation algorithm. To the best of our knowledge, our algorithm is the first sublinear time algorithm for the MEB with outliers problem (compared with previous linear time algorithms like [22]).

\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline\hline
**Results** & **Quality** & **Time** & **Number of passes** & **Extendibility for MEB with outliers** \\
\hline
Clarkson _et al._ [30] & \((1+\epsilon)\)-rad. & \(\tilde{O}(\epsilon^{-2}n+\epsilon^{-1}d+M)\) & \(\tilde{O}(\epsilon^{-1})\) & N/A \\
\hline
Core-sets methods [20, 29, 69, 79] & \((1+\epsilon)\)-rad. & roughly \(O(\epsilon^{-1}nd)\), or \(O(\epsilon^{-1}(n+d+M))\) if \(M=o(nd)\) & \(O(\epsilon^{-1})\) & bi-criteria approx. [22] \\
\hline
Numerical method [84] & \((1+\epsilon)\)-rad. & \(\tilde{O}(\epsilon^{-1/2}nd)\), or \(\cdots\) if \(M=o(nd)\) & \(O(\epsilon^{-1/2})\) & N/A \\
\hline
Numerical method [6] & \((1+\epsilon)\)-rad. & \(\tilde{O}(nd+n\sqrt{d}/\sqrt{\epsilon})\) & \(\tilde{O}(d+\sqrt{d/\epsilon})\) & N/A \\
\hline
Streaming algorithm [4, 25] & 1.22-rad. & \(O(nd/\epsilon^{5})\) & one pass & N/A \\
\hline
**This paper** (stable instance) & \((1+\epsilon)\)-rad. & \(O(C_{1}\cdot d)\) (Sec. 4.2) & uniform sampling & N/A \\
\hline
**This paper** (general instance) & \((1+\epsilon)\)-rad. or \((1-\delta)\)-cov. & \(O\big((n+C_{2})d\big)\), or \(O(n+C_{2}\cdot d+M)\) if \(M=o(nd)\) (Sec. 5.3) & uniform sampling & \((1+\epsilon)\)-rad. or \((1-\delta)\)-cov. (Sec. 6) \\
\hline\hline
\end{tabular}
\end{table}
Table 1: The existing and our results for computing MEB in high dimensions. In the table, "rad." and "cov." stand for "radius approximation" and "covering approximation", respectively. "\(M\)" is the number of non-zero entries in the input \(n\times d\) matrix. The factor \(C_{1}\) depends on \(\epsilon\) and the stability degree of the given instance; the factor \(C_{2}\) depends on \(\epsilon\) and \(\delta\).

**Other enclosing with outliers problems.** Besides MEB with outliers, we observe that our proposed techniques can be used to solve a broader range of enclosing with outliers problems. We define a general optimization problem called **minimum enclosing "x" (MEX) with outliers**, where the "x" could be a specified kind of shape (_e.g.,_ the shape is a ball for MEB with outliers). We prove that it is possible to generalize the Uniform-Adaptive Sampling method and the Sandwich Lemma to adapt to the shape "x", as long as it satisfies several properties. In particular, we focus on the MEX with outliers problems of flat fitting, \(k\)-center clustering, and SVM with outliers; a common characteristic of these problems is that each of them has an iterative algorithm for its vanilla version (without outliers), based on greedy selection, that is similar to the MEB algorithm of [20]. Though these problems have been widely studied before, the research on their sublinear time algorithms is still quite limited.

**Remark 1**.: Because the geometric optimization problems studied in this paper are motivated by machine learning applications, we also take **kernels** [85] into account. Our proposed algorithms only need to conduct basic operations, like computing distances and inner products, on the data items. Therefore, they also work fine for kernels.

**The rest of the paper is organized as follows.** In Section 1.2, we summarize the previous results that are related to our work. In Section 2, we present the important definitions and briefly introduce the coreset construction method for MEB from [20] (which will be used in our following algorithms and analysis). In Section 3, we prove the implication of MEB stability.
Further, in Section 4 we propose two sublinear time MEB algorithms for stable instances. In Section 5, we propose the two key techniques, Uniform-Adaptive Sampling and the Sandwich Lemma, and then present our sublinear time algorithm for general MEB without the stability assumption. In Section 6, we extend the idea of Section 5 to the MEB with outliers problem. Finally, in Section 7 we present the generalized Uniform-Adaptive Sampling and Sandwich Lemma, together with their applications to several enclosing with outliers problems (including flat fitting, \(k\)-center clustering, and SVM with outliers).

### Previous Work

The works most related to ours are [7, 30]. Clarkson _et al._ [30] developed an elegant perceptron framework for solving several optimization problems arising in machine learning, such as MEB. Given a set of \(n\) points in \(\mathbb{R}^{d}\) represented as an \(n\times d\) matrix with \(M\) non-zero entries, their framework can compute the MEB in \(\tilde{O}\big(\frac{n}{\epsilon^{2}}+\frac{d}{\epsilon}\big)\) time 2. Note that the parameter "\(\epsilon\)" is an additive error (_i.e.,_ the resulting radius is \(r+\epsilon\) if \(r\) is the radius of the optimal MEB), which can be converted into a relative error (_i.e.,_ \((1+\epsilon)r\)) in \(O(M)\) preprocessing time. Thus, if \(M=o(nd)\), the running time is still sublinear in the input size \(nd\) (please see Table 1). The framework of [30] also inspired the sublinear time algorithms for training SVMs [60] and approximating Semidefinite Programs [47]. Hayashi and Yoshida [59] presented a sampling-based method for minimizing quadratic functions, of which the MEB objective is a special case, but it yields a large additive error \(O(\epsilon n^{2})\).

Footnote 2: The asymptotic notation \(\tilde{O}(f)=O\big(f\cdot\texttt{polylog}(\frac{nd}{\epsilon})\big)\).

Alon _et al._ [7] studied the following property testing problem: given a set of \(n\) points in some metric space, determine whether the instance is \((k,b)\)-clusterable, where an instance is called \((k,b)\)-clusterable if it can be covered by \(k\) balls with radius (or diameter) \(b>0\). They proposed several sampling algorithms to answer the question "approximately". Specifically, they distinguish between the case that the instance is \((k,b)\)-clusterable and the case that it is \(\epsilon\)-far away from being \((k,b^{\prime})\)-clusterable, where \(\epsilon\in(0,1)\) and \(b^{\prime}\geq b\). "\(\epsilon\)-far" means that more than \(\epsilon n\) points should be removed so that it becomes \((k,b^{\prime})\)-clusterable. Note that their method cannot yield a single-criterion radius-approximation or covering-approximation algorithm for the MEB problem, since it will introduce unavoidable errors on the radius and the number of covered points due to the relaxation of "\(\epsilon\)-far". But it is possible to convert it into a "**bi-criteria**" approximation, which allows approximations of both the radius and the number of uncovered outliers (_e.g.,_ discarding more than the pre-specified number of outliers).

**MEB and core-set.** A _core-set_ is a small set of points that approximates the structure/shape of a much larger point set [1, 43, 80]. The core-set idea has also been used to compute approximate MEBs in high dimensional space [67, 69, 79, 22]. Badoiu and Clarkson [20] showed that it is possible to find a core-set of size \(\lceil 2/\epsilon\rceil\) that yields a \((1+\epsilon)\)-radius approximate MEB.
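To make the flavour of these iterative methods concrete, the following is a minimal R sketch of the simple farthest-point iteration associated with [20] (the variant commonly analyzed as needing roughly \(\lceil 1/\epsilon^{2}\rceil\) iterations, rather than the \(\lceil 2/\epsilon\rceil\)-size core-set construction itself); the function name and the random test data are ours, not the paper's.

```r
# A minimal sketch of the farthest-point MEB iteration: start at an
# arbitrary input point and repeatedly pull the center towards the
# point currently farthest from it, with step size 1/(i+1).
meb_approx <- function(P, eps) {
  ctr <- P[1, ]                            # arbitrary starting point
  for (i in seq_len(ceiling(1 / eps^2))) {
    d2  <- rowSums(sweep(P, 2, ctr)^2)     # squared distances to center
    far <- P[which.max(d2), ]              # farthest input point
    ctr <- ctr + (far - ctr) / (i + 1)     # Frank-Wolfe style step
  }
  list(center = ctr,
       radius = sqrt(max(rowSums(sweep(P, 2, ctr)^2))))
}

# Usage: an approximate MEB of 1000 random points in 50 dimensions.
P <- matrix(rnorm(1000 * 50), nrow = 1000)
ball <- meb_approx(P, eps = 0.1)
```

Each iteration touches all \(n\) points, so this vanilla version runs in linear rather than sublinear time; the sampling techniques discussed in this paper aim precisely at avoiding such full passes.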
Several other methods can yield even lower core-set sizes, such as [67, 21]. In fact, the algorithm for computing the core-set of MEB is a _Frank-Wolfe_ algorithm [46], which has been systematically studied by Clarkson [29]. Other MEB algorithms that do not rely on core-sets include [6, 45, 84]. Agarwal and Sharathkumar [4] presented a streaming \((\frac{1+\sqrt{3}}{2}+\epsilon)\)-radius approximation algorithm for computing MEB; later, Chan and Pathak [25] proved that the same algorithm actually yields an approximation ratio less than \(1.22\). Very recently, Cohen-Addad _et al._ [31] proposed sublinear time algorithms for computing high dimensional power means (_e.g.,_ geometric median and mean points) by using core-sets.

**MEB with outliers and \(k\)-center clustering with outliers.** The MEB with outliers problem can be viewed as the case \(k=1\) of the \(k\)-center clustering with outliers problem [27]. Badoiu _et al._ [22] extended their core-set idea to the problems of MEB and \(k\)-center clustering with outliers, and achieved linear time bi-criteria approximation algorithms (if \(k\) is assumed to be a constant). Huang _et al._ [62] and Ding _et al._ [41] respectively showed that a simple uniform sampling approach can yield a bi-criteria approximation for \(k\)-center clustering with outliers. Several algorithms for the low dimensional MEB with outliers have also been developed [54, 71, 42]. There also exist a number of works on streaming MEB and \(k\)-center clustering with outliers [28, 24, 94, 72]. Other related topics include robust optimization [14], robust fitting [3, 57], and optimization with uncertainty [23].

**SVM with outliers.** Given two point sets \(P_{1}\) and \(P_{2}\) in \(\mathbb{R}^{d}\), the problem of _Support Vector Machine (SVM)_ is to find the largest margin to separate \(P_{1}\) and \(P_{2}\) (if they are separable) [26]. SVM can be formulated as a quadratic programming problem, and a number of efficient techniques have been developed in the past, such as the soft margin SVM [81, 32], \(\nu\)-SVM [86, 33], and Core-SVM [91]. There also exist a number of works on designing robust algorithms for SVM with outliers [88, 40, 93].

**Flat fitting with outliers.** Given an integer \(j\geq 0\) and a set of points in \(\mathbb{R}^{d}\), the flat fitting problem is to find a \(j\)-dimensional flat having the smallest maximum distance to the input points [55]; obviously, the MEB problem is a special case with \(j=0\). In high dimensions, Har-Peled and Varadarajan [56] provided a linear time algorithm if \(j\) is assumed to be fixed; their running time was further reduced by Panigrahy [79] based on a core-set approach. There also exist several methods considering flat fitting with outliers, but only for the low-dimensional case [3, 57].

**Optimizations under stability.** Bilu and Linial [18] showed that the Max-Cut problem becomes easier if the given instance is stable with respect to perturbation on edge weights. Ostrovsky _et al._ [78] proposed a separation condition for \(k\)-means clustering, which refers to the scenario where the clustering cost of \(k\)-means is significantly lower than that of \((k-1)\)-means for a given instance, and demonstrated the effectiveness of the Lloyd heuristic [70] under the separation condition. Balcan _et al._ [13] introduced the concept of approximation-stability for finding the ground-truth of \(k\)-median and \(k\)-means clustering.
Awasthi _et al._ [8] introduced another notion of clustering stability and gave a PTAS for \(k\)-median and \(k\)-means clustering. More clustering algorithms under stability assumptions were studied in [12, 11, 9, 10, 68].

**Sublinear time algorithms.** Besides the aforementioned sublinear MEB algorithm [30], a number of sublinear time algorithms have been studied for problems like clustering [63, 64, 73, 74, 43] and property testing [50, 15]. More detailed discussions on sublinear time algorithms can be found in the survey papers [34, 83].

## 2 Definitions and Preliminaries

We describe and analyze our algorithms in the unit-cost RAM model [75]. Suppose the input is represented by an \(n\times d\) matrix (_i.e.,_ \(n\) points in \(\mathbb{R}^{d}\)). As mentioned in [30], it is common to assume that each entry of the matrix can be recovered in constant time. We let \(|A|\) denote the number of points of a given point set \(A\) in \(\mathbb{R}^{d}\), and \(||x-y||\) denote the Euclidean distance between two points \(x\) and \(y\) in \(\mathbb{R}^{d}\). We use \(\mathbb{B}(c,r)\) to denote the ball centered at a point \(c\) with radius \(r>0\). Below, we give the definitions for MEB and the notion of stability. To keep the structure of our paper compact, we place the other necessary definitions for our extensions in Section 5, Section 6, and Section 7, respectively.

Definition 1 (Minimum Enclosing Ball (MEB)): Given a set \(P\) of \(n\) points in \(\mathbb{R}^{d}\), the MEB problem is to find a ball with minimum radius to cover all the points in \(P\). The resulting ball and its radius are denoted by \(\mathbf{MEB}(P)\) and \(\mathbf{Rad}(P)\), respectively.

Definition 2 (Radius Approximation and Covering Approximation): Let \(0<\epsilon,\delta<1\). A ball \(\mathbb{B}(c,r)\) is called a \((1+\epsilon)\)-radius approximation of \(\mathbf{MEB}(P)\), if the ball covers all points in \(P\) and has radius \(r\leq(1+\epsilon)\mathbf{Rad}(P)\). On the other hand, the ball is called a \((1-\delta)\)-covering approximation of \(\mathbf{MEB}(P)\), if it covers at least \((1-\delta)n\) points in \(P\) and has radius \(r\leq\mathbf{Rad}(P)\).

Both radius approximation and covering approximation are single-criterion approximations. When \(\epsilon\) (_resp.,_ \(\delta\)) approaches \(0\), the \((1+\epsilon)\)-radius approximation (_resp.,_ \((1-\delta)\)-covering approximation) approaches \(\mathbf{MEB}(P)\). The "covering approximation" may seem similar to "MEB with outliers", but actually they are quite different (see Definition 4 in Section 5).

Definition 3 ((\(\alpha\), \(\beta\))-stable): Given a set \(P\) of \(n\) points in \(\mathbb{R}^{d}\) with two parameters \(\alpha\) and \(\beta\) in \((0,1)\), \(P\) is an (\(\alpha\), \(\beta\))-stable instance if **(1)** \(\mathbf{Rad}(P\setminus Q)>(1-\alpha)\mathbf{Rad}(P)\) for any \(Q\subset P\) with \(|Q|<\beta n\), and **(2)** there exists a \(Q^{\prime}\subset P\) with \(|Q^{\prime}|=\lceil\beta n\rceil\) having \(\mathbf{Rad}(P\setminus Q^{\prime})\leq(1-\alpha)\mathbf{Rad}(P)\).

**The intuition of Definition 3.** Actually, \(\beta\) can be viewed as a function of \(\alpha\), and vice versa. For example, given an \(\alpha>0\), there always exists a \(\beta\geq\frac{1}{n}\) such that \(P\) is an (\(\alpha\), \(\beta\))-stable instance (\(\beta\geq\frac{1}{n}\) because we must remove at least one point).
The property of stability indicates that \(\mathbf{Rad}(P)\) cannot be significantly reduced unless removing a large enough fraction of points from \(P\). For a fixed \(\alpha\), the larger \(\beta\) is, the more stable \(P\) should be. Similarly, for a fixed \(\beta\), the smaller \(\alpha\) is, the more stable \(P\) should be. Actually, our stability assumption is quite reasonable in practice. For example, if the radius can be reduced considerably (say by \(\alpha=10\%\)) after removing only a very small fraction (say \(\beta=1\%\)) of points, it is natural to view the small fraction of points as outliers. To better understand the notion of stability in high dimensions, we consider the following two examples.

**Example (i).** Suppose that the distribution of \(P\) is uniform and dense inside \(\mathbf{MEB}(P)\). Let \(\alpha\in(0,1)\) be a fixed number, and we study the corresponding \(\beta\) of \(P\). If we want the radius of the remaining \((1-\beta)n\) points to be as small as possible, intuitively we should remove the outermost \(\beta n\) points (since \(P\) is uniform and dense). Let \(Q^{\prime}\) denote the set of outermost \(\beta n\) points that has \(\mathbf{Rad}(P\setminus Q^{\prime})\leq(1-\alpha)\mathbf{Rad}(P)\). Then we have

\[1-\beta=\frac{|P\setminus Q^{\prime}|}{|P|}\approx\frac{Vol\big{(}\mathbf{MEB}(P\setminus Q^{\prime})\big{)}}{Vol\big{(}\mathbf{MEB}(P)\big{)}}=\frac{\big{(}\mathbf{Rad}(P\setminus Q^{\prime})\big{)}^{d}}{\big{(}\mathbf{Rad}(P)\big{)}^{d}}\leq(1-\alpha)^{d},\]

where \(Vol(\cdot)\) is the volume function. That is, \(1-\beta\leq(1-\alpha)^{d}\), and it implies \(\lim_{d\to\infty}\beta=1\) when \(\alpha\) is fixed; that means \(P\) tends to be very stable as \(d\) increases.

**Example (ii).** Consider a regular \(d\)-dimensional simplex \(P\) containing \(d+1\) points where each pair of points have the pairwise distance equal to \(1\). It is not hard to obtain \(\mathbf{Rad}(P)=\sqrt{\frac{d}{2(1+d)}}\), and we denote it by \(r_{d}\). If we remove \(\beta(d+1)\) points from \(P\), namely it becomes a regular \(d^{\prime}\)-dimensional simplex with \(d^{\prime}=(1-\beta)(d+1)-1\), the new radius is \(r_{d^{\prime}}=\sqrt{\frac{d^{\prime}}{2(1+d^{\prime})}}\). To achieve \(\frac{r_{d^{\prime}}}{r_{d}}\leq 1-\alpha\) with a fixed \(\alpha\), it is easy to see that \(1-\beta\) should be no larger than \(\frac{1}{1+(2\alpha-\alpha^{2})d}\); this implies \(\lim_{d\to\infty}\beta=1\). Similar to example (i), the instance \(P\) tends to be very stable as \(d\) increases.

**Remark 2**.: In practice, it is difficult to know the exact value of \(\beta\) for a fixed \(\alpha\). However, the value of \(\beta\) only affects the sample sizes in our proposed algorithms in Section 4, and thus only assuming a reasonable lower bound \(\beta_{0}<\beta\) is already sufficient. In Section 5, we also consider the general case without the stability assumption, where the proposed algorithm does not even need to input \(\beta_{0}\).

### A More Careful Analysis for Core-set Construction in [20]

We first briefly introduce the core-set construction for MEB, since it will be used in our proposed algorithms. Let \(0<\epsilon<1\). The algorithm in [20] yields an MEB core-set of size \(2/\epsilon\) (for convenience, we always assume that \(2/\epsilon\) is an integer). But there is a small issue in their paper. The analysis assumes that the exact MEB of the core-set is computed in each iteration, but in fact one may only compute an approximate MEB. Thus, an immediate question is whether the quality is still guaranteed with such a change. Kumar _et al._ [69] fixed this issue, and showed that computing a \((1+O(\epsilon^{2}))\)-approximate MEB for the core-set in each iteration still guarantees a core-set with size \(O(1/\epsilon)\), where the hidden constant is larger than 80. Clarkson [29] showed that the greedy core-set construction algorithm of MEB, as a special case of the Frank-Wolfe algorithm, yields a core-set with size slightly larger than \(4/\epsilon\). Note that there exist several other methods yielding even lower core-set sizes [21, 67], but their construction algorithms are more complicated and thus not applicable to our problems.
Below we show that it is possible to guarantee a core-set of [20] with the size being arbitrarily close to \(2/\epsilon\), even if we only compute an approximate MEB in each iteration. This improves the core-set sizes of [29, 69], and the new analysis is also interesting in its own right.

For the sake of completeness, we first briefly introduce the idea of the core-set construction algorithm in [20]. Given a point set \(P\subset\mathbb{R}^{d}\), the algorithm is a simple iterative procedure. Initially, it selects an arbitrary point from \(P\) and places it into an initially empty set \(T\). In each of the following \(2/\epsilon\) iterations, the algorithm updates the center of \(\mathbf{MEB}(T)\) and adds to \(T\) the farthest point from the current center of \(\mathbf{MEB}(T)\). Finally, the center of \(\mathbf{MEB}(T)\) induces a \((1+\epsilon)\)-approximation for \(\mathbf{MEB}(P)\). The selected set of \(2/\epsilon\) points (_i.e.,_ \(T\)) is called the core-set of MEB. To ensure the expected improvement in each iteration, they [20] showed that the following two inequalities hold if the algorithm always selects the farthest point to the current center of \(\mathbf{MEB}(T)\):

\[r_{i+1}\geq(1+\epsilon)\mathbf{Rad}(P)-L_{i};\ \ \ \ r_{i+1}\geq\sqrt{r_{i}^{2}+L_{i}^{2}}, \tag{1}\]

where \(r_{i}\) and \(r_{i+1}\) are the radii of \(\mathbf{MEB}(T)\) in the \(i\)-th and \((i+1)\)-th iterations, respectively, and \(L_{i}\) is the shifting distance of the center of \(\mathbf{MEB}(T)\) from the \(i\)-th to the \((i+1)\)-th iteration.

As mentioned earlier, we often compute only an approximate \(\mathbf{MEB}(T)\) in each iteration. In the \(i\)-th iteration, we let \(c_{i}\) and \(o_{i}\) denote the centers of the exact and the approximate \(\mathbf{MEB}(T)\), respectively. Suppose that \(||c_{i}-o_{i}||\leq\xi r_{i}\), where \(\xi\in(0,\frac{\epsilon}{1+\epsilon})\) (we will see why this bound is needed later). Using another algorithm proposed in [20], one can obtain the point \(o_{i}\) in \(O(\frac{1}{\xi^{2}}|T|d)\) time. Note that we only compute \(o_{i}\) rather than \(c_{i}\) in each iteration. Hence we can only select the farthest point (say \(q\)) to \(o_{i}\). If \(||q-o_{i}||\leq(1+\epsilon)\mathbf{Rad}(P)\), we are done and a \((1+\epsilon)\)-approximation of MEB is already obtained. Otherwise, we have

\[(1+\epsilon)\mathbf{Rad}(P)<||q-o_{i}||\leq||q-c_{i+1}||+||c_{i+1}-c_{i}||+||c_{i}-o_{i}||\leq r_{i+1}+L_{i}+\xi r_{i} \tag{2}\]

by the triangle inequality (see Figure 1). In other words, we should replace the first inequality of (1) by "\(r_{i+1}>(1+\epsilon)\mathbf{Rad}(P)-L_{i}-\xi r_{i}\)". Also, the second inequality of (1) still holds since it depends only on the property of the exact MEB (see [20, Lemma 2.1]). Thus, we have

\[r_{i+1}\geq\max\Big{\{}\sqrt{r_{i}^{2}+L_{i}^{2}},(1+\epsilon)\mathbf{Rad}(P)-L_{i}-\xi r_{i}\Big{\}}. \tag{3}\]

This leads to the following theorem whose proof can be found in Section A.

Theorem 1: _In the core-set construction algorithm of [20], if one computes an approximate MEB for \(T\) in each iteration and the resulting center \(o_{i}\) has the distance to \(c_{i}\) less than \(\xi r_{i}=s\frac{\epsilon}{1+\epsilon}r_{i}\) for some \(s\in(0,1)\), the final core-set size is bounded by \(z=\frac{2}{(1-s)\epsilon}\). Also, the bound could be arbitrarily close to \(2/\epsilon\) when \(s\) is small enough._

We can simply set \(s\) to be any constant in \((0,1)\); for instance, if \(s=1/3\), the core-set size will be bounded by \(z=3/\epsilon\).
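To make the just-described construction concrete, here is a minimal Python sketch of it. It is only an illustration under our own choices: the helper \(\texttt{approx\_center}\) stands in for the \(O(\frac{1}{\xi^{2}}|T|d)\)-time approximate-center subroutine of [20] (here, simple steps toward the farthest point of \(T\)), and the iteration counts are illustrative, not the exact bounds.

```python
import numpy as np

def approx_center(T, iters=200):
    """Stand-in for the approximate MEB-center subroutine of [20]:
    repeatedly step toward the farthest point of T with step size 1/(i+1),
    which drives the center to within a small fraction of Rad(T)."""
    c = np.array(T[0], dtype=float)
    for i in range(1, iters + 1):
        far = T[np.argmax(np.linalg.norm(T - c, axis=1))]
        c += (far - c) / (i + 1)
    return c

def meb_coreset(P, eps):
    """Core-set construction of [20]: start with an arbitrary point, then for
    2/eps iterations add the farthest point to the current (approximate) center."""
    T = [P[0]]
    c = np.array(P[0], dtype=float)
    for _ in range(int(np.ceil(2.0 / eps))):
        far = P[np.argmax(np.linalg.norm(P - c, axis=1))]
        T.append(far)                        # add the farthest point to T
        c = approx_center(np.array(T))       # update the center o_i
    r = np.linalg.norm(P - c, axis=1).max()  # radius needed to cover all of P
    return c, r, np.array(T)
```

The sketch selects the exact farthest point for simplicity; the analysis above accounts for the case where only the approximate center is available.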
Since \(|T|\leq z\) in each iteration, the total running time is \(O\Big{(}z\big{(}|P|d+\frac{1}{\xi^{2}}zd\big{)}\Big{)}=O\Big{(}\frac{1}{\epsilon}(|P|+\frac{1}{\epsilon^{3}})d\Big{)}\).

Remark 3: We also want to emphasize a simple observation on the above core-set construction procedure, which will be used in our algorithms and analyses later on. The algorithm always selects the farthest point to \(o_{i}\) in each iteration. However, this is actually not necessary. As long as the selected point has distance at least \((1+\epsilon)\mathbf{Rad}(P)\), the result presented in Theorem 1 is still true. If no such a point exists (_i.e.,_ \(P\setminus\mathbb{B}\big{(}o_{i},(1+\epsilon)\mathbf{Rad}(P)\big{)}=\emptyset\)), a \((1+\epsilon)\)-radius approximate MEB (_i.e.,_ the ball \(\mathbb{B}\big{(}o_{i},(1+\epsilon)\mathbf{Rad}(P)\big{)}\)) has been already obtained.

Remark 4 (kernels): If each point \(p\in P\) is mapped to \(\psi(p)\) in \(\mathbb{R}^{D}\) by some kernel function (_e.g.,_ as in CVM [90]), where \(D\) could be \(+\infty\), we can still run the core-set algorithm of [20], since the algorithm only needs to compute the distances, and the center \(o_{i}\) is always a convex combination of \(T\) in each iteration; instead of returning an explicit center, the algorithm will output the coefficients of the convex combination for the center. Similarly, our Algorithm 2 presented in Section 4.2 also works fine for kernels.

## 3 Implication of the Stability Property

In this section, we show an important implication of the stability property of Definition 3.

Figure 1: An illustration of (2).

**Theorem 2**: _Assume \(\epsilon,\epsilon^{\prime},\beta_{0}\in(0,1)\). Let \(P\) be an \((\epsilon^{2},\beta)\)-stable instance of the MEB problem with \(\beta>\beta_{0}\), and \(o\) be the center of its MEB. Let \(\tilde{o}\) be a given point in \(\mathbb{R}^{d}\). Assume the number \(r\leq(1+\epsilon^{\prime 2})\mathbf{Rad}(P)\). If the ball \(\mathbb{B}\big{(}\tilde{o},r\big{)}\) covers at least \((1-\beta_{0})n\) points from \(P\), the following holds:_

\[||\tilde{o}-o||<(2\sqrt{2}\epsilon+\sqrt{3}\epsilon^{\prime})\mathbf{Rad}(P). \tag{4}\]

Theorem 2 indicates that if a ball covers a large enough subset of \(P\) and its radius is bounded, its center should be close to the center of \(\mathbf{MEB}(P)\). Let \(P^{\prime}=\mathbb{B}\big{(}\tilde{o},r\big{)}\cap P\), and assume \(o^{\prime}\) is the center of \(\mathbf{MEB}(P^{\prime})\). To bound the distance between \(\tilde{o}\) and \(o\), we bridge them by the point \(o^{\prime}\) (since \(||\tilde{o}-o||\leq||\tilde{o}-o^{\prime}||+||o^{\prime}-o||\)). The following are two key lemmas for proving Theorem 2.

Lemma 1: _The distance \(||o^{\prime}-o||\leq\sqrt{2}\epsilon\mathbf{Rad}(P)\)._

Proof: We consider two cases: \(\mathbf{MEB}(P^{\prime})\) is totally covered by \(\mathbf{MEB}(P)\), and otherwise. For the first case (see Figure 2(a)), it is easy to see that

\[||o^{\prime}-o||\leq\mathbf{Rad}(P)-(1-\epsilon^{2})\mathbf{Rad}(P)=\epsilon^{2}\mathbf{Rad}(P)<\sqrt{2}\epsilon\mathbf{Rad}(P), \tag{5}\]

where the first inequality comes from the fact that \(\mathbf{MEB}(P^{\prime})\) has radius at least \((1-\epsilon^{2})\mathbf{Rad}(P)\) (Definition 3). Thus, we can focus on the second case below. Let \(a\) be any point located on the intersection of the two spheres of \(\mathbf{MEB}(P^{\prime})\) and \(\mathbf{MEB}(P)\). Then we have the following claim.

**Claim 1.** The angle \(\angle ao^{\prime}o\geq\pi/2\).
Proof: Suppose that \(\angle ao^{\prime}o<\pi/2\). Note that \(\angle aoo^{\prime}\) is always smaller than \(\pi/2\) since \(||o-a||=\mathbf{Rad}(P)\geq\mathbf{Rad}(P^{\prime})=||o^{\prime}-a||\). Therefore, \(o\) and \(o^{\prime}\) are separated by the hyperplane \(H\) that is orthogonal to the segment \(\overline{oo^{\prime}}\) and passes through the point \(a\). See Figure 2(b). Now we show that \(P^{\prime}\) can be covered by a ball smaller than \(\mathbf{MEB}(P^{\prime})\). Let \(o_{H}\) be the point \(H\cap\overline{o^{\prime}o}\), and \(t\) (_resp.,_ \(t^{\prime}\)) be the point collinear with \(o\) and \(o^{\prime}\) on the right side of the sphere of \(\mathbf{MEB}(P^{\prime})\) (_resp.,_ left side of the sphere of \(\mathbf{MEB}(P)\); see Figure 2(b)). Then, we have

\[||t-o_{H}||+||o_{H}-o^{\prime}||=||t-o^{\prime}||=||a-o^{\prime}||<||o^{\prime}-o_{H}||+||o_{H}-a||\implies||t-o_{H}||<||o_{H}-a||. \tag{6}\]

Similarly, we have \(||t^{\prime}-o_{H}||<||o_{H}-a||\). Consequently, \(\mathbf{MEB}(P)\cap\mathbf{MEB}(P^{\prime})\) is covered by the ball \(\mathbb{B}(o_{H},||o_{H}-a||)\) (the "red dotted" ball in Figure 2(b)). Further, because \(P^{\prime}\) is covered by \(\mathbf{MEB}(P)\cap\mathbf{MEB}(P^{\prime})\) and \(||o_{H}-a||<||o^{\prime}-a||=\mathbf{Rad}(P^{\prime})\), \(P^{\prime}\) is covered by the ball \(\mathbb{B}(o_{H},||o_{H}-a||)\) that is smaller than \(\mathbf{MEB}(P^{\prime})\). This contradicts the fact that \(\mathbf{MEB}(P^{\prime})\) is the minimum enclosing ball of \(P^{\prime}\). Thus, the claim \(\angle ao^{\prime}o\geq\pi/2\) is true.

Given Claim 1, we know that \(||o^{\prime}-o||\leq\sqrt{\big{(}\mathbf{Rad}(P)\big{)}^{2}-\big{(}\mathbf{Rad}(P^{\prime})\big{)}^{2}}\). See Figure 2(c). Moreover, Definition 3 implies that \(\mathbf{Rad}(P^{\prime})\geq(1-\epsilon^{2})\mathbf{Rad}(P)\). Therefore, we have

\[||o^{\prime}-o||\leq\sqrt{\big{(}\mathbf{Rad}(P)\big{)}^{2}-\big{(}(1-\epsilon^{2})\mathbf{Rad}(P)\big{)}^{2}}\leq\sqrt{2}\epsilon\mathbf{Rad}(P). \tag{7}\]

Lemma 2: _The distance \(||\tilde{o}-o^{\prime}||<(\sqrt{2}\epsilon+\sqrt{3}\epsilon^{\prime})\mathbf{Rad}(P)\)._

Proof: Let \(L\) be the hyperplane orthogonal to the segment \(\overline{\tilde{o}o^{\prime}}\) and passing through the center \(o^{\prime}\). Suppose \(\tilde{o}\) is located on the left side of \(L\). Then, there always exists a point \(b\in P^{\prime}\) located on the right closed semi-sphere of \(\mathbf{MEB}(P^{\prime})\) divided by \(L\) (this result is from [22, Lemma 2.2]; for completeness, we state the lemma in Section B). See Figure 2(d). That is, the angle \(\angle bo^{\prime}\tilde{o}\geq\pi/2\). As a consequence, we have

\[||\tilde{o}-o^{\prime}||\leq\sqrt{||\tilde{o}-b||^{2}-||b-o^{\prime}||^{2}}. \tag{8}\]

Moreover, since \(||\tilde{o}-b||\leq r\leq(1+\epsilon^{\prime 2})\mathbf{Rad}(P)\) and \(||b-o^{\prime}||=\mathbf{Rad}(P^{\prime})\geq(1-\epsilon^{2})\mathbf{Rad}(P)\), (8) implies that \(||\tilde{o}-o^{\prime}||\leq\sqrt{(1+\epsilon^{\prime 2})^{2}-(1-\epsilon^{2})^{2}}\,\mathbf{Rad}(P)\), where this upper bound is equal to

\[\sqrt{2\epsilon^{\prime 2}+\epsilon^{\prime 4}+2\epsilon^{2}-\epsilon^{4}}\,\mathbf{Rad}(P)<\sqrt{3\epsilon^{\prime 2}+2\epsilon^{2}}\,\mathbf{Rad}(P)<(\sqrt{2}\epsilon+\sqrt{3}\epsilon^{\prime})\mathbf{Rad}(P). \tag{9}\]

By the triangle inequality and Lemmas 1 and 2, we immediately have

\[||\tilde{o}-o||\leq||\tilde{o}-o^{\prime}||+||o^{\prime}-o||<(2\sqrt{2}\epsilon+\sqrt{3}\epsilon^{\prime})\mathbf{Rad}(P).
\tag{10}\]

This completes the proof of Theorem 2.

## 4 Sublinear Time Algorithms for MEB under Stability Assumption

Suppose \(\epsilon\in(0,1)\). We assume that the given instance \(P\) is an \((\epsilon^{2},\beta)\)-stable instance where \(\beta\) is larger than a given lower bound \(\beta_{0}\) (_i.e.,_ \(\beta>\beta_{0}\)). Using Theorem 2, we present two different sublinear time sampling algorithms for computing MEB. Following most of the articles on sublinear time algorithms (_e.g.,_ [73, 35, 74]), in each sampling step of our algorithms, we always take the sample **independently and uniformly at random**.

### The First Algorithm

Figure 3: An illustration for the first sampling algorithm. The red points are the samples; we expand \(\mathbb{B}(c,r)\) slightly and the larger ball is a radius-approximate MEB of the whole input point set.

The first algorithm is based on the theory of VC dimension and \(\epsilon\)-nets [58, 92]. Roughly speaking, we compute an approximate MEB of a small random sample (say, \(\mathbb{B}(c,r)\)), and expand the ball slightly; then we prove that this expanded ball is an approximate MEB of the whole data set (see Figure 3). Our key idea is to show that \(\mathbb{B}(c,r)\) covers at least \((1-\beta_{0})n\) points and therefore \(c\) is close to the optimal center by Theorem 2. As emphasized in Section 1.1, our result is a single-criterion approximation. If one simply applies the uniform sampling idea without the stability assumption (as in [41, 62]), it will only yield a bi-criteria approximation, where the ball has to cover fewer than \(n\) points to achieve the desired radius bound.

```
0: Two parameters \(0<\epsilon,\eta<1\); an \((\epsilon^{2},\beta)\)-stable instance \(P\) of the MEB problem in \(\mathbb{R}^{d}\), where \(\beta\) is larger than a given lower bound \(\beta_{0}>0\).
1: Sample a set \(S\) of \(\Theta(\frac{1}{\beta_{0}}\cdot\max\{\log\frac{1}{\eta},d\log\frac{d}{\beta_{0}}\})\) points from \(P\) uniformly at random.
2: Apply any approximate MEB algorithm (such as the core-set based algorithm [20]) to compute a \((1+\epsilon^{2})\)-radius approximate MEB of \(S\), and let the obtained ball be \(\mathbb{B}(c,r)\).
3: Output the ball \(\mathbb{B}(c,\frac{1+(2\sqrt{2}+\sqrt{3})\epsilon}{1-\epsilon^{2}}r)\).
```
**Algorithm 1** MEB Algorithm I

Theorem 3: _With probability \(1-\eta\), Algorithm 1 returns a \(\lambda\)-radius approximate MEB of \(P\), where_

\[\lambda=\frac{\big{(}1+(2\sqrt{2}+\sqrt{3})\epsilon\big{)}(1+\epsilon^{2})}{1-\epsilon^{2}}=1+O(\epsilon). \tag{11}\]

Before proving Theorem 3, we prove the following lemma first.

Lemma 3: _Let \(S\) be a set of \(\Theta(\frac{1}{\beta_{0}}\cdot\max\{\log\frac{1}{\eta},d\log\frac{d}{\beta_{0}}\})\) points sampled randomly and independently from a given point set \(P\subset\mathbb{R}^{d}\), and \(B\) be any ball covering \(S\). Then, with probability \(1-\eta\), \(|B\cap P|\geq(1-\beta_{0})|P|\)._

Proof: Consider the range space \(\Sigma=(P,\Phi)\) where each range \(\phi\in\Phi\) is the complement of a ball in the space. In a range space, a subset \(Y\subset P\) is a \(\beta_{0}\)-net if

\[\text{for any }\phi\in\Phi,\,\frac{|P\cap\phi|}{|P|}\geq\beta_{0}\Longrightarrow Y\cap\phi\neq\emptyset. \tag{12}\]

The size \(|S|=\Theta(\frac{1}{\beta_{0}}\cdot\max\{\log\frac{1}{\eta},d\log\frac{d}{\beta_{0}}\})\), and from [58, 92] we know that \(S\) is a \(\beta_{0}\)-net of \(P\) with probability \(1-\eta\).
Thus, if \(|B\cap P|<(1-\beta_{0})|P|\), _i.e.,_ \(|P\setminus B|>\beta_{0}|P|\), we have \(S\cap\big{(}P\setminus B\big{)}\neq\emptyset\). This contradicts the fact that \(S\) is covered by \(B\). Consequently, \(|B\cap P|\geq(1-\beta_{0})|P|\).

Proof: **(of Theorem 3)** Denote by \(o\) the center of \(\mathbf{MEB}(P)\). Since \(S\subset P\) and \(\mathbb{B}(c,r)\) is a \((1+\epsilon^{2})\)-radius approximate MEB of \(S\), we know that \(r\leq(1+\epsilon^{2})\mathbf{Rad}(P)\). Moreover, Lemma 3 implies that \(|\mathbb{B}(c,r)\cap P|\geq(1-\beta_{0})|P|\) with probability \(1-\eta\). Suppose it is true and let \(P^{\prime}=\mathbb{B}(c,r)\cap P\). Then, we have the distance

\[||c-o||\leq(2\sqrt{2}+\sqrt{3})\epsilon\mathbf{Rad}(P) \tag{13}\]

via Theorem 2 (we set \(\epsilon^{\prime}=\epsilon\)). For simplicity, we use \(x\) to denote \((2\sqrt{2}+\sqrt{3})\epsilon\). The inequality (13) implies that the point set \(P\) is covered by the ball \(\mathbb{B}(c,(1+x)\mathbf{Rad}(P))\). Note that we cannot directly return \(\mathbb{B}(c,(1+x)\mathbf{Rad}(P))\) as the final result, since we do not know the value of \(\mathbf{Rad}(P)\). Thus, we have to estimate the radius \((1+x)\mathbf{Rad}(P)\). Since \(P^{\prime}\) is covered by \(\mathbb{B}(c,r)\) and \(|P^{\prime}|\geq(1-\beta_{0})|P|\), \(r\) should be at least \((1-\epsilon^{2})\mathbf{Rad}(P)\) due to Definition 3. Hence, we have

\[\frac{1+x}{1-\epsilon^{2}}r\geq(1+x)\mathbf{Rad}(P). \tag{14}\]

That is, \(P\) is covered by the ball \(\mathbb{B}(c,\frac{1+x}{1-\epsilon^{2}}r)\). Moreover, the radius

\[\frac{1+x}{1-\epsilon^{2}}r\leq\frac{1+x}{1-\epsilon^{2}}(1+\epsilon^{2})\mathbf{Rad}(P). \tag{15}\]

This means the ball \(\mathbb{B}(c,\frac{1+x}{1-\epsilon^{2}}r)\) is a \(\lambda\)-radius approximate MEB of \(P\), where

\[\lambda=(1+\epsilon^{2})\frac{1+x}{1-\epsilon^{2}}=\frac{\big{(}1+(2\sqrt{2}+\sqrt{3})\epsilon\big{)}(1+\epsilon^{2})}{1-\epsilon^{2}} \tag{16}\]

and \(\lambda=1+O(\epsilon)\) if \(\epsilon\) is a fixed small number in \((0,1)\).

**Running time of Algorithm 1.** For simplicity, we assume \(\log\frac{1}{\eta}<d\log\frac{d}{\beta_{0}}\). If we use the core-set based algorithm [20] to compute \(\mathbb{B}(c,r)\) (see Remark 3), the running time of Algorithm 1 is \(O\big{(}\frac{1}{\epsilon^{2}}(|S|d+\frac{1}{\epsilon^{6}}d)\big{)}=O\big{(}\frac{d^{2}}{\epsilon^{2}\beta_{0}}\log\frac{d}{\beta_{0}}+\frac{d}{\epsilon^{8}}\big{)}=\tilde{O}(d^{2})\), where the hidden factor depends on \(\epsilon\) and \(\beta_{0}\).

**Remark 5**.: If the dimensionality \(d\) is too high, the random projection technique _Johnson-Lindenstrauss (JL) transform_ [36] can be used to approximately preserve the radius of the enclosing ball [2, 87, 66]. However, it is not useful for reducing the time complexity of Algorithm 1. If we apply the JL-transform on the sampled \(\Theta(\frac{d}{\beta_{0}}\log\frac{d}{\beta_{0}})\) points in Step 1, the JL-transform step itself already takes \(\Omega(\frac{d^{2}}{\beta_{0}}\log\frac{d}{\beta_{0}})\) time.
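To summarize this subsection, the following is a minimal Python sketch of Algorithm 1. It is an illustration under our own choices: it reuses the \(\texttt{meb\_coreset}\) sketch from Section 2.1 as the approximate MEB subroutine of Step 2, and the leading constant \(10\) in the sample size simply stands in for the unspecified constant of the \(\Theta(\cdot)\) bound.

```python
import numpy as np

def meb_algorithm_1(P, eps, beta0, eta, seed=0):
    """Sketch of Algorithm 1: uniform sampling + approximate MEB + expansion."""
    rng = np.random.default_rng(seed)
    n, d = P.shape
    # Step 1: sample Theta((1/beta0) * max{log(1/eta), d log(d/beta0)}) points;
    # the leading constant 10 is an arbitrary illustrative choice.
    m = int(10 / beta0 * max(np.log(1 / eta), d * np.log(d / beta0)))
    S = P[rng.choice(n, size=min(m, n), replace=False)]
    # Step 2: a (1 + eps^2)-radius approximate MEB of the sample.
    c, r, _ = meb_coreset(S, eps ** 2)
    # Step 3: expand the radius by (1 + (2*sqrt(2)+sqrt(3))*eps) / (1 - eps^2).
    factor = (1 + (2 * np.sqrt(2) + np.sqrt(3)) * eps) / (1 - eps ** 2)
    return c, factor * r
```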
### The Second Algorithm

Our first algorithm in Section 4.1 is simple, but has a sample size (_i.e.,_ the number of sampled points) depending on the dimensionality \(d\), while **the second algorithm has a sample size independent of both \(n\) and \(d\)** (this is particularly important when a kernel function is applied, because the new dimension could be very large or even \(+\infty\)). We briefly overview our idea first.

**High level idea of the second algorithm:** Recall our Remark 3 (ii). If we know the value of \((1+\epsilon)\mathbf{Rad}(P)\), we can perform almost the same core-set construction procedure described in Theorem 1 to achieve an approximate center of \(\mathbf{MEB}(P)\), where the only difference is that we add a point with distance at least \((1+\epsilon)\mathbf{Rad}(P)\) to \(o_{i}\) in each iteration. In this way, we avoid selecting the farthest point to \(o_{i}\), since this operation would inevitably have a linear time complexity. To implement our strategy in sublinear time, we need to determine the value of \((1+\epsilon)\mathbf{Rad}(P)\) first. We propose Lemma 4 below to estimate the range of \(\mathbf{Rad}(P)\), and then perform a binary search on the range to determine the value of \((1+\epsilon)\mathbf{Rad}(P)\) approximately. Based on the stability property, we observe that the core-set construction procedure can serve as an "oracle" to help us guess the value of \((1+\epsilon)\mathbf{Rad}(P)\) (see Algorithm 3). Let \(h>0\) be a candidate. We add a point with distance at least \(h\) to \(o_{i}\) in each iteration. We prove that the procedure cannot continue for more than \(z\) iterations if \(h\geq(1+\epsilon)\mathbf{Rad}(P)\), and will continue more than \(z\) iterations with constant probability if \(h<(1-\epsilon)\mathbf{Rad}(P)\), where \(z\) is the size of the core-set described in Theorem 1. Also, during the core-set construction, we add the points to the core-set via random sampling, rather than in a deterministic way. A minor issue here is that we need to replace \(\epsilon\) by \(\epsilon^{2}\) in Theorem 1, so as to achieve the overall \((1+O(\epsilon))\)-radius approximation in the following analysis.

Lemma 4: _Given a parameter \(\eta\in(0,1)\), one selects an arbitrary point \(p_{1}\in P\) and takes a sample \(Q\subset P\) with \(|Q|=\frac{1}{\beta_{0}}\log\frac{1}{\eta}\) uniformly at random. Let \(p_{2}=\arg\max_{p\in Q}||p-p_{1}||\). Then, with probability \(1-\eta\),_

\[\mathbf{Rad}(P)\in[\frac{1}{2}||p_{1}-p_{2}||,\frac{1}{1-\epsilon^{2}}||p_{1}-p_{2}||]. \tag{17}\]

Proof: First, the lower bound of \(\mathbf{Rad}(P)\) is obvious since \(||p_{1}-p_{2}||\) is always no larger than \(2\mathbf{Rad}(P)\). Then, we consider the upper bound. Let \(\mathbb{B}(p_{1},l)\) be the ball covering exactly \((1-\beta_{0})n\) points of \(P\), and thus \(l\geq(1-\epsilon^{2})\mathbf{Rad}(P)\) according to Definition 3. To complete our proof, we also need the following folklore lemma presented in [39].

Lemma 5 ([39]): _Let \(N\) be a set of elements, and \(N^{\prime}\) be a subset of \(N\) with size \(|N^{\prime}|=\tau|N|\) for some \(\tau\in(0,1)\). Given \(\eta\in(0,1)\), if one randomly samples \(\frac{\ln 1/\eta}{\ln 1/(1-\tau)}\leq\frac{1}{\tau}\ln\frac{1}{\eta}\) elements from \(N\), then with probability at least \(1-\eta\), the sample contains at least one element of \(N^{\prime}\)._

In Lemma 5, let \(N\) and \(N^{\prime}\) be the point set \(P\) and the subset \(P\setminus\mathbb{B}(p_{1},l)\), respectively. We know that \(Q\) contains at least one point from \(N^{\prime}\) according to Lemma 5 (by setting \(\tau=\beta_{0}\)). Namely, \(Q\) contains at least one point outside \(\mathbb{B}(p_{1},l)\). Moreover, because \(p_{2}=\arg\max_{p\in Q}||p-p_{1}||\), we have \(||p_{1}-p_{2}||\geq l\geq(1-\epsilon^{2})\mathbf{Rad}(P)\), _i.e.,_ \(\mathbf{Rad}(P)\leq\frac{1}{1-\epsilon^{2}}||p_{1}-p_{2}||\) (see Figure 4 for an illustration).
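A minimal Python sketch of the estimation in Lemma 4 follows (with \(p_{1}\) taken as the first data point for concreteness; the sample size follows the lemma statement):

```python
import numpy as np

def bracket_radius(P, beta0, eta, eps, seed=0):
    """Lemma 4 sketch: with probability 1 - eta, Rad(P) lies in
    [||p1 - p2|| / 2, ||p1 - p2|| / (1 - eps^2)]."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    p1 = P[0]                                  # an arbitrary point of P
    m = int(np.ceil(np.log(1 / eta) / beta0))  # |Q| = (1/beta0) log(1/eta)
    Q = P[rng.choice(n, size=min(m, n), replace=False)]
    p2 = Q[np.argmax(np.linalg.norm(Q - p1, axis=1))]
    ell = np.linalg.norm(p1 - p2)
    return ell / 2.0, ell / (1.0 - eps ** 2)   # interval [a, b] for Algorithm 2
```

Algorithm 2 below then performs a binary search over this interval.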
Algorithm 3 serves as a subroutine in Algorithm 2. In Algorithm 3, we simply set \(z=\frac{3}{\epsilon^{2}}\) with \(s=1/3\) as described in Theorem 1 (as mentioned before, we replace \(\epsilon\) by \(\epsilon^{2}\)); we compute \(o_{i}\) having distance less than \(s\frac{\epsilon^{2}}{1+\epsilon^{2}}\mathbf{Rad}(T)\) to the center of \(\mathbf{MEB}(T)\) in Step 2(1).

```
0: Two parameters \(0<\epsilon,\eta_{0}<1\); an \((\epsilon^{2},\beta)\)-stable instance \(P\) of the MEB problem in \(\mathbb{R}^{d}\), where \(\beta\) is larger than a given lower bound \(\beta_{0}>0\). Set the interval \([a,b]\) for \(\mathbf{Rad}(P)\) that is obtained by Lemma 4.
1: Among the set \(\{(1-\epsilon^{2})a,(1+\epsilon^{2})(1-\epsilon^{2})a,\cdots,(1+\epsilon^{2})^{w}(1-\epsilon^{2})a=(1+\epsilon^{2})b\}\) where \(w=\lceil\log_{1+\epsilon^{2}}\frac{2}{(1-\epsilon^{2})^{2}}\rceil+1=O(\frac{1}{\epsilon^{2}})\), perform binary search for the value \(h\) by using Algorithm 3 with \(z=\frac{3}{\epsilon^{2}}\) and \(\eta=\frac{\eta_{0}}{2\log w}\).
2: Suppose that Algorithm 3 returns "no" when \(h=(1+\epsilon^{2})^{i_{0}}(1-\epsilon^{2})a\) and returns "yes" when \(h=(1+\epsilon^{2})^{i_{0}+1}(1-\epsilon^{2})a\).
3: Run Algorithm 3 again with \(h=(1+\epsilon^{2})^{i_{0}+2}a\), \(z=\frac{3}{\epsilon^{2}}\), and \(\eta=\eta_{0}/2\); let \(\tilde{o}\) be the obtained ball center of \(T\) when the loop stops.
4: Return the ball \(\mathbb{B}(\tilde{o},r)\), where \(r=\frac{1+(2\sqrt{2}+\frac{2\sqrt{6}}{\sqrt{1-\epsilon^{2}}})\epsilon}{1+\epsilon^{2}}h\).
```
**Algorithm 2** MEB Algorithm II

Figure 4: An illustration of Lemma 4; the red points are the sampled set \(Q\).

**Theorem 4**.: _With probability \(1-\eta_{0}\), Algorithm 2 returns a \(\lambda\)-radius approximate MEB of \(P\), where_

\[\lambda=\frac{(1+x_{1})(1+x_{2})}{1+\epsilon^{2}}=1+O(\epsilon)\quad\text{with}\quad x_{1}=O\big{(}\frac{\epsilon^{2}}{1-\epsilon^{2}}\big{)},\ x_{2}=O\big{(}\frac{\epsilon}{\sqrt{1-\epsilon^{2}}}\big{)}. \tag{18}\]

_The running time is \(\tilde{O}\big{(}(\frac{1}{\epsilon^{2}\beta_{0}}+\frac{1}{\epsilon^{8}})d\big{)}\), where \(\tilde{O}(f)=O(f\cdot\texttt{polylog}(\frac{1}{\epsilon},\frac{1}{\eta_{0}}))\)._

Before proving Theorem 4, we provide Lemma 6 first.

**Lemma 6**.: _If \(h\geq(1+\epsilon^{2})\mathbf{Rad}(P)\), Algorithm 3 returns "yes"; else if \(h<(1-\epsilon^{2})\mathbf{Rad}(P)\), Algorithm 3 returns "no" with probability at least \(1-\eta\)._

Proof.: First, we assume that \(h\geq(1+\epsilon^{2})\mathbf{Rad}(P)\). Recall the remark following Theorem 1. If we always add a point \(q\) with distance at least \(h\geq(1+\epsilon^{2})\mathbf{Rad}(P)\) to \(o_{i}\), the loop 2(1)-(5) cannot continue more than \(z\) iterations, _i.e.,_ Algorithm 3 will return "yes". Now, we consider the case \(h<(1-\epsilon^{2})\mathbf{Rad}(P)\). Similar to the proof of Lemma 4, we consider the ball \(\mathbb{B}(o_{i},l)\) covering exactly \((1-\beta_{0})n\) points of \(P\). According to Definition 3, we know that \(l\geq(1-\epsilon^{2})\mathbf{Rad}(P)>h\). Also, with probability \(1-\eta/z\), the sample \(Q\) contains at least one point outside \(\mathbb{B}(o_{i},l)\) due to Lemma 5. By taking the union bound, with probability \((1-\eta/z)^{z}\geq 1-\eta\), \(||q-o_{i}||\) is always larger than \(h\) and eventually Algorithm 3 will return "no".
Proof.: **(of Theorem 4)** Since Algorithm 3 returns "no" when \(h=(1+\epsilon^{2})^{i_{0}}(1-\epsilon^{2})a\) and returns "yes" when \(h=(1+\epsilon^{2})^{i_{0}+1}(1-\epsilon^{2})a\), from Lemma 6 we know that

\[(1+\epsilon^{2})^{i_{0}}(1-\epsilon^{2})a<(1+\epsilon^{2})\mathbf{Rad}(P); \tag{19}\]
\[(1+\epsilon^{2})^{i_{0}+1}(1-\epsilon^{2})a\geq(1-\epsilon^{2})\mathbf{Rad}(P). \tag{20}\]

The above inequalities together imply that

\[\frac{(1+\epsilon^{2})^{3}}{1-\epsilon^{2}}\mathbf{Rad}(P)>(1+\epsilon^{2})^{i_{0}+2}a\geq(1+\epsilon^{2})\mathbf{Rad}(P). \tag{21}\]

Thus, when running Algorithm 3 with \(h=(1+\epsilon^{2})^{i_{0}+2}a\) in Step 3, the algorithm returns "yes" (by the right hand-side of (21)). Then, consider the ball \(\mathbb{B}(\tilde{o},h)\). We claim that \(|P\setminus\mathbb{B}(\tilde{o},h)|<\beta_{0}n\). Otherwise, the sample \(Q\) contains at least one point outside \(\mathbb{B}(\tilde{o},h)\) with probability \(1-\eta/z\) in Step 2(2) of Algorithm 3, _i.e.,_ the loop would continue. This contradicts the fact that the algorithm returns "yes". Let \(P^{\prime}=P\cap\mathbb{B}(\tilde{o},h)\), and then \(|P^{\prime}|\geq(1-\beta_{0})n\). Moreover, the left hand-side of (21) indicates that

\[h=(1+\epsilon^{2})^{i_{0}+2}a<(1+\frac{8\epsilon^{2}}{1-\epsilon^{2}})\mathbf{Rad}(P). \tag{22}\]

Now, we can apply Theorem 2, where we set "\(\epsilon^{\prime}\)" to be "\(\sqrt{\frac{8\epsilon^{2}}{1-\epsilon^{2}}}\)" in the theorem. Let \(o\) be the center of \(\mathbf{MEB}(P)\). Consequently, we have

\[||\tilde{o}-o||<(2\sqrt{2}+2\sqrt{6}/\sqrt{1-\epsilon^{2}})\epsilon\cdot\mathbf{Rad}(P). \tag{23}\]

For simplicity, we let \(x_{1}=\frac{8\epsilon^{2}}{1-\epsilon^{2}}\) and \(x_{2}=(2\sqrt{2}+2\sqrt{6}/\sqrt{1-\epsilon^{2}})\epsilon\). Hence, \(h\leq(1+x_{1})\mathbf{Rad}(P)\) and \(||\tilde{o}-o||\leq x_{2}\mathbf{Rad}(P)\) in (22) and (23). From (23), we know that \(P\subset\mathbb{B}(\tilde{o},(1+x_{2})\mathbf{Rad}(P))\). From the right hand-side of (21), we know that \((1+x_{2})\mathbf{Rad}(P)\leq\frac{1+x_{2}}{1+\epsilon^{2}}h\). Thus, we have \(P\subset\mathbb{B}\Big{(}\tilde{o},\frac{1+x_{2}}{1+\epsilon^{2}}h\Big{)}\) where \(\frac{1+x_{2}}{1+\epsilon^{2}}h=\frac{1+(2\sqrt{2}+\frac{2\sqrt{6}}{\sqrt{1-\epsilon^{2}}})\epsilon}{1+\epsilon^{2}}h\). Also, the radius

\[\frac{1+x_{2}}{1+\epsilon^{2}}h\underbrace{\leq}_{\text{by (22)}}\frac{(1+x_{2})(1+x_{1})}{1+\epsilon^{2}}\mathbf{Rad}(P)=\lambda\cdot\mathbf{Rad}(P). \tag{24}\]

Thus \(\mathbb{B}\Big{(}\tilde{o},\frac{1+x_{2}}{1+\epsilon^{2}}h\Big{)}\) is a \(\lambda\)-radius approximate MEB of \(P\), and \(\lambda=1+O(\epsilon)\) if \(\epsilon\) is a fixed small number in \((0,1)\).

**Success probability.** The success probability of Algorithm 3 is \(1-\eta\). In Algorithm 2, we set \(\eta=\frac{\eta_{0}}{2\log w}\) in Step 1 and \(\eta=\eta_{0}/2\) in Step 3, respectively. We take the union bound, and the success probability of Algorithm 2 is \((1-\frac{\eta_{0}}{2\log w})^{\log w}(1-\eta_{0}/2)>1-\eta_{0}\).

**Running time.** As the subroutine, Algorithm 3 runs in \(O(z(\frac{1}{\beta_{0}}(\log\frac{z}{\eta})d+\frac{1}{\epsilon^{6}}d))\) time; Algorithm 2 calls the subroutine \(O\big{(}\log(\frac{1}{\epsilon^{2}})\big{)}\) times. Note that \(z=O(\frac{1}{\epsilon^{2}})\). Thus, the total running time is \(\tilde{O}\big{(}(\frac{1}{\epsilon^{2}\beta_{0}}+\frac{1}{\epsilon^{8}})d\big{)}\).
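To close this section, the following Python sketch illustrates the "oracle" behavior of Algorithm 3 that the above analysis relies on. It is only an illustration of the logic, not the exact implementation: it reuses the \(\texttt{approx\_center}\) sketch from Section 2.1, and the per-round sample size follows Lemma 5 with \(\tau=\beta_{0}\) (constants illustrative). The binary search in Algorithm 2 calls this oracle \(O(\log\frac{1}{\epsilon^{2}})\) times.

```python
import numpy as np

def oracle_alg3(P, h, z, beta0, eta, seed=0):
    """Sketch of Algorithm 3 for a guess h: in each of at most z rounds, sample
    points and look for one farther than h from the current center; add it to T
    and re-center if found. "yes" is guaranteed when h >= (1+eps^2) Rad(P), and
    "no" is returned w.h.p. when h < (1-eps^2) Rad(P) (Lemma 6)."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    m = int(np.ceil(np.log(z / eta) / beta0))  # sample size per round (Lemma 5)
    T = [P[rng.integers(n)]]
    c = np.array(T[0], dtype=float)
    for _ in range(z):
        Q = P[rng.choice(n, size=min(m, n), replace=False)]
        far = Q[np.argmax(np.linalg.norm(Q - c, axis=1))]
        if np.linalg.norm(far - c) <= h:
            return "yes", c                    # no far point found: loop stops
        T.append(far)
        c = approx_center(np.array(T))         # re-center on the core-set T
    return "no", c
```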
## 5 Sublinear Time Algorithm for General MEB

In Section 4, we propose the sublinear time algorithms under the stability assumption. Specifically, we assume that the given instance is \((\epsilon^{2},\beta)\)-stable and \(\beta\) is larger than a reasonable known lower bound \(\beta_{0}\). However, when \(\beta_{0}\)'s value is unknown, we cannot determine the sample size for the algorithm; or we may only know a trivial lower bound, _e.g.,_ \(\frac{1}{n}\), and then the sample size could be too large. So in this section we consider the general case without the stability assumption.

**High-level idea.** An interesting observation is that the ideas developed for stable instances can even help us to develop a hybrid approach for MEB when the stability assumption does not hold. First, we "suppose" the input instance is \((\alpha,\beta)\)-stable, where "\(\alpha\)" and "\(\beta\)" are designed based on the pre-specified radius error bound \(\epsilon\) and covering error bound \(\delta\), and compute a "potential" \((1+\epsilon)\)-radius approximation (say a ball \(B_{1}\)); then we compute a "potential" \((1-\delta)\)-covering approximation (say a ball \(B_{2}\)), where the definition of "covering approximation" is given in Definition 2; finally, we determine the final output based on the ratio of their radii. Specifically, we set a threshold \(\tau\) that is determined by the given radius error bound \(\epsilon\). If the ratio is no larger than \(\tau\), we can infer that \(B_{1}\) is a "true" \((1+\epsilon)\)-radius approximation and return it; otherwise, we return \(B_{2}\), which is a "true" \((1-\delta)\)-covering approximation. Moreover, for the latter case (_i.e.,_ returning a \((1-\delta)\)-covering approximation), we will show that our proposed algorithm yields a radius not only being strictly smaller than \(\mathbf{Rad}(P)\), but also having a gap of \(\Theta(\epsilon^{2})\cdot\mathbf{Rad}(P)\) to \(\mathbf{Rad}(P)\) (_i.e.,_ the returned radius is at most \(\big{(}1-\Theta(\epsilon^{2})\big{)}\cdot\mathbf{Rad}(P)\)). Our algorithm only needs uniform sampling and a single pass over the input data, where the space complexity in memory is \(O(d)\) (the hidden factor depends on \(\epsilon\) and \(\delta\)); if the input data matrix is sparse (_i.e.,_ \(M=o(nd)\)), the time complexity is sublinear.

Before presenting our algorithms, we need to show the formal definitions for the problem of MEB with outliers first, since it will be used for computing the \((1-\delta)\)-covering approximation.

Definition 4 (MEB with Outliers): Given a set \(P\) of \(n\) points in \(\mathbb{R}^{d}\) and a small parameter \(\gamma\in[0,1)\), the MEB with outliers problem is to find the smallest ball that covers \((1-\gamma)n\) points. Namely, the task is to find a subset of \(P\) with size \((1-\gamma)n\) such that the resulting MEB is the smallest among all possible choices of the subset. The obtained ball is denoted by \(\mathbf{MEB}(P,\gamma)\).

For convenience, we use \(P_{\mathrm{opt}}\) to denote the optimal subset of \(P\) with respect to \(\mathbf{MEB}(P,\gamma)\). Namely, \(P_{\mathrm{opt}}=\arg_{Q}\min\Big{\{}\mathbf{Rad}(Q)\mid Q\subset P,|Q|=(1-\gamma)n\Big{\}}\). From Definition 4, we can see that the main challenge is to determine the subset of \(P\). Similar to Definition 2, we also define the radius approximation and covering approximation for MEB with outliers.

Definition 5 (Radius Approximation and Covering Approximation): Let \(0<\epsilon,\delta<1\).
A ball \(\mathbb{B}(c,r)\) is called a \((1+\epsilon)\)-radius approximation of \(\mathbf{MEB}(P,\gamma)\), if the ball covers \((1-\gamma)n\) points of \(P\) and has radius \(r\leq(1+\epsilon)\mathbf{Rad}(P_{\mathrm{opt}})\). On the other hand, the ball is called a \((1-\delta)\)-covering approximation of \(\mathbf{MEB}(P,\gamma)\), if it covers at least \((1-\delta-\gamma)n\) points in \(P\) and has radius \(r\leq\mathbf{Rad}(P_{\mathrm{opt}})\). A bi-criteria \((1+\epsilon,1-\delta)\)-approximation is a ball that covers at least \(\big{(}1-\delta-\gamma\big{)}n\) points and has radius at most \((1+\epsilon)\mathbf{Rad}(P_{\mathrm{opt}})\).

**Roadmap.** First, we introduce two random sampling techniques in Section 5.1, which are the keys for designing the sublinear bi-criteria approximation algorithm for MEB with outliers in Section 5.2. Based on the bi-criteria approximation of Section 5.2, we can solve the general MEB problem in Section 5.3.

### Two Key Lemmas for Handling Outliers

To shed some light on our ideas, consider using the core-set construction method in Section 2.1 to compute a bi-criteria \((1+\epsilon,1-\delta)\)-approximation for an instance \((P,\gamma)\) of MEB with outliers. Let \(o_{i}\) be the obtained ball center in the current iteration, and \(Q\) be the set of \((\delta+\gamma)n\) farthest points to \(o_{i}\) from \(P\). A key step for updating \(o_{i}\) is finding a point in the set \(P_{\mathrm{opt}}\cap Q\) (the formal analysis is given in Section 5.2). Actually, this can be done by performing a random sampling from \(Q\). However, it requires computing the set \(Q\) in advance, which takes \(\Omega(nd)\) time. To keep the running time sublinear, we need to find a point from \(P_{\mathrm{opt}}\cap Q\) in a more sophisticated way. Since \(P_{\mathrm{opt}}\) is mixed with outliers in the set \(Q\), simple uniform sampling cannot realize our goal.

To solve this issue, we propose a "two level" sampling procedure which is called "**Uniform-Adaptive Sampling**". Roughly speaking, we take a random sample \(A\) of size \(n^{\prime}\) first (_i.e.,_ the uniform sampling step), and then randomly select a point from \(Q^{\prime}\), the set of the farthest \(\frac{3}{2}(\delta+\gamma)n^{\prime}\) points from \(A\) to \(o_{i}\) (_i.e.,_ the adaptive sampling step). According to Lemma 7, with probability at least \((1-\eta_{1})\frac{\delta}{3(\delta+\gamma)}\), the selected point belongs to \(P_{\mathrm{opt}}\cap Q\); more importantly, the sample size \(n^{\prime}\) is independent of \(n\) and \(d\). The key to prove Lemma 7 is to show that the size of the intersection \(Q^{\prime}\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\) is large enough. By setting an appropriate value for \(n^{\prime}\), we can prove a lower bound on \(|Q^{\prime}\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}|\).

Lemma 7 (Uniform-Adaptive Sampling): _Let \(\eta_{1}\in(0,1)\). If we sample \(n^{\prime}=O(\frac{1}{\delta}\log\frac{1}{\eta_{1}})\) points independently and uniformly at random from \(P\) and let \(Q^{\prime}\) be the set of farthest \(\frac{3}{2}(\delta+\gamma)n^{\prime}\) points to \(o_{i}\) from the sample, then, with probability at least \(1-\eta_{1}\), the following holds:_

\[\frac{\Big{|}Q^{\prime}\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{|}}{|Q^{\prime}|}\geq\frac{\delta}{3(\delta+\gamma)}. \tag{25}\]

Proof: Let \(A\) denote the set of sampled \(n^{\prime}\) points from \(P\).
First, we know \(|Q|=(\delta+\gamma)n\) and \(|P_{\mathrm{opt}}\cap Q|\geq\delta n\) (since there are at most \(\gamma n\) outliers in \(Q\)). For ease of presentation, let \(\lambda=\frac{|P_{\mathrm{opt}}\cap Q|}{n}\geq\delta\). Let \(\{x_{i}\mid 1\leq i\leq n^{\prime}\}\) be \(n^{\prime}\) independent random variables with \(x_{i}=1\) if the \(i\)-th sampled point of \(A\) belongs to \(P_{\mathrm{opt}}\cap Q\), and \(x_{i}=0\) otherwise. Thus, \(E[x_{i}]=\lambda\) for each \(i\). Let \(\sigma\) be a small parameter in \((0,1)\). By using the Chernoff bound, we have \(\mathbf{Pr}\Big{(}\sum_{i=1}^{n^{\prime}}x_{i}\notin(1\pm\sigma)\lambda n^{\prime}\Big{)}\leq e^{-O(\sigma^{2}\lambda n^{\prime})}\). That is,

\[\mathbf{Pr}\Big{(}|A\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}|\in(1\pm\sigma)\lambda n^{\prime}\Big{)}\geq 1-e^{-O(\sigma^{2}\lambda n^{\prime})}. \tag{26}\]

Similarly, we have

\[\mathbf{Pr}\Big{(}|A\cap Q|\in(1\pm\sigma)(\delta+\gamma)n^{\prime}\Big{)}\geq 1-e^{-O(\sigma^{2}(\delta+\gamma)n^{\prime})}. \tag{27}\]

Note that \(n^{\prime}=O(\frac{1}{\delta}\log\frac{1}{\eta_{1}})\). By setting \(\sigma<1/2\) for (26) and (27), we have

\[\Big{|}A\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{|}>\frac{1}{2}\delta n^{\prime}\ \ \ \ \text{ and }\ \ \ \Big{|}A\cap Q\Big{|}<\frac{3}{2}(\delta+\gamma)n^{\prime} \tag{28}\]

with probability \(1-\eta_{1}\). Note that \(Q\) contains all the farthest \((\delta+\gamma)n\) points to \(o_{i}\). Denote by \(l_{i}\) the \(\big{(}(\delta+\gamma)n+1\big{)}\)-th largest distance from \(P\) to \(o_{i}\). Then we have

\[A\cap Q=\{p\in A\mid||p-o_{i}||>l_{i}\}. \tag{29}\]

Also, since \(Q^{\prime}\) is the set of the farthest \(\frac{3}{2}(\delta+\gamma)n^{\prime}\) points to \(o_{i}\) from \(A\), there exists some \(l^{\prime}_{i}>0\) such that

\[Q^{\prime}=\{p\in A\mid||p-o_{i}||>l^{\prime}_{i}\}. \tag{30}\]

(29) and (30) together imply that either \((A\cap Q)\subseteq Q^{\prime}\) or \(Q^{\prime}\subseteq(A\cap Q)\). Since \(\big{|}A\cap Q\big{|}<\frac{3}{2}(\delta+\gamma)n^{\prime}\) and \(|Q^{\prime}|=\frac{3}{2}(\delta+\gamma)n^{\prime}\), we know \(\Big{(}A\cap Q\Big{)}\subseteq Q^{\prime}\). Therefore,

\[\Big{(}A\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{)}=\Big{(}P_{\mathrm{opt}}\cap\big{(}A\cap Q\big{)}\Big{)}\subseteq Q^{\prime}. \tag{31}\]

Also, it is obvious that

\[\Big{(}A\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{)}\subseteq\big{(}P_{\mathrm{opt}}\cap Q\big{)}. \tag{32}\]

The above (31) and (32) together imply

\[\Big{(}A\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{)}\subseteq\Big{(}Q^{\prime}\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{)}. \tag{33}\]

Moreover, since \(Q^{\prime}\subseteq A\), we have

\[\Big{(}Q^{\prime}\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{)}\subseteq\Big{(}A\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{)}. \tag{34}\]

Consequently, (33) and (34) together imply \(Q^{\prime}\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}=A\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\) and hence

\[\frac{\Big{|}Q^{\prime}\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{|}}{|Q^{\prime}|}=\frac{\Big{|}A\cap\big{(}P_{\mathrm{opt}}\cap Q\big{)}\Big{|}}{|Q^{\prime}|}\geq\frac{\delta}{3(\delta+\gamma)}, \tag{35}\]

where the inequality comes from the first inequality of (28) and the fact \(|Q^{\prime}|=\frac{3}{2}(\delta+\gamma)n^{\prime}\).
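A minimal Python sketch of the Uniform-Adaptive Sampling step analyzed in Lemma 7 follows (the sample-size constant is an illustrative choice):

```python
import numpy as np

def uniform_adaptive_sample(P, o_i, delta, gamma, eta1, seed=0):
    """Lemma 7 sketch: take a uniform sample A, then return a uniform point of
    Q', the farthest (3/2)(delta+gamma)|A| points of A from o_i. By Lemma 7, the
    returned point lies in P_opt ∩ Q with prob. >= (1-eta1)*delta/(3(delta+gamma))."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    m = int(np.ceil(np.log(1 / eta1) / delta))        # n' = O((1/delta) log(1/eta1))
    A = P[rng.choice(n, size=min(m, n), replace=False)]
    dists = np.linalg.norm(A - o_i, axis=1)
    k = int(np.ceil(1.5 * (delta + gamma) * len(A)))  # |Q'| = (3/2)(delta+gamma)n'
    k = max(1, min(k, len(A)))
    Qp = A[np.argsort(dists)[-k:]]                    # the farthest k points of A
    return Qp[rng.integers(len(Qp))]                  # uniform point of Q'
```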
The random sampling method is not always guaranteed to succeed. To boost the overall success probability, we have to repeatedly run the algorithm multiple times, and each time the algorithm will generate a candidate solution (_i.e.,_ the ball center). Consequently, we have to select the best one as our final solution. With a slight abuse of notation, we still use \(o_{i}\) to denote a candidate ball center; since our goal is to achieve a \((1+\epsilon,1-\delta)\)-approximation, we need to compute the \(\big{(}(\delta+\gamma)n+1\big{)}\)-th largest distance from \(P\) to \(o_{i}\), which is denoted as \(l_{i}\). A straightforward way is to compute the value "\(l_{i}\)" in linear time for each candidate and return the one having the smallest \(l_{i}\). In this section, we propose the "**Sandwich Lemma**" to estimate \(l_{i}\) in sublinear time.

Let \(B\) be the set of \(n^{\prime\prime}\) sampled points from \(P\) in Lemma 8, and \(\tilde{l}_{i}\) be the \(\big{(}(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}+1\big{)}\)-th largest distance from \(B\) to \(o_{i}\). If we can prove the inequalities (37) and (38) of Lemma 8, then they imply that \(\tilde{l}_{i}\) is a qualified estimation of \(l_{i}\): if \(\mathbb{B}(o_{i},l_{i})\) is a \((1+\epsilon,1-\delta)\)-approximation, the ball \(\mathbb{B}(o_{i},\tilde{l}_{i})\) should be a \((1+\epsilon,1-O(\delta))\)-approximation. The key idea is to prove that the ball \(\mathbb{B}(o_{i},\tilde{l}_{i})\) is "sandwiched" by the two balls \(\mathbb{B}(o_{i},\tilde{l}_{i}^{\prime})\) and \(\mathbb{B}(o_{i},l_{i})\), where \(\tilde{l}_{i}^{\prime}\) is a carefully designed value satisfying

\[\text{(i) }\tilde{l}_{i}^{\prime}\leq\tilde{l}_{i}\leq l_{i}\text{ and (ii) }\Big{|}P\setminus\mathbb{B}(o_{i},\tilde{l}_{i}^{\prime})\Big{|}\leq(\gamma+O(\delta))n. \tag{36}\]

See Figure 5 for an illustration. These two conditions of \(\tilde{l}_{i}^{\prime}\) imply the inequalities (37) and (38) of Lemma 8. Similar to Lemma 7, the sample size \(n^{\prime\prime}\) is also independent of \(n\) and \(d\).

**Lemma 8** (Sandwich Lemma).: _Let \(\eta_{2}\in(0,1)\) and assume \(\delta<\gamma/3\). If we sample \(n^{\prime\prime}=O\big{(}\frac{\gamma}{\delta^{2}}\log\frac{1}{\eta_{2}}\big{)}\) points independently and uniformly at random from \(P\) and let \(\tilde{l}_{i}\) be the \(\big{(}(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}+1\big{)}\)-th largest distance from the sample to \(o_{i}\), then, with probability \(1-\eta_{2}\), the following holds:_

\[\tilde{l}_{i}\leq l_{i}; \tag{37}\]
\[\Big{|}P\setminus\mathbb{B}(o_{i},\tilde{l}_{i})\Big{|}\leq(\gamma+5\delta)n. \tag{38}\]

Proof.: Let \(B\) denote the set of sampled \(n^{\prime\prime}\) points from \(P\). For simplicity, let \(t=(\delta+\gamma)n\). Assume \(\tilde{l}_{i}^{\prime}>0\) is the value such that \(\Big{|}P\setminus\mathbb{B}(o_{i},\tilde{l}_{i}^{\prime})\Big{|}=\frac{(\gamma+\delta)^{2}}{\gamma-\delta}n\). Recall that \(l_{i}\) is the \(\big{(}t+1\big{)}\)-th largest distance from \(P\) to \(o_{i}\). Since \((\delta+\gamma)n<\frac{(\gamma+\delta)^{2}}{\gamma-\delta}n\), it is easy to see that \(\tilde{l}_{i}^{\prime}\leq l_{i}\). Below, we aim to prove that the \(\big{(}(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}+1\big{)}\)-th farthest point from \(B\) is in the ring bounded by the spheres \(\mathbb{B}(o_{i},\tilde{l}_{i}^{\prime})\) and \(\mathbb{B}(o_{i},l_{i})\) (see Figure 5). Note the size \(|B|=n^{\prime\prime}=O\big{(}\frac{\gamma}{\delta^{2}}\log\frac{1}{\eta_{2}}\big{)}\).
Again, using the Chernoff bound (let \(\sigma=\delta/2\)) and the same idea for proving (28), we have

\[\Big{|}B\setminus\mathbb{B}(o_{i},\tilde{l}_{i}^{\prime})\Big{|}\geq(1-\frac{\delta}{2\gamma})\frac{(\gamma+\delta)^{2}}{\gamma-\delta}n^{\prime\prime}>(1-\frac{\delta}{\gamma})\frac{(\gamma+\delta)^{2}}{\gamma-\delta}n^{\prime\prime}=(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}; \tag{39}\]
\[\Big{|}B\cap Q\Big{|}\leq(1+\frac{\delta}{2\gamma})\frac{t}{n}n^{\prime\prime}<(1+\delta/\gamma)\frac{t}{n}n^{\prime\prime}=(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}, \tag{40}\]

with probability \(1-\eta_{2}\). Suppose that (39) and (40) both hold. Recall that \(\tilde{l}_{i}\) is the \(\big{(}(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}+1\big{)}\)-th largest distance from the sampled points \(B\) to \(o_{i}\), so \(\Big{|}B\setminus\mathbb{B}(o_{i},\tilde{l}_{i})\Big{|}=(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}\). Together with (39), we have \(\Big{|}B\setminus\mathbb{B}(o_{i},\tilde{l}_{i})\Big{|}\leq\Big{|}B\setminus\mathbb{B}(o_{i},\tilde{l}_{i}^{\prime})\Big{|}\), _i.e.,_

\[\tilde{l}_{i}\geq\tilde{l}_{i}^{\prime}. \tag{41}\]

The inequality (40) implies that the \(\big{(}(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}+1\big{)}\)-th farthest point (say \(q_{x}\)) from \(B\) to \(o_{i}\) is not in \(Q\). Then, we claim that \(\mathbb{B}(o_{i},\tilde{l}_{i})\cap Q=\emptyset\). Otherwise, let \(q_{y}\in\mathbb{B}(o_{i},\tilde{l}_{i})\cap Q\). Then we have

\[||q_{y}-o_{i}||\leq\tilde{l}_{i}=||q_{x}-o_{i}||. \tag{42}\]

Note that \(Q\) is the set of farthest \(t\) points to \(o_{i}\) of \(P\). So \(q_{x}\notin Q\) implies

\[||q_{x}-o_{i}||<\min_{q\in Q}||q-o_{i}||\leq||q_{y}-o_{i}||, \tag{43}\]

which is in contradiction to (42). Therefore, \(\mathbb{B}(o_{i},\tilde{l}_{i})\cap Q=\emptyset\). Further, since \(\mathbb{B}(o_{i},l_{i})\) excludes exactly the farthest \(t\) points (_i.e.,_ \(Q\)), "\(\mathbb{B}(o_{i},\tilde{l}_{i})\cap Q=\emptyset\)" implies

\[\tilde{l}_{i}\leq l_{i}. \tag{44}\]

Overall, we have \(\tilde{l}_{i}\in[\tilde{l}_{i}^{\prime},l_{i}]\) from (41) and (44), _i.e.,_ the \(\big{(}(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}+1\big{)}\)-th farthest point from \(B\) locates in the ring bounded by the spheres \(\mathbb{B}(o_{i},\tilde{l}_{i}^{\prime})\) and \(\mathbb{B}(o_{i},l_{i})\), as shown in Figure 5. Also, \(\tilde{l}_{i}\geq\tilde{l}_{i}^{\prime}\) implies

\[\Big{|}P\setminus\mathbb{B}(o_{i},\tilde{l}_{i})\Big{|}\leq\Big{|}P\setminus\mathbb{B}(o_{i},\tilde{l}_{i}^{\prime})\Big{|}=\frac{(\gamma+\delta)^{2}}{\gamma-\delta}n<(\gamma+5\delta)n, \tag{45}\]

where the last inequality comes from the assumption \(\delta<\gamma/3\). So (37) and (38) are true in Lemma 8.

**Remark 6**.: Actually our proposed Uniform-Adaptive Sampling method and Sandwich Lemma are quite generic, and we will show that they can be generalized to solve a broader range of enclosing with outliers problems in Section 7.
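A minimal Python sketch of the estimator from the Sandwich Lemma (again with an illustrative sample-size constant):

```python
import numpy as np

def sandwich_estimate(P, o_i, delta, gamma, eta2, seed=0):
    """Lemma 8 sketch: estimate l_i by the ((1+delta/gamma)^2 * gamma * n'' + 1)-th
    largest distance from a uniform sample B to o_i. Lemma 8 guarantees (w.h.p.)
    that the estimate is at most l_i and excludes at most (gamma + 5*delta)n points."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    m = int(np.ceil((gamma / delta ** 2) * np.log(1 / eta2)))  # n''
    B = P[rng.choice(n, size=min(m, n), replace=False)]
    dists = np.sort(np.linalg.norm(B - o_i, axis=1))[::-1]     # descending order
    k = int((1 + delta / gamma) ** 2 * gamma * len(B))         # rank of the estimate
    return dists[min(k, len(dists) - 1)]                       # (k+1)-th largest distance
```

In the bi-criteria algorithm of the next subsection, this estimator replaces the linear-time computation of \(l_{i}\) when selecting the best candidate center.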
As long as the selected point has a distance to the center of \(\mathbf{MEB}(T)\) larger than \((1+\epsilon)\) times the optimal radius, the expected improvement will always be guaranteed. Following this observation, we investigate the following approach. Suppose we run the core-set construction procedure decribed in Theorem 1 (we should replace \(P\) by \(P_{\mathrm{opt}}\) in our following analysis). In the \(i\)-th step, we add an arbitrary point from \(P_{\mathrm{opt}}\setminus\mathbb{B}(o_{i},(1+\epsilon)\mathbf{Rad}(P_{\mathrm{ opt}}))\) to \(T\). We know that a \((1+\epsilon)\)-approximation is obtained after at most \(\frac{2}{(1-s)\epsilon}\) steps, that is, \(P_{\mathrm{opt}}\subset\mathbb{B}\big{(}o_{i},(1+\epsilon)\mathbf{Rad}(P_{ \mathrm{opt}})\big{)}\) for some \(i\leq\frac{2}{(1-s)\epsilon}\). However, we need to solve two key issues for realizing the above approach: **(i)** how to determine the value of \(\mathbf{Rad}(P_{\mathrm{opt}})\) and **(ii)** how to correctly select a point from \(P_{\mathrm{opt}}\setminus\mathbb{B}(o_{i},(1+\epsilon)\mathbf{Rad}(P_{ \mathrm{opt}}))\). Actually, we can implicitly avoid the first issue via replacing \((1+\epsilon)\mathbf{Rad}(P_{\mathrm{opt}})\) by the \(t\)-th largest distance from the points of \(P\) to \(o_{i}\), where we set \(t=(\delta+\gamma)n\) for guaranteeing a \((1+\epsilon,1-\delta)\)-approximation. For the second issue, we randomly select one point from the farthest \(t\) points of \(P\) to \(o_{i}\), and show that it belongs to \(P_{\mathrm{opt}}\setminus\mathbb{B}(o_{i},(1+\epsilon)\mathbf{Rad}(P_{ \mathrm{opt}}))\) with a certain probability. Based on the above idea, we present a sublinear time \((1+\epsilon,1-\delta)\)-approximation algorithm in this section. To better understand the algorithm, we show a linear time algorithm first (Algorithm 4 in Sections 5.2.1). Note that Badoiu _et al._[22] also presented a \((1+\epsilon,1-\delta)\)-approximation algorithm but with a higher complexity, and please see our detailed analysis on the running time at the end of Sections 5.2.1. More importantly, we can improve the running time of Algorithm 4 to be sublinear. For this purpose, we need to avoid computing the farthest \(t\) points to \(o_{i}\), since this operation will take linear time. Also, Algorithm 4 generates a set of candidates for the solution and we need to select the best one. This process also costs linear time. By using the techniques proposed in Section 5.1, we can solve these issues and develop a sublinear time algorithm that has the sample complexity independent of \(n\) and \(d\), in Section 5.2.2. #### 5.2.1 A Linear Time Algorithm In this section, we present our linear time \((1+\epsilon,1-\delta)\)-approximation algorithm for MEB with outliers. ``` 0: A point set \(P\) with \(n\) points in \(\mathbb{R}^{d}\), the fraction of outliers \(\gamma\in(0,1)\), and the parameters \(0<\epsilon,\delta<1\), \(z\in\mathbb{Z}^{+}\). 1: Let \(t=(\delta+\gamma)n\). 2: Initially, randomly select a point \(p\in P\) and let \(T=\{p\}\). 3:\(i=1\); repeat the following steps until \(i>z\): 1. Denote by \(c_{i}\) the exact center of \(\mathbf{MEB}(T)\). Compute the approximate center \(o_{i}\) with a distance to \(c_{i}\) of less than \(\xi\mathbf{Rad}(T)=s\frac{\epsilon}{1+\epsilon}\mathbf{Rad}(T)\) as described in Theorem 1, where \(s\) is set to be \(\frac{\epsilon}{z+\epsilon}\). 
2: Let \(Q\) be the set of farthest \(t\) points from \(P\) to \(o_{i}\); denote by \(l_{i}\) the \((t+1)\)-th largest distance from \(P\) to \(o_{i}\). 3: Randomly select a point \(q\in Q\), and add it to \(T\). 4:\(i=i+1\). 4: Output the ball \(\mathbb{B}(o_{i},l_{i})\) where \(\hat{i}=\arg_{i}\min\{l_{i}\mid 1\leq i\leq z\}\). ``` **Algorithm 4**\((1+\epsilon,1-\delta)\)-approximation Algorithm for MEB with outliers Theorem 5.: _If the input parameter \(z=\frac{2}{(1-s)\epsilon}\) (we assume it is an integer for convenience), then with probability \((1-\gamma)(\frac{\delta}{\gamma+\delta})^{z}\), Algorithm 4 outputs a \((1+\epsilon,1-\delta)\)-approximation for the MEB with outliers problem._ Before proving Theorem 5, we present the following two lemmas first. Lemma 9.: _With probability \((1-\gamma)(\frac{\delta}{\gamma+\delta})^{z}\), after running \(z\) rounds in Step 3 of Algorithm 4, the obtained set \(T\subset P_{\mathrm{opt}}\)._ Proof.: Initially, because \(|P_{\mathrm{opt}}|/|P|=1-\gamma\), the first selected point in Step 2 belongs to \(P_{\mathrm{opt}}\) with probability \(1-\gamma\). In each of the \(z\) rounds in Step 3, the selected point belongs to with probability \(\frac{\delta}{\gamma+\delta}\), since \[\frac{|P_{\mathrm{opt}}\cap Q|}{|Q|}=1-\frac{|Q\setminus P_{\mathrm{opt}}|}{|Q|} \geq 1-\frac{|P\setminus P_{\mathrm{opt}}|}{|Q|}=1-\frac{\gamma n}{(\delta+ \gamma)n}=\frac{\delta}{\delta+\gamma}. \tag{46}\] Therefore, with probability \((1-\gamma)(\frac{\delta}{\gamma+\delta})^{z}\) the whole set \(T\subset P_{\mathrm{opt}}\). Lemma 10: _In the \(i\)-th round of Step 3 for \(1\leq i\leq z\), at least one of the following two events happens: (1) \(o_{i}\) is the ball center of a \((1+\epsilon,1-\delta)\)-approximation; (2) \(r_{i+1}>(1+\epsilon)\textbf{Rad}(P_{\mathrm{opt}})-||c_{i}-c_{i+1}||-\xi r_{i}\), where \(r_{i}\) is the exact radius of \(\textbf{MEB}(T)\) is the \(i\)-th round._ Proof: If \(l_{i}\leq(1+\epsilon)\textbf{Rad}(P_{\mathrm{opt}})\), then we are done. That is, the ball \(\mathbb{B}(o_{i},l_{i})\) covers \((1-\delta-\gamma)n\) points with radius \(l_{i}\leq(1+\epsilon)\textbf{Rad}(P_{\mathrm{opt}})\) (the first event happens). Otherwise, \(l_{i}>(1+\epsilon)\textbf{Rad}(P_{\mathrm{opt}})\) and we consider the second event. Let \(q\) be the point added to \(T\) in the \(i\)-th round. Using the triangle inequality, we have \[||o_{i}-q||\leq||o_{i}-c_{i}||+||c_{i}-c_{i+1}||+|c_{i+1}-q||\leq\xi r_{i}+||c_ {i}-c_{i+1}||+r_{i+1}. \tag{47}\] Since \(l_{i}>(1+\epsilon)\textbf{Rad}(P_{\mathrm{opt}})\) and \(q\) lies outside of \(\mathbb{B}(o_{i},l_{i})\), _i.e,_\(||o_{i}-q||\geq l_{i}>(1+\epsilon)\textbf{Rad}(P_{\mathrm{opt}})\), (47) implies that the second event happens and the proof is completed. Proof: **(of Theorem 5)** Suppose that the first event of Lemma 10 never happens. As a consequence, we obtain a series of inequalities for each pair of radii \(r_{i+1}\) and \(r_{i}\), _i.e.,_\(r_{i+1}>(1+\epsilon)\textbf{Rad}(P_{\mathrm{opt}})-||c_{i}-c_{i+1}||-\xi r_{i}\). Assume that \(T\subset P_{\mathrm{opt}}\) in Lemma 9, _i.e.,_ each time the algorithm correctly adds a point from \(P_{\mathrm{opt}}\) to \(T\). Using the almost identical idea for proving Theorem 1 in Section 2.1, we know that a \((1+\epsilon)\)-approximate MEB of \(P_{\mathrm{opt}}\) is obtained after at most \(z\) rounds. The success probability directly comes from Lemma 9. Overall, we obtain Theorem 5. Theorem 5 directly implies the following corollary. 
Corollary 1: _If one repeatedly runs Algorithm 4\(O(\frac{1}{1-\gamma}(1+\frac{\gamma}{\delta})^{z})\) times, with constant probability, the algorithm outputs a \((1+\epsilon,1-\delta)\)-approximation for the problem of MEB with outliers._ **Running time.** In Theorem 5, we set \(z=\frac{2}{(1-s)\epsilon}\) and \(s\in(0,1)\). To keep \(z\) small, according to Theorem 1, we set \(s=\frac{\epsilon}{2+\epsilon}\) so that \(z=\frac{2}{\epsilon}+1\) (only larger than the lower bound \(\frac{2}{\epsilon}\) by 1). For each round of Step 3, we need to compute an approximate center \(o_{i}\) that has a distance to the exact one less than \(\xi r_{i}=s\frac{\epsilon}{1+\epsilon}r_{i}=O(\epsilon^{2})r_{i}\). Using the algorithm proposed in [20], this can be done in \(O(\frac{1}{\epsilon^{2}}|T|d)=O(\frac{1}{\epsilon^{3}}d)\) time. Also, the set \(Q\) can be obtained in linear time by the algorithm in [19]. In total, the time complexity for obtaining a \((1+\epsilon,1-\delta)\)-approximation in Corollary 1 is \[O\big{(}\frac{C}{\epsilon}(n+\frac{1}{\epsilon^{5}})d\big{)}, \tag{48}\] where \(C=O(\frac{1}{1-\gamma}(1+\frac{\gamma}{\delta})^{\frac{2}{\epsilon}+1})\). As mentioned before, Badoiu _et al._[22] also proposed a linear time bi-criteria approximation. However, the hidden constant of their running time is exponential in \(\Theta(\frac{1}{\epsilon\delta})\) that is much larger than \(\frac{2}{\epsilon}+1\). #### 5.2.2 Improvement on Running Time In this section, we show that the running time of Algorithm 4 can be further improved to be independent of the number of points \(n\). First, we observe that it is not necessary to compute the set \(Q\) of the farthest \(t\) points in Step 3(2) of the algorithm. Actually, as long as the selected point \(q\) is part of \(P_{\mathrm{opt}}\cap Q\) in Step 3(3), a \((1+\epsilon,1-\delta)\)-approximation is still guaranteed. The Uniform-Adaptive Sampling procedure proposed in Section 5.1 can help us to obtain a point \(q\in P_{\mathrm{opt}}\cap Q\) without computing the set \(Q\). Moreover, in Lemma 8, we show that the radius of each candidate solution can be estimated via random sampling. Overall, we achieve a sublinear time algorithm (Algorithm 5). Following the analysis in Section 5.2.1, we set \(s=\frac{\epsilon}{2+\epsilon}\) so that \(z=\frac{2}{(1-s)\epsilon}=\frac{2}{\epsilon}+1\). We present the results in Theorem 6 and Corollary 2. Comparing with Theorem 5, we have an extra \((1-\eta_{1})(1-\eta_{2})\) in the success probability in Theorem 6, due to the probabilities from Lemmas 7 and 8. Another minor issue is that the covering approximation error is increased from \(\delta\) to \(5\delta\) when applying Lemma 8. Actually this issue can be easily solved by replacing \(\delta\) by \(\delta/5\) in the parameters \(n^{\prime}\), \(t^{\prime}\), \(n^{\prime\prime}\), and \(t^{\prime\prime}\), and the asymptotic complexity does not change. ``` 0: A point set \(P\) with \(n\) points in \(\mathbb{R}^{d}\), the fraction of outliers \(\gamma\in(0,1)\), and the parameters \(\epsilon,\eta_{1},\eta_{2}\in(0,1)\), \(\delta\in(0,1/3\gamma)\), and \(z\in\mathbb{Z}^{+}\). 1: Let \(n^{\prime}=O(\frac{1}{\delta}\log\frac{1}{\eta_{1}})\), \(n^{\prime\prime}=O\big{(}\frac{\gamma}{\delta^{2}}\log\frac{1}{\eta_{2}}\big{)}\), \(t^{\prime}=\frac{3}{2}(\delta/5+\gamma)n^{\prime}\), and \(t^{\prime\prime}=(1+\frac{\delta}{5\gamma})^{2}\gamma n^{\prime\prime}\). 2: Initially, randomly select a point \(p\in P\) and let \(T=\{p\}\). 
3:\(i=1\); repeat the following steps until \(j=z\): ``` **Algorithm 5** Sublinear Time \((1+\epsilon,1-\delta)\)-approximation Algorithm for MEB with Outliers Theorem 6.1: _If the input parameter \(z=\frac{2}{\epsilon}+1\), then with probability \((1-\gamma)\big{(}(1-\eta_{1})(1-\eta_{2})\frac{\delta/5}{3(\gamma+\delta/5)} \big{)}^{z}\), Algorithm 5 outputs a \((1+\epsilon,1-\delta)\)-approximation for the problem of MEB with outliers._ To boost the success probability in Theorem 6, we need to repeatedly run Algorithm 5 and output the best candidate. However, we need to be careful on setting the parameters. The success probability in Theorem 6 consists of two parts, \(\mathcal{P}_{1}=(1-\gamma)\big{(}(1-\eta_{1})\frac{\delta/5}{3(\gamma+\delta/5 )}\big{)}^{z}\) and \(\mathcal{P}_{2}=(1-\eta_{2})^{z}\), where \(\mathcal{P}_{1}\) indicates the probability that \(\{o_{1},\cdots,o_{z}\}\) contains a qualified candidate, and \(\mathcal{P}_{2}\) indicates the success probability of Lemma 8 over all the \(z\) rounds. Therefore, if we run Algorithm 5\(N=O(\frac{1}{\mathcal{P}_{1}})\) times, with constant probability (by taking the union bound), the set of all the generated candidates contains at least one that yields a \((1+\epsilon,1-\delta)\)-approximation; moreover, to guarantee that we can correctly estimate the resulting radii of all the candidates via the Sandwich Lemma with constant probability, we need to set \(\eta_{2}=O(\frac{1}{zN})\) (because there are \(O(zN)\) candidates). Corollary 2: _If one repeatedly runs Algorithm 5\(N=O\Big{(}\frac{1}{1-\gamma}\big{(}\frac{1}{1-\eta_{1}}(3+\frac{3\gamma}{ \delta/5})\big{)}^{z}\Big{)}\) times with setting \(\eta_{2}=O(\frac{1}{zN})\), with constant probability, the algorithm outputs a \((1+\epsilon,1-\delta)\)-approximation for the problem of MEB with outliers._ The calculation of running time is similar to (48) in Section 5.2.1. We just replace \(n\) by \(\max\{n^{\prime},n^{\prime\prime}\}=O\big{(}\frac{\gamma}{\delta^{2}}\log\frac{1} {n_{2}}\big{)}=O\big{(}\frac{\gamma}{\delta^{2}}\log(zN)\big{)}=\tilde{O}\big{(} \frac{\gamma}{\delta^{2}\epsilon}\big{)}\)3, and change the value of \(C\) to be \(O\big{(}\frac{1}{1-\gamma}\big{(}\frac{1}{1-\eta_{1}}(3+\frac{3\gamma}{\delta /5})\big{)}^{\frac{2}{\epsilon}+1}\big{)}\). So the total running time is independent of \(n\). Footnote 3: The asymptotic notation \(\tilde{O}(f)=O\big{(}f\cdot\texttt{polylog}(\frac{\gamma}{\eta_{1}\delta(1- \gamma)})\big{)}\). ### General MEB Problem Now we consider solving the general MEB problem without the stability assumption in this Section. Let \(0<\epsilon\), \(\delta<1\) be two given parameters. First, we view the input \(P\) as an instance \((P,\delta/2)\) of MEB with outliers (_i.e.,_\(\gamma=\delta/2\)). Then, we apply the algorithm of Section 5.2 to obtain a bi-criteria \((1+\epsilon^{2}/2,1-\delta/2)\)-approximation solution \(\mathbb{B}(c,r_{c})\) (we replace the "\(\epsilon\)" by \(\epsilon^{2}/2\) and replace the "\(\delta\)" by \(\delta/2\)). The obtained ball \(\mathbb{B}(c,r_{c})\) covers at least \((1-\delta/2-\delta/2)n=(1-\delta)n\) points of \(P\), and the radius \[r_{c}\leq(1+\frac{1}{2}\epsilon^{2})\cdot r_{-\delta/2}, \tag{49}\] where \(r_{-\delta/2}\) stands for the radius of the smallest ball that covers at least \((1-\delta/2)n\) points of \(P\). 
Second, we assume that the input \(P\) is an \((\alpha,\beta)\)-stable instance with \(\alpha=\epsilon^{2}\) and \(\beta=\delta/2\); then run Algorithm 2 to obtain a candidate ball center \(\tilde{o}\) (of course, we can also use Algorithm 1, where the only difference is that the sample complexity will be higher). To compute the real radius \(r_{\tilde{o}}\) yielded from \(\tilde{o}\) (since \(P\) may not be a real \((\alpha,\beta)\)-stable instance), we just need to read the whole dataset \(P\) in one pass. Finally, we determine the final output based on the ratio \(r_{\tilde{o}}/r_{c}\). ``` 0: An instance \(P\) of MEB problem in \(\mathbb{R}^{d}\); two parameters \(0<\epsilon,\delta<1\). 1: View the input as a \((P,\delta/2)\) instance of MEB with outliers; apply the method of Corollary 2 to obtain a bi-criteria \((1+\epsilon^{2}/2,1-\delta/2)\)-approximation solution \(\mathbb{B}(c,r_{c})\) on \((P,\delta/2)\). 2: Assume that the input \(P\) is an \((\alpha,\beta)\)-stable instance with \(\alpha=\epsilon^{2}\) and \(\beta=\delta/2\); then run Algorithm 2 to obtain a candidate ball center \(\tilde{o}\). 3: Read the whole input dataset \(P\) in one-pass, and compute the radius \(r_{\tilde{o}}=\max_{p\in P}||\tilde{o}-p||\). 4: If the ratio \(\frac{r_{\tilde{o}}}{r_{c}}\leq\frac{1+\epsilon^{2}}{1-\epsilon^{2}/2}\), return the ball \(\mathbb{B}(\tilde{o},r_{\tilde{o}})\) and say "it is a \((1+\epsilon)\)-radius approximation". 5: Else, return the ball \(\mathbb{B}(c,r_{c})\) and say "it is a \((1-\delta)\)-covering approximation". ``` **Algorithm 6 Hybrid Approximation for MEB** Theorem 5.1: _With constant success probability, Algorithm 6 returns either a \((1+\epsilon)\)-radius approximation or a \((1-\delta)\)-covering approximation, and the running time is \(O\Big{(}\big{(}n+h(\epsilon,\delta)\big{)}\cdot d\Big{)}\), where \(h(\epsilon,\delta)=O\big{(}\frac{1}{1-\delta/2}\exp(O(1/\epsilon^{2}))\big{)}\). The algorithm only needs uniform sampling and a single pass over the input data, and the space complexity in memory is \(O(h(\epsilon,\delta)\cdot d)\). Moreover, if the input data matrix (the \(n\times d\) matrix representing the input \(P\)) has at most \(M\ll nd\) non-zeros entries, the total running time will be \(O\big{(}n+h(\epsilon,\delta)\cdot d+M\big{)}\)._ Remark 7: In the following proof, we will see that when the algorithm returns a \((1-\delta)\)-covering approximation, the returned radius is not only \(\leq\mathbf{Rad}(P)\), but also at most \(\big{(}1-\Theta(\epsilon^{2})\big{)}\cdot\mathbf{Rad}(P)\) (see (52) and (54)). Proof: We study the time and space complexities first. The method of Corollary 2 only needs uniform samplings, and Step 2 of Algorithm 6 is a single pass over the input data. According to Corollary 2, we know the space complexity is \(O(h(\epsilon,\delta)\cdot d)\) with \(h(\epsilon,\delta)=O\big{(}\frac{1}{1-\delta/2}\exp(O(1/\epsilon^{2}))\big{)}\). The total running time is \(O\Big{(}\big{(}n+h(\epsilon,\delta)\big{)}\cdot d\Big{)}\). Furthermore, we consider the case that the input matrix is sparse. In Step 3, we need to compute the value \(r_{\tilde{o}}=\max_{p\in P}||\tilde{o}-p||\). For each point \(p\in P\), we know \[||\tilde{o}-p||^{2}=||\tilde{o}||^{2}+||p||^{2}-2\langle\tilde{o},p\rangle, \tag{50}\] where \(\langle\tilde{o},p\rangle\) stands for their inner product. 
The value of \(||\tilde{o}||^{2}\) can be obtained in \(O(d)\) time, and if the input data matrix has at most \(M\ll nd\) non-zeros entries, the complexity for computing the values \(\{||p||^{2}-2\langle\tilde{o},p\rangle\mid p\in P\}\) is \(O(n+M)\). Overall, the complexity of Algorithm 6 is \(O\big{(}n+h(\epsilon,\delta)\cdot d+M\big{)}\). Now, we prove the solution quality. We let \(\alpha=\epsilon^{2}\) and \(\beta=\delta/2\), and consider the following two cases. **Case 1:** the instance \(P\) is \((\alpha,\beta)\)-stable. Then we directly have \[r_{\tilde{o}}\leq(1+\epsilon)\cdot\mathbf{Rad}(P). \tag{51}\] If \(\frac{r_{\tilde{o}}}{r_{c}}>\frac{1+\epsilon}{1-\epsilon^{2}/2}\), together with (51), we have \[r_{c}<\Big{(}1-\epsilon^{2}/2\Big{)}\cdot\mathbf{Rad}(P). \tag{52}\] Then we can return the ball \(\mathbb{B}(c,r_{c})\) and say "it is a \((1-\delta)\)-covering approximation". On the other hand, when \(\frac{r_{\tilde{o}}}{r_{c}}\leq\frac{1+\epsilon}{1-\epsilon^{2}/2}\), from (51) we can return the ball \(\mathbb{B}(\tilde{o},r_{\tilde{o}})\) and say "it is a \((1+\epsilon)\)-radius approximation". **Case 2:**\(P\) is not an \((\alpha,\beta)\)-stable instance. Then, from the definition of stability we know the optimal radius of the instance \((P,\delta/2)\) is no larger than \[(1-\epsilon^{2})\cdot\mathbf{Rad}(P). \tag{53}\] So we have \[r_{c}<(1+\frac{1}{2}\epsilon^{2})(1-\epsilon^{2})\cdot\mathbf{Rad}(P)<\Big{(} 1-\epsilon^{2}/2\Big{)}\cdot\mathbf{Rad}(P). \tag{54}\] If \(\frac{r_{\tilde{o}}}{r_{c}}\leq\frac{1+\epsilon}{1-\epsilon^{2}/2}\), together with (54), it implies \[r_{\tilde{o}}<(1+\epsilon)\cdot\mathbf{Rad}(P). \tag{55}\] Then we can return the ball \(\mathbb{B}(\tilde{o},r_{\tilde{o}})\) and say "it is a \((1+\epsilon)\)-radius approximation". On the other hand, when \(\frac{r_{\tilde{o}}}{r_{c}}>\frac{1+\epsilon}{1-\epsilon^{2}/2}\), from (54) we can return the ball \(\mathbb{B}(c,r_{c})\) and say "it is a \((1-\delta)\)-covering approximation". Since the success probability of the method of Section 5.2 is constant, the overall success probability of Algorithm 7 is constant as well. **More analysis on the result of Algorithm 6.** We further consider an "inverse" question: can we infer the stability degree of the given instance \(P\) from the output of Algorithm 6? In Step 2, we assume that \(P\) is an \((\epsilon^{2},\delta/2)\)-stable instance, but this may not be true in reality. Recall the definition of "\((\alpha,\beta)\)-stable" in Definition 3. We know that there always exists a value \(\hat{\alpha}\in[0,1)\) such that \(P\) is a \((\hat{\alpha},\delta/2)\)-stable. We can use "\(\hat{\alpha}\)" to indicate the stability degree of \(P\), for the fixed "\(\delta/2\)". The following theorem shows that we can infer the value of \(\hat{\alpha}\) through Algorithm 6. 
**Theorem 6.1**: _If Algorithm 6 returns a \((1+\epsilon)\)-radius approximation, then \(\hat{\alpha}<\epsilon\); otherwise, the algorithm returns a \((1-\delta)\)-covering approximation and it implies \(\hat{\alpha}\succ\frac{\epsilon^{2}}{2}\)._ _In other words, the algorithm can distinguish the case \(\hat{\alpha}\geq\epsilon\) (it must returns a \((1-\delta)\)-covering approximation) and the case \(\hat{\alpha}\leq\frac{\epsilon^{2}}{2}\) (it must returns a \((1+\epsilon)\)-radius approximation); but if \(\frac{\epsilon^{2}}{2}<\hat{\alpha}<\epsilon\), the algorithm can return either a \((1-\delta)\)-covering approximation or a \((1+\epsilon)\)-radius approximation._ Proof: Recall we set \(\alpha=\epsilon^{2}\) and \(\beta=\delta/2\) in Algorithm 6. First, we suppose the output is a \((1+\epsilon)\)-radius approximation. One possible case is the instance \(P\) is a real \((\alpha,\beta)\)-stable instance, and then \(\hat{\alpha}=\alpha<\epsilon\). The other possible case is that \(P\) is not \((\alpha,\beta)\)-stable but the ratio \(\frac{r_{\hat{\alpha}}}{r_{c}}\leq\frac{1+\epsilon}{1-\epsilon^{2}/2}\). Together with (49), we have \[\frac{\mathbf{Rad}(P)}{r_{-\delta/2}}\leq\frac{r_{\hat{\alpha}}}{\frac{1}{1+ \epsilon^{2}/2}r_{c}}\leq\frac{(1+\epsilon)(1+\epsilon^{2}/2)}{1-\epsilon^{2 }/2}. \tag{56}\] So \(\hat{\alpha}=1-\frac{r_{-\delta/2}}{\mathbf{Rad}(P)}\leq 1-\frac{1-\epsilon^{2}/2 }{(1+\epsilon)(1+\epsilon^{2}/2)}<\epsilon\). Overall, as long as the output is a \((1+\epsilon)\)-radius approximation, \(\hat{\alpha}\) should be smaller than \(\epsilon\). Then we suppose the output is a \((1-\delta)\)-covering approximation. One possible case is the instance \(P\) is not \((\alpha,\beta)\)-stable, and then \(\hat{\alpha}>\alpha=\epsilon^{2}\). The other possible case is that \(P\) is \((\alpha,\beta)\)-stable but the ratio \(\frac{r_{\hat{\alpha}}}{r_{c}}>\frac{1+\epsilon}{1-\epsilon^{2}/2}\). Together with (51), we have \[\frac{\mathbf{Rad}(P)}{r_{-\delta/2}}\geq\frac{\frac{1}{1+\epsilon}r_{\hat{ \alpha}}}{r_{c}}>\frac{1}{1-\epsilon^{2}/2}. \tag{57}\] So \(\hat{\alpha}=1-\frac{r_{-\delta/2}}{\mathbf{Rad}(P)}>1-(1-\epsilon^{2}/2)= \epsilon^{2}/2\). Overall, as long as the output is a \((1-\delta)\)-covering approximation, \(\hat{\alpha}>\min\{\epsilon^{2},\epsilon^{2}/2\}=\frac{\epsilon^{2}}{2}\). ## 6 Extension I: Hybrid Approximation for MEB with Outliers In this section, we extend the idea of Section 5.3 to present a hybrid approximation algorithm for the MEB with outliers problem \((P,\gamma)\). First, we extend Definition 3 of MEB to MEB with outliers. Definition 6 ((\(\alpha\), \(\beta\))-stable for MEB with Outliers): Let \(0<\alpha,\beta<1\). Given an instance \((P,\gamma)\) of the MEB with outliers problem in Definition 4, \((P,\gamma)\) is an (\(\alpha\), \(\beta\))-stable instance if (1) \(\mathbf{Rad}(P\setminus Q)>(1-\alpha)\mathbf{Rad}(P_{\mathrm{opt}})\) for any \(Q\subset P\) with \(|Q|<\big{(}\gamma+\beta\big{)}n\), and (2) there exists a \(Q^{\prime}\subset P\) with \(|Q^{\prime}|=\lceil(\beta+\gamma)n\rceil\) having \(\mathbf{Rad}(P\setminus Q^{\prime})\leq(1-\alpha)\mathbf{Rad}(P_{\mathrm{opt}})\). Definition 6 directly implies the following claim. Claim 2: If \((P,\gamma)\) is an \((\alpha,\,\beta)\)-stable instance of the problem of MEB with outliers, the corresponding \(P_{\mathrm{opt}}\) is an (\(\alpha\), \(\tilde{\beta}\))-stable instance of MEB with \(\tilde{\beta}\geq\frac{\beta}{1-\gamma}\). Note that Definition 6 implicitly requires \(\beta<1-\gamma\). 
So it implies the lower bound \(\frac{\beta}{1-\gamma}\) of \(\tilde{\beta}\) in Claim 2 cannot be larger than \(1\). To see the correctness of Claim 2, we can use contradiction. Suppose that there exists a subset \(P^{\prime}\subset P_{\mathrm{opt}}\) such that \(|P^{\prime}|>(1-\frac{\beta}{1-\gamma})|P_{\mathrm{opt}}|=(1-\gamma-\beta)n\) and \(\mathbf{Rad}(P^{\prime})\leq(1-\alpha)\mathbf{Rad}(P_{\mathrm{opt}})\). Then, it is in contradiction to the fact that \((P,\gamma)\) is an \((\alpha,\beta)\)-stable instance of MEB with outliers. To apply the idea of Section 5.3, a significant challenge is that the set \(P_{\mathrm{opt}}\) is mixed with the outliers, and thus we cannot easily obtain a \((1+\epsilon)\)-radius approximation as Algorithm 6. Our starting point is still the sublinear time bi-criteria approximation algorithm proposed in Section 5.2. Specifically, given any two small parameters \(0<\epsilon\), \(\delta<1\), the algorithm returns a set of candidate ball centers via the uniform-adaptive sampling procedure. We use \(\Xi\) to denote this set. With constant probability, as least one candidate from \(\Xi\), say \(s\), satisfies the following inequality: \[\big{|}\mathbb{B}\big{(}s,(1+\epsilon)\cdot\mathbf{Rad}(P_{\mathrm{opt}}) \big{)}\cap P\big{|}\geq\big{(}1-\delta-\gamma\big{)}n. \tag{58}\] Namely, it is a "\((1+\epsilon,1-\delta)\)-approximation". To pick such a qualified candidate, it is possible to estimate the size of \(\mathbb{B}\big{(}s,(1+\epsilon)\cdot\mathbf{Rad}(P_{\mathrm{opt}})\big{)}\cap P\) by using the uniform sampling based technique "sandwich lemma" (instead of reading the whole dataset \(P\)). It is worth to note an implicit fact about Theorem 5 of Section 5.2. Actually, in the proof it showed that among the candidate set \(\Xi\), there exists one solution \(s\) such that the ball \(\mathbb{B}\big{(}s,(1+\epsilon)\cdot\mathbf{Rad}(P_{\mathrm{opt}})\big{)}\) covers at least \(\big{(}1-\delta-\gamma\big{)}n\) points from \(P_{\mathrm{opt}}\) (since the set \(T\subset P_{\mathrm{opt}}\) and the solution \(s\) is generated from \(T\) (see Lemma 9)). So the solution \(s\) should satisfy \[\big{|}\mathbb{B}\big{(}s,(1+\epsilon)\cdot\mathbf{Rad}(P_{\mathrm{opt}}) \big{)}\cap P_{\mathrm{opt}}\big{|}\geq\big{(}1-\delta-\gamma\big{)}n, \tag{59}\] which is stronger than (58). But the sandwich lemma may ignore such a stronger solution, since only selecting a solution satisfying (58) is already sufficient to guarantee a \((1+\epsilon,1-\delta)\)-approximation. We introduce the following new algorithm for MEB with outliers based on this observation. **The hybrid approximation algorithm.** Let \(\epsilon\) and \(\delta\) be the two given parameters. First, we apply the method of Section 5.2. But we do not directly input the couple \((\epsilon,\delta)\) to the bi-criteria approximation algorithm; instead, we use \((\frac{1}{2(2\sqrt{2}+\sqrt{3})^{2}}\epsilon^{2},\delta)\) (we will explain why we have the coefficient "\(\frac{1}{2(2\sqrt{2}+\sqrt{3})^{2}}\)" in our analysis). That is, we compute a set \(\Xi\) of candidate ball centers via the uniform-adaptive sampling of Section 5.2, and at least one center yields a \((1+\frac{1}{2(2\sqrt{2}+\sqrt{3})^{2}}\epsilon^{2},1-\delta)\)-approximation for the instance \((P,\gamma)\). 
Then, for each candidate \(q\in\Xi\), we define two values: \[r_{q} =\min\Big{\{}r>0\mid\big{|}\mathbb{B}(q,r)\cap P\big{|}\geq(1- \gamma)n\Big{\}}; \tag{60}\] \[r^{\prime}_{q} =\min\Big{\{}r>0\mid\big{|}\mathbb{B}(q,r)\cap P\big{|}\geq\big{(} 1-\delta-\gamma\big{)}n\Big{\}}. \tag{61}\] We can compute these two values for all the candidates of \(\Xi\) by scanning the input \(P\) in one pass (instead of using the sandwich lemma). We select the two points \(s_{1}=\operatorname*{arg\,min}_{q\in\Xi}r_{q}\) and \(s_{2}=\operatorname*{arg\,min}_{q\in\Xi}r^{\prime}_{q}\) (they may or may not be the same point). If the ratio \(\frac{r_{s_{1}}}{r^{\prime}_{s_{2}}}\leq\frac{1+\epsilon}{1-\epsilon^{2}/ \big{(}2(2\sqrt{2}+\sqrt{3})^{2}\big{)}}\), return the ball \(\mathbb{B}(s_{1},r_{s_{1}})\) and say "it is a \((1+\epsilon)\)-radius approximation"; else, return the ball \(\mathbb{B}(s_{2},r^{\prime}_{s_{2}})\) and say "it is a \((1-\delta)\)-covering approximation". ``` 1:An instance \((P,\gamma)\) of MEB with outliers problem in \(\mathbb{R}^{d}\); two parameters \(0<\epsilon,\delta<1\). 2:Apply the uniform-adaptive sampling method of Section 5.2 to obtain a set \(\Xi\) of candidate ball centers, where at least one center yields a \((1+\frac{1}{2(2\sqrt{2}+\sqrt{3})^{2}}\epsilon^{2},1-\delta)\)-approximation for the instance \((P,\gamma)\). 3:Read the whole input dataset \(P\) in one pass, and compute the values \(r_{q}\) and \(r^{\prime}_{q}\) as the formulas (60) and (61) for each \(q\in\Xi\). 4:Let \(s_{1}=\operatorname*{arg\,min}_{q\in\Xi}r_{q}\) and \(s_{2}=\operatorname*{arg\,min}_{q\in\Xi}r^{\prime}_{q}\). 5:If the ratio \(\frac{r_{s_{1}}}{r^{\prime}_{s_{2}}}\leq\frac{1+\epsilon}{1-\epsilon^{2}/ \big{(}2(2\sqrt{2}+\sqrt{3})^{2}\big{)}}\), return the ball \(\mathbb{B}(s_{1},r_{s_{1}})\) and say "it is a \((1+\epsilon)\)-radius approximation". 6:Else, return the ball \(\mathbb{B}(s_{2},r^{\prime}_{s_{2}})\) and say "it is a \((1-\delta)\)-covering approximation". ``` **Algorithm 7** **Hybrid Approximation for MEB with Outliers **Theorem 9**.: _With constant success probability, Algorithm 7 returns either a \((1+\epsilon)\)-radius approximation or a \((1-\delta)\)-covering approximation, and the running time is \(O(g(\epsilon,\delta,\gamma)\cdot nd)\), where \(g(\epsilon,\delta,\gamma)=O(\frac{1}{1-\gamma}(\frac{\gamma+\delta}{\delta})^{O( 1/\epsilon^{2})})\). The algorithm only needs uniform sampling and a single pass over the input data, and the space complexity in memory is \(O(g(\epsilon,\delta,\gamma)\cdot d)\). Moreover, if the input data matrix (the \(n\times d\) matrix representing the input \(P\)) has at most \(M\ll nd\) non-zeros entries, the total running time will be \(O\big{(}g(\epsilon,\delta,\gamma)\cdot(n+d+M)\big{)}\)._ **Remark 8**.: Similar to Theorem 7, we will see that when the algorithm returns a \((1-\delta)\)-covering approximation, the returned radius is at most \(\big{(}1-\Theta(\epsilon^{2})\big{)}\cdot\mathbf{Rad}(P_{\mathrm{opt}})\) (see (63) and (64)). Proof.: **(of Theorem 9)** We study the time and space complexities first. The method of Corollary 2 only needs uniform samplings, and Step 2 of Algorithm 7 is a single pass over the input data. The size of \(\Xi\) is \(g(\epsilon,\delta,\gamma)=O(\frac{1}{1-\gamma}(\frac{\gamma+\delta}{\delta})^ {O(1/\epsilon^{2})})\) based on Corollary 2. Overall, the space complexity is \(O(g(\epsilon,\delta,\gamma)\cdot d)\). 
And the complexity for generating \(\Xi\) is \(O\big{(}|\Xi|\cdot\texttt{poly}(\frac{1}{\epsilon},\frac{1}{\delta})d\big{)}\) which is sublinear in the input size \(nd\). It is easy to see that the complexity of Step 2 dominates the whole complexity. Therefore, the total running time is \(O(g(\epsilon,\delta,\gamma)\cdot nd)\). Furthermore, we consider the case that the input matrix is sparse. Similar to the proof of Theorem 7, we know that the complexity of Algorithm 7 is \(O\big{(}g(\epsilon,\delta,\gamma)\cdot(n+d+M)\big{)}\) if the input data matrix has at most \(M\ll nd\) non-zeros entries. Now, we prove the solution quality. We let \(\alpha=\frac{1}{(2\sqrt{2}+\sqrt{3})^{2}}\epsilon^{2}\) and \(\beta=(1-\gamma)\delta\), and consider the following two cases. **Case 1:** the instance \((P,\gamma)\) is \((\alpha,\beta)\)-stable (_i.e.,_\(P_{\mathrm{opt}}\) is an \((\alpha,\tilde{\beta})\)-stable instance of MEB with \(\tilde{\beta}\geq\delta\), according to Claim 2). Denote by \(o\) the optimal center of \(\mathbf{MEB}(P_{\mathrm{opt}})\). We suppose one candidate ball center \(q_{0}\) of \(\Xi\) satisfies the formula (59). As a consequence, from Theorem 2, we know that \(||q_{0}-o||\leq(2\sqrt{2}+\sqrt{3})\sqrt{\alpha}\cdot\mathbf{Rad}(P_{\mathrm{ opt}})=\epsilon\cdot\mathbf{Rad}(P_{\mathrm{opt}})\). That is, \[r_{s_{1}}\leq r_{q_{0}}\leq(1+\epsilon)\cdot\mathbf{Rad}(P_{\mathrm{opt}}). \tag{62}\] If \(\frac{r_{s_{1}}}{r_{s_{2}}^{\prime}}>\frac{1+\epsilon}{1-\epsilon^{2}\big{/} \big{(}2(2\sqrt{2}+\sqrt{3})^{2}\big{)}}\), together with (62), we have \[r_{s_{2}}^{\prime}<\Big{(}1-\epsilon^{2}\big{/}\big{(}2(2\sqrt{2}+\sqrt{3})^{ 2}\big{)}\Big{)}\cdot\mathbf{Rad}(P_{\mathrm{opt}}). \tag{63}\] Then we can return the ball \(\mathbb{B}(s_{2},r_{s_{2}}^{\prime})\) and say "it is a \((1-\delta)\)-covering approximation". On the other hand, when \(\frac{r_{s_{1}}}{r_{s_{2}}^{\prime}}\leq\frac{1}{1-\epsilon^{2}\big{/}\big{(}2 (2\sqrt{2}+\sqrt{3})^{2}\big{)}}\), from (62) we can return the ball \(\mathbb{B}(s_{1},r_{s_{1}})\) and say "it is a \((1+\epsilon)\)-radius approximation". **Case 2:**\((P,\gamma)\) is not an \((\alpha,\beta)\)-stable instance. Then it implies \[r_{s_{2}}^{\prime} <(1+\frac{1}{2(2\sqrt{2}+\sqrt{3})^{2}}\epsilon^{2})(1-\frac{1}{ (2\sqrt{2}+\sqrt{3})^{2}}\epsilon^{2})\cdot\mathbf{Rad}(P_{\mathrm{opt}})\] \[<\Big{(}1-\epsilon^{2}\big{/}\big{(}2(2\sqrt{2}+\sqrt{3})^{2} \big{)}\Big{)}\cdot\mathbf{Rad}(P_{\mathrm{opt}}). \tag{64}\] If \(\frac{r_{s_{1}}}{r_{s_{2}}^{\prime}}\leq\frac{1+\epsilon}{1-\epsilon^{2}\big{/} \big{(}2(2\sqrt{2}+\sqrt{3})^{2}\big{)}}\), together with (64), it implies \[r_{s_{1}}<(1+\epsilon)\cdot\mathbf{Rad}(P_{\mathrm{opt}}). \tag{65}\] Then we can return the ball \(\mathbb{B}(s_{1},r_{s_{1}})\) and say "it is a \((1+\epsilon)\)-radius approximation". On the other hand, when \(\frac{r_{s_{1}}}{r_{s_{2}}^{\prime}}>\frac{1+\epsilon}{1-\epsilon^{2}\big{/} \big{(}2(2\sqrt{2}+\sqrt{3})^{2}\big{)}}\), from (64) we can return the ball \(\mathbb{B}(s_{2},r_{s_{2}}^{\prime})\) and say "it is a \((1-\delta)\)-covering approximation". Since the success probability of the method of Section 5.2 is constant, the overall success probability of Algorithm 7 is constant as well. We also have the following theorem for inferring the stability of the instance \((P,\gamma)\), and the proof is almost identical to the proof of Theorem 4.1. Theorem 6.1: _Suppose \((P,\gamma)\) is a \((\hat{\alpha},(1-\gamma)\delta)\)-stable instance. 
If Algorithm 7 returns a \((1+\epsilon)\)-radius approximation, then \(\hat{\alpha}<\epsilon\); otherwise, the algorithm returns a \((1-\delta)\)-covering approximation and it implies \(\hat{\alpha}>\frac{\epsilon^{2}}{2(2\sqrt{2}+\sqrt{3})^{2}}\)._ ## 7 Extension II: Bi-criteria Approximations for MEX With Outliers In this section, we extend Definition 4 for MEB with outliers and define a more general problem called **minimum enclosing "x" (MEX) with Outliers**. Then we show that the ideas of Lemma 7 and 8 can be generalized to deal with MEX with outliers problems, as long as the shape "x" satisfies several properties. **To describe a shape "x", we need to clarify three basic concepts: center, size, and distance function.** Let \(\mathcal{X}\) be the set of specified shapes in \(\mathbb{R}^{d}\). We require that each shape \(x\in\mathcal{X}\) is uniquely determined by the following two components: "\(c(x)\)", the **center** of \(x\), and "\(s(x)\geq 0\)", the **size** of \(x\). For any two shapes \(x_{1},x_{2}\in\mathcal{X}\), \(x_{1}=x_{2}\) if and only if \(c(x_{1})=c(x_{2})\) and \(s(x_{1})=s(x_{2})\). Moreover, given a center \(o_{0}\) and a value \(l_{0}\geq 0\), we use \(x(o_{0},l_{0})\) to denote the shape \(x\) with \(c(x)=o_{0}\) and \(s(x)=l_{0}\). For different shapes, we have different definitions for the center and size. For example, if \(x\) is a ball, \(c(x)\) and \(s(x)\) should be the ball center and the radius respectively; given \(o_{0}\in\mathbb{R}^{d}\) and \(l_{0}\geq 0\), \(x(o_{0},l_{0})\) should be the ball \(\mathbb{B}(o_{0},l_{0})\). As a more complicated example, consider the \(k\)-center clustering with outliers problem, which is to find \(k\) balls to cover the input point set excluding a certain number of outliers and minimize the maximum radius (_w.l.o.g.,_ we can assume that the \(k\) balls have the same radius). For this problem, the shape "x" is a union of \(k\) balls in \(\mathbb{R}^{d}\); the center \(c(x)\) is the set of the \(k\) ball centers and the size \(s(x)\) is the radius. For any point \(p\in\mathbb{R}^{d}\) and any shape \(x\in\mathcal{X}\), we also need to define a **distance function**\(f(c(x),p)\) between the center \(c(x)\) and \(p\). For example, if \(x\) is a ball, \(f(c(x),p)\) is simply equal to \(||p-c(x)||\); if \(x\) is a union of \(k\) balls with the center \(c(x)=\{c_{1},c_{2},\cdots,c_{k}\}\), the distance should be \(\min_{1\leq j\leq k}||p-c_{j}||\). Note that the distance function is only for ranking the points to \(c(x)\), and not necessary to be non-negative (_e.g.,_ in Section 7.3, we define a distance function \(f(c(x),p)\leq 0\) for SVM). By using this distance function, we can define the set "\(Q\)" and the value "\(l_{i}\)" when generalizing Lemma 7 and 8 below. To guarantee their correctnesses, we also require \(\mathcal{X}\) to satisfy the following three properties. **Property 1**.: _For any two shapes \(x_{1}\neq x_{2}\in\mathcal{X}\), if \(c(x_{1})=c(x_{2})\), then_ \[s(x_{1})\leq s(x_{2})\Longleftrightarrow x_{1}\text{ is covered by }x_{2}, \tag{66}\] _where "\(x_{1}\) is covered by \(x_{2}\)" means "for any point \(p\in\mathbb{R}^{d}\), \(p\in x_{1}\Rightarrow p\in x_{2}\)"._ **Property 2**.: _Given any shape \(x\in\mathcal{X}\) and any point \(p_{0}\in x\), the set_ \[\{p\mid p\in\mathbb{R}^{d}\text{ and }f(c(x),p)\leq f(c(x),p_{0})\}\subseteq x. \tag{67}\] **Property 3**.: _Given any shape center \(o_{0}\) and any point \(p_{0}\in\mathbb{R}^{d}\), let \(r_{0}=\min\{r\mid r\geq 0,p_{0}\in x(o_{0},r)\}\). 
Then \(p_{0}\in x(o_{0},r_{0})\) and \(p_{0}\notin x(o_{0},r)\) for any \(r<r_{0}\). (**Note:** usually the value \(r_{0}\) is just the distance from \(p_{0}\) to the shape center \(o_{0}\); but for some cases, such as the SVM problem in Section 7.3, the shape size and distance function have different meanings)._ Intuitively, Property 1 shows that \(s(x)\) defines an order of the shapes sharing the same center \(c(x)\). Property 2 shows that the distance function \(f\) defines an order of the points to a given shape center \(c(x)\). Property 3 shows that a center \(o_{0}\) and a point \(p_{0}\) can define a shape just "touching" \(p_{0}\). We can take \(\mathcal{X}=\{\)all \(d\)-dimensional balls\(\}\) as an example. For any two concentric balls, the smaller one is always covered by the larger one (Property 1); if a point \(p_{0}\) is inside a ball \(x\), any point \(p\) having the distance \(||p-c(x)||\leq||p_{0}-c(x)||\) should be inside \(x\) too (Property 2); also, given a ball center \(o_{0}\) and a point \(p_{0}\), \(p_{0}\in\mathbb{B}(o_{0},||p_{0}-o_{0}||)\) and \(p_{0}\notin\mathbb{B}(o_{0},r)\) for any \(r<||p_{0}-o_{0}||\) (Property 3). Now, we introduce the formal definitions of the MEX with outliers problem and its bi-criteria approximation. Definition 7 (MEX with Outliers): Suppose the shape set \(\mathcal{X}\) satisfies Property 1, 2, and 3. Given a set \(P\) of \(n\) points in \(\mathbb{R}^{d}\) and a small parameter \(\gamma\in(0,1)\), the MEX with outliers problem is to find the smallest shape \(x\in\mathcal{X}\) that covers \((1-\gamma)n\) points. Namely, the task is to find a subset of \(P\) with size \((1-\gamma)n\) such that its minimum enclosing shape of \(\mathcal{X}\) is the smallest among all possible choices of the subset. The obtained solution is denoted by \(\mathbf{MEX}(P,\gamma)\). Definition 8 (Bi-criteria Approximation): Given an instance \((P,\gamma)\) for MEX with outliers and two small parameters \(0<\epsilon,\delta<1\), a \((1+\epsilon,1-\delta)\)-approximation of \((P,\gamma)\) is a solution \(x\in\mathcal{X}\) that covers at least \(\big{(}1-\delta-\gamma\big{)}n\) points and has the size at most \((1+\epsilon)s(x_{\text{opt}})\), where \(x_{\text{opt}}\) is the optimal solution. It is easy to see that Definition 4 of MEB with outliers actually is a special case of Definition 7. Similar to MEB with outliers, we still use \(P_{\text{opt}}\), where \(P_{\text{opt}}\subset P\) and \(|P_{\text{opt}}|=(1-\gamma)n\), to denote the subset covered by the optimal solution of MEX with outliers. Now, we provide the generalized versions of Lemma 7 and 8. Similar to the core-set construction method in Section 2.1, we assume that there exists an iterative algorithm \(\Gamma\) to compute MEX (without outliers); actually, this is an important prerequisite to design the sub-linear time algorithms under our framework (we will discuss the iterative algorithms for the MEX with outliers problems including flat fitting, \(k\)-center clustering, and SVM, in the following subsections). In the \(i\)-th iteration of \(\Gamma\), it maintains a shape center \(o_{i}\). Also, let \(Q\) be the set of \((\delta+\gamma)n\) farthest points from \(P\) to \(o_{i}\) with respect to the distance function \(f\). First, we need to define the value "\(l_{i}\)" by \(Q\) in the following claim. Claim 3: There exists a value \(l_{i}\geq 0\) satisfying \(P\setminus x(o_{i},l_{i})=Q\). Proof: The points of \(P\) can be ranked based on their distances to \(o_{i}\). 
Without loss of generality, let \(P=\{p_{1},p_{2},\cdots,p_{n}\}\) with \(f(o_{i},p_{1})>f(o_{i},p_{2})>\cdots>f(o_{i},p_{n})\) (for convenience, we assume that any two distances are not equal; if there is a tie, we can arbitrarily decide their order to \(o_{i}\)). Then the set \(Q=\{p_{j}\ |\ 1\leq j\leq(\delta+\gamma)n\}\). Moreover, from Property 3, we know that each point \(p_{j}\in P\) corresponds to a value \(r_{j}\) that \(p_{j}\in x(o_{i},r_{j})\) and \(p_{j}\notin x(o_{i},r)\) for any \(r<r_{j}\). Denote by \(x_{j}\) the shape \(x(o_{i},r_{j})\). We select the point \(p_{j_{0}}\) with \(j_{0}=(\delta+\gamma)n+1\). From Property 2, we know that \(p_{j}\in x_{j_{0}}\) for any \(j\geq j_{0}\), _i.e.,_ **(a)**_\(P\setminus Q\subseteq x_{j_{0}}\)._ We also need to prove that \(p_{j}\notin x_{j_{0}}\) for any \(j<j_{0}\). Assume there exists some \(p_{j_{1}}\in x_{j_{0}}\) with \(j_{1}<j_{0}\). Then we have \(r_{j_{1}}<r_{j_{0}}\) and thus \(p_{j_{0}}\notin x_{j_{1}}\) (by Property 3). By Property 2, \(p_{j_{0}}\notin x_{j_{1}}\) implies \(f(o_{i},p_{j_{0}})>f(o_{i},p_{j_{1}})\), which is in contradiction to the fact \(f(o_{i},p_{j_{0}})<f(o_{i},p_{j_{1}})\). So we have **(b)**\(Q\cap x_{j_{0}}=\emptyset\). The above **(a)** and **(b)** imply that \(\{P\cap x_{j_{0}},Q\}\) is a partition of \(P\), _i.e.,_ \((P\cap x_{j_{0}})\cup Q=P\) and \((P\cap x_{j_{0}})\cap Q=\emptyset\). So we know \(P\setminus x_{j_{0}}=Q\). Therefore, we can set the value \(l_{i}=r_{j_{0}}\) and then \(P\setminus x(o_{i},l_{i})=Q\). Lemma 11 (Generalized Uniform-Adaptive Sampling): _Let \(\eta_{1}\in(0,1)\). If we sample \(n^{\prime}=O(\frac{1}{\delta}\log\frac{1}{\eta_{1}})\) points independently and uniformly at random from \(P\) and let \(Q^{\prime}\) be the set of farthest \(\frac{3}{2}(\delta+\gamma)n^{\prime}\) points to \(o_{i}\) from the sample, then, with probability at least \(1-\eta_{1}\), the following holds_ \[\frac{\left|Q^{\prime}\cap\left(P_{opt}\cap Q\right)\right|}{|Q^{ \prime}|}\geq\frac{\delta}{3(\gamma+\delta)}. \tag{68}\] Proof: Let \(A\) denote the set of sampled \(n^{\prime}\) points from \(P\). Similar to (28), we have \[\left|A\cap\left(P_{\mathrm{opt}}\cap Q\right)\right|>\frac{1}{2} \delta n^{\prime}\quad\quad\text{and}\quad\quad\left|A\cap Q\right|<\frac{3}{ 2}(\delta+\gamma)n^{\prime} \tag{69}\] with probability \(1-\eta_{1}\). Similar to (29), we have \[A\cap Q=\{p\in A\mid f(o_{i},p)>f(o_{i},p_{j_{0}})\}, \tag{70}\] where \(p_{j_{0}}\) is the point selected in the proof of Claim 3. By using the same manner of Claim 3, we also can select a point \(p_{j^{\prime}_{0}}\in A\) with \[Q^{\prime}=\{p\in A\mid f(o_{i},p)>f(o_{i},p_{j^{\prime}_{0}})\}. \tag{71}\] Then, we can prove \[\Big{(}A\cap\left(P_{\mathrm{opt}}\cap Q\right)\Big{)}=\Big{(}Q^ {\prime}\cap\left(P_{\mathrm{opt}}\cap Q\right)\Big{)}. \tag{72}\] by using the same idea of (33). Hence, \[\frac{\left|Q^{\prime}\cap\left(P_{\mathrm{opt}}\cap Q\right) \right|}{|Q^{\prime}|}=\frac{\left|A\cap\left(P_{\mathrm{opt}}\cap Q\right) \right|}{|Q^{\prime}|}\geq\frac{\delta}{3(\gamma+\delta)}, \tag{73}\] where the final inequality comes from the first inequality of (69) and the fact \(|Q^{\prime}|=\frac{3}{2}(\delta+\gamma)n^{\prime}\). Lemma 12 (Generalized Sandwich Lemma): _Let \(\eta_{2}\in(0,1)\) and assume \(\delta<\gamma/3\). \(l_{i}\) is the value from Claim 3. 
We sample \(n^{\prime\prime}=O\big{(}\frac{\gamma}{\delta^{2}}\log\frac{1}{\eta_{2}}\big{)}\) points independently and uniformly at random from \(P\) and let \(q\) be the \(\big{(}(1+\delta/\gamma)^{2}\gamma n^{\prime\prime}+1\big{)}\)-th farthest one from the sampled points to \(o_{i}\). If \(\tilde{l}_{i}=\min\{r\mid r\geq 0,q\in x(o_{i},r)\}\) (similar to the way defining "\(\tau_{0}\)" in Property 3), then, with probability \(1-\eta_{2}\), the following holds_ \[\tilde{l}_{i} \leq l_{i}; \tag{74}\] \[\left|P\setminus x(o_{i},\tilde{l}_{i})\right| \leq(\gamma+5\delta)n. \tag{75}\] Proof: Let \(B\) denote the set of sampled \(n^{\prime\prime}\) points from \(P\). By using the same manner of Claim 3, we know that there exists a value \(\tilde{l}^{\prime}_{i}>0\) satisfying \(\left|P\setminus x(o_{i},\tilde{l}^{\prime}_{i})\right|=\frac{(\gamma+\delta) ^{2}}{\gamma-\delta}\gamma n\). Similar to the proof of Lemma 8, we can prove that \(\tilde{l}_{i}\in[\tilde{l}^{\prime}_{i},l_{i}]\). Due to Property 1, we know that \(x(o_{i},\tilde{l}_{i})\) is "sandwiched" by the two shapes \(x(o_{i},\tilde{l}^{\prime}_{i})\) and \(x(o_{i},l_{i})\). Further, since \(x(o_{i},\tilde{l}^{\prime}_{i})\) is covered by \(x(o_{i},\tilde{l}_{i})\), we have \[\left|P\setminus x(o_{i},\tilde{l}_{i})\right|\leq\left|P\setminus x (o_{i},\tilde{l}^{\prime}_{i})\right|=\frac{(\gamma+\delta)^{2}}{\gamma-\delta }\gamma n=(\gamma+5\delta)n, \tag{76}\] where the last equality comes from the assumption \(\delta<\gamma/3\). So (74) and (75) are true. By using Lemma 11 and Lemma 12, we study several applications in the following subsections. ### \(k\)-Center Clustering with Outliers Let \(\gamma\in(0,1)\). Given a set \(P\) of \(n\) points in \(\mathbb{R}^{d}\), the problem of \(k\)**-center clustering with outliers** is to find \(k\) balls to cover \((1-\gamma)n\) points, and the maximum radius of the balls is minimized (_w.l.o.g.,_ we can assume that the \(k\) balls have the same radius). Given an instance \((P,\gamma)\), let \(\{C_{1},\cdots,C_{k}\}\) be the \(k\) clusters forming \(P_{\mathrm{opt}}\) (the subset of \(P\) yielding the optimal solution), and \(r_{\mathrm{opt}}\) be the optimal radius; that is, each \(C_{j}\) is covered by an individual ball with radius \(r_{\mathrm{opt}}\). Similar to Section 5.2, we first introduce a linear time algorithm, and then show how to modify it to be sublinear time by using Lemma 11 and 12. **Linear time algorithm.** Our algorithm in Section 5.2.1 can be generalized to be a linear time bi-criteria algorithm for the problem of \(k\)-center clustering with outliers, if \(k\) is assumed to be a constant. Our idea is as follows. In Algorithm 4, we maintain a set \(T\) as the core-set of \(P_{\mathrm{opt}}\); here, we instead maintain \(k\) sets \(T_{1},T_{2},\cdots,T_{k}\) as the core-sets of \(C_{1},C_{2},\cdots,C_{k}\), respectively. Consequently, each \(T_{j}\) for \(1\leq j\leq k\) has an approximate MEB center \(o_{i}^{j}\) in the \(i\)-th round of Step 3, and we let \(O_{i}=\{o_{i}^{1},\cdots,o_{i}^{k}\}\). Initially, \(O_{0}\) and \(T_{j}\) for \(1\leq j\leq k\) are all empty; we randomly select a point \(p\in P\), and with probability \(1-\gamma\), \(p\in P_{\mathrm{opt}}\) (w.l.o.g., we assume \(p\in C_{1}\) and add it to \(T_{1}\); thus \(O_{1}=\{p\}\) after this step). 
We let \(Q\) be the set of farthest \(t=(\delta+\gamma)n\) points to \(O_{i}\), and \(l_{i}\) be the \((t+1)\)-th largest distance from \(P\) to \(O_{i}\) (the distance from a point \(p\in P\) to \(O_{i}\) is \(\min_{1\leq j\leq k}||p-o_{i}^{j}||\)). Then, we randomly select a point \(q\in Q\), and with probability \(\frac{\delta}{\gamma+\delta}\), \(q\in P_{\mathrm{opt}}\) (as (46) in Lemma 9). For ease of presentation, we assume that \(q\in P_{\mathrm{opt}}\) happens and we have an "oracle" to guess which optimal cluster \(q\) belongs to, say \(q\in C_{j_{q}}\); then, we add \(q\) to \(T_{j_{q}}\) and update the approximate MEB center of \(T_{j_{q}}\). Since each optimal cluster \(C_{j}\) for \(1\leq j\leq k\) has the core-set with size \(\frac{2}{\epsilon}+1\) (by setting \(s=\frac{\epsilon}{2+\epsilon}\) in Theorem 1), after adding at most \(k(\frac{2}{\epsilon}+1)\) points, the distance \(l_{i}\) will be smaller than \((1+\epsilon)r_{\mathrm{opt}}\). Consequently, a \((1+\epsilon,1-\delta)\)-approximation solution is obtained when \(i\geq k(\frac{2}{\epsilon}+1)\). **Note** that some "small" clusters could be missing from the above random sampling based approach and therefore \(|O_{i}|\) could be less than \(k\); however, it always can be guaranteed that the total number of missing inliers is at most \(\delta n\), _i.e.,_ a \((1+\epsilon,1-\delta)\)-approximation is always guaranteed (otherwise, the ratio \(\frac{|P_{\mathrm{opt}}\cap Q|}{|Q|}>\frac{\delta}{\gamma+\delta}\) and we can continue to sample a point from \(P_{\mathrm{opt}}\) and then update \(O_{i}\)). To remove the oracle for guessing the cluster containing \(q\), we can enumerate all the possible \(k\) cases; since we add \(k(\frac{2}{\epsilon}+1)\) points to \(T_{1},T_{2},\cdots,T_{k}\), it generates \(k^{k(\frac{2}{\epsilon}+1)}=2^{k\log k(\frac{2}{\epsilon}+1)}\) solutions in total, and at least one yields a \((1+\epsilon,1-\delta)\)-approximation with probability \((1-\gamma)(\frac{\delta}{\gamma+\delta})^{k(\frac{2}{\epsilon}+1)}\) (by the same manner for proving Theorem 5). Theorem 11: _Let \((P,\gamma)\) be an instance of \(k\)-center clustering with outliers. Given two parameters \(\epsilon,\delta\in(0,1)\), there exists an algorithm that outputs a \((1+\epsilon,1-\delta)\)-approximation with probability \((1-\gamma)(\frac{\delta}{\gamma+\delta})^{k(\frac{2}{\epsilon}+1)}\). The running time is \(O(2^{k\log k(\frac{2}{\epsilon}+1)}(n+\frac{1}{\epsilon^{5}})d)\)._ _If one repeatedly runs the algorithm \(O(\frac{1}{1-\gamma}(\frac{\gamma+\delta}{\delta})^{k(\frac{2}{\epsilon}+1)})\) times, with constant probability, the algorithm outputs a \((1+\epsilon,1-\delta)\)-approximation solution._ Similar to our discussion on the running time for MEB with outliers in Section 5.2.1, Badoiu _et al._[22] also achieved a linear time bi-criteria approximation for the \(k\)-center clustering with outliers problem (see Section 4 in their paper). However, the hidden constant of their running time is exponential in \((\frac{k}{\epsilon\delta})^{O(1)}\) that is much larger than "\(k\log k(\frac{2}{\epsilon}+1)\)" in Theorem 11. **Sublinear time algorithm.** The linear time algorithm can be further improved to be sublinear time; the idea is similar to that for designing sublinear time algorithm for MEB with outliers in Section 5.2.2. 
First, we follow Definition 7 and define the shape set \(\mathcal{X}\), where each \(x\in\mathcal{X}\) is union of \(k\) balls in the space; the center \(c(x)\) should be the set of its \(k\) ball centers, say \(c(x)=\{o_{x}^{1},o_{x}^{2},\cdots,o_{x}^{k}\}\), and the size \(s(x)\) is the radius, _i.e.,_\(x=\cup_{j=1}^{k}\mathbb{B}(o_{x}^{j},s(x))\). Obviously, if \(x\) is a feasible solution for the instance \((P,\gamma)\), the size \(\left|P\cap(\cup_{j=1}^{k}\mathbb{B}(o_{x}^{j},s(x)))\right|\) should be at least \((1-\gamma)n\). Also, define the distance function \(f(c(x),p)=\min_{1\leq j\leq k}||p-o_{x}^{j}||\). It is easy to verify that the shape set \(\mathcal{X}\) satisfies Property 1, 2, and 3. From Lemma 11, we know that it is possible to obtain a point in \(P_{\mathrm{opt}}\cap Q\) with probability \((1-\eta_{1})\frac{\delta}{3(\gamma+\delta)}\). Further, we can estimate the value \(l_{i}\) and select the best candidate solution based on Lemma 12. Overall, we have the following theorem. Theorem 7.1: _Let \((P,\gamma)\) be an instance of \(k\)-center clustering with outliers. Given the parameters \(\epsilon,\delta,\eta_{1},\eta_{2}\in(0,1)\), there exists an algorithm that outputs a \((1+\epsilon,1-\delta)\)-approximation with probability \((1-\gamma)\big{(}(1-\eta_{1})(1-\eta_{2})\frac{\delta}{3(\gamma+\delta)} \big{)}^{k(\frac{2}{\epsilon}+1)}\). The running time is \(\tilde{O}(2^{k\log k(\frac{2}{\epsilon}+1)}(\frac{\gamma}{\delta^{2}}+\frac {1}{\epsilon^{5}})d)\)._ _If one repeatedly runs the algorithm \(N=O\Big{(}\frac{1}{1-\gamma}\big{(}\frac{1}{1-\eta_{1}}(\frac{3(\gamma+\delta )}{\delta})\big{)}^{k(\frac{2}{\epsilon}+1)}\Big{)}\) times with setting \(\eta_{2}=O(\frac{1}{2^{k\log k(\frac{2}{\epsilon}+1)}N})\), with constant probability, the algorithm outputs a \((1+\epsilon,1-\delta)\)-approximation solution._ ### Flat Fitting with Outliers Let \(j\) be a fixed integer between \(0\) and \(d\). Given a \(j\)-dimensional flat \(\mathcal{F}\) and a point \(p\in\mathbb{R}^{d}\), we define their distance, \(dist(\mathcal{F},p)\), to be the Euclidean distance from \(p\) to its projection onto \(\mathcal{F}\). Let \(P\) be a set of \(n\) points in \(\mathbb{R}^{d}\). The problem of **flat fitting** is to find the \(j\)-dimensional flat \(\mathcal{F}\) that minimizes \(\max_{p\in P}dist(\mathcal{F},p)\). It is easy to see that the MEB problem is the case \(j=0\) of the flat fitting problem. Furthermore, given a parameter \(\gamma\in(0,1)\), the **flat fitting with outliers** problem is to find a subset \(P^{\prime}\subset P\) with size \((1-\gamma)n\) such that \(\max_{p\in P^{\prime}}dist(\mathcal{F},p)\) is minimized. Similar to MEB with outliers, we also use \(P_{\mathrm{opt}}\) to denote the optimal subset. Before presenting our algorithms for flat fitting with outliers, we first introduce the linear time algorithm from Har-Peled and Varadarajan [56] for the vanilla version (without outliers). We start from the case \(j=1\), _i.e.,_ the flat \(\mathcal{F}\) is a line in the space. Roughly speaking, their algorithm is an iterative procedure to update the solution round by round, until it is close enough to the optimal line \(l_{\mathrm{opt}}\). There are two parts in the algorithm. 
**(1)** It picks an arbitrary point \(p_{\Delta}\in P\) and let \(q_{\Delta}\) be the farthest point of \(P\) from \(p_{\Delta}\); it can be proved that the line passing through \(p_{\Delta}\) and \(q_{\Delta}\), denoted as \(l_{0}\), is a good initial solution that yields a 4-approximation with respect to the objective function. **(2)** In each of the following rounds, the algorithm updates the solution from \(l_{i-1}\) to \(l_{i}\) where \(i\geq 1\) is the current number of rounds: let \(p_{i}\) be the farthest point of \(P\) from \(l_{i-1}\) and let \(h_{i}\) denote the 2-dimensional flat spanned by \(p_{i}\) and \(l_{i-1}\); then the algorithm computes a set of \(O(\frac{1}{\epsilon^{8}}\log^{2}\frac{1}{\epsilon})\) lines on \(h_{i}\), and picks one of them as \(l_{i}\) via an "oracle". They proved that the improvement from \(l_{i-1}\) to \(l_{i}\) is significant enough; thus, after running \(\nu=O(\frac{1}{\epsilon^{3}}\log\frac{1}{\epsilon})\) rounds, it is able to achieve a \((1+\epsilon)\)-approximation. To remove the "oracle", the algorithm can enumerate all the \(O(\frac{1}{\epsilon^{8}}\log^{2}\frac{1}{\epsilon})\) lines on \(h_{i}\), and thus the total running time is \(O\big{(}2^{\frac{1}{\epsilon^{8}}\log^{2}\frac{1}{\epsilon}}nd\big{)}\). **Linear time algorithm.** Now we consider to adapt the above algorithm to the case with outliers, where in fact the idea is similar to the idea proposed in Section 5.2.1 for MEB with outliers. For simplicity, we still use the same notations as above. Consider the part **(1)** first. If we randomly pick a point \(p_{\Delta}\) from \(P\), with probability \(1-\gamma\), it belongs to \(P_{\mathrm{opt}}\); further, we randomly pick a point, denoted as \(q_{\Delta}\), from the set of \((\delta_{0}+\gamma)n\) farthest points of \(P\) from \(p_{\Delta}\), where the value of \(\delta_{0}\) will be determined below. Obviously, with probability \(\frac{\delta_{0}}{\gamma+\delta_{0}}\), \(q_{\Delta}\in P_{\mathrm{opt}}\). Denote by \(P_{0}=\{p\in P_{\mathrm{opt}}\mid||p-p_{\Delta}||\leq||q_{\Delta}-p_{\Delta}||\}\). Then we have the following lemma. Lemma 13: _Denote by \(l_{0}\) the line passing through \(p_{\Delta}\) and \(q_{\Delta}\). Then, with probability \((1-\gamma)\big{(}\frac{\delta_{0}}{\gamma+\delta_{0}}\big{)}\),_ \[\max_{p\in P_{0}}dist(l_{0},p)\leq 4\max_{p\in P_{0}}dist(l_{opt},p)\leq 4\max_{p \in P_{\mathrm{opt}}}dist(l_{opt},p). \tag{77}\] _Also, the size of \(P_{0}\) is at least \(\big{(}1-(\delta_{0}+\gamma)\big{)}n\)._ It is straightforward to obtain the size of \(P_{0}\). The inequality (77) directly comes from the aforementioned result of [56], as long as \(p_{\Delta}\) and \(q_{\Delta}\in P_{\mathrm{opt}}\). So we can use the line \(l_{0}\) as our initial solution. Then, we can apply the same random sampling idea to select the point \(p_{i}\) in the \(i\)-th round. Namely, we randomly pick a point as \(p_{i}\) from the set of \((\delta_{0}+\gamma)n\) farthest points of \(P\) from \(l_{i}\). Moreover, we need to shrink the set \(P_{i-1}\) to \(P_{i}=\{p\in P_{i-1}\mid dist(l_{i-1},p)\leq dist(l_{i-1},p_{i})\}\). Similar to Lemma 13, we can show that the improvement from \(l_{i-1}\) to \(l_{i}\) is significant enough with probability \((1-\gamma)(\frac{\delta_{0}}{\gamma+\delta_{0}})^{i+1}\), and the size of \(P_{i}\) is at least \(\big{(}1-((i+1)\delta_{0}+\gamma)\big{)}n\). 
After running \(\nu\) rounds, we obtain the line \(l_{\nu}\) such that \(\max_{p\in P_{\nu}}dist(l_{\nu},p)\leq(1+\epsilon)\max_{p\in P_{\mathrm{opt}}} dist(l_{\mathrm{opt}},p)\), and \(|P_{\nu}|\geq\big{(}1-((\nu+1)\delta_{0}+\gamma)\big{)}n\). So if we set \(\delta_{0}=\frac{\delta}{\nu+1}\) with a given \(\delta\in(0,1)\), the line \(l_{\nu}\) will be a bi-criteria \((1+\epsilon,1-\delta)\)-approximation of the instance \((P,\gamma)\). By using the idea in [56], we can extend the result to the case \(j>1\) with \(\nu=\frac{e^{O(j^{2})}}{\epsilon^{2j+1}}\log\frac{1}{\epsilon}\). We refer the reader to [56] for more details. Theorem 7.1: _Let \((P,\gamma)\) be an instance of \(j\)-dimensional flat fitting with outliers. Given two parameters \(\epsilon,\delta\in(0,1)\), there exists an algorithm that outputs a \((1+\epsilon,1-\delta)\)-approximation with probability \((1-\gamma)\big{(}\frac{1}{2}\big{)}^{g(j,\epsilon)}\) where \(g(j,\epsilon)=poly(e^{O(j^{2})},\frac{1}{\epsilon^{j}})\). The running time is \(O(2^{g^{\prime}(j,\epsilon)}nd)\) where \(g^{\prime}(j,\epsilon)=poly(e^{O(j^{2})},\frac{1}{\epsilon^{j}})\)._ _If one repeatedly runs the algorithm \(\frac{2^{g(j,\epsilon)}}{1-\gamma}\) times, with constant probability, the algorithm outputs a \((1+\epsilon,1-\delta)\)-approximation solution._ **Sublinear time algorithm.** We can view the flat fitting with outliers problem as an MEX with outliers problem. Let \(r\geq 0\) and \(\mathcal{F}\) be a \(j\)-dimensional flat. Then we can define a \(j\)-dimensional "slab" \(SL(\mathcal{F},r)=\{p\in\mathbb{R}^{d}\mid dist(\mathcal{F},p)\leq r\}\), where its "center" and "size" are \(\mathcal{F}\) and \(r\) respectively (_e.g.,_ a ball is a \(0\)-dimensional slab); the distance function \(f(\mathcal{F},p)=dist(\mathcal{F},p)\). It is easy to see that the shape set of slabs satisfies Property 1, 2, and 3. Furthermore, finding the optimal flat is equivalent to finding the smallest slab covering \((1-\gamma)n\) points of \(P\). Therefore, by using Lemma 11 and 12, we achieve the following theorem. Theorem 7.2: _Let \((P,\gamma)\) be an instance of \(j\)-dimensional flat fitting with outliers. Given the parameters \(\epsilon,\delta,\eta_{1},\eta_{2}\in(0,1)\), there exists an algorithm that outputs a \((1+\epsilon,1-\delta)\)-approximation with probability \((1-\gamma)\big{(}(1-\eta_{1})(1-\eta_{2})\frac{\delta}{3(\gamma+\delta)} \big{)}^{g(j,\epsilon)}\) where \(g(j,\epsilon)=poly(e^{O(j^{2})},\frac{1}{\epsilon^{j}})\). The running time is \(O(2^{g^{\prime}(j,\epsilon,\delta,\gamma)}d)\) where \(g^{\prime}(j,\epsilon)=poly(e^{O(j^{2})},\frac{1}{\epsilon^{j}},\frac{1}{ \delta},\frac{1}{\gamma})\)._ _If one repeatedly runs the algorithm \(N=O\Big{(}\frac{1}{1-\gamma}\big{(}\frac{1}{1-\eta_{1}}\big{(}\frac{3(\gamma+ \delta)}{\delta}\big{)}\big{)}^{g(j,\epsilon)}\Big{)}\) times with setting \(\eta_{2}=O(\frac{1}{2^{g(j,\epsilon)}N})\), with constant probability, the algorithm outputs a \((1+\epsilon,1-\delta)\)-approximation solution._ ### One-class SVM with Outliers In practice, datasets often contain outliers. The separating margin of SVM could be considerably deteriorated by outliers. As mentioned in [40], most of existing techniques [88, 93] for SVM outliers removal are numerical approaches (_e.g.,_ adding some penalty item to the objective function), and only can guarantee local optimums. Ding and Xu [40] modeled SVM with outliers as a combinatorial optimization problem and provided an algorithm called "Random Gradient Descent Tree". 
We focus on one-class SVM with outliers first, and explain the extension for two-class SVM with outliers in Section 7.4. Below is the definition of the one-class SVM with outliers problem proposed in [40]. Definition 9 (One-class SVM with Outliers): Given a set \(P\) of \(n\) points in \(\mathbb{R}^{d}\) and a small parameter \(\gamma\in(0,1)\), the one-class SVM with outliers problem is to find a subset \(P^{\prime}\subset P\) _with size \((1-\gamma)n\) and a hyperplane \(\mathcal{H}\) separating the origin \(o\) and \(P^{\prime}\), such that the distance between \(o\) and \(\mathcal{H}\) is maximized._ **Linear time algorithm.** We briefly overview the algorithm of [40]. They also considered the "bi-criteria approximation" with two small parameters \(\epsilon,\delta\in(0,1)\): a hyperplane \(\mathcal{H}\) separates the origin \(o\) and a subset \(P^{\prime}\subset P\) with size \(\big{(}1-\delta-\gamma\big{)}n\), where the distance between \(o\) and \(\mathcal{H}\) is at least \((1-\epsilon)\) of the optimum. The idea of [40] is based on the fact that the SVM (without outliers) problem is equivalent to the polytope distance problem in computational geometry [48]. _Let \(o\) be the origin and \(P\) be a given set of points in \(\mathbb{R}^{d}\). The **polytope distance problem** is to find a point \(q\) inside the convex hull of \(P\) so that the distance \(||q-o||\) is minimized._ For an instance \(P\) of one-class SVM, it can be proved that the vector \(q_{opt}-o\), if \(q_{opt}\) is the optimal solution for the polytope distance between \(o\) and \(P\), is the normal vector of the optimal hyperplane. We refer the reader to [40, 48] for more details. The polytope distance problem can be efficiently solved by _Gilbert Algorithm_[46, 49]. For completeness, we present it in Algorithm 8. ``` Input: A point-set \(P\) in \(\mathbb{R}^{d}\), and \(N\in\mathbb{Z}^{+}\). Output:\(v_{i}\) as an approximate solution of the polytope distance between the origin and \(P\). 1. Initialize \(i=1\) and \(v_{1}\) to be the closest point in \(P\) to the origin \(o\). 2. Iteratively perform the following steps until \(i=N\). 1. Find the point \(p_{i}\in P\) whose orthogonal projection on the supporting line of segment \(\overline{ov_{i}}\) has the closest distance to \(o\) (called the projection distance of \(p_{i}\)), _i.e.,_\(p_{i}=\arg\min_{p\in P}\{\frac{\langle p,v_{i}\rangle}{||v_{i}||}\}\), where \(\langle p,v_{i}\rangle\) is the inner product of \(p\) and \(v_{i}\) (see Figure 6). 2. Let \(v_{i+1}\) be the point on segment \(\overline{v_{i}p_{i}}\) closest to the origin \(o\); update \(i=i+1\). ``` **Algorithm 8** Gilbert Algorithm [40, 49] Similar to the core-set construction method of MEB in Section 2.1, the algorithm also greedily improves the current solution by selecting some point \(p_{i}\) in each iteration. Let \(\rho\) be the polytope distance between \(o\) and \(P\), \(D=\max_{p,q\in P}||p-q||\), and \(E=\frac{D^{2}}{\rho^{2}}\). Given \(\epsilon\in(0,1)\), it has been proved that a \((1-\epsilon)\)-approximation of one-class SVM (_i.e.,_ a separating margin with the width at least \((1-\epsilon)\) of the optimum) can be achieved by running Algorithm 8 at most \(2\lceil 2E/\epsilon\rceil\) steps [29, 48]. 
To handle outliers, the algorithm of [40] follows the similar intuition of Section 5.2.1; it replaces the step of greedily selecting the point \(p_{i}\) by randomly sampling a point from a set \(Q\), which contains the \((\delta+\gamma)n\) points having the smallest projection distances (_i.e.,_ the values of the function \(\frac{\langle p,v_{i}\rangle}{||v_{i}||}\) in Step 2(a) of Algorithm 8). To achieve a \((1-\epsilon,1-\delta)\)-approximation with constant success probability, the algorithm takes \(O\big{(}\frac{1}{1-\gamma}(1+\frac{\gamma}{\delta})^{z}\frac{D^{2}}{\epsilon \rho^{2}}nd\big{)}\) time, where \(z=O(\frac{D^{2}}{\epsilon\rho^{2}})\). **Sublinear time algorithm.** We define \(\mathcal{X}\) to be the set of all the closed half-spaces not covering the origin \(o\) in \(\mathbb{R}^{d}\); for each \(x\in\mathcal{X}\), let \(\mathcal{H}_{x}\) be the hyperplane enclosing \(x\) and let Figure 6: An illustration of step 2 in Algorithm 8; \(p_{i}\mid_{v_{i}}\) is the projection of \(p_{i}\) on \(\overline{ov_{i}}\). be the projection of \(o\) on \(\mathcal{H}_{x}\) (see Figure 7). We suppose that the given instance \((P,\gamma)\) has feasible solution. That is, there exists at least one half-space \(x\in\mathcal{X}\) that the hyperplane \(\mathcal{H}_{x}\) separates the origin \(o\) and a subset \(P^{\prime}\) with size \((1-\gamma)n\). We define the center \(c(x)=\frac{h_{x}}{||h_{x}||}\); since the MEX with outlier problem in Definition 7 is a minimization problem, we design the size function \(s(x)=\frac{1}{||h_{x}||}\). Obviously, a \((1-\epsilon)\)-approximation of the SVM with outliers problem is equivalent to a \(\frac{1}{1-\epsilon}\)-approximation with respect to the size function \(s(x)\). We also define the distance function \(f(c(x),p)=-\langle p,\frac{h_{x}}{||h_{x}||}\rangle\). It is easy to verify that the shape set \(\mathcal{X}\) satisfies Property 1, 2, and 3. Recall that Algorithm 8 selects the point \(p_{i}=\arg\min_{p\in P}\{\frac{\langle p,v_{i}\rangle}{||v_{i}||}\}\) in each iteration. Actually, the vector \(\frac{v_{i}}{||v_{i}||}\) can be viewed as a shape center and \(p_{i}\) is the farthest point to \(\frac{v_{i}}{||v_{i}||}\) based on the distance function \(f(c(x),p)\). Moreover, the set \(Q\) mentioned in the previous linear time algorithm actually is the set of the farthest \((\delta+\gamma)n\) points from \(P\) to \(\frac{v_{i}}{||v_{i}||}\). Consequently, we can apply Lemma 11 to sample a point from \(P_{opt}\cap Q\), and apply Lemma 12 to estimate the value of \(l_{i}\) for each candidate solution \(\frac{v_{i}}{||v_{i}||}\). Overall, we can improve the running time of the algorithm of [40] to be independent of \(n\). Theorem 7.1: _Let \((P,\gamma)\) be an instance of SVM with outliers. Given the parameters \(\epsilon,\delta,\eta_{1},\eta_{2}\in(0,1)\), there exists an algorithm that outputs a \((1-\epsilon,1-\delta)\)-approximation with probability \((1-\gamma)\big{(}(1-\eta_{1})(1-\eta_{2})\frac{\delta}{3(\gamma+\delta)}\big{)} ^{z}\) where \(z=O(\frac{D^{2}}{\epsilon\rho^{2}})\). 
The running time is \(\tilde{O}\big{(}\frac{D^{2}\gamma}{2\epsilon^{2}\rho^{2}}d\big{)}\)._ _If one repeatedly runs the algorithm \(N=O\Big{(}\frac{1}{1-\gamma}\big{(}\frac{1}{1-\eta_{1}}(3+\frac{3\gamma}{ \delta})\big{)}^{z}\Big{)}\) times with setting \(\eta_{2}=O(\frac{1}{zN})\), with constant probability, the algorithm outputs a \((1-\epsilon,1-\delta)\)-approximation solution._ ### Two-class SVM with Outliers Below is the definition of the two-class SVM with outliers problem proposed in [40]. Definition 10 (Two-class SVM with Outliers): Given two point sets \(P_{1}\) and \(P_{2}\) in \(\mathbb{R}^{d}\) and two small parameters \(\gamma_{1},\gamma_{2}\in(0,1)\), the two-class SVM with outliers problem is to find two subsets \(P^{\prime}_{1}\subset P_{1}\) and \(P^{\prime}_{2}\subset P_{2}\) with \(|P^{\prime}_{1}|=(1-\gamma_{1})|P_{1}|\) and \(|P^{\prime}_{2}|=(1-\gamma_{2})|P_{2}|\), and a margin separating \(P^{\prime}_{1}\) and \(P^{\prime}_{2}\), such that the width of the margin is maximized. We use \(P^{opt}_{1}\) and \(P^{opt}_{2}\), where \(|P^{opt}_{1}|=(1-\gamma_{1})|P_{1}|\) and \(|P^{opt}_{2}|=(1-\gamma_{2})|P_{2}|\), to denote the subsets of \(P_{1}\) and \(P_{2}\) which are separated by the optimal margin. The ordinary two-class SVM (without outliers) problem is equivalent to computing the polytope distance between the origin \(o\) and \(\mathcal{M}(P_{1},P_{2})\), where \(\mathcal{M}(P_{1},P_{2})\) is the Minkowski difference of \(P_{1}\) and \(P_{2}\)[48]. Note that it is not necessary to compute the set \(\mathcal{M}(P_{1},P_{2})\) explicitly. Instead, Algorithm 8 only needs to select one point from \(\mathcal{M}(P_{1},P_{2})\) in each iteration, and overall the running time is still linear in the input size. To deal with two-class SVM with outliers, Ding and Xu [40] slightly modified their algorithm for the case of one-class. In each iteration, it considers two subsets \(Q_{1}\subset P_{1}\) and \(Q_{2}\subset P_{2}\), which respectively consist of points having the \((\delta+\gamma_{1})|P_{1}|\) smallest projection distances among all points in \(P_{1}\) and the \((\delta+\gamma_{2})|P_{2}|\) largest projection distances among all points in \(P_{2}\) on the vector \(v_{i}\); then, the algorithm randomly selects two points \(p_{i}^{1}\in Q_{1}\) and \(p_{i}^{2}\in Q_{2}\), and their difference vector \(p_{i}^{2}-p_{i}^{1}\) will serve as the role of \(p_{i}\) in Step 2(a) of Algorithm 8 to update the current solution \(v_{i}\). This approach yields a \((1-\epsilon,1-\delta)\)-approximation in linear time. To improve the algorithm to be sublinear, we need several modifications on our previous idea for the case of one-class. First, we change the distance function to be: \[f(p,c)=\left\{\begin{array}{ll}-\langle p,\frac{h_{x}}{\|h_{x}\|}\rangle& \mbox{if $p\in P_{1}$;}\\ \langle p,\frac{h_{x}}{\|h_{x}\|}\rangle&\mbox{if $p\in P_{2}$.}\end{array}\right.\] By using this new distance function, we can apply Lemma 11 to obtain the points \(p_{i}^{1}\in Q_{1}\cap P_{2}^{opt}\) and \(p_{i}^{2}\in Q_{2}\cap P_{2}^{opt}\) separately in sublinear time. 
Given a vector (_i.e.,_ candidate center) \(\frac{v_{i}}{\|v_{i}\|}\), assume \(\mathcal{H}^{\perp}\) and \(\mathcal{H}^{\top}\) are the parallel hyperplanes orthogonal to \(\frac{v_{i}}{\|v_{i}\|}\) that the margin formed by them separates \(P_{1}^{\prime}\) and \(P_{2}^{\prime}\), where \(P_{1}^{\prime}\subset P_{1}\) and \(P_{2}^{\prime}\subset P_{2}\) with \(|P_{1}^{\prime}|=(1-\gamma_{1})|P_{1}|\) and \(|P_{2}^{\prime}|=(1-\gamma_{2})|P_{2}|\). Without loss of generality, we assume that the origin \(o\) is inside the margin. Suppose that the distances from \(o\) to \(\mathcal{H}^{\perp}\) and \(\mathcal{H}^{\top}\) are \(s^{\perp}\) and \(s^{\top}\), respectively. Then, we obtain two shapes (closed half-spaces) \(x^{\perp}=(-\frac{v_{i}}{\|v_{i}\|},\frac{1}{s^{\perp}})\) and \(x^{\top}=(\frac{v_{i}}{\|v_{i}\|},\frac{1}{s^{\top}})\) with \(P_{1}^{\prime}\subset x^{\perp}\) and \(P_{2}^{\prime}\subset x^{\top}\). Consequently, we can apply Lemma 12 twice to obtain two values \(\frac{1}{\tilde{s}^{\perp}}\leq\frac{1}{s^{\perp}}\) and \(\frac{1}{\tilde{s}^{\top}}\leq\frac{1}{s^{\top}}\) with \(\left|P_{1}\setminus x(-\frac{v_{i}}{\|v_{i}\|},\frac{1}{\tilde{s}^{\perp}}) \right|\leq(O(\delta)+\gamma_{1})|P_{1}|\) and \(\left|P_{2}\setminus x(\frac{v_{i}}{\|v_{i}\|},\frac{1}{\tilde{s}^{\top}}) \right|\leq(O(\delta)+\gamma_{2})|P_{2}|\). Therefore, we can use the value \(\tilde{s}^{\perp}+\tilde{s}^{\top}\) as an estimation of \(s^{\perp}+s^{\top}\). See Figure 8 for an illustration. Overall, we can achieve a \((1-\epsilon,1-O(\delta))\)-approximation in sublinear time. ## 8 Future Work Following our work, several interesting problems deserve to be studied in future. For example, different from radius approximation, the current research on covering approximation of MEB is still inadequate. In particular, can we provide a lower bound for the complexity of computing covering approximate MEB, as the lower bound result for radius approximate MEB proved by [30]? Also, is it possible to extend the stability notion to other geometric optimization problems with more complicated structures? In Section 7, we only provide the bi-criteria approximations for the MEX with outliers problems. So it is interesting to consider to extend the stability notion to these geometric optimization problems, and then we can design the hybrid approximation algorithms for them. Acknowledgements The research of this work was supported in part by National Key R&D program of China through grant 2021YFA1000900 and the Provincial NSF of Anhui through grant 2208085MF163. The author also want to thank Prof. Jinhui Xu for his helpful comments on this draft.
2307.06998
Iso-entangled bases and joint measurements
While entanglement between distant parties has been extensively studied, entangled measurements have received relatively little attention despite their significance in understanding non-locality and their central role in quantum computation and networks. We present a systematic study of entangled measurements, providing a complete classification of all equivalence classes of iso-entangled bases for projective joint measurements on 2 qubits. The application of this classification to the triangular network reveals that the Elegant Joint Measurement, along with white noise, is the only measurement resulting in output permutation invariant probability distributions when the nodes are connected by Werner states. The paper concludes with a discussion of partial results in higher dimensions.
Flavio Del Santo, Jakub Czartowski, Karol Życzkowski, Nicolas Gisin
2023-07-13T18:00:51Z
http://arxiv.org/abs/2307.06998v1
# Iso-entangled bases and joint measurements ###### Abstract While entanglement between distant parties has been extensively studied, entangled measurements have received relatively little attention despite their significance in understanding non-locality and their central role in quantum computation and networks. We present a systematic study of entangled measurements, providing a complete classification of all equivalence classes of iso-entangled bases for projective joint measurements on 2 qubits. The application of this classification to the triangular network reveals that the Elegant Joint Measurement, along with white noise, is the only measurement resulting in output permutation invariant probability distributions when the nodes are connected by Werner states. The paper concludes with a discussion of partial results in higher dimensions. ## I Introduction In 1935, Schrodinger stated that entanglement is not one but rather _the_ characteristic trait of quantum mechanics. Indeed, today it is well known that entanglement is not only necessary for the violation of celebrated Bell inequalities--disproving local hidden variables--but for most of the applications in quantum information science such as security proofs of quantum cryptography or quantum teleportation, to name but a few examples. Entanglement is sometimes called the "quantum teleportation channel". However, this overlooks the fact that entanglement plays a dual role in this fascinating process: first as the channel connecting the distant parties, indeed, but also in the joint measurement that triggers the teleportation process [1]. Similarly, these joint measurements are at the heart of entanglement swapping [2] and dense coding [3]. Formally, they are represented in quantum theory by self-adjoint operators which in turn are characterized by their eigenvectors. When these eigenvectors are entangled, one says that the measurement is entangled. For example, in the best known joint measurement, the eigenvectors are the Bell states which are all maximally entangled. Entanglement between distant parties, traditionally named Alice and Bob, is by now well-studied and understood. However, entangled measurements received so far relatively little attention [4; 5] and have never been studied in a systematic manner. This is somewhat surprising and disappointing, given their central role in quantum computation [6] and quantum networks [7]. In fact, it has been recently pointed out that understanding entangled measurements is one of the most interesting future directions in the foundations of quantum physics [8]. Studying entanglement beyond maximal value can lead to novel understanding and applications. Indeed, it is known by now that maximally entangled states are not always the best resource for quantum information tasks: non-maximally entangled quantum states in general outperform maximally entangled ones in most measures of non-locality, such as Bell inequalities, entanglement simulation with communication, the detection loophole and quantum cryptography [9; 10]. While this has not been investigated nearly as thoroughly for joint measurements, it has been shown that non-maximally entangled measurements represent stronger resources for certain tasks, such as the violation of bilocality [11]. In this paper, we provide the first systematic study of entangled measurements for the simplest case. 
The problem is known to be difficult in full generality, hence we assume that all the eigenvectors have the same degree of entanglement, i.e., they form an iso-entangled basis (previous works on non-maximally entangled joint measurements and iso-entangled bases are [11; 12; 13; 14; 15; 16; 17]). Moreover, we mostly limit our analysis to projective joint measurements on 2 qubits. Here, we give a complete classification of all iso-entangled bases of 2 qubits, up to the natural equivalence relation of local unitary rotations and swapping of the qubits. Next, we apply our parametrization to the triangular network and prove that the Elegant Joint Measurement (and white noise) is the only measurement that leads to output permutation invariant probability distributions when the nodes are connected by identical Werner states. Finally, we discuss partial results in higher dimensions. ## Complete classification of all equivalence classes of iso-entangled bases of 2 qubits Consider measurements on two qubits, i.e., the partition of the Hilbert space \(\mathcal{C}^{4}=\mathcal{C}^{2}\otimes\mathcal{C}^{2}\) is fixed. An iso-entangled basis is an orthonormal basis s.t. all 4 vectors \(\ket{\psi_{j}}\), \(j=1,\ldots 4\), have the same degree of entanglement. There are many measures of entanglement, but for pure bipartite states \(\rho_{AB}=\ket{\psi}\bra{\psi}_{AB}\) they are all equivalent [18]. We quantify the degree of entanglement by its tangle, equal to squared concurrence [19; 20; 21] \[\xi=2\left(1-\mathrm{Tr}(\rho_{A}^{2})\right)\in[0,1], \tag{1}\] where \(\rho_{A}=\mathrm{Tr}_{B}(\rho_{AB})\) is the reduced density matrix; this monotonically quantifies entanglement from 0 (separable states) to 1 (maximally entangled states). **Defintion 1** (Local equivalence of bases) Let us define the equivalence relation \(\sim\) : two bases \(B_{1}\) and \(B_{2}\) are equivalent iff they are identical under local unitaries, \(U_{i}\) (equivalently, local changes of basis), or identical under swap \(S_{A\leftrightarrow B}\) and local unitaries, i.e. \[B_{1}\sim B_{2}\Leftrightarrow B_{2}=(U_{A}\otimes U_{B})(\cdot)B_{1}P, \tag{2}\] with \((\cdot)\in\{\mathds{1},\mathrm{S}_{A\leftrightarrow B}\}\) and \(P\) an arbitrary permutation. Our goal is to find a parametrization of each family of equivalence classes. Starting with 12 real parameters for an arbitrary dephased orthonormal basis of \(\mathcal{C}^{4}\), we subtract \(3+3\) parameters for local changes of bases, and the 3 constraints that all 4 vectors have the same degree of entanglement. We thus expect an iso-entangled basis of two-qubits to depend in general on 3 parameters. Our main result consists in the following proposition: **Proposition 1** (Complete classification of iso-entangled bases of 2 qubits) All equivalence classes of iso-entangled bases on the space \(C\) with respect to the relation (2) constitute a three-dimensional manifold composed of two families, together with the closure of discontinuous submanifolds, given by three additional families of equivalence classes of smaller dimension. The specific functional form of the families is provided in Eq. (15) for the general family, (12) for the Bell family, and in Eqs. (9), (10) for the families of smaller dimensions. ### Constructive proof Let \(B\) be a matrix of order 4 whose columns are 4 basis vectors \(\{\ket{\psi_{1}},\ket{\psi_{2}},\ket{\psi_{3}},\ket{\psi_{4}}\}\) in \(\mathcal{C}^{4}=\mathcal{C}^{2}\otimes\mathcal{C}^{2}\). 
Let us write \(B\) in the following skewed basis (by applying local change of basis only), consisting only of product states, but in general different from the computational basis: \[\ket{0,0},\ket{0,1},\ket{1,\varphi},\ket{1,\varphi^{\perp}}, \tag{3}\] where \(\ket{\varphi}=\cos(\tau)\ket{0}+\sin(\tau)\ket{1}\) and \(\ket{\varphi^{\perp}}=\cos(\tau)\ket{1}-\sin(\tau)\ket{0}\). To simplify the derivation, we use the fact that any 2-dimensional subspace of \(\mathcal{C}^{4}\) contains at least one product state [22]. Imposing the orthonormality leads to the following parametrization of an arbitrary equivalence class of 2-qubit orthonormal bases (for derivation see SM, section A): \[B_{\ket{\varphi}}=\left(\begin{array}{cccc}0&0&-c\alpha\cdot e^{i\gamma}&s \alpha\cdot e^{i\gamma}\\ s\delta\cdot c\theta&c\delta\cdot c\theta&-s\alpha\cdot s\theta&-c\alpha\cdot s \theta\\ s\delta\cdot s\theta&c\delta\cdot s\theta&s\alpha\cdot c\theta&c\alpha\cdot c \theta\\ -c\delta\cdot e^{i\beta}&s\delta\cdot e^{i\beta}&0&0\end{array}\right), \tag{4}\] where we have introduced the compact notation \(c\delta=\cos\delta\), \(s\delta=\sin\delta\) and similarly for \(c\alpha\) and \(s\alpha\), and \(c\theta\) and \(s\theta\). The subscript \(\ket{\varphi}\) indicates that the coefficients are expressed in the basis provided in Eq. (3). As expected, this parametrization has 6 parameters: \(\alpha,\delta,\theta,\gamma,\beta\) and \(\tau\) (with \(\tau\) included implicitly in skewed basis (3)). By computing the tangle \(\xi_{j}\) (Eq. (1)) for each state \(\ket{\psi_{j}}\) of \(B\), we can now impose the constraints of iso-entanglement: \[\xi_{i}=\xi_{j},\quad\forall i,j\in\{1,2,3,4\}. \tag{5}\] Note that only 3 of these equations are independent, thus solving these constraints will lead to a parametrization depending on \(6-3=3\) parameters. The previous equations yield the following complete set of solutions: 1. \(\cos\theta=0\), or 2. \(\sin 2\theta\neq 0\), and \(\sin\tau=0\implies\alpha=\frac{\pi}{4}=\pm\delta+\frac{l\pi}{2}\), or 3. \(\sin\theta=0\), and \(\sin\tau\neq 0\implies\alpha=\pm\delta+\frac{k\pi}{2}\), or 4. \(\cos\tau=0\implies\alpha=\pm\delta+\frac{m\pi}{2}\), or 5. \(\cos\theta\neq 0\), and \(\sin\theta\neq 0\), and \(\cos\tau\neq 0\), and \(\sin\tau\neq 0\), and \(\sin(2\delta)\neq 0\implies\alpha=\pm\delta+\frac{n\pi}{2}\). As we shall see, the first solution (\(\cos\theta=0\)) is somehow trivial, for it leads to all four basis states to be separable. All the other solutions imply that \(\alpha=\pm\delta\) (omitting here the periodicity of \(\pi/2\)). This condition can thus be substituted in (three of) the equations (5), leading to the following simplified expressions for the iso-entanglement conditions: \[0=\xi_{1}-\xi_{2} = -8\cos^{2}\theta\sin\theta\cos\tau\cdot\] \[\cdot(\cos\tau\sin\theta\cos(2\delta)-\sin\tau\sin(2\delta)\cos\beta)\] \[0=\xi_{3}-\xi_{4} = -8\cos^{2}\theta\sin\theta\cos\tau\cdot\] \[\cdot(\cos\tau\sin\theta\cos(2\delta)+\sin\tau\sin(2\delta)\cos\gamma)\] \[0=\xi_{1}-\xi_{3} = 8\cos^{2}\theta\sin\theta\cos\tau\cdot\] \[\cdot\sin(2\delta)\sin^{2}\delta\sin\tau\big{(}\cos\beta+\cos \gamma\big{)}.\] We are now in position to fully characterize the different classes of parametrizations of iso-entangled bases of two qubits. These correspond to the 5 different solutions (i)-(v) above of Eqs. (6),(7),(8). Note, however, that two of the solutions (namely, (iii.a) and (iii.b)), lead to equivalent families up to a swap (so they belong to the same equivalence class). 
Therefore, we arrive at 4 families of iso-entangled bases. We will denominate the different families \(I^{(j)}\) with \(j\in(1,\ldots,4)\), and we will express them in either the computational basis or in the skewed basis (3); we will indicate this by a subscript \(|0\rangle\) or \(|\varphi\rangle\), respectively. We will see that each of them is characterized not only by a different functional form of the states (which reflects different geometrical properties thereof) but also by the amount of parameters which the degree of entanglement \(\xi^{(j)}\) depends on. ### Four inequivalent families of iso-entangled bases Solutions (i-iv) lead to the following four families of isoentangled bases: ###### Acknowledgements. **1. Skewed product family** Starting from condition (i), the parametrisation can be reduced to \[I^{(1)}_{|0\rangle}=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&\cos\tau&-\sin\tau\\ 0&0&\sin\tau&\cos\tau\end{array}\right). \tag{9}\] where the other parameters have been absorbed into local transformations. Note, that the degree of entanglement is \(\xi^{(1)}=0\), independently of \(\tau\). As already mentioned, this family contains only product bases, equivalent to skewed basis provided in Eq. (3). From the point of view of the Bloch ball (See Fig. 1) this family is composed always from two twice degenerate points on the north and south poles in one reduction, and two pairs of opposite poles in the other. ###### Acknowledgements. **2. Elegant family** Condition (ii) yields a family which can be parametrized as \[I^{(2)}_{|0\rangle}=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}0&0&-e^{i \zeta}&e^{i\zeta}\\ e\theta&e\theta&-s\theta&-s\theta\\ s\theta&\theta&e\theta&e\theta\\ 1&-1&0&0\end{array}\right), \tag{10}\] where we have introduced the local transformation of the form \(\exp(i\zeta\sigma_{z})^{\otimes 2}\), with \(\zeta=\gamma-\beta\). Hence, this family has only 2 parameters. Note that in this case, the skewed basis and the computational one correspond, i.e., \(|\varphi\rangle=|0\rangle\). The squared concurrence reads: \[\xi^{(2)}=\frac{\sin^{2}(2\theta)}{4}, \tag{11}\] which depends only on one parameter. Note that the degree of entanglement is bound, \(\xi^{(2)}\in\left[0,\frac{1}{4}\right]\). Nil entanglement (i.e., \(\xi^{(2)}=0\)) corresponds to \(\theta=0\), which leads again to the separable basis (3). The maximal amount of entanglement, \(\xi^{(2)}=1/4\), is obtained by \(\theta=\pi/4\). Note that this family contains the _Elegant Joint Measurement_ (EJM), which play a special role in network nonlocality [23]. EJM has, in fact, \(\xi=1/4\), and is retrieved for \(\zeta=\pi/2\). Since EJM is the extremal case of this family, we name this "Elegant family". Fixing the maximal amount of entanglement, however, does not single out EJM and leads to a 1-parameter subfamily. In Bloch ball representation (see Fig. 2), the first two states lie on a hyperbole in the \(x\)-\(z\) plane, whereas the other two lie on a full rotational hyperboloid with symmetry around \(z\) axis. The opening angle of the limiting cone of these hyperboloids is \(\theta\) in one, and \(\pi-\theta\) in the other reduction. A generic member of this family forms a simplex with three pairs of edges of different lengths. The EJM is singled out by maximizing the volume of both reductions. Figure 1: Both reductions density matrices for four pure states for an exemplary member of the skewed product family. 
Note that in the first reduction (left) all states lie on the \(z\) axis, while in the second they form a rectangle in the \(x-z\) plane. Figure 2: Partial traces for a selected member of the Elegant family (in red) together with the EJM (blue). A generic member of this family forms simplex structures with all states lying on cones with opening angles \(2\theta\) and \(\pi-2\theta\), for the two reductions respectively; the second pair is rotated with respect to the first by an angle \(\zeta\). In particular, EJM is found by setting \(\theta=\pi/4\) and \(\zeta=\pi/2\), thus forming two regular simplices. ## 3 Bell family This family is the conflation of conditions (iii.a) and (iii.b), which are equivalent up to a swap \(\mathrm{S}_{A\leftrightarrow B}\) and substituting \(\tau\) by \(\theta\), respectively. In both cases the phase \(e^{i\beta}\) can be reabsorbed into the computational state \(|1\rangle\) of the first qubit and defining \(\zeta=\gamma+\beta\), or into \(|0\rangle\) of the second qubit and defining \(\zeta^{\prime}=\gamma-\beta\), respectively. Hence, this family reads: \[I_{|0\rangle}^{(3)}=\left(\begin{array}{cccc}0&0&-c\delta\cdot e^{i\zeta}&s \delta\cdot e^{i\zeta}\\ s\delta&c\delta&0&0\\ s\tau\cdot c\delta&-s\tau\cdot s\delta&c\tau\cdot s\delta&c\tau\cdot c\delta \\ -c\sigma\cdot c\delta&c\tau\cdot s\delta&s\tau\cdot s\delta&s\tau\cdot c\delta \end{array}\right). \tag{12}\] Hence, one has the 3 expected parameters (\(\delta,\zeta\) and \(\tau\)). The tangle reads: \[\xi^{(3)}=\sin^{2}(2\delta)\sin^{2}(\tau), \tag{13}\] which depends on 2 parameters and varies between 0 and 1. For \(\delta=\pi/4\) and \(\tau=\pi/2\) one achieves maximally entangled states, i.e., \(\xi^{(3)}=1\). This is equivalent to the standard _Bell State Measurement_ (BSM), which is the unique maximally entangled basis up to local transformations [24]. This thus suggests the name of this family. For nil entanglement, i.e., \(\xi^{(3)}=0\), one has either \(\delta=0\), or \(\cos(\tau)=\pm 1\); both cases are equivalent to the already discussed separable family (3). Despite full range of attainable entanglement, from the perspective of the Bloch ball, this family always produces rectangles lying in the \(x\)-\(z\) plane in one reduction, and in a rotated plane in the other, with rotation being controlled by \(\zeta\) phase. In particular, we note that in Bloch representation (see Fig. 3) a part of the Bell family will overlap with the subset of Elegant family with \(\zeta=0\). An alternative derivation of this family, resulting in a canonical form, is given in SM, section B. ## 4 General family In the case of condition (iv) we find that necessarily we have \(e^{i\gamma}=-e^{\pm i\beta}\), which yields the relation: \[\tan(\tau)=\frac{\cos(2\delta)\sin(\theta)}{\sin(2\delta)\cos(\beta)}. \tag{14}\] Hence, the parametrization reads: \[I_{|\varphi\rangle}^{(4)}=\left(\begin{array}{cccc}0&0&c\delta\cdot e^{i \beta}&-s\delta\cdot e^{\pm i\beta}\\ s\delta\cdot e\theta&c\delta\cdot e\theta&-s\delta\cdot s\theta&-c\delta \cdot s\theta\\ s\delta\cdot\theta&c\delta\cdot s\theta&s\delta\cdot c\theta&c\delta\cdot \theta\\ -c\delta\cdot e^{i\beta}&s\delta\cdot e^{i\beta}&0&0\end{array}\right). \tag{15}\] The expected 3 parameters are \(\delta,\theta\) and \(\beta\). 
The tangle reads: \[\xi^{(4)} = \frac{\sin^{2}(2\theta)\sin^{2}(2\delta)}{4}\cdot \tag{16}\] \[\cdot\frac{\sin^{2}(2\delta)\cos^{2}(\beta)+\cos^{2}(2\delta)}{ \sin^{2}(2\delta)\cos^{2}(\beta)+\cos^{2}(2\delta)\sin^{2}(\theta)},\] which varies between 0 and 1. Note that this is the most general family of iso-entangled bases, for its degree of entanglement depends on all the three parameters and it has overlaps with all the other families. Furthermore, a generic basis from this family will yield non-degenerate simplices in both reductions. Note that Eqs. (14) and (16) have five singularity points. Studying the (directional) limits of these multivariable functions yields the following cases: * \(\lim\beta\rightarrow\pi/2\) reduces the General family (15) to a two-parameter subfamily of the Bell family (12). In particular, this implies that \(\tau\rightarrow\pi/2\) and \(\xi^{(4)}\rightarrow\cos^{2}(\theta)\sin^{2}(2\delta)\), which has the same form of Eq. (13). Note, that the Bell family depends on the same number of parameters as the General family, therefore it cannot be fully retrieved by any of the limits. * \(\lim\delta\rightarrow\pi/4\) reduces the General family to the El-egant family (10). * \(\lim\theta\to 0\) and \(\delta\to 0\) reduces General family to the skewed product family (9), independently of the direction of approach of these limits. * \(\lim\beta\rightarrow\pi/2\) and \(\delta\rightarrow\pi/4\) leads to an interpolation-between a part of the Elegant family and a subfamily of the Bell family, depending on the direction of approach of the limit. In Ref. [11], a one-parameter iso-entangled family was proposed that also interpolates between EJM and BSM. However, this cannot be contained within this limit case because the latter does not admit regular-simplices within the reductions, contrarily to the family in [11] (see SM, section C). Figure 3: Generic member of the Bell family. Note that the four vectors in both reductions form two rectangles, with the first one lying on a cone with the rotation axis along \(z\) axis. * \(\lim\beta\rightarrow\pi/2\) and \(\lim\theta\to 0\) leads to a subfamily of the Bell family wherein, however, the degree of entanglement is upper bounded, with the bound depending on the angle of approach \(\phi\in(-\pi/2,\pi/2)\) as \((1+\tan(|\phi|))^{-2}\). From this, one sees that the three particular families (9), (10), and (12) (partly) form the closure of the General family. ## III An application to quantum networks Let us consider a triangular network scenario, in which Alice, Bob and Charlie share pairwise a Bell state of two qubits, e.g., \[\ket{\Psi}_{ABC}=\ket{\psi_{+}}_{AB}\otimes\ket{\psi_{+}}_{AC}\otimes\ket{\psi_ {+}}_{BC}, \tag{17}\] and each chooses a basis to perform a joint measurement on their pair of qubits (see Ref. [25]). The scenario is said to be Output-Permutation Invariant (OPI) if the output probability distribution can be defined by three constants \[p_{1}=p_{iii},\qquad p_{2}=p_{\sigma(ij)},\qquad p_{3}=p_{\sigma(ijk)}, \tag{18}\] for \(i\neq j\neq k\neq i\) and any permutation \(\sigma\); intuitively, it means that no node, nor output, of the network is distinguished. Similar notion can be defined for larger networks based on the network graph automorphism group. Since the iso-entangled bases set each of the measurement states on equal footing, they appear to be natural candidates for measurements realizing OPI in such networks. 
We find that setting \(\beta=\gamma+\pi/2\) and then \(\gamma=\frac{1}{2}\arccos(-\sin(2\theta))\) in the Elegant family leads to a 1-parameter subset of measurements which leads to OPI distributions. Interestingly, none of the measurements in this family, except for the extremal points, remains OPI under local noise \(\Phi_{\epsilon}(\rho)=(1-\epsilon)\rho+\frac{\epsilon}{4}\mathbb{I}\) acting on each edge of the network (see Fig. 4). ## IV Discussion and outlook In this letter, we have provided complete classification of all the equivalence classes of bases of two qubits, whose four states have all the same degree of entanglement (i.e., iso-entangled bases). In particular, we have shown that there exist four inequivalent families of equivalence classes, characterized by their numbers of parameters and geometrical constraints of their reductions in the Bloch ball representation. This study represents a first necessary step towards a deeper understanding of entangled measurements, a topic that has received surprisingly little attention--especially if compared to entangled states between distant parties--despite their pivotal importance in quantum computation, and other quantum tasks (such as quantum teleportation, dense coding, or the activation of nonlocality in networks). Although our findings provide the theoretical framework for further studies, many questions remain open. Most of the aforementioned tasks, such as quantum teleportation or dense coding, make use of Bell State Measurements. Our work provides the tool to start asking in systematic manner questions like: for which tasks partially entangled measurements provide stronger resource than the maximally entangled ones? This can bring novel insights into nonlocality, especially in the context of quantum networks with no inputs, in which nonlocality is triggered exclusively by the selected measurements. Moreover, further questions arise concerning implementability: which of the entangled measurements can be experimentally realised using standard resources such as linear optical elements? Furthermore, this preliminary study has addressed only the problem of entangled measurements in the simplest case of two qubits. The natural extension to higher dimensions turns out to be hard, with sparse known examples in literature [13; 14; 27; 15]. In SM, section D, we provide a short review of already known families together with a new family of partially entangled bases. This represents a first attempt towards a generalization to higher dimensions that will remain as a direction of future research. ## V Acknowledgements We thank Otfried Guhne for pointing out Ref. [22] to us. F.D.S. acknowledges support from FWF (Austrian Science Fund) through an Erwin Schrodinger Fellowship Figure 4: \(p_{3}\)-\(p_{1}\) plane for OPI measurements, with the red line representing probabilities corresponding to EJM acting on Bell states under local noise, \(\Phi_{\epsilon}(\ket{\psi_{+}}\!\bra{\psi_{+}})^{\otimes 3}\), while green line corresponds to the OPI stemming from the Elegant family acting on the network state \(\ket{\Psi}\!\bra{\Psi}\) from (17). The Finner inequality is known to be a bound for local and quantum distributions [26]. (Project J 4699s). J.Cz. and K.Z. gratefully acknowledge financial support by Narodowe Centrum Nauki under the Quantera project number 2021/03/Y/ST2/00193 and the project number 2019/35/O/ST2/01049. 
The research has also been supported by a grant from the Priority Research Area DigiWorld under the Strategic Programme Excellence Initiative at Jagiellonian University. N.G. acknowledges support from the Swiss National Science Foundation via the NCCR-SwissMap.
2307.00331
Variation-aware Vision Transformer Quantization
Despite the remarkable performance of Vision Transformers (ViTs) in various visual tasks, the expanding computation and model size of ViTs have increased the demand for improved efficiency during training and inference. To address the heavy computation and parameter drawbacks, quantization is frequently studied in the community as a representative model compression technique and has seen extensive use on CNNs. However, due to the unique properties of CNNs and ViTs, the quantization applications on ViTs are still limited and underexplored. In this paper, we identify the difficulty of ViT quantization on its unique variation behaviors, which differ from traditional CNN architectures. The variations indicate the magnitude of the parameter fluctuations and can also measure outlier conditions. Moreover, the variation behaviors reflect the various sensitivities to the quantization of each module. The quantization sensitivity analysis and comparison of ViTs with CNNs help us locate the underlying differences in variations. We also find that the variations in ViTs cause training oscillations, bringing instability during quantization-aware training (QAT). Correspondingly, we solve the variation problem with an efficient knowledge-distillation-based variation-aware quantization method. The multi-crop knowledge distillation scheme can accelerate and stabilize the training and alleviate the variation's influence during QAT. We also proposed a module-dependent quantization scheme and a variation-aware regularization term to suppress the oscillation of weights. On ImageNet-1K, we obtain a 77.66% Top-1 accuracy on the extremely low-bit scenario of 2-bit Swin-T, outperforming the previous state-of-the-art quantized model by 3.35%.
Xijie Huang, Zhiqiang Shen, Kwang-Ting Cheng
2023-07-01T13:01:39Z
http://arxiv.org/abs/2307.00331v1
# Variation-aware Vision Transformer Quantization ###### Abstract Despite the remarkable performance of Vision Transformers (ViTs) in various visual tasks, the expanding computation and model size of ViTs have increased the demand for improved efficiency during training and inference. To address the heavy computation and parameter drawbacks, quantization is frequently studied in the community as a representative model compression technique and has seen extensive use on CNNs. However, due to the unique properties of CNNs and ViTs, the quantization applications on ViTs are still limited and underexplored. In this paper, we identify the difficulty of ViT quantization on its unique **variation** behaviors, which differ from traditional CNN architectures. The variations indicate the magnitude of the parameter fluctuations and can also measure outlier conditions. Moreover, the variation behaviors reflect the various sensitivities to the quantization of each module. The quantization sensitivity analysis and comparison of ViTs with CNNs help us locate the underlying differences in variations. We also find that the variations in ViTs cause training oscillations, bringing instability during quantization-aware training (QAT). Correspondingly, we solve the variation problem with an efficient knowledge-distillation-based variation-aware quantization method. The multi-crop knowledge distillation scheme can accelerate and stabilize the training and alleviate the variation's influence during QAT. We also proposed a module-dependent quantization scheme and a variation-aware regularization term to suppress the oscillation of weights. On ImageNet-1K, we obtain a 77.66% Top-1 accuracy on the extremely low-bit scenario of 2-bit Swin-T, outperforming the previous state-of-the-art quantized model by 3.35%. Code and models are publicly available at [https://github.com/HuangOwen/VVTQ](https://github.com/HuangOwen/VVTQ). ## 1 Introduction Vision Transformers (ViTs), inspired by the success of transformer-based models in Natural Language Processing (NLP) tasks, have achieved impressive accuracy on a variety of computer vision tasks (Krizhevsky et al., 2012; He et al., 2016; Tan and Le, 2019). Despite the intrinsic superiority of ViTs, their remarkable performance also comes from the tremendous parameter numbers. For instance, Swin-L (Liu et al., 2021) of input size \(224\times 224\) has a total number of parameters of 197M with FLOPs of 34.5G. The high latency and large model size have become the most significant obstacle to the efficient deployment of the ViTs, especially on devices with computation constraints. In recent years, researchers have explored and proposed various model compression methods to improve the computational efficiency of deep learning models. These model compression techniques include quantization (Zhou et al., 2016; Choi et al., 2018; Wang et al., 2019; Esser et al., 2020; Bhalgat et al., 2020; Yamamoto, 2021; Huang et al., 2022), pruning (Liu et al., 2017, 2018; Molchanov et al., 2019; Liu et al., 2019), knowledge distillation (Hinton et al., 2015; Park et al., 2019; Shen and Xing, 2021), and compact network design (Howard et al., 2017; Pham et al., 2018; Guo et al., 2020). Among these methods, quantization of weights and activations have been the most widely utilized techniques because they enjoy the advantage of the promising affinity across different hardware architectures (Judd et al., 2016; Jouppi et al., 2017; Sharma et al., 2018). 
Although a few efforts (Liu et al., 2021; Yuan et al., 2021; Li et al., 2022; Lin et al., 2021; Li et al., 2022; Li et al., 2022; Li and Gu, 2022) have been made to apply quantization techniques to ViTs, most of them (Liu et al., 2021; Yuan et al., 2021; Lin et al., 2021) are based on Post-Training Quantization (PTQ) which suffers from a significant decline in performance and a bitwidth limitation at 8-bit or 6-bit. Additionally, the few existing Quantization-Aware Training (QAT) methods (Li et al., 2022; Li and Gu, 2022; Li et al., 2022) take much more time than the full-precision model in training, and the models still fail to achieve the desired performance when being quantized to low-precision such as 3-bit and 2-bit. The lower accuracy of quantized ViTs compared to CNNs guides us to raise the question: _What is it that hinders us from improving the performance of quantized ViTs?_ Meanwhile, the low efficiency of previous QAT methods makes applying quantization to more ViT structures difficult. Thus, another question we would like to raise is: _How to improve the efficiency of ViT quantization?_ To comprehensively decipher the inherent obstacles that adversely impact the efficacy and performance of ViT quantization, in this work, we initially conduct an exhaustive investigation of the quantization resilience of each component within the structural layout of the ViTs. The empirical findings derived from the isolated variable (leave-one-out) quantization ablation experiments substantiate that specific constituents, such as Multi-head self-attention (MHSA), exhibit higher sensitivity to quantization compared to other constituents. We further perform a comparative analysis between the weight and activation distribution of ViTs and CNNs, deducing that the intrinsic variability of the distribution serves as the pivotal factor instigating complications with respect to ViTs quantization. This is confirmed through constant monitoring of the weight changing trajectory during the training phase, which revealed that this variability instigates a phenomenon known as weight oscillation. Such a phenomenon has detrimental effects on quantization, potentially culminating in decelerated convergence. In light of the variation analysis, we propose an optimized solution for ViT quantization that is attuned to variations, demonstrating enhanced efficiency. Initially, a multi-crop knowledge distillation approach is employed, which aids in decreasing the data variance within mini-batches during the training phase, thereby stabilizing and expediting the training process. In terms of the distribution variance observed across differing modules, we introduce a module-specific scaling methodology. This strategy seeks to identify varying scale factors pertinent to different modules, thereby holistically accommodating the diversity in weight distribution through a gradient scaling technique that is sensitive to weight magnitude. When compared with the baseline quantization method, LSQ+ (Bhalgat et al., 2020), the presented approach exhibits less susceptibility to fluctuations in weight distribution and outliers that may arise within ViTs. Furthermore, to combat the potential oscillation throughout the training phase, we put forth a regularization process that is attuned to oscillation within quantization bins. This process seeks to penalize the variance in weight distribution within each respective quantization bin. 
Extensive experiments across various ViT architectures with different characteristics, including DeiT (Touvron et al., 2021), Swin Transformer (Liu et al., 2021), and SReT (Shen et al., 2021), are conducted to verify the effectiveness and efficiency of our proposed method. For DeiT-T on ImageNet-1K dataset, as shown in Figure 1, our 4-bit quantized model can significantly improve top-1 accuracy to 74.71% compared to the model quantized by LSQ+ (Bhalgat et al., 2020) which achieves 72.62%. Furthermore, to the best of our knowledge, our approach is the first to surpass the full-precision baseline with a 4-bit quantized DeiT-T model and the pioneer in extending the frontier of ViT quantization to a 2-bit level, applicable to both Figure 1: Top-1 accuracy on ImageNet-1K vs. BitOPs comparison of 2/3/4-bit quantized ViT models (DeiT-T, SReT-T, Swin-T) using LSQ+ (Bhalgat et al., 2020) quantization and our method. weights and activations. Through these methodologies, we exhibit exceptional training optimization, as evidenced by a 50% reduction in total training duration compared to our established baseline. In summary, our contribution can be concluded as: * We reveal the inherent complexity associated with the quantization of ViTs from the perspective of **variation**. Our claims that ViTs grapple with weight fluctuations and activation distribution disparities are substantiated through sensitivity analysis, comparison of ViTs to CNNs, and investigation of oscillatory behavior. * We adopt a multi-crop knowledge distillation-based quantization methodology to decrease the data variance within mini-batches during training following (Shen and Xing, 2021), and introduce module-dependent quantization and oscillation-aware regularization strategies. The proposed method is capable of mitigating the impact of variations in ViTs. * We perform extensive experiments on DeiT, Swin, and SReT architectures using the ImageNet-1K dataset. Our approach significantly outperforms prior state-of-the-art quantization schemes, demonstrating both superior efficiency and performance. ## 2 Related Work **Vision Transformer:** Transformer (Vaswani et al., 2017) was originally proposed for natural language processing tasks and demonstrated remarkable performance across various benchmarks. Inspired by the success, Vision Transformers(ViTs) (Dosovitskiy et al., 2020) utilize multi-head self-attention blocks for replacing convolutions and treating an image as patches/tokens. The attention mechanism can help capture both short-range and long-range visual dependencies. DeiT (Touvron et al., 2021) introduced a teacher-student distillation token strategy and employed various data augmentation techniques in the training of ViTs and significantly improved the effectiveness and efficiency. Swin (Liu et al., 2021) proposed the shift window attention scheme at various scales to limit the self-attention computation in local windows, which largely boosts the performance and reduces complexity. Recently, SReT (Shen et al., 2021) has been proposed with a weight-sharing mechanism by a sliced recursion structure. The convolution layers in SReT also help supplement the inductive bias lacking in ViTs. Various extensions of ViTs (Wu et al., 2021; Yuan et al., 2021; Dong et al., 2022) and more applications (Zheng et al., 2021; Caron et al., 2021; Bertasius et al., 2021; Arnab et al., 2021; Wang et al., 2021) are still emerging. 
**Quantization Techniques:** Quantization techniques aim to replace the full-precision weights and activations with lower-precision representation. Based on the quantization intervals, they can be categorized into uniform and non-uniform quantization. While uniform quantization (Zhou et al., 2016; Choi et al., 2018; Esser et al., 2020) with uniform quantization interval has better hardware affinity and efficiency, Non-uniform quantization (Miyashita et al., 2016; Zhang et al., 2018; Li et al., 2019), due to its flexible representation, can usually better allocate the quantization values to minimize the quantization error and achieve better performance than uniform schemes. In addition, the quantization methods can also be classified as quantization-aware training (QAT) (Zhou et al., 2016; Esser et al., 2020; Bhalgat et al., 2020) and post-training quantization (PTQ) (Nagel et al., 2020; Fang et al., 2020; Wang et al., 2020) based on whether to retrain a model with quantized weights and activations or start with a pre-trained model and directly quantize it without extra training. The majority of previous ViT quantization methods, such as Liu et al. (Liu et al., 2021), PTQ4ViT (Yuan et al., 2021), and FQ-ViT (Lin et al., 2021), focused on PTQ of ViTs. Due to the intrinsic restriction of PTQ, these methods only perform 8-bit or 6-bit quantization. **Knowledge Distillation:** The concept of knowledge distillation is first proposed in (Hinton et al., 2015), where the core insight is to encourage student models to emulate the distribution of teacher models' prediction. The prediction distribution of teacher models contains more information than the one-hot labels. More recently, various knowledge distillation methods (Cho and Hariharan, 2019; Park et al., 2019; Tung and Mori, 2019; Mirzadeh et al., 2020; Shen and Xing, 2021) have been proposed for better efficiency and effectiveness. The knowledge-distillation methods are also widely adopted in previous research (Mishra and Marr, 2018; Polino et al., 2018; Huang et al., 2022) to help quantization-aware training. ## 3 Approach ### ViT Architecture and Quantization **ViT Architecture.** The basic block of ViTs is the transformer layer, consisting of Multi-head Self Attention (MHSA), Layer Normalization (LN) (Ba et al., 2016), and Feed-forward Network (FFN). The transformer layer can be formulated as: \[\begin{split}\mathbf{X^{\prime}}&=\text{LN}(\mathbf{ X_{i}}+\text{MHSA}(\mathbf{X_{i}}))\\ \mathbf{X_{O}}&=\text{LN}(\mathbf{X^{\prime}}+ \text{FFN}(\mathbf{X^{\prime}})),\end{split} \tag{1}\] where \(\mathbf{X_{i}}\), \(\mathbf{X^{\prime}}\), and \(\mathbf{X_{o}}\) are this transformer block's input, intermediate representation, and output. The MHSA module consists of \(h\) heads, and each head performs inner products with a scaling factor and a _softmax_ operation. For the \(i\)-th head, input \(\mathbf{X_{i}}\) is projected into _query_, _key_, and _value_ vectors with multiplication with learnable weight matrix \(\mathbf{W_{Q,i}},\mathbf{W_{K,i}},\mathbf{W_{V,i}}\) respectively, which can be written as: \[\mathbf{Q_{i}}=\mathbf{X_{i}}\mathbf{W_{Q,i}},\mathbf{K_{i}}=\mathbf{X_{i}} \mathbf{W_{K,i}},\mathbf{V_{i}}=\mathbf{X_{i}}\mathbf{W_{V,i}}, \tag{2}\] and the output of \(i\)-th head is \[\text{head}_{\text{i}}=\text{softmax}(\mathbf{Q_{i}}\mathbf{K_{i}^{T}}/ \sqrt{\mathbf{d_{k}}})\mathbf{V_{i}}, \tag{3}\] where \(\mathbf{1}/\sqrt{\mathbf{d_{k}}}\) is the scaling factor for normalization. 
MHSA further concatenates the output of these heads to improve the representative capacity and projects to the output by multiplication with a learnable weight matrix \(\mathbf{W_{o}}\): \[\text{MHSA}(\mathbf{X_{i}})=\text{Concat}(\text{head}_{\mathbf{1}},\text{ head}_{\mathbf{2}},...,\text{head}_{\mathbf{h}})\mathbf{W_{o}}. \tag{4}\] **Quantization.** Given the real-value data to be quantized as \(x^{r}\), the scale factor \(s\) of the quantizer, the number of positive quantization levels \(Q_{P}\), and the number of negative quantization levels \(Q_{N}\), we can have the quantizer \(q_{b}\) that output the \(b\)-bit quantized representation of the input real value as \(x^{q}=q_{b}(x^{r}):\) \[x^{q}=q_{b}(x^{r})=s\times[\text{clip}(x^{r}/s,-Q_{N},Q_{P})], \tag{5}\] where \([\cdot]\) is the rounding function that rounds the input to the nearest integer, \(\text{clip}(x,r_{1},r_{2})\) return \(x\) with all value below \(r_{1}\) set to be \(r_{1}\) and all values above \(r_{2}\) set to be \(r_{2}\). For the unsigned quantization, \(Q_{N}=0,Q_{P}=2^{b}-1\). While for the quantization of signed data, \(Q_{N}=2^{b-1},Q_{P}=2^{b-1}-1\). To solve the problem that the gradient cannot back-propagate in Equation 5, the straight-through estimator (STE) (Bengio et al., 2013) is utilized to approximate the gradient during quantization-aware training. The gradient of the rounding operation is approximated as 1 in the quantization limit. In the back-propagation with STE, the gradient of the loss \(\mathcal{L}\) with respect to the real-value data \(x^{r}\) is set to be: \[\frac{\partial\mathcal{L}}{\partial x^{r}}=\frac{\partial\mathcal{L}}{ \partial x^{q}}\cdot\mathbf{1}_{-Q_{N}\leq x^{r}/s\leq Q_{P}}, \tag{6}\] where \(\mathbf{1}\) is the indicator function that outputs 1 within the quantization limit and 0 otherwise. This STE is widely used in quantization-aware training (QAT). Correspondingly, we focus on uniform quantization and QAT in this work. ### Understanding Variation of ViTs Many existing studies highlight that ViTs exhibit greater sensitivity to quantization compared to CNNs. For instance, Bit-Split (Wang et al., 2020), which successfully achieves 4-bit quantization on ResNet with an accuracy loss of less than 1%, exhibits significant accuracy degradation of over 2% (Lin et al., 2021) when applied to 8-bit quantization of DeiT. However, there is a paucity of comprehensive analyses detailing the reasons behind ViTs' heightened computational sensitivity compared to CNNs. In this section, we will primarily examine the quantization sensitivity of each component via a leave-one-out quantization analysis. Upon identifying the problematic areas or "pain points" in ViT quantization, we will contrast ViTs with CNNs to validate the fundamental challenge in quantization, referred to in this work as **variation**. We define the term **variation** to include two components: (1) the differential sensitivity and importance of each module and (2) the variance of weight distribution. We will explore the variation in sensitivity in Section 3.2.1 and delve into the variation in distribution and its subsequent side-effect of oscillation phenomenon in Sections 3.2.2 and 3.2.3. #### 3.2.1 Quantization Sensitivity Analysis Prior study Q-ViT (Li et al., 2022b) conducted a quantization robustness analysis on ViTs, concluding that the GELU activation function substantially mitigates performance during the quantization process. 
### Understanding Variation of ViTs

Many existing studies highlight that ViTs exhibit greater sensitivity to quantization compared to CNNs. For instance, Bit-Split (Wang et al., 2020), which successfully achieves 4-bit quantization on ResNet with an accuracy loss of less than 1%, exhibits significant accuracy degradation of over 2% (Lin et al., 2021) when applied to 8-bit quantization of DeiT. However, there is a paucity of comprehensive analyses detailing the reasons behind ViTs' heightened quantization sensitivity compared to CNNs. In this section, we primarily examine the quantization sensitivity of each component via a leave-one-out quantization analysis. Upon identifying the problematic areas or "pain points" in ViT quantization, we contrast ViTs with CNNs to validate the fundamental challenge in quantization, referred to in this work as **variation**. We define the term **variation** to include two components: (1) the differential sensitivity and importance of each module and (2) the variance of the weight distribution. We explore the variation in sensitivity in Section 3.2.1 and delve into the variation in distribution and its subsequent side effect, the oscillation phenomenon, in Sections 3.2.2 and 3.2.3.

#### 3.2.1 Quantization Sensitivity Analysis

The prior study Q-ViT (Li et al., 2022b) conducted a quantization robustness analysis on ViTs, concluding that the GELU activation function substantially deteriorates performance during the quantization process. However, their experiments relied on post-training quantization (PTQ), which stands in stark contrast to quantization-aware training (QAT). Moreover, their experimental methodology lacked a comprehensive analysis of the different components at a more granular level, such as the quantization impact on the query, key, and value weight matrices. In this section, we aim to disentangle the intricacies of ViT quantization by executing an in-depth leave-one-out analysis employing QAT. In terms of quantization methods, we employ LSQ+ (Bhalgat et al., 2020). All components except for the analysis target are quantized to 3-bit, while the analysis target is retained at full precision. The experimental results using DeiT-T on ImageNet-1K are presented in Table 1. These results indicate that MHSA, and particularly the _value_ weight matrices, are highly susceptible to quantization. Although MHSA and the _value_ weight matrix constitute a relatively minor fraction of the parameters in comparison to the FFN, maintaining these parts at full precision optimizes the performance of the quantized model.

While we have fully exploited the clue that the quantization sensitivity of MHSA is higher than that of other components in ViTs, another critical clue is that some heads in MHSA are more important than others in Transformer-based models, which has already been proved for NLP tasks (Michel et al., 2019). Here we apply a similar analysis to (Michel et al., 2019) and quantize various heads in different layers of ViTs (a configuration sketch is given below, after Figure 2). The target heads are quantized to 2-bit while the remaining components are quantized to 8-bit. The results for DeiT-T, with three heads per layer and 12 layers, are shown in Figure 2. The observation that some heads incur higher accuracy degradation shows that the quantization sensitivity of heads varies across layers: the first and last few layers are more sensitive to quantization. Additionally, heads in the same layer also exhibit a variation in quantization robustness. For example, in layer 8 of the quantized model, lowering the precision of head 0 (shown as 8-0 in Figure 2) results in a higher accuracy drop than doing so for the two parallel heads in the same layer.

\begin{table} \begin{tabular}{l|c c c} \hline Quantization Target & Top-1 Acc(\%) & Top-5 Acc(\%) & Para(\%) \\ \hline \hline None (FP Model) & 73.75 & 91.87 & 100 \\ All (Baseline 3-bit) & 68.22 & 88.56 & 0 \\ \hline All, except FFN & 69.47 & 89.60 & 62.1 \\ All, except MHSA & **71.28** & **90.66** & 31.1 \\ All, except _query_ in MHSA & 69.66 & 89.94 & 7.8 \\ All, except _key_ in MHSA & 69.92 & 89.81 & 7.8 \\ All, except _value_ in MHSA & **70.72** & **90.40** & 7.8 \\ \hline \end{tabular} \end{table} Table 1: Leave-one-out analysis for quantization of various components in DeiT-T on ImageNet-1K. Para(%) stands for the percentage of parameters that are **not** quantized among all trainable parameters.

Figure 2: The accuracy degradation compared to the full-precision model when a specific head in a layer is quantized. The label \(h\)-\(l\) on the abscissa indicates that head \(h\) in layer \(l\) is quantized.
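As a concrete illustration of the protocol referenced above, the following sketch builds the per-module bitwidth assignments used in such a leave-one-out study; the module names and the helper are hypothetical, not part of any released codebase:

```python
def leave_one_out_bitwidths(module_names, keep_full_precision, default_bits=3):
    """Assign `default_bits` to every module except those whose name matches
    the target pattern, which are kept at full precision."""
    FP = 32  # treat 32-bit as "not quantized"
    return {name: FP if keep_full_precision in name else default_bits
            for name in module_names}

# Example: quantize a 12-layer DeiT-T to 3-bit everywhere except the value
# projections (the "All, except value in MHSA" row of Table 1).
modules = [f"blocks.{l}.attn.{m}" for l in range(12) for m in ("q", "k", "v", "proj")]
modules += [f"blocks.{l}.mlp.fc{i}" for l in range(12) for i in (1, 2)]
plan = leave_one_out_bitwidths(modules, keep_full_precision="attn.v")
```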
#### 3.2.2 Variation of ViTs and CNNs

In Section 3.2.1, we demonstrated that ViTs suffer from significant variation in their sensitivity to quantization. However, previous mixed-precision quantization research on CNNs has also discovered that different parts of a model have varying quantization robustness. To fully understand why the sensitivity to quantization is higher in ViTs than in CNNs, we visualize and quantify the distribution of different modules inside full-precision CNNs and ViTs to compare the real **variation** of ViT and CNN models.

To give an intuitive picture of the variation of CNNs and ViTs, we first visualize the weight distribution across different channels in pre-trained full-precision ResNet-18 (He et al., 2016) and DeiT-T. The results are shown in Figure 3. Based on our investigation, the ResNet-18 model shares a similar distribution across different channels, while the weight distribution varies significantly across different modules in DeiT-T. To quantify the fluctuation in the latent real-valued weight magnitude, we compute the Standard Deviation of the Absolute Mean (SDAM) of the real-valued weight magnitude within each module of CNNs and ViTs; a sketch of this computation is given below, after Figure 3. The SDAM metric has been previously employed to evaluate the stability and fairness of training in prior studies (Liu et al., 2021). The corresponding results of the SDAM comparison are tabulated in Table 2. These numerical findings corroborate that the variability of ViTs surpasses that of CNNs with respect to the weight distribution. Correspondingly, prior work (Lin et al., 2021) has highlighted significant disparities in the distribution of activations in ViTs as opposed to CNNs. Although these variations may augment the representational capacity of ViTs, they concurrently introduce complexities when implementing quantization for ViT models. Consequently, the design of the quantization scheme becomes paramount, particularly in the generation of quantization scales and the determination of clipping factors during quantization-aware training.

\begin{table} \begin{tabular}{c|c c|c c c} \hline \hline Model & ResNet-18 & VGG-11 & ViT-T & DeiT-T & Swin-T \\ \hline SDAM & 5.59e-2 & 3.74e-2 & 9.65e-2 & 8.35e-2 & 9.71e-2 \\ \hline \hline \end{tabular} \end{table} Table 2: Standard Deviation of the Absolute Mean (SDAM) of real-value weights in CNNs and ViTs.

Figure 3: The weight distribution variance in CNNs (ResNet-18) and ViTs (DeiT-T). The visualized weight tensors are randomly selected from different channels in CNNs and different modules (heads) in ViTs.
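The text does not spell out the exact SDAM formula, so the following sketch reflects one natural reading: per layer, take the mean absolute weight of each channel (or module), compute the standard deviation of those means, and average across layers. Treat it as an assumption rather than the authors' exact definition:

```python
import torch

def sdam(weight_tensors):
    """Standard Deviation of the Absolute Mean of weight magnitudes (a sketch)."""
    per_layer = []
    for w in weight_tensors:                         # one weight tensor per layer
        abs_means = w.abs().flatten(1).mean(dim=1)   # mean |w| per output channel
        per_layer.append(abs_means.std())            # spread of channel means
    return torch.stack(per_layer).mean()             # average over layers

# Usage sketch: compare the variation of two backbones.
# sdam([m.weight for m in resnet18.modules() if isinstance(m, torch.nn.Conv2d)])
# sdam([m.weight for m in deit_t.modules() if isinstance(m, torch.nn.Linear)])
```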
#### 3.2.3 Oscillation in Training

High variance in the weight and activation distribution can lead to suboptimal quantization, thereby inducing increased quantization error. In quantization-aware training, certain modules fail to learn meaningful representations during the optimization process. This effect and its association with distribution variation have been investigated in AdamBNN (Liu et al., 2021), where the notion of _flip-flop_ was introduced, signifying a change in the quantization results of weights at specific iterations. We observe that low-precision quantization of ViTs is subject to a comparable effect, termed **oscillation**: the circumstance where latent weights fluctuate around the boundary between adjacent quantization bins during quantization-aware training. To our knowledge, (Nagel et al., 2022) is the sole work probing these effects; however, it restricts its scope to CNNs and the impact on batch normalization, a technique not employed in ViTs. We take the initiative to identify and analyze this oscillation phenomenon specifically for ViTs. An illustration of the oscillation phenomenon is shown in Figure 4.

Conventionally, the distribution of the full-precision initialization adheres to a Gaussian distribution, so only a limited number of latent weights precisely coincide with an optimal quantization value; the majority of weights necessitate updates during quantization-aware training. However, when a real-value weight \(w^{r}_{t}\) crosses the quantization boundary at a particular iteration \(t\), the update of the real weight \(|w^{r}_{t}-w^{r}_{t-1}|\) triggers an update of the quantized value by a constant \(|q(w^{r}_{t})-q(w^{r}_{t-1})|=s\). Here, \(s\) represents the quantization scale and constitutes the length of a quantization bin within a uniform quantization scheme. As indicated by the STE in Equation 6, the gradient with respect to the real value is identical to the gradient with respect to the quantized value, resulting in a consistent gradient that encourages the real value to traverse the quantization boundary once again, given that the learning rate remains consistent. We further observe this side effect in the quantization-aware training of ViTs. As shown in Figure 4(a), the weights associated with MHSA tend to accumulate around the quantization threshold after a certain number of training epochs. Figure 4(b) presents an example of this oscillatory behavior within the weights of ViTs. This oscillation effect adversely influences the training of ViTs and leads to substantial quantization error. The formulation of a solution that prevents this phenomenon, through the reduction of variation and the mitigation of its impact, is central to our design methodology for quantization.

### Variation-aware ViT Quantization

As observed in Section 3.2, there exists substantial fluctuation among the components of ViTs, which can precipitate an oscillation phenomenon that introduces instability during training. Motivated by this observation, we introduce a **variation-aware** quantization scheme to mitigate the impact of such fluctuations and enhance both the effectiveness and the computational efficiency of ViT quantization. As illustrated in Figure 5, our approach incorporates several crucial components: training facilitated by multi-crop knowledge distillation, a module-specific quantization scheme, and a regularization strategy sensitive to oscillatory behavior.

#### 3.3.1 Multi-crop Knowledge Distillation

To reduce the variation mentioned in Section 3.2.2 and help stabilize training, we first propose a Multi-crop Knowledge Distillation (MCKD) scheme. The core idea is to train our quantized ViT models with a full-precision model as the teacher. The loss function is designed to enforce similarity between the output distributions of the full-precision teacher and the quantized student ViT model: \[\mathcal{L}_{\text{VanillaKD}}=-\frac{1}{N}\sum_{c}\sum_{i=1}^{N}p_{c}^{T_{f}}(X_{i})\log(p_{c}^{S_{q}}(X_{i})), \tag{7}\] where the KD loss is defined as the cross-entropy between the output distributions \(p_{c}\) of a full-precision teacher \(T_{f}\) and a quantized ViT student \(S_{q}\), \(X_{i}\) is the input sample, and \(c\) and \(N\) denote the classes and the number of samples, respectively. Note that the one-hot label is not involved in training in our setting.

Figure 4: The visualization of the weight distribution during quantization-aware training and the weight oscillation effect due to distribution variance. The layer we select is _blocks.1.attn.proj-v.weight_ in 4-bit quantized DeiT-S with scale \(\alpha=0.0077\).
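Equation 7 amounts to a soft cross-entropy between teacher and student outputs. A minimal sketch of this objective in our own formulation (temperature scaling, if any, is omitted):

```python
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits):
    """Cross-entropy between teacher and student output distributions (Equation 7)."""
    p_teacher = F.softmax(teacher_logits, dim=-1)          # p_c^{T_f}(X_i)
    log_p_student = F.log_softmax(student_logits, dim=-1)  # log p_c^{S_q}(X_i)
    return -(p_teacher * log_p_student).sum(dim=-1).mean()
```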
The KD scheme helps our model converge fast because it learns the mapping directly from the full-precision teacher, which carries richer information than one-hot labels. Previous research (Yuan et al., 2020; Zhou et al., 2020; Menon et al., 2021) also points out that the KD loss can be seen as a regularization term that reduces the variance during training, which makes training more stable and alleviates the influence of the distribution variation. Here we employ the KD loss as the sole objective for optimizing the target model, which has been demonstrated to be more effective when the supervision signal in KD is adequate (Shen et al., 2021). One disadvantage of the conventional KD training scheme is that generating the prediction \(p_{c}^{T_{f}}\) of the teacher \(T_{f}\) takes a relatively long time, which makes training inefficient. To tackle this challenge, we use a multi-crop KD scheme, as in FKD, which first randomly crops \(M\) regions from one image \(X_{i}\) and inputs each cropped image to the teacher model \(T_{f}\) to get the soft label \(p_{c}^{T_{f}}(X_{i,m}),m\in\{1,\ldots,M\}\), where \(m\) is the index of the cropped region. The soft label is stored together with its coordinates and augmentation hyper-parameters. In the training phase, we directly load the soft labels and cropping parameters from storage and use the cropped samples for training with KD. The loss function of this multi-crop KD scheme is: \[\mathcal{L}_{KD}=-\frac{1}{NM}\sum_{c}\sum_{i=1}^{N}\sum_{m=1}^{M}p_{c}^{T_{f}}(X_{i,m})\log(p_{c}^{S_{q}}(X_{i,m})). \tag{8}\] The higher quality of the soft labels generated by this scheme reduces the variation within a mini-batch to a greater extent. Meanwhile, the data and its corresponding label are loaded just as in training without knowledge distillation, so the time for inference with the teacher model is saved. We further show in the experiments that this multi-crop KD scheme improves performance by reducing variation and significantly boosts efficiency.

Figure 5: An overview of our efficient variation-aware quantization method. The left part illustrates how we perform QAT with multi-crop knowledge distillation. The right part demonstrates the proposed module-dependent quantization scheme.
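To make the two-phase nature of the scheme concrete, here is a sketch of the offline soft-label generation step; `random_crop` is a hypothetical helper that returns both the crop and its parameters, and the storage format is our assumption:

```python
import torch

@torch.no_grad()
def precompute_soft_labels(teacher, dataset, num_crops, random_crop):
    """Offline stage of multi-crop KD: query the teacher once per crop and
    store (image index, crop parameters, soft label) for later training runs."""
    teacher.eval()
    store = []
    for i, (image, _) in enumerate(dataset):
        for _ in range(num_crops):                      # M crops per image
            crop, params = random_crop(image)           # crop plus its coordinates
            soft = teacher(crop.unsqueeze(0)).softmax(dim=-1).squeeze(0)
            store.append((i, params, soft))
    return store
```

At training time, Equation 8 is then evaluated against the stored labels, so the teacher never has to be run again.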
#### 3.3.2 Module-dependent Quantization

We utilize the same scale-learning strategy as LSQ+ (Bhalgat et al., 2020), wherein the scale factor \(s\) is dynamically learned during optimization. Our exploration in Section 3.2.1 establishes a substantial variation in the sensitivity of distinct modules to quantization. However, conventional implementations of ViT quantization often overlook this characteristic. In view of the variability observed in ViTs, we propose a module-dependent quantization scheme that facilitates the learning of the quantization scale \(s\) at the granular module level (_query_, _key_, and _value_ in distinct heads of MHSA). This approach contrasts with previous layer-wise quantization methods that assign a uniform scale to different modules; instead, we implement scale-learning quantization at a higher resolution, thereby promoting a finer granularity. Previous work (Bhalgat et al., 2020) has pointed out the negative impact of an imbalanced gradient scale. The situation is even more severe in the quantization of ViTs, as the weight distribution shows significant variation. To overcome this challenge, we adopt a module-dependent gradient scaling that balances the gradients of the weights and the scale factor, fully accounting for the distribution variation across modules. We multiply the gradient of the scale factor \(s\) by a gradient scale \(g\) that encodes the magnitude of the weights in the module, which can be formulated as \(\frac{\partial\mathcal{L}}{\partial s}\longleftarrow\frac{\partial\mathcal{L}}{\partial s}\cdot\frac{1}{\sqrt{Q_{P}||w||_{1}}}\), where \(||w||_{1}\) computes the \(L_{1}\)-norm of the weights in the quantized module. For modules with higher variation, the \(L_{1}\)-norm of the weights will be higher than average, and the update of the scale factor \(s\) will be decreased, ensuring that outliers of the distribution do not influence the scale factor.

#### 3.3.3 Oscillation-aware Bin Regularization

In the analysis of Section 3.2.3, we identified that the weight distribution variance in ViTs causes oscillation, leading to instability during training. Viewed at the level of each quantization bin, the majority of the weights oscillate between the two sides of their bin. To suppress the oscillation phenomenon during QAT, we regularize the weight distribution with an Oscillation-aware Bin Regularizer (OBR), which encourages the real-value weights to be close to the quantization bin center. The proposed OBR can be formulated as \[\mathcal{L}_{OBR}=\sum_{m=1}^{M}\Big(||w_{m}^{r}-w_{m}^{q}||_{2}+\sum_{n=1}^{2^{b}}\mathcal{V}(w_{n,m}^{r})\Big), \tag{9}\] where \(w_{m}^{r}\) and \(w_{m}^{q}\) represent the real and quantized values of the weights in module \(m\), and \(w_{n,m}^{r}\) the real-value weights in quantization bin \(n\) of module \(m\). \(||\cdot||_{2}\) computes the \(L_{2}\)-norm and \(\mathcal{V}(\cdot)\) computes the variance of each quantization bin with at least two elements. Unlike previous weight regularization for quantization (Chmiel et al., 2020), which only considers the global weight distribution, we minimize both the global quantization error and the local distribution variance within each quantization bin. Ideally, the distribution of the weights in a quantization bin is regularized toward a Dirac delta distribution, which largely suppresses the oscillation during training. The final optimization target is \(\mathcal{L}=\mathcal{L}_{KD}+\lambda\mathcal{L}_{OBR}\), where \(\lambda\) is the weighting coefficient that balances \(\mathcal{L}_{KD}\) and \(\mathcal{L}_{OBR}\). To make sure that the regularization does not influence the learning of scale factors at the very early stage of training, we gradually increase the coefficient \(\lambda\) during training by applying a cosine annealing schedule following (Nagel et al., 2022).
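A sketch of Equation 9 for a single module, assuming bin indices have already been computed from the quantizer (the variable names are ours):

```python
import torch

def obr_loss(w_real, w_quant, bin_index):
    """Oscillation-aware Bin Regularizer for one module: the global quantization
    error plus the weight variance inside every bin with at least two elements."""
    loss = torch.norm(w_real - w_quant, p=2)     # ||w^r - w^q||_2
    for n in torch.unique(bin_index):
        members = w_real[bin_index == n]
        if members.numel() >= 2:
            loss = loss + members.var()          # V(w^r_{n,m})
    return loss

# The total objective combines distillation and regularization:
# loss = kd_loss + lambda_t * sum of obr_loss over quantized modules,
# with lambda_t increased from 0 by a cosine schedule during training.
```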
## 4 Experiments

### Experimental Settings

**Dataset** The experiments are carried out on the ImageNet-1K dataset (Deng et al., 2009). We only perform basic data augmentation in PyTorch (Paszke et al., 2019), which includes _RandomResizedCrop_ and _RandomHorizontalFlip_ during training and a single-crop operation during evaluation.

**Model** We evaluate our quantization methods on three ViT architectures: DeiT-T (Touvron et al., 2021), SReT-T (Shen et al., 2021), and Swin-T (Liu et al., 2021). Because the first (patch embedding) and the last (classification) layer are more sensitive to quantization perturbations than the intermediate layers, we fix their bitwidth to 8-bit, following previous work (Yang and Jin, 2021).

**Training Detail** Following previous quantization methods (Zhou et al., 2016), we adopt real-value pre-trained weights as initialization. The quantization parameters, including scale factors and offsets, are initialized using the MSE-based method following Bhalgat et al. (2020). Details of all hyper-parameters and training schemes are given in the Appendix.

### Comparison with State-of-the-Art Methods

Table 3 compares our efficient variation-aware quantization with existing methods for DeiT-T, SReT-T, and Swin-T on the ImageNet-1K dataset. As we utilize different full-precision (FP) models as initialization, the corresponding FP Top-1 accuracy is also reported. To show that our performance improvement cannot simply be attributed to learning from a large teacher model, we also report the results of LSQ+ with vanilla knowledge distillation using the same teacher model. Compared with the baseline FP model, our 4-bit quantized DeiT-T achieves 74.71% Top-1 accuracy, making it the **first 4-bit quantized model** with accuracy higher than its FP initialization (0.96% absolute gain). Similarly, our 4-bit quantized SReT-T and Swin-T achieve 76.99% and 82.42% Top-1 accuracy, which is 1.18% and 1.42% higher than the FP baseline, respectively. Compared with the previous quantization method LSQ+ (Bhalgat et al., 2020), the mixed-precision method Q-ViT (Li et al., 2022b), and the state-of-the-art method of Li et al. (2022a), our model also demonstrates remarkable improvements. For example, our 4-bit Swin-T achieves a Top-1 accuracy of 82.42%, an absolute gain of 2.83% over Q-ViT (Li et al., 2022b). Our method is especially effective for low-precision 2-bit quantization: our 2-bit quantized Swin-T yields 77.66% Top-1 accuracy, which is 3.35% higher than the previous state-of-the-art method (Li et al., 2022a). In addition, our method shows better efficiency with the help of the multi-crop knowledge distillation scheme, and the better quantization scheme and regularization also help our models converge faster than previous methods under the same training configurations. We train our models for only 150 epochs, which is sufficient to outperform previous methods in terms of accuracy. The total training time for our DeiT-T on 4 NVIDIA A100 GPUs is 57.3 hours, significantly lower than that of the baseline methods shown in Table 5.

### Ablation Study

We first perform an overall ablation experiment on 4-bit quantized DeiT-T to assess the effectiveness of all proposed modules. The results are shown in Table 4. From the Standard Deviation of the Absolute Mean (SDAM) and the accuracy results, we can see that each module helps alleviate the influence of variation and improves the performance of quantized ViTs. The following subsections give a more detailed ablation study of each module.

\begin{table} \begin{tabular}{c|c c c} \hline \hline Method & Top-1 Acc & Top-5 Acc & SDAM \\ \hline Ours & 74.71 & 92.02 & 2.13e-2 \\ \hline Ours w/o Multi-crop Knowledge Distillation & 73.56 & 91.52 & 2.30e-2 \\ Ours w/o Module-dependent Quantization & 73.79 & 91.54 & 7.15e-2 \\ Ours w/o Oscillation-aware Bin Regularization & 74.22 & 91.41 & 3.79e-2 \\ \hline \hline \end{tabular} \end{table} Table 4: Overall ablation experiment on 4-bit quantized DeiT-T. For the experiment “Ours w/o MCKD”, vanilla knowledge distillation with a ResNet152 teacher is applied.
\begin{table} \begin{tabular}{c|c|c|c|c c c c c c} \hline \hline Network & Method & Epochs & FP Top-1 & \begin{tabular}{c} Bit-width \\ (W/A) \\ \end{tabular} & Top-1 & \begin{tabular}{c} Bit-width \\ (W/A) \\ \end{tabular} & Top-1 & \begin{tabular}{c} Bit-width \\ (W/A) \\ \end{tabular} & Top-1 \\ \hline \hline \multirow{4}{*}{DeiT-T} & Q-ViT (Li et al., 2022b) & 300 & 72.86 & \(4/^{\ddagger}\) & 72.79 & \(3/^{\ddagger}\) & 69.62 & - & - \\ & LSQ+ (Bhalgat et al., 2020) & 300 & 73.75 & \(4/4\) & 72.62 & \(3/3\) & 68.22 & \(2/2\) & 54.45 \\ & LSQ+ w/ KD & 300 & 73.75 & \(4/4\) & 73.56 & \(3/3\) & 69.83 & \(2/2\) & 56.29 \\ & Ours & **150** & 73.75 & \(4/4\) & **74.71** & \(3/3\) & **71.22** & \(2/2\) & **59.73** \\ \hline \multirow{3}{*}{SReT-T} & LSQ+ (Bhalgat et al., 2020) & 300 & 75.81 & \(4/4\) & 75.65 & \(3/3\) & 72.59 & \(2/2\) & 62.11 \\ & LSQ+ w/ KD & 300 & 75.81 & \(4/4\) & 76.13 & \(3/3\) & 74.20 & \(2/2\) & 64.98 \\ & Ours & **150** & 75.81 & \(4/4\) & **76.99** & \(3/3\) & **75.40** & \(2/2\) & **67.53** \\ \hline \multirow{5}{*}{Swin-T} & Q-ViT (Li et al., 2022b) & 300 & 80.9 & \(4/^{\ddagger}\) & 80.59 & \(3/^{\ddagger}\) & 79.45 & - & - \\ & LSQ+ (Bhalgat et al., 2020) & 300 & 81.0 & \(4/4\) & 80.61 & \(3/3\) & 79.07 & \(2/2\) & 70.21 \\ & LSQ+ w/ KD & 300 & 81.0 & \(4/4\) & 81.37 & \(3/3\) & 80.01 & \(2/2\) & 73.50 \\ & Li et al. (2022a)\({}^{\star}\) & 300 & 81.0 & \(4/4\) & 82.10 & \(3/3\) & 80.57 & \(2/2\) & 74.31 \\ & Ours & **150** & 81.0 & \(4/4\) & **82.42** & \(3/3\) & **81.37** & \(2/2\) & **77.66** \\ \hline \hline \end{tabular} \({}^{\ddagger}\) average bitwidth for mixed-precision quantization; \({}^{\star}\) our implementation with the same full-precision model as initialization. \end{table} Table 3: Comparison with previous quantization methods on ImageNet-1K. “Bit-width (W/A)” denotes the bitwidth for weights and activations. “Epochs” denotes the total number of training epochs.

**Multi-crop Knowledge Distillation** Table 5 compares the Top-1 accuracy of 4-bit quantized DeiT-T without knowledge distillation, with vanilla KD, and with our multi-crop KD using different teachers. The results demonstrate an improvement in both accuracy and efficiency. A teacher model of higher accuracy improves the performance of student ViTs regardless of architecture. The training time is also reduced, as the soft labels are extracted before training. The time in Table 5 does not include the time for soft-label generation, which is amortized when QAT has to be applied to different models and settings.

**Module-dependent Quantization** The proposed module-dependent quantization applies a finer-grained quantization scheme at the module level and scales the scale factors' gradients to ensure that the scale-factor update is not influenced by the variation in ViTs. Following Li et al. (2018), we visualize the loss landscape, which reflects the smoothness of the optimization, in Figure 6(b). Compared to the baseline quantized model, the more centralized and smoother loss landscape indicates that the proposed quantization scheme substantially improves training stability and efficiency.

**Oscillation-aware Bin Regularization** To better understand how our oscillation-aware bin regularization helps alleviate oscillation, we quantify the degree of oscillation during training by measuring the frequency of this phenomenon over time.
We define that an oscillation occurs at iteration \(t\) when the quantized integer value changes and the direction of the update of the integer value also changes. This can be formulated as: \[x_{t}^{\text{int}}\neq x_{t-1}^{\text{int}},\quad\text{sign}(\Delta_{\text{int}}^{t})\neq\text{sign}(\Delta_{\text{int}}^{t^{\text{prev}}}), \tag{10}\] where \(x_{t}^{\text{int}}=\lfloor\text{clip}(x^{r}/s,-Q_{N},Q_{P})\rceil\) is the integer value of the input real value \(x^{r}\), following the notation of Equation 5, \(\Delta_{\text{int}}^{t}=x_{t}^{\text{int}}-x_{t-1}^{\text{int}}\) is the update, and \(t^{\text{prev}}\) is the iteration of the last change of the integer value. The frequency of oscillation is then measured using an exponential moving average (EMA): \[f^{t}=m\cdot\mathbb{1}_{\text{osc}}^{t}+(1-m)\cdot f^{t-1}, \tag{11}\] where \(\mathbb{1}_{\text{osc}}^{t}\) indicates whether an oscillation, as defined in Equation 10, occurs at iteration \(t\), and \(m\) is the EMA momentum. We define weights as oscillating at iteration \(t\) if \(f^{t}>0.005\). The Top-1 accuracy of 3-bit quantized SReT-T and the percentage of oscillating weights are shown in Table 6. From the results, we can see a clear negative correlation between the weight oscillation percentage and model performance. The proposed Oscillation-aware Bin Regularization (OBR) with a gradually increasing coefficient helps stabilize training and achieves higher model accuracy.

\begin{table} \begin{tabular}{c|c|c c|c} \hline \hline Method & Teacher & Top-1 Acc & Top-5 Acc & Training Time (h) \\ \hline Ours w/o KD & Ground Truth & 72.62 & 91.19 & - \\ \hline Ours w/ Vanilla KD & ResNet152 (He et al., 2016) & 73.56 & 91.52 & 143.5 \\ \hline \multirow{3}{*}{Ours w/ MCKD} & ResNet152 (He et al., 2016) & 74.26 & 91.81 & \multirow{3}{*}{**57.3**} \\ & BEiT-L (Bao et al., 2021) & 74.49 & 91.92 & \\ & EfficientNet-L2 (Xie et al., 2020) & **74.71** & **92.02** & \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of different teacher models for knowledge distillation with our 4-bit quantized DeiT-T on ImageNet-1K. “Training Time” indicates the GPU hours of the training process on 4 NVIDIA A100 GPUs.

Figure 6: Loss landscape visualization of the 4-bit quantized Swin-T using the baseline (LSQ+ quantization) method and our module-dependent quantization method.
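Equations 10 and 11 can be tracked online during training. A sketch follows; the 0.005 threshold is from the text, while the momentum value is our assumption:

```python
import torch

def update_oscillation(x_int, prev_int, prev_delta, freq, m=0.01):
    """EMA of oscillation events: an oscillation occurs when the integer value
    changes AND the direction of the change flips (Equation 10); `freq` is the
    EMA of Equation 11. Weights with freq > 0.005 count as oscillating."""
    delta = x_int - prev_int
    changed = delta != 0
    flipped = torch.sign(delta) != torch.sign(prev_delta)
    osc = (changed & flipped).float()
    freq = m * osc + (1 - m) * freq
    # keep the direction of the last actual change for the next comparison
    prev_delta = torch.where(changed, delta, prev_delta)
    return freq, prev_delta
```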
### Attention Map Visualization

To demonstrate how our quantization approach preserves the representational capacity of ViT models, we visualize the attention maps of the quantized Swin-T following (Dosovitskiy et al., 2020) and (Abnar and Zuidema, 2020). We fuse the attention heads using maximum operators and exclude low-attention pixels to better accentuate the prominent object within the image. As shown in Figure 7, our quantized Swin-T exhibits superior representational capacity by maintaining the relative ranking within the attention map more faithfully. This distinction becomes more pronounced when the ViT model is quantized to 3-bit and 2-bit representations. For the baseline LSQ+ quantization (Bhalgat et al., 2020), the attention substantially deteriorates and is distributed uniformly across the input when quantized to extremely low bit-widths, whereas our 2-bit quantized Swin-T is still capable of segmenting the salient object region effectively.

## 5 Conclusion

In this work, we have provided a comprehensive understanding of the complexities associated with the quantization of Vision Transformers. Through an in-depth analysis of quantization sensitivity, contrasting CNNs with ViTs, we elucidate that the **variation** behavior inherent to ViTs poses considerable challenges to quantization-aware training. Specifically, the variation in ViTs can induce oscillatory phenomena, necessitating an extended convergence period due to the consequent instability. To address the challenges presented by variation, we propose an effective variation-aware quantization technique. The multi-crop knowledge distillation strategy enhances accuracy and efficiency by mitigating the variation within the mini-batch. Furthermore, we introduce module-dependent quantization and oscillation-aware bin regularization to ensure that the optimization process remains unaffected by variation and to suppress the oscillatory effect instigated by variation. Through extensive experiments, we have shown that our proposed solution to variation in ViTs results in state-of-the-art accuracy on the ImageNet-1K dataset across various ViT architectures.

\begin{table} \begin{tabular}{c|l|c c|c} \hline \hline \multicolumn{2}{c|}{Regularization} & Top-1 Acc & Top-5 Acc & Oscillation (\%) \\ \hline \multicolumn{2}{c|}{Baseline} & 75.02 & 92.31 & 7.33 \\ \hline \multicolumn{2}{c|}{KURE (Chmiel et al., 2020)} & 74.85 & 92.24 & 8.12 \\ \hline \multirow{3}{*}{Ours} & \(\lambda\)=cos(0,1) & 75.06 & 92.32 & 0.23 \\ & \(\lambda\)=cos(0,0.1) & **75.40** & **92.49** & 0.78 \\ & \(\lambda\)=cos(0,0.01) & 75.11 & 92.36 & 4.36 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of 3-bit quantized SReT-T using different regularization schemes. “Oscillation” indicates the percentage of weights that are oscillating at the last iteration of training.

Figure 7: Comparison of attention map visualizations of quantized Swin-T using our method and LSQ+ (Bhalgat et al., 2020).
2310.04598
A Neuro-Symbolic Framework for Answering Graph Pattern Queries in Knowledge Graphs
The challenge of answering graph queries over incomplete knowledge graphs is gaining significant attention in the machine learning community. Neuro-symbolic models have emerged as a promising approach, combining good performance with high interpretability. These models utilize trained architectures to execute atomic queries and integrate modules that mimic symbolic query operators. However, most neuro-symbolic query processors are constrained to tree-like graph pattern queries. These queries admit a bottom-up execution with constant values or anchors at the leaves and the target variable at the root. While expressive, tree-like queries fail to capture critical properties in knowledge graphs, such as the existence of multiple edges between entities or the presence of triangles. We introduce a framework for answering arbitrary graph pattern queries over incomplete knowledge graphs, encompassing both cyclic queries and tree-like queries with existentially quantified leaves. These classes of queries are vital for practical applications but are beyond the scope of most current neuro-symbolic models. Our approach employs an approximation scheme that facilitates acyclic traversals for cyclic patterns, thereby embedding additional symbolic bias into the query execution process. Our experimental evaluation demonstrates that our framework performs competitively on three datasets, effectively handling cyclic queries through our approximation strategy. Additionally, it maintains the performance of existing neuro-symbolic models on anchored tree-like queries and extends their capabilities to queries with existentially quantified variables.
Tamara Cucumides, Daniel Daza, Pablo Barceló, Michael Cochez, Floris Geerts, Juan L Reutter, Miguel Romero
2023-10-06T21:31:17Z
http://arxiv.org/abs/2310.04598v2
# A neuro-symbolic framework for answering conjunctive queries

###### Abstract

The problem of answering logical queries over incomplete knowledge graphs is receiving significant attention in the machine learning community. Neuro-symbolic models are a promising recent approach, showing good performance and allowing for good interpretability properties. These models rely on trained architectures to execute atomic queries, combining them with modules that simulate the symbolic operators in queries. Unfortunately, most neuro-symbolic query processors are limited to the so-called _tree-like_ logical queries that admit a bottom-up execution, where the leaves are constant values or _anchors_, and the root is the target variable. Tree-like queries, while expressive, fall short of expressing properties in knowledge graphs that are important in practice, such as the existence of multiple edges between entities or the presence of triangles. We propose a framework for answering arbitrary conjunctive queries over incomplete knowledge graphs. The main idea of our method is to approximate a cyclic query by an infinite family of tree-like queries, and then leverage existing models for the latter. Our approximations achieve strong guarantees: they are _complete_, i.e. there are no false negatives, and _optimal_, i.e. they provide the best possible approximation using tree-like queries. Our method requires the approximations to be tree-like queries where the leaves are anchors or existentially quantified variables. Hence, we also show how some of the existing neuro-symbolic models can handle these queries, which is of independent interest. Experiments show that our approximation strategy achieves competitive results, and that including queries with existentially quantified variables tends to improve the general performance of these models, both on tree-like queries and on our approximation strategy.

## 1 Introduction

Knowledge graphs play a crucial role in representing knowledge within organizations and communities. Their usage is now widespread both in industry and in the scientific community (Fensel et al., 2020; Hogan et al., 2021). Knowledge graphs model information as nodes, which represent entities of interest, and edges, which represent relations between entities. During the creation of knowledge graphs, however, information may be stale or conflicting, and certain sources of data may not have been integrated yet. As a consequence, knowledge graphs tend to be _incomplete_, in the sense that some of the entities or relations occurring in the application domain may not be present in the graph. We refer to Ren et al. (2023) for statistics about missing information in knowledge graphs. A particularly important reasoning task on knowledge graphs is the _answering of queries_. Traditional query answering methods, especially those from the data management and semantic web literature, focus only on extracting the information that can be derived from the knowledge _present_ in the graph (Angles et al., 2017; Hogan et al., 2021; Ali et al., 2022). Given the incomplete character of knowledge graphs, these methods hence fail to address the need to reason about unknown information. This limits their usefulness in many application domains (Nickel et al., 2015). This observation has spurred the development of numerous machine learning approaches to query answering, e.g. Hamilton et al. (2018); Ren et al. (2019); Ren and Leskovec (2020); Zhang et al. (2021); Zhu et al. (2022).
We focus on a recently proposed family of approaches, namely, the _neuro-symbolic_ models. They rely on trained (e.g. neural) architectures to execute atomic queries, and combine them with modules that simulate the symbolic logical operators in queries. These approaches have shown promising performance and, more importantly, produce more interpretable models. We refer to Ren et al. (2023) for a comprehensive recent survey on neural and neuro-symbolic approaches for query answering over incomplete knowledge graphs.

State-of-the-art neuro-symbolic approaches, however, only support a restricted class of queries, namely, _anchored tree-like_ queries1 (Ren et al., 2023). Figure 1(a) shows an example of an anchored tree-like query. Although tree-like queries already capture interesting properties in graphs, they are not capable of checking more complex properties such as the existence of triangles or of multiple edges between entities. The development of neuro-symbolic approaches for more complex query classes remains largely unexplored. In particular, supporting cyclic queries, such as the triangle query, has been identified as an important open challenge by Ren et al. (2023). Figure 1(c) shows an example of a cyclic (triangle) query. **In this paper we propose a neuro-symbolic framework for approximating complex queries by maximally leveraging methods for tree-like queries.**

Footnote 1: These queries are also referred to simply as _tree-like_ in the literature. We reserve the term tree-like for the generalization where the anchored condition is lifted.

More specifically, our **contributions** are as follows. **(1)** We propose an _approximation scheme_ for complex conjunctive queries using tree-like queries. Moreover, the approximation scheme comes with _theoretical guarantees_: it is _complete_ in the sense that no false negative query answers are produced, and it is _optimal_ in that we provide the best possible approximation using tree-like queries. **(2)** The approximation scheme is _adaptive_ in the sense that it is parameterized by the notion of _depth_ of tree-like queries. For any depth, an approximation exists, and higher-depth queries potentially provide better approximations. The choice of depth can be tuned depending on the available resources, queries and data at hand. **(3)** Our approach is _generic_ and can be used in combination with any neuro-symbolic query processor, provided that unanchored _tree-like queries_ are supported. Figure 1(b) depicts an unanchored tree-like query in which the input node \(w\) is a variable. As an independent contribution, we show how to go from anchored to (unanchored) tree-like queries in some neuro-symbolic methods. **(4)** We implemented our approach on top of the GNN-QE implementation by Zhu et al. (2022). Results show our techniques are a viable strategy for answering cyclic queries, and that our improvements can be carried over at little cost to this standard neuro-symbolic architecture.

Figure 1: Different conjunctive queries (CQs): (a) atoms in anchored tree-like CQs are structured as trees where the leaves are anchors and the root is the target variable (\(x\) in this case); (b) leaves in tree-like CQs can be anchors or existential variables (\(w\) in this case); (c) arbitrary CQs can have cycles.

## 2 Related Work

Neural and neuro-symbolic query answering. The machine learning community has produced a wide body of literature investigating how to answer complex queries over incomplete knowledge graphs.
These works build on and extend recently successful methods designed for knowledge graph completion (Bordes et al., 2013; Yang et al., 2015; Trouillon et al., 2016; Sun et al., 2019; Schlichtkrull et al., 2018; Vashishth et al., 2020; Teru et al., 2020). Following Ren et al. (2023), we can identify two different approaches to complex query answering. Firstly, _neural_ approaches (Hamilton et al., 2018; Kotnis et al., 2021; Liu et al., 2022; Pflueger et al., 2022) answer queries by processing atomic queries and logical operators directly in the embedding space, parameterizing them with neural networks. These methods usually lead to better performance, but at the cost of being much less interpretable. Secondly, there are the so-called _neuro-symbolic_ approaches, which combine neural approaches to compute missing links between entities with symbolic approaches to extract answers from the completed data (Bai et al., 2023; Luo et al., 2023; Chen et al., 2022; Ren and Leskovec, 2020; Yin et al., 2023; Zhu et al., 2022). While logical operators are still processed in the latent space, they are biased to better correlate with their symbolic counterparts. We refer to Ren et al. (2023) for more details on the particular workings of each of these models. To our knowledge, none of these approaches deals with general CQs.

Approximation of conjunctive queries. The notion of a tree-like approximation of a conjunctive query, as explored in this paper, was originally introduced by the database theory community. Two types of approximations were proposed: _underapproximations_, which yield sound but not necessarily complete answers (Barcelo et al., 2014), and _overapproximations_, which yield complete but not necessarily sound answers (Barcelo et al., 2020). For the reasons explained above, in this work we focus on overapproximations, that is, complete approximations of conjunctive queries. The main distinction between our work and previous research is the fact that our tree-like approximations are evaluated using a neuro-symbolic approach. Additionally, we present the first working implementation of the concept of CQ approximation, as prior work had only examined its theoretical properties. Finally, previous works deal with a slightly different notion of tree-like, namely, _treewidth-1_ queries, and hence some refinements are needed to obtain our theoretical results.

## 3 Preliminaries

Knowledge graphs and conjunctive queries. Knowledge graphs are directed graphs with labeled edges. Formally, let \(\mathsf{Con}\) be a countably infinite set of _constants_. A _knowledge graph_ (KG) is a tuple \(\mathcal{G}=(\mathcal{E},\mathcal{R},\mathcal{S})\) where \(\mathcal{E}\subseteq\mathsf{Con}\) is a finite set of _entities_, \(\mathcal{R}\) is a finite set of _edge types_, and \(\mathcal{S}\subseteq\mathcal{E}\times\mathcal{R}\times\mathcal{E}\) is a finite set of _edges_. We typically denote an edge \((a,R,b)\) by \(R(a,b)\). Let \(\mathsf{Var}\) be a countably infinite set of _variables_. As is common in machine learning, we focus on unary queries, that is, queries with only one target variable. Formally, a _(unary) conjunctive query_ (CQ) \(q\) over a set of edge types \(\mathcal{R}\) is a first-order logic (FO) formula of the form \[q(x)\gets R_{1}(y_{1},z_{1})\wedge\cdots\wedge R_{m}(y_{m},z_{m}),\] where \(x\) is the _target_ variable and each \(R_{i}(y_{i},z_{i})\) is an _atom_ with \(R_{i}\in\mathcal{R}\) and \(\{y_{i},z_{i}\}\subseteq\mathsf{Con}\cup\mathsf{Var}\) (\(y_{i},z_{i}\) are either variables or constants).
The variable set \(\mathsf{Var}(q)\) of \(q\) is the set of variables appearing in the atoms of \(q\), that is, the variables appearing in \(\{y_{1},z_{1},\ldots,y_{m},z_{m}\}\). Similarly, we denote by \(\mathsf{Con}(q)\) the constants appearing in the atoms of \(q\). As usual, we assume \(x\in\mathsf{Var}(q)\). The variables in \(\mathsf{Var}(q)\setminus\{x\}\) are the _existentially quantified_ variables of \(q\). Sometimes we write \(q(x)\) instead of \(q\) to emphasize that \(x\) is the target variable of \(q\). The semantics of CQs is defined using the standard semantics of first-order logic. We denote by \(q(\mathcal{G})\) the _answer_ of the CQ \(q\) over the KG \(\mathcal{G}\). Figure 1(c) shows the CQ \(q(x)\leftarrow\text{Friend}(x,y)\wedge\text{Friend}(y,z)\wedge\text{Coworker}(z,x)\), looking for all persons \(x\) that have a friend \(y\) and a coworker \(z\) that are friends with each other. Here, \(\mathsf{Var}(q)=\{x,y,z\}\), \(x\) is the target variable, and \(y\) and \(z\) are both existentially quantified.

The _query graph_ of a CQ \(q\) is the multigraph that has \(\mathsf{Var}(q)\cup\mathsf{ConOcc}(q)\) as nodes, and an edge from node \(u\) to node \(v\) for every atom \(R(u,v)\) in \(q\). Here \(\mathsf{ConOcc}(q)\) is the set of occurrences of constants in \(q\), i.e. if the number of occurrences of a constant \(a\in\mathsf{Con}(q)\) in different atoms of \(q\) is \(k\), then there are \(k\) duplicates of \(a\) in \(\mathsf{ConOcc}(q)\). We say that a CQ \(q(x)\) with target variable \(x\) is _tree-like_ if the query graph of \(q\) is an (undirected) tree rooted in node \(x\). In particular, no multiple edges between pairs of nodes are allowed. Additionally, \(q\) is _anchored_ if all the leaves of this tree are nodes in \(\mathsf{ConOcc}(q)\); otherwise it is _unanchored_. As we are working with \(\mathsf{ConOcc}(q)\) instead of \(\mathsf{Con}(q)\), different leaves could correspond to the same anchor. The _depth_ of a tree-like CQ is the depth of the corresponding tree formed by its query graph, that is, the length of the longest path from the root to one of its leaves. Finally, \(q\) is _cyclic_ if the query graph of \(q\) has an undirected cycle. Figure 1 contains examples of anchored tree-like, tree-like and cyclic conjunctive queries, depicted using their query graphs. Notice that the unanchored query in Figure 1(b) was obtained by existentially quantifying one of the leaves of the query in Figure 1(a). As we mentioned, most neuro-symbolic methods for logical query answering are restricted to _anchored tree-like_ queries. Notice that one could define tree-like queries for the full FO fragment. This is the fragment commonly dealt with in the literature, and our implementation also supports it. We refer to Ren et al. (2023); Yin et al. (2023) for the definitions.

CQ containment. A concept we will exploit heavily is that of query containment. We say that a CQ \(q\) is _contained_ in a CQ \(q^{\prime}\), denoted by \(q\subseteq q^{\prime}\), if \(q(\mathcal{G})\subseteq q^{\prime}(\mathcal{G})\) for all KGs \(\mathcal{G}\). That is, the answer of \(q\) is always contained in the answer of \(q^{\prime}\), independently of the underlying KG. While this notion reasons over all KGs, it admits a simple syntactic characterization based on homomorphisms. A _homomorphism_ from CQ \(q(x)\) to CQ \(q^{\prime}(x)\) is a mapping \(h:\mathsf{Var}(q)\cup\mathsf{Con}(q)\to\mathsf{Var}(q^{\prime})\cup\mathsf{Con}(q^{\prime})\) from the variables and constants of \(q\) to the variables and constants of \(q^{\prime}\) such that \(h(x)=x\), \(h(a)=a\) for all \(a\in\mathsf{Con}(q)\), and \(R(h(y),h(z))\) is an atom of \(q^{\prime}\) for all atoms \(R(y,z)\) of \(q\). That is, a homomorphism is a way of replacing the variables of \(q\) by variables of \(q^{\prime}\) such that each atom of \(q\) becomes an atom of \(q^{\prime}\); the target variable of \(q\) must be mapped to the target variable of \(q^{\prime}\). The following is a well-known characterization of CQ containment.

**Proposition 3.1** (Chandra and Merlin (1977)).: _A CQ \(q\) is contained in a CQ \(q^{\prime}\) if and only if there is a homomorphism from \(q^{\prime}\) to \(q\)._
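Proposition 3.1 makes containment checkable by a (worst-case exponential) search for a homomorphism. A small self-contained sketch, using our own query representation, not code from the paper:

```python
from itertools import product

def homomorphism_exists(q_from, q_to):
    """Is there a homomorphism from q_from to q_to?  Each query is a triple
    (atoms, target, constants) with atoms given as (R, src, dst) tuples.
    By Proposition 3.1, q is contained in q' iff homomorphism_exists(q', q).
    Brute force; fine only for the small query patterns considered here."""
    atoms_f, target_f, consts_f = q_from
    atoms_t, target_t, _ = q_to
    vars_f = sorted({t for a in atoms_f for t in (a[1], a[2])} - consts_f)
    terms_t = {t for a in atoms_t for t in (a[1], a[2])}
    atom_set_t = set(atoms_t)
    for image in product(terms_t, repeat=len(vars_f)):
        h = dict(zip(vars_f, image))
        h.update({c: c for c in consts_f})   # constants map to themselves
        if h[target_f] != target_t:          # target must map to target
            continue
        if all((r, h[s], h[d]) in atom_set_t for r, s, d in atoms_f):
            return True
    return False
```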
## 4 Answering CQs via tree-like approximations

We now present our framework for answering arbitrary CQs over incomplete KGs. The idea of our method is to approximate a cyclic CQ \(q\) by an infinite family \(\mathcal{U}_{q}=\{\tilde{q}_{d}\}_{d\geq 1}\) of tree-like CQs. As already mentioned, by doing so we can use state-of-the-art neuro-symbolic methods - designed only for tree-like queries - to deal with complex queries as well. The family \(\mathcal{U}_{q}\) is parameterized by the query depth: each \(\tilde{q}_{d}\) is of depth \(d\). By taking greater depths, we obtain better or equal approximations in \(\mathcal{U}_{q}\). Interestingly, the family \(\mathcal{U}_{q}\) provides us with the following formal guarantees:

* _Completeness:_ CQs in \(\mathcal{U}_{q}\) are _complete_, that is, their answers always contain the answer of \(q\). In other words, \(q\) is contained in each \(\tilde{q}_{d}\), and hence \(\tilde{q}_{d}\) does not produce false negatives.
* _Optimality:_ For each depth \(d\geq 1\), the CQ \(\tilde{q}_{d}\in\mathcal{U}_{q}\) is the best approximation (in a precise sense) among all complete tree-like approximations of \(q\) of depth at most \(d\).

This suggests the following **neuro-symbolic method for answering complex queries**: take any neuro-symbolic method capable of answering tree-like queries; then, given a CQ \(q\), answer \(q\) by feeding one of its tree-like approximations \(\tilde{q}_{D}\in\mathcal{U}_{q}\) (the chosen depth \(D\) is a hyperparameter of our model) to the chosen neuro-symbolic method. We thus leverage existing neuro-symbolic methods for anchored tree-like CQs, both for inference and learning. We remark that current methods work with anchored tree-like CQs, while our approach requires the use of (not necessarily anchored) tree-like CQs. We show in Section 4.3 how to remedy this, but first we formalize our approach.

### 4.1 Complete tree-like approximations

Let \(q\) be an arbitrary CQ. A _complete tree-like approximation_ of \(q\) is a tree-like CQ \(q^{\prime}\) that contains \(q\). That is, the answer \(q(\mathcal{G})\) is always contained in the answer \(q^{\prime}(\mathcal{G})\), independently of the KG \(\mathcal{G}\). We stress that the notion of completeness is particularly relevant in the setting of incomplete KGs. Indeed, we are given a CQ \(q\) and an (incomplete) KG \(\mathcal{G}\), and the goal is to obtain the answers \(q(\mathcal{G}^{*})\) over the unobservable complete KG \(\mathcal{G}^{*}\).
As containment considers all possible KGs, the answer \(q^{\prime}(\mathcal{G}^{*})\) of a complete approximation \(q^{\prime}\) must hence contain the sought answer set \(q(\mathcal{G}^{*})\). By Proposition 3.1, a tree-like CQ \(q^{\prime}\) is a complete approximation of \(q\) if there is a homomorphism from \(q^{\prime}\) to \(q\). Of course, there could be many approximations for the same query. As an example, consider the triangle CQ \(q(x)\leftarrow\text{Friend}(x,y)\wedge\text{Friend}(y,z)\wedge\text{Coworker}(z,x)\) depicted in Figure 2. On the right of Figure 2, we can see three possible complete tree-like approximations of \(q\). Indeed, \(q^{\prime}_{1}\) and \(q^{\prime}_{2}\) can be mapped to \(q\) via the homomorphism \(\{x\mapsto x,y\mapsto y,z\mapsto z,x^{\prime}\mapsto x\}\). For \(q^{\prime}_{3}\), we can use the homomorphism \(\{x\mapsto x,y_{1}\mapsto y,y_{2}\mapsto y,z_{1}\mapsto z,z_{2}\mapsto z,x_{1}\mapsto x,x_{2}\mapsto x\}\). Actually, by taking longer paths, it is easy to get infinitely many approximations of \(q\); hence, the space of approximations may be infinite. This raises the question: which approximation should we choose? We discuss this problem in the next section.

### 4.2 Complete optimal approximations: unravelings

While the number of approximations is infinite, we show that there is a special kind of approximation that is optimal in a precise sense, and hence is a natural choice to approximate the original CQ. Let \(q(x)\) be a CQ. A _valid path_ of \(q(x)\) is a sequence \(x_{0},A_{1},x_{1},\ldots,A_{k},x_{k}\), for \(k\geq 0\), such that:

* \(x_{0}=x\), each \(x_{i}\in\mathsf{Var}(q)\cup\mathsf{Con}(q)\), and each \(A_{i}\) is an atom of \(q\);
* for each \(1\leq i\leq k\), the atom \(A_{i}\) is either of the form \(R(x_{i-1},x_{i})\) (a forward traversal of the atom) or \(R(x_{i},x_{i-1})\) (a backward traversal of the atom);
* \(A_{i}\neq A_{i+1}\), for each \(1\leq i<k\).

Intuitively, a valid path is a way of traversing the CQ \(q\) starting from the target variable \(x\) and sequentially moving through the atoms of \(q\). We can visit the same variable, constant or atom several times. The only restriction is that an atom cannot be visited multiple times _consecutively_ in the sequence; hence, once an atom is traversed, we cannot immediately go back via the same atom. The _length_ of a valid path is its number of atoms \(k\). Note that the valid path of length \(0\) is well-defined and corresponds to the sequence \(x\). A valid path is _unanchored_ if it ends at a variable of \(q\); otherwise, we say that it is _anchored_. For a valid path \(P\), we denote by \(\text{end}(P)\in\mathsf{Var}(q)\cup\mathsf{Con}(q)\) the element at the end of path \(P\). Consider the CQ \(q(x)\gets A_{1}\wedge A_{2}\wedge A_{3}\) in Figure 2, where \(A_{1}=\text{Friend}(x,y)\), \(A_{2}=\text{Friend}(y,z)\) and \(A_{3}=\text{Coworker}(z,x)\). An example of an unanchored valid path is \(x,A_{1},y,A_{2},z,A_{3},x\), which corresponds to a clockwise traversal of length \(3\) starting at \(x\). The anticlockwise traversal of length \(3\) is given by the valid path \(x,A_{3},z,A_{2},y,A_{1},x\). Now we are ready to define our optimal approximations. Let \(q(x)\) be a CQ. The _unraveling_ of \(q(x)\) of depth \(d\geq 1\) is the tree-like CQ \(\tilde{q}_{d}(x)\) constructed as follows:

* The variables of \(\tilde{q}_{d}\) correspond to the unanchored valid paths of \(q\) of length at most \(d\).
Formally, \(\mathsf{Var}(\tilde{q}_{d}):=\{z_{P}\mid P\text{ unanchored valid path of }q\text{ of length }\leq d\}\).

* For valid paths \(P\) and \(P^{\prime}=P,A^{\prime},\text{end}(P^{\prime})\) of \(q\) of lengths \(\leq d\), if \(A^{\prime}=R(\text{end}(P),\text{end}(P^{\prime}))\) then \(\tilde{q}_{d}\) has an atom \(R(o_{P},o_{P^{\prime}})\), where \(o_{W}=z_{W}\) if \(W\) is unanchored, and \(o_{W}=\text{end}(W)\) otherwise. If \(A^{\prime}=R(\text{end}(P^{\prime}),\text{end}(P))\) then \(\tilde{q}_{d}\) has an atom \(R(o_{P^{\prime}},o_{P})\).
* The target variable \(x\) of \(\tilde{q}_{d}\) is \(z_{P_{0}}\), where \(P_{0}\) is the valid path of \(q\) of length \(0\).

Figure 2: A cyclic CQ \(q\) and three possible complete tree-like approximations. Best viewed in color.

The idea is that the unraveling \(\tilde{q}_{d}(x)\) of depth \(d\) of \(q(x)\) is obtained by traversing \(q\) in a tree-like fashion, starting from the target variable \(x\) and moving from one variable to all of its neighbors, through the atoms of \(q\). Each time, we add fresh variables to the unraveling, and hence it is indeed a tree-like CQ. The tree traversal has depth \(d\) and is always restricted to valid paths (no immediate returns via the same atom). The leaves of the unraveling can be anchors or existentially quantified variables. The latter case is unavoidable in general, and hence the need to work with (not necessarily anchored) tree-like CQs. Continuing the example from Figure 2, \(q^{\prime}_{3}\) is the depth-3 unraveling of \(q\). Note how the variables \(z_{1}\), \(y_{1}\), \(x_{1}\) of \(q^{\prime}_{3}\) correspond to the valid paths \((x,A_{3},z)\), \((x,A_{3},z,A_{2},y)\), \((x,A_{3},z,A_{2},y,A_{1},x)\). Similarly, the variables \(y_{2}\), \(z_{2}\), \(x_{2}\) correspond to \((x,A_{1},y)\), \((x,A_{1},y,A_{2},z)\), \((x,A_{1},y,A_{2},z,A_{3},x)\). By definition, the unraveling \(\tilde{q}_{d}\) is a tree-like CQ. By inverting the "unraveling" process, we can obtain a homomorphism from \(\tilde{q}_{d}\) to \(q\), and hence \(\tilde{q}_{d}\) is always a complete tree-like approximation. Also, \(\tilde{q}_{d+1}\subseteq\tilde{q}_{d}\) holds, and hence the family \(\mathcal{U}_{q}=\{\tilde{q}_{d}\}_{d\geq 1}\) provides potentially better approximations as the depth increases (\(q\subseteq\cdots\subseteq\tilde{q}_{3}\subseteq\tilde{q}_{2}\subseteq\tilde{q}_{1}\)). These properties are summarized in the following proposition (see Appendix A for a formal proof).

**Proposition 4.1**.: _Let \(q\) be a CQ and \(d\geq 1\). The unraveling \(\tilde{q}_{d}\) is a complete tree-like approximation of \(q\). Moreover, \(\tilde{q}_{d+1}\subseteq\tilde{q}_{d}\) holds._

Interestingly, we can show that \(\tilde{q}_{d}\) is optimal in the following sense:

**Theorem 4.1**.: _Let \(q\) be a CQ and \(d\geq 1\). Suppose \(q^{\prime}\) is a complete tree-like approximation of depth at most \(d\). Then \(\tilde{q}_{d}\subseteq q^{\prime}\) holds._

In particular, for any complete tree-like approximation \(q^{\prime}\) of \(q\), there exists an unraveling \(\tilde{q}_{d}\) at least as good as \(q^{\prime}\) as an approximation of \(q\), i.e. \(q\subseteq\tilde{q}_{d}\subseteq q^{\prime}\). The proof idea of Theorem 4.1 is to turn any homomorphism \(h\) from \(q^{\prime}\) to \(q\) into a homomorphism from \(q^{\prime}\) to \(\tilde{q}_{d}\) by analyzing the image of \(h\) on \(q\). See Appendix A for complete details.
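The unraveling construction is effectively a guarded tree traversal of the query graph. The following is a minimal sketch of how one might compute the atoms of \(\tilde{q}_{d}\); the representation and naming are our own, not the paper's code:

```python
from collections import namedtuple

Atom = namedtuple("Atom", ["rel", "src", "dst"])

def unravel(atoms, target, constants, depth):
    """Compute the atoms of the depth-d unraveling of a CQ. We walk the query
    graph from the target along valid paths (never re-traversing the atom we
    just used); each unanchored path gets a fresh variable, while paths ending
    in a constant keep that constant. The target variable becomes 'z0'."""
    counter = iter(range(1, 1_000_000))
    out = []

    def visit(term, end, last_atom, d):
        if d == depth:
            return
        for a in atoms:
            if a is last_atom or end not in (a.src, a.dst):
                continue
            nxt = a.dst if a.src == end else a.src
            nxt_term = nxt if nxt in constants else f"z{next(counter)}"
            out.append(Atom(a.rel, term, nxt_term) if a.src == end
                       else Atom(a.rel, nxt_term, term))
            visit(nxt_term, nxt, a, d + 1)

    visit("z0", target, None, 0)
    return out

# Depth-3 unraveling of the triangle query of Figure 2:
triangle = [Atom("Friend", "x", "y"), Atom("Friend", "y", "z"),
            Atom("Coworker", "z", "x")]
q3_atoms = unravel(triangle, target="x", constants=set(), depth=3)
```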
Figure 2 shows the depth-3 unraveling \(\tilde{q}_{3}=q^{\prime}_{3}\) of \(q\), and two additional depth-3 approximations \(q^{\prime}_{1}\) and \(q^{\prime}_{2}\). We see that \(q\subseteq\tilde{q}_{3}\subseteq q^{\prime}_{1}\) and \(q\subseteq\tilde{q}_{3}\subseteq q^{\prime}_{2}\). In conclusion, we have shown that the tree-like queries \(\mathcal{U}_{q}=\{\tilde{q}_{d}\}_{d\geq 1}\) satisfy the desired properties of completeness and optimality. Figure 3 shows an overview of our approach for the triangle query. We next show how to turn the theory into practice.

### 4.3 A concrete implementation: \(\exists\)GNN-QE

One of the key strengths of our proposed approximation scheme is that it is _generic_. That is, it can be implemented on top of any neuro-symbolic query processing method, provided that the method is capable of dealing with (possibly unanchored) tree-like queries. This claim comes with a small caveat. As already mentioned, state-of-the-art methods deal with _anchored_ tree-like queries (see also Ren et al. (2023)), and modifications are needed to support general tree-like queries. The challenge is to encode the unanchored leaf nodes of tree-like queries in the latent space in which the encodings of anchored entities typically reside. Importantly, the encoding needs to simulate existential quantification, in line with the semantics of unanchored leaf nodes. We here describe how this can be done in fuzzy-based neuro-symbolic approaches such as Chen et al. (2022); Zhu et al. (2022), leaving the extension to other approaches for future work.

Figure 3: Overview of our approach. We show unravelings up to depth \(4\). The goal is to approximate the answer \(q(\mathcal{G}^{*})\) of \(q\) over the unobservable complete KG \(\mathcal{G}^{*}\). Best viewed in color.

Our implementation is based on GNN-QE by Zhu et al. (2022), a neuro-symbolic architecture that processes anchored tree-like queries in a bottom-up fashion. Anchored leaf nodes are encoded as one-hot vectors in latent space, and edges between entities are processed using an adaptation of the NBFNet graph neural network (Zhu et al., 2021). In each step, a probabilistic vector over entities is obtained, indicating the likelihood of an edge to those entities. Intuitively, the knowledge graph is completed by these edge predictions. Finally, the probability vectors are combined using operations that simulate logical operations. For example, for the query in Figure 1(a), a one-hot vector encoding the anchor "Tech" is transformed through the Employee edge into a vector indicating the probability that someone (an entity) works in Tech. Following this reasoning, _we encode unanchored leaf nodes as full unitary vectors_ (that is, vectors consisting of all ones). Such a vector indeed gives equal probability to every entity, thereby simulating existential quantification. For example, to answer a query such as the one in Figure 1(b), we encode the \(w\) node by the full unitary vector, while the anchor node "Tech" remains encoded as before. We then process these vectors as in GNN-QE. We denote our extension by \(\exists\)GNN-QE. Since it can deal with general tree-like queries, we can use it alongside our approximation scheme. In the next section we report how well everything works in practice.
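The encoding of leaves reduces to a one-line difference between anchors and variables. A sketch, assuming a fuzzy bottom-up executor over probability vectors (the function and variable names are ours):

```python
import torch

def encode_leaf(anchor, entity_index, num_entities):
    """Initial fuzzy vector for a leaf of a tree-like query: a one-hot vector
    for an anchored leaf, and an all-ones vector for an unanchored leaf, which
    assigns equal probability to every entity and thus simulates existential
    quantification."""
    if anchor is None:                    # unanchored leaf, e.g. the node w
        return torch.ones(num_entities)
    vec = torch.zeros(num_entities)
    vec[entity_index[anchor]] = 1.0       # anchored leaf, e.g. "Tech"
    return vec
```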
What is the effect on the performance of answering anchored tree-like queries when the training set includes unanchored tree-like queries as well? 3. Similarly, what is the effect on the performance of answering general tree-like queries? Looking ahead, as a contribution of independent interest, our results indicate that we can support general tree like queries (**Q2**) with little or no negative impact for both anchored tree-like queries (**Q1**). This gives us ground to suggest that general tree-like queries should become default members in training and testing sets of future neuro-symbolic architectures. Our third question relates to our approximation scheme. 1. [label=(**Q0**)] 2. What is the performance of our approximation scheme in answering cyclic queries? And related, how does this depend on the chosen depth of the unraveling? Looking ahead, our results show that unravellings can be used to answer cyclic queries: the metrics obtained for our suite of cyclic test queries are competitive, albeit slightly weaker, with similar metrics obtained by recent approaches for complex tree-like queries involving unions and negations of atoms. We thus validate the potential of our approach and promote it to become a competitive standard for future algorithms dealing with complex query types. ### Experimental setup We perform our evaluation on the commonly used _knowledge graphs_ FB15k-237 (Toutanova and Chen, 2015), FB15k (Bordes et al., 2013) and NELL995 (Xiong et al., 2017) with their official training/validation/testing split. With regards to methods, as _baseline_ we use GNN-QE, trained following the specifications of Zhu et al. (2022). That is, it is trained using the queries generated by BetaE (Ren and Leskovec, 2020), consisting of 10 tree-like query types, including queries featuring union and negation (1p/2p/3p/2i/3i/2in/3in/inp/pni/pin). For our method: \(\exists\)GNN-QE we additionally provide a new set of training, validation and test queries without anchors and unravelings of cyclic queries alongside with their corresponding answers for FB15k-237, FB15k and NELL995. These queries adhere to the same query types as before, except they are not anchored. In order to ensure a fair comparison, we trained \(\exists\)GNN-QE keeping the training parameters identical to those used for GNN-QE, but including queries without anchors. Details and statistics of both the new query set and the training can be found in the Appendix B. Metrics of GNN-QE are taken from its original paper by Zhu et al. (2022). Specifically, we report the _mean reciprocal rank_ (**mrr**) of the predicted answer set (compared with the ground truth), and the _Spearman correlation rank_ (**spearman**r) between the total number of answers predicted and the number of answers in the ground truth. We report the remaining metrics used in Zhu et al. (2022) in Appendix B. Results are measured only against GNN-QE, see their original paper for comparison against other methods. ### Results Anchored tree-like queries.In our first batch of experiments we investigate the _effect of training with unanchored queries_ on the performance on the original _anchored_ BetaE queries (**Q1**). We compare the performance of GNN-QE and \(\exists\)GNN-QE on anchored queries on our datasets. Importantly, as mentioned already, GNN-QE is trained using the original BetaE queries, whereas \(\exists\)GNN-QE is trained using additional unanchored BetaE queries. In Table 1 we report the results. 
The experiments show that training with unanchored queries (\(\exists\)GNN-QE) results in a slight decrease in the mean reciprocal rank metric, and a slight increase in the spearman's rank correlation. We note that we failed to replicate the original numbers obtained in Zhu et al. (2022), so some of these differences may also be due to differences in training hardware. All in all, we see we are either paying a small price, or none at all, in order to enable a much more expressive class of queries. Note that the set of queries with best comparative performance is in queries with _negation_: this is according to our expectations, as negating a small set of entities results in dealing with large number of entities, just as in unanchored entry points. Tree-like queries.Our second batch of results relates to enabling treatment of tree-like queries without anchors (**Q2**). While less interesting, we can also measure the effect of training with queries without anchors. In order to do this, we maintain weights computed by GNN-QE, but enable the processing of relation-projection that is non-anchored. Table 2 shows the results of both GNN-QE and \(\exists\)GNN-QE over the original test set of our benchmark databases. Results, as expected, suggest that training for this type of queries has a drastic increase in performance in all metrics. Cyclic Queries.Next we move to cyclic queries, computed through their unravelings (**Q3**). Because our method relies on approximating the ground truth, we do not train for these types of queries, but rather try them directly in the trained models. To this extent, we construct a new test set for cyclic queries with 2 query-types: triangles, and squares (see Appendix B for more details). \begin{table} \begin{tabular}{|l l l l l l l l l l l l l l l l l l l l l l l|} \hline \multirow{2}{*}{**Metric**} & \multirow{2}{*}{**Model**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**2p**} & \multirow{2}{*}{**3p**} & \multirow{2}{*}{**21**} & \multirow{2}{*}{**3l**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**21n**} & \multirow{2}{*}{**3l**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**21n**} & \multirow{2}{*}{**3l**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**21n**} & \multirow{2}{*}{**3l**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**21n**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**21n**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**21n**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**21n**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} \\ \hline \multicolumn{11}{|c}{**F115-237**} \\ \hline \multirow{2}{*}{**spearman**r & GNN-QE & 0.948 & 0.551 & 0.895 & 0.992 & 0.970 & **0.937** & 0.911 & 0.981 & 0.968 & **0.864** & 0.880 & 0.987 & - & - \\ & GNN-QE & **0.977** & **0.966** & **0.942** & 0.992 & **0.975** & 0.988 & **0.943** & **0.990** & **0.981** & 0.853 & **0.933** & **0.989** & **0.979** & **0.968** \\ \hline \multirow{2}{*}{**mrr**} & GNN-QE & **0.428** & **0.147** & **0.118** & **0.833** & **0.541** & **0.189** & **0.311** & **0.100** & **0.168** & **0.093** & **0.072** & **0.078** & **0.162** & **0.134** \\ & GNN-QE & 0.321 & 0.107 & 0.096 & 0.339 & 0.501 & 0.174 & 0.268 & 0.063 & 0.139 & 0.080 & 0.053 & 0.048 & 0.119 & 0.103 \\ \hline 
\multicolumn{11}{|c}{**F115-237**} \\ \hline \multirow{2}{*}{**mrr**} & GNN-QE & 0.955 & **0.978** & **0.940** & **0.984** & **0.984** & 0.972 & **0.916** & **0.936** & 0.980 & 0.907 & **0.965** & **0.944** & **0.978** & - & - \\ & GNN-QE & **0.951** & 0.829 & 0.714 & 0.971 & **0.947** & 0.650 & 0.808 & **0.985** & **0.974** & 0.8321 & 0.967 & 0.995 & 0.939 \\ \hline \multicolumn{11}{|c}{**F115-237**} \\ \hline \multirow{2}{*}{**mrr**} & GNN-QE & **0.885** & **0.693** & 0.587 & 0.977 & **0.835** & **0.704** & 0.699 & **0.447** & 0.417 & **0.420** & 0.301 & **0.343** & 0.741 & **0.610** \\ & GNN-QE & 0.855 & 0.688 & 0.587 & **0.801** & 0.833 & 0.620 & **0.720** & 0.430 & **0.418** & 0.403 & **0.302** & 0.340 & **0.747** & 0.600 \\ \hline \end{tabular} \end{table} Table 1: Mean reciprocal rank and spearman rank correlation on test BetaE queries. Results of GNN-QE are taken from Zhu et al. (2022). Other metrics can be found in Appendix C \begin{table} \begin{tabular}{|l l l l l l l l l l l l l l l l l l l l l l l l l l|} \hline \multirow{2}{*}{**Metric**} & \multirow{2}{*}{**Model**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**2p**} & \multirow{2}{*}{**3p**} & \multirow{2}{*}{**21**} & \multirow{2}{*}{**3l**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**21n**} & \multirow{2}{*}{**3l**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**1p**} & \multirow{2}{*}{**21n**} Since our unravelings are parameterized by depth, before trying our all query shapes we tuned this parameter with an exploratory analysis for triangles and squares on FB15k-237. Here we are interested in mixing out the spearman correlation rank, because the choice of depth incides directly on the number of answers returned by each query. The result of this analysis for the triangle are shown in Figures 5 and 5. Further results can be found in Appendix C. As we see, deeper unravelings appear to improve, but there seems to be a point after which this effect starts to be cancelled out by natural imprecisions of the trained model. This is evident when we analyze both the mean reciprocal rank (MRR) and the Spearman correlation. While the MRR tends to show slight improvements after a depth of 3, the Spearman correlation starts to diminish or worsen beyond that point. Hence, the remaining results (see Table 3) are reported for unravelings at depths 3 and 4 for triangles on all datasets. Notice that the metrics reported here are comparable to what state-of-the art architectures such as GNN-QE report for complex tree-like queries (see e.g. results for query type combining paths and intersection). We believe that our approximation scheme thus proves as a valid approach for allowing arbitrary CQs on neuro-symbolic architectures. Remaining results can be found on Appendix C. ## 6 Future work In this work, we present an approach to approximate the answers to arbitrary CQs over incomplete knowledge graphs by applying the mature toolbox developed for answering tree-like CQs. As for future work, we plan on expanding other neuro-symbolic architecture with the ability to deal with unanchored queries, so that we can also implement our approach in these architectures. While this approximation is cost-efficient, it can affect the quality of the retrieved answers. In fact, overapproximations may return answers that are not necessarily sound, even when the data is complete. 
One of our main goals for future work is to develop neuro-symbolic methods for CQs on knowledge graphs that return exact answers when evaluated on complete data. This process can be computationally demanding, but over the last decade, _worst-case optimal_ algorithms have been developed for retrieving such answers in a symbolic manner (Ngo et al., 2013). We plan to investigate how such algorithms can be integrated into the neuro-symbolic framework studied in this paper to provide high-quality answers in a context where data is considered incomplete. Another important issue we aim to address is determining the appropriate semantics for evaluating CQs over incomplete knowledge graphs. Neural approaches for completing knowledge graphs of ten produce a probability or score that indicates the likelihood of a link's existence between two given entities. This places us in the realm of _probabilistic_ data. The data management community has long been studying how queries over probabilistic data should be interpreted (Suciu et al., 2011). We believe it is important to understand how this semantics aligns with the one used in the neuro-symbolic evaluation of tree-like CQs and how the techniques employed to approximate the probabilistic evaluation of CQs can be used in our setting.
2306.01306
Federated Learning Games for Reconfigurable Intelligent Surfaces via Causal Representations
In this paper, we investigate the problem of robust Reconfigurable Intelligent Surface (RIS) phase-shifts configuration over heterogeneous communication environments. The problem is formulated as a distributed learning problem over different environments in a Federated Learning (FL) setting. Equivalently, this corresponds to a game played between multiple RISs, as learning agents, in heterogeneous environments. Using Invariant Risk Minimization (IRM) and its FL equivalent, dubbed FL Games, we solve the RIS configuration problem by learning invariant causal representations across multiple environments and then predicting the phases. The solution corresponds to playing according to Best Response Dynamics (BRD) which yields the Nash Equilibrium of the FL game. The representation learner and the phase predictor are modeled by two neural networks, and their performance is validated via simulations against other benchmarks from the literature. Our results show that causality-based learning yields a predictor that is 15% more accurate in unseen Out-of-Distribution (OoD) environments.
Charbel Bou Chaaya, Sumudu Samarakoon, Mehdi Bennis
2023-06-02T07:12:04Z
http://arxiv.org/abs/2306.01306v1
# Federated Learning Games for Reconfigurable Intelligent Surfaces via Causal Representations ###### Abstract In this paper, we investigate the problem of robust Reconfigurable Intelligent Surface (RIS) phase-shifts configuration over heterogeneous communication environments. The problem is formulated as a distributed learning problem over different environments in a Federated Learning (FL) setting. Equivalently, this corresponds to a game played between multiple RISs, as learning agents, in heterogeneous environments. Using Invariant Risk Minimization (IRM) and its FL equivalent, dubbed FL Games, we solve the RIS configuration problem by learning invariant causal representations across multiple environments and then predicting the phases. The solution corresponds to playing according to Best Response Dynamics (BRD) which yields the Nash Equilibrium of the FL game. The representation learner and the phase predictor are modeled by two neural networks, and their performance is validated via simulations against other benchmarks from the literature. Our results show that causality-based learning yields a predictor that is 15% more accurate in unseen Out-of-Distribution (OoD) environments. Reconfigurable Intelligent Surface (RIS), Federated Learning, Causal Inference, Invariant Learning. ## I Introduction The advent of Reconfigurable Intelligent Surfaces (RISs) will substantially boost the performance of wireless communication systems. These surfaces are manufactured by layering stacks of sheets made out of engineered materials, called meta-materials, built on a planar structure. The reflection coefficients of the meta-material elements, called meta-atoms, vary depending on their physical states. Thus, the direction of incident electromagnetic waves on RISs can be manipulated with the aid of simple integrated circuit controllers that modify meta-atoms' states. In this view, the RIS technology provides a partial control over the wireless propagation environment rendering improved spectral efficiency with a minimal power footprint [1]. RIS is considered a fundamental enabler to achieve the 6G vision of smart radio environments [2]. One of the major challenges in the RIS technology is the accurate tuning of RIS phases. To this extent, a vast majority of the existing literature on RIS-assisted communication relies on the use of Channel State Information (CSI) to train Machine Learning (ML) models that predict the optimal RIS configuration [3, 4, 5]. These methods seek either a centralized-controller driven approach, or a distributed multi-agent optimization technique, such as Federated Learning (FL). Other works such as [6, 7] exploit the users' locations to employ a location-based passive RIS beamforming. Either way, their main focus is to draw on the statistical correlations of the observed data, while overlooking the impacts of heterogeneous system designs (e.g., different RISs, propagation environments, users distribution, etc.). Moreover, these approaches produce a high inference accuracy within a fixed environment, from which the training and testing data are drawn. They subsequently fail to have a good Out-of-Distribution (OoD) generalization in unseen environments. Although FL provides a learning framework where multiple agents train a collaborative model while preserving privacy, its state-of-the-art approach, such as Federated Averaging (FedAVG) [8], is known to perform poorly when the local data is non-identical across participating agents. 
This is due to the fact that FedAVG (and its variants) solve the distributed learning problem via Empirical Risk Minimization (ERM), that minimizes the empirical distribution of the local losses assuming that the data is identically distributed. To mitigate this issue, the authors in [9] leveraged Distributionally Robust Optimization (DRO) [10] and proposed a federated DRO for RIS configurations. Therein, the distributed learning problem is cast as a minimax problem, where the model's parameters are updated to minimize the maximal weighted combination of local losses. This ensures a good performance for the aggregated model over the worst-case combination of local distributions. On the other hand, [11] used Invariant Risk Minimization (IRM) [12] to formulate the problem of learning optimal RIS phase-shifts. The aim of IRM is to capture causal dependencies in the data, that are invariant across different environments. In [11], the authors empirically showed that using relative distances and angles between the RIS and the transmitter/receiver as causal representations for the CSI, improves the robustness of the RIS phase predictor. However, these representations were not learned by the configuration predictor, but were predefined and fixed. Also, this solution assumes that multiple environments are known to the predictor beforehand, which is an unfeasible assumption. The main contribution of this paper is a novel distributed IRM-based solution to the RIS configuration problem. We cast the problem of RIS phase control as a federated learning problem with multiple RISs controllers defined over heterogeneous environments, using a game-theoretic reformulation of IRM, referred to as Federated Learning Games (FL Games) [13, 14]. The solution of this problem is proven to be the Nash Equilibrium of a strategic game where each RIS updates its configuration predictor by minimizing its local loss function. This game is indexed by a representation learner that is shared among the RISs to extract a causal representation from the CSI data. The representation learner and predictors are trained in a distributed and supervised learning manner. The numerical validations yield that the proposed design improves the accuracy of the predictor tested in OoD settings by 15% compared to state-of-the-art RIS designs. The remainder of this paper is organized as follows. In Section II, we describe the system model and conventional approaches to the RIS configuration problem using FedAVG. The FL Games solution that involves extracting causal representations from the data is discussed in Section III. Section IV presents the simulation results that compare the proposed algorithm with benchmarks. Concluding remarks are drawn in Section V. ## II System Model and Problem Formulation We consider a set of environments \(\mathcal{R}\) where each environment \(r\in\mathcal{R}\) consists of a RIS-assisted downlink communication between a transmitter (Tx) and a receiver (Rx) as shown in Fig. 1. Both the Tx and the Rx are equipped with a single antenna each, and we assume that the direct link between them is blocked in which, the channel of direct link is dominated by the reflected channel. The RIS in environment \(r\) is composed by \(N^{r}=N_{\mathrm{x}}^{r}N_{\mathrm{y}}^{r}\) reflective elements where \(N_{\mathrm{x}}^{r}\) and \(N_{\mathrm{y}}^{r}\) are the number of horizontal and vertical reflective elements, respectively. 
Additionally, the inter-element distances over horizontal and vertical axes are characterized by \(d_{\mathrm{x}}^{r}\) and \(d_{\mathrm{y}}^{r}\). Each RIS element applies a phase shift on its incident signal and the reflected signals are aggregated at the Rx. Note that the location of the Tx is fixed while the location of Rx is arbitrary, which is sampled by a predefined distribution. The choices of the Rx distribution along with the parameters \((N_{\mathrm{x}}^{r},N_{\mathrm{y}}^{r},d_{\mathrm{x}}^{r},d_{\mathrm{y}}^{r})\) collectively define the environment \(r\). We assume that these environments are completely separate, i.e. each Rx receives only the signal reflected by the RIS in its corresponding environment. ### _Channel Model_ For the notation simplicity, we have omitted the notion of environment during the discussion within this subsection. Let \(\mathbf{g}\in\mathbb{C}^{N}\) be the channel between the RIS and the Rx, which is dominated by its line-of-sight (LoS) component. Hence, by denoting \(\varphi_{r}\) and \(\vartheta_{r}\) as the azimuth and elevation angles-of-departure (AoD) from the RIS respectively, the channel is modeled as \[\mathbf{g}=\sqrt{\alpha_{r}}\:\mathbf{a}_{N}\left(\varphi_{r},\vartheta_{r} \right), \tag{1}\] where \(\alpha_{r}\) represents the path-loss. Additionally, we define the steering function: \[\mathbf{a}_{N}\left(\varphi,\vartheta\right)=\left[e^{\frac{2\pi j}{\lambda} \Delta_{1}(\varphi,\vartheta)},\cdots,e^{\frac{2\pi j}{\lambda}\Delta_{N}( \varphi,\vartheta)}\right]^{\mathsf{T}}, \tag{2}\] and a set of operators for \(n=1,\ldots,N\): \[\Delta_{n}\left(\varphi,\vartheta\right)=i_{N}(n)d_{\mathrm{x}} \cos(\vartheta)\sin(\varphi)+j_{N}(n)d_{\mathrm{y}}\sin(\vartheta), \tag{3}\] \[i_{N}(n)=(n-1)\bmod N_{\mathrm{x}},\qquad j_{N}(n)=\left\lfloor \frac{n-1}{N_{\mathrm{x}}}\right\rfloor, \tag{4}\] where \(\lambda\) is the wavelength, \(\bmod\) and \(\lfloor\cdot\rfloor\) denote the modulus and floor operators. On the other hand, the channel \(\mathbf{h}\in\mathbb{C}^{N}\) between the Tx and the RIS will have both LoS and non line-of-sight (NLoS) components. Therefore, we model \(\mathbf{h}\) using Rician fading with spatial correlation, since the RIS elements are closely distanced. Accordingly, we have: \[\mathbf{h}=\sqrt{\alpha_{t}}\,\left(\sqrt{\frac{\kappa}{1+\kappa}}\,\overline {\mathbf{h}}+\sqrt{\frac{1}{1+\kappa}}\,\widetilde{\mathbf{h}}\right), \tag{5}\] where \(\alpha_{t}\) and \(\kappa\) are the path-loss and the Rician coefficient respectively, \(\overline{\mathbf{h}}\) is the LoS factor, and \(\widetilde{\mathbf{h}}\) represents the small scale fading process in the NLoS component. Further, for the LoS link, the LoS factor is: \[\overline{\mathbf{h}}=\mathbf{a}_{N}\left(\varphi_{t},\vartheta_{t}\right), \tag{6}\] where \(\left(\varphi_{t},\vartheta_{t}\right)\) are the angles-of-arrival (AoA) to the RIS. We model the NLoS link as \(\widetilde{\mathbf{h}}\sim\mathcal{CN}\left(\mathbf{0}_{N},\mathbf{R}\right)\), where \(\mathbf{R}\) is a covariance matrix that captures the spatial correlation among the channels of the RIS elements. 
In the case of isotropic scattering in front of the RIS, a closed-form expression for \(\mathbf{R}\) is [15, Proposition 1]: \[\left[\mathbf{R}\right]_{m,n}=\mathrm{sinc}\left(\frac{2\left|\mathbf{u}_{m}- \mathbf{u}_{n}\right|}{\lambda}\right)\qquad m,n=1,\ldots,N, \tag{7}\] where \(\mathbf{u}_{n}=\left[i_{N}(n)\,d_{\mathrm{x}},j_{N}(n)\,d_{\mathrm{y}}\right]^ {\mathsf{T}}\) represents the locations of the \(n^{\text{th}}\) element with \(i_{N}\) and \(j_{N}\) being the horizontal and vertical indices given in (4), and \(\mathrm{sinc}(\cdot)\) is the normalized sampling function. ### _Downlink Rate Maximization_ At every transmission slot, in each environment \(r\in\mathcal{R}\), the RIS selects its phases in order to enhance the downlink rate at the Rx. Let \(\boldsymbol{\theta}^{r}=\left[\theta_{1}^{r},\ldots,\theta_{N}^{r}\right]^{ \mathsf{T}}\) denote the phase decisions at the RIS. Thus, the received signal at the Rx is: \[y_{r}=\left(\mathbf{h}_{r}^{\mathsf{H}}\boldsymbol{\Theta}_{r}\,\mathbf{g}_{r} \right)s_{r}+z_{r}, \tag{8}\] where \(\boldsymbol{\Theta}_{r}=\mathrm{diag}\left(e^{j\theta_{1}^{r}},\ldots,e^{j \theta_{N}^{r}}\right)\) is the RIS reflection matrix, \(s_{r}\) is the transmitted signal satisfying the power budget constraint \(\mathbb{E}\left[|s_{r}|^{2}\right]=p\), and \(z_{r}\sim\mathcal{CN}\left(0,\sigma^{2}\right)\) is the additive Fig. 1: System model illustrating different RIS-assisted communication scenarios. Each RIS is conceived differently from other RISs, and serves differently distributed receivers. noise with power \(\sigma^{2}\). In this view, the objective of downlink rate maximization can be cast as follows: \[\underset{\boldsymbol{\theta}^{r}\in\mathcal{C}}{\text{maximize}}\qquad\log_{2} \left(1+\frac{|\mathbf{h}_{r}^{\text{th}}\mathbf{\Theta}_{r}\,\mathbf{g}_{r}|^{ 2}p}{\sigma^{2}}\right), \tag{9}\] where \(\mathcal{C}\) is the set of feasible RIS configurations. In order to solve (9), a perfect knowledge of both channels \(\mathbf{h}\) and \(\mathbf{g}\) is assumed. Even under perfect CSI, determining the optimal set of phase shifts \(\boldsymbol{\theta}^{r\star}\) requires a heuristic search due to the notion of configuration classes \(\mathcal{C}\). Such solutions cannot be practically adopted since they are not scalable with the number of RIS elements. As a remedy, we resort to ML in order to devise a data-driven solution. In this context, consider that the RIS in environment \(r\in\mathcal{R}\) (later referred to as agent \(r\)) has a dataset \(\mathcal{D}_{r}=\left\{\left(\boldsymbol{x}_{j}^{r},c_{j}^{r}\right)\,\big{|} \,j=1,\ldots,D_{r}\right\}\) containing \(D_{r}\) samples of observed CSI \(\boldsymbol{x}_{j}^{r}=\left(\mathbf{h}_{j}^{r},\mathbf{g}_{j}^{r}\right)\) that are labeled by \(c_{j}^{r}\) corresponding to the optimal RIS phase shifts \(\boldsymbol{\theta}^{r\star}\) solving (9). We then seek to construct a mapping function \(f_{\boldsymbol{w}}(\cdot)\) parameterized by \(\boldsymbol{w}\), that solves: \[\underset{\boldsymbol{w}}{\text{minimize}}\qquad\frac{1}{|\mathcal{R}|}\sum _{r=1}^{|\mathcal{R}|}\frac{1}{D_{r}}\sum_{j=1}^{D_{r}}\ell\left(c_{j}^{r},f_{ \boldsymbol{w}}(\boldsymbol{x}_{j}^{r})\right), \tag{10}\] where \(\ell(\cdot\,,\cdot)\) is the loss function in terms of phase prediction. In order to optimize the model parameter \(\boldsymbol{w}\) in the ERM formulation in (10), all agents are required to share their datatsets \(\mathcal{D}_{r}\) with a central server. 
Due to privacy concerns and communication constraints, (10) is recast as a FL problem by minimizing a global loss function as follows: \[\underset{\boldsymbol{w}}{\text{minimize}}\qquad\frac{1}{|\mathcal{R}|}\sum _{r=1}^{|\mathcal{R}|}\ell_{r}\left(f_{\boldsymbol{w}}\right), \tag{11}\] where \(\ell_{r}\left(f_{\boldsymbol{w}}\right)=\frac{1}{D_{r}}\sum_{j=1}^{D_{r}} \ell\left(c_{j}^{r},f_{\boldsymbol{w}}(\boldsymbol{x}_{j}^{r})\right)\) is the local loss function of agent \(r\). One of the most popular approaches in FL to solve (11) is the FedAVG algorithm [8]. However, the formulation in (11) assumes that all agents in \(\mathcal{R}\) have an equal impact on training the global model \(\boldsymbol{w}\). This falls under the strong assumption of uniform and homogeneous data distribution across agents. Under data heterogeneity, drifts in the agents' local updates with respect to the aggregated model might occur, since the local optima do not necessarily coincide with the global optima. Thus, the obtained model suffers from the instability in convergence, and fails to generalize to OoD samples. To obviate this issue, we resort to Invariant Risk Minimization (IRM) [12] and its FL variant dubbed FL Games[14]. ## III FL Games for Phase Optimization The key deficiency of using FedAVG on limited datasets distributed across agents is that its trained predictor heavily relies on statistical correlations among observations. These correlations are specious since they depend on the environment from which they were sampled. Thus, overfitting to these correlations prevents FedAVG from training a predictor that is robust in unseen environments. To overcome this issue, we turn our attention to algorithms that learn causal representations that are invariant across different agents, which improves the model's OoD generalization across many environments. In this direction, IRM [12] jointly trains an _extraction function_\(f_{\boldsymbol{\phi}}\) and a _predictor_\(f_{\boldsymbol{w}}\) across training environments \(\mathcal{R}\) in such a way that \(f_{\boldsymbol{w}}\circ f_{\boldsymbol{\phi}}\) generalizes well in unseen environments \(\mathcal{R}_{\text{all}}\supset\mathcal{R}\). The main idea is to build a parameterized feature extractor \(f_{\boldsymbol{\phi}}\) that reveals the causal representations in the samples, so as to perform causal inference by optimizing \(f_{\boldsymbol{w}}\). Thus, given an extraction function, the predictor \(f_{\boldsymbol{w}}\) is the one that is simultaneously optimal across all training environments \(\mathcal{R}\). Formally, this boils down to solving the following problem: \[\underset{\boldsymbol{\phi},\,\boldsymbol{w}}{\text{minimize}} \qquad\frac{1}{|\mathcal{R}|}\sum_{r=1}^{|\mathcal{R}|}\ell_{r} \left(f_{\boldsymbol{w}}\circ f_{\boldsymbol{\phi}}\right)\] (12) subject to \[\boldsymbol{w}\in\underset{\boldsymbol{w}^{\prime}}{\text{arg}\min} \,\,\ell_{r}\left(f_{\boldsymbol{w}^{\prime}}\circ f_{\boldsymbol{\phi}} \right)\,\,\,\,\,\,\,\,\,\forall\,r\in\mathcal{R}.\] Note that IRM is formulated as a single agent optimization problem, and assumes that training environments are known to the agent beforehand. Extending IRM to the distributed setting can be done using game theory (IRM Games[13]). Therein, different agents, each equipped with its own predictor \(\boldsymbol{w}_{r}\), cooperate to train an ensemble model: \(\boldsymbol{w}^{\text{av}}=\sum_{r\in\mathcal{R}}\frac{D_{r}}{\sum_{n\in \mathcal{R}}D_{n}}\boldsymbol{w}_{r}\). 
In contrast to (12) that designs a unique predictor across training environments, the aggregate model \(\boldsymbol{w}^{\text{av}}\) satisfies: \[\underset{\boldsymbol{\phi},\,\boldsymbol{w}^{\text{av}}}{\text{ minimize}}\qquad\frac{1}{|\mathcal{R}|}\sum_{r=1}^{|\mathcal{R}|}\ell_{r}\left(f_{ \boldsymbol{w}^{\text{av}}}\circ f_{\boldsymbol{\phi}}\right)\] (13) subject to \[\boldsymbol{w}_{r}\in\underset{\boldsymbol{w}^{\prime}_{r}}{ \text{arg}\min}\,\,\ell_{r}\bigg{(}\frac{f_{\boldsymbol{w}^{\prime}_{r}}+ \sum\limits_{n\in\mathcal{R}\setminus\{r\}}f_{\boldsymbol{w}_{n}}}{|\mathcal{R }|}\circ f_{\boldsymbol{\phi}}\bigg{)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\ a Gradient Descent (GD) update that takes larger steps in the direction of the global optimum. In contrast to [14], which considered highly correlated datasets, the data in our setting shows negligible oscillations in parameter updates under BRD, in which, we do not adopt buffers in training. These buffers can be used by each agent to store the historically played actions of its opponents. Then, an agent responds to a uniform distribution over these past actions. This smoothens the oscillations of BRD caused by the local correlations in the datasets. Finally, the detailed steps of the V-FL Games algorithm that trains both a representation learner and a predictor are presented in Algorithm 1. ## IV Simulation Results ### _Simulation Settings_ For our experiments, we consider three different environments \(\mathcal{R}\). In all three environments, the Tx is located at the coordinate \((0,35,3)\) and the RIS comprising \(N=10\times 10\) reflective elements is at \((10,20,1)\), with the coordinates given in meters within a Cartesian system. The Rx is located in an annular region around the RIS with inner and outer radii \(R_{\text{min}}=1\,\text{m}\) and \(R_{\text{max}}=5\,\text{m}\). The Rician factor \(\kappa\) is set to \(5\), and the pathloss coefficients are calculated by \(\alpha_{t}=\frac{Nd_{\text{x}}d_{\text{y}}}{4\pi d_{\text{z}}^{2}}\) and \(\alpha_{r}=\frac{Nd_{\text{x}}d_{\text{y}}}{4\pi d_{\text{z}}^{2}}\). To simplify the exhaustive search for optimal phases, we assume \(\mathcal{C}\) contains two configurations classes, namely \(\mathbf{\theta}_{1}=[0,0,\dots,0]^{\mathsf{T}}\) and \(\mathbf{\theta}_{2}=[0,\pi,0,\pi,\dots,0,\pi]^{\mathsf{T}}\). In environment 1, the RIS elements are distanced by \(d_{\text{x}}=d_{\text{y}}=0.5\lambda\). Therein, the Rx is uniformly placed around the RIS with a tendency to be deployed closer to the RIS following the distributions illustrated under Environment 1 in Fig. 2. 
In environment 2, the RIS is characterized by \(d_{\text{x}}=d_{\text{y}}=0.25\lambda\). In contrast to environment 1, here, the Rx is higher likely to be placed far from the RIS (see the distributions under Environment 2 in Fig. 2). The RIS in environment 3 has \(d_{\text{x}}=d_{\text{y}}=0.4\lambda\). Therein, the Rx's distance to the RIS is uniform but the angle distribution is concentrated in one direction as illustrated in Fig. 2 under Environment 3. Note that the data from environments 1 and 2 are used to compose the training data while environment 3 is used only for testing. In this work, we leverage V-FL Games that exhibits a superior performance than F-FL Games\((\)where \(f_{\mathbf{\phi}}=\text{I})\). On the other hand, the authors in [11] showed by empirical simulations, due to the fact that RIS channels have strong LoS components, that one can select fixed causal representations based on the channels in (1) and (5). The causal representations in this case are the AoA and AoD at the RIS, \((\varphi_{t},\vartheta_{t})\) and \((\varphi_{r},\vartheta_{r})\), and the relative distances RIS-Tx \(d_{t}\) and RIS-Rx \(d_{r}\). This benchmark variant of FL Games\(\)where \(f_{\mathbf{\phi}}\) is fixed to \((\varphi_{t},\vartheta_{t},\varphi_{r},\vartheta_{r},d_{t},d_{r})\) is called F-FL Games in this paper, and is not to be confused with F-FL Games in [14], where \(f_{\mathbf{\phi}}=\text{I}\). For training in FedAVG and V-FL Games, we collect \(D_{r}=1500\) CSI samples \(\mathbf{x}_{j}^{r}=\left(\mathbf{h}_{j}^{r},\mathbf{g}_{j}^{r}\right)\) from environments 1 and 2, that are decoupled over real and imaginary parts. This data is scaled in such a way that the normalized mean Fig. 3: Structure of the neural networks used for the representation learner and the predictor: circles represent activation functions and trapezoids correspond to the weights and biases. The activation function type and the output dimension are shown at the bottom and top of each layer. Fig. 2: Distributions of the receiver’s position (distance \(r\) and angle \(\theta\) from the RIS) in different environments. is zero and the normalized variance is one. We also record the parameters \((\varphi_{t},\vartheta_{t},\varphi_{r},\vartheta_{r},d_{t},d_{r})\) to train F-FL Games, which are scaled using a minmax scaler that normalizes the data to the interval \([-1,1]\) by dividing by the absolute maximum. Unless stated otherwise, we use \(1000\) samples collected from environment 3 for testing. The design of the neural networks of the extractor and the predictor of V-FL Games is based on the multi-layer perceptrons architecture, and is shown in Fig. 3. Note that FedAVG and F-FL Games only use the predictor part. The considered loss function is the cross-entropy. The mini-batch size used for training the predictor is \(m=32\), and the learning rates are fixed at \(\eta_{\mathbf{\phi}}=5\times 10^{-4}\) and \(\eta_{\mathbf{w}}=2\times 10^{-3}\). In the figures, lines correspond to the simulation results that are averaged over five runs while the their respective standard deviations are shown as shaded areas. ### _Discussion_ We first plot the evolution of the testing accuracy of all algorithms in Fig. 4(a). Within the same environment (Environments 1 and 2), FedAVG and V-FL Games perform similarly with an accuracy of \(90\%\), while F-FL Games gives an accuracy of \(87\%\). When testing in a different environment (Environment 3), FedAVG's accuracy drops to \(68\%\), highlighting its lack of robustness. 
In this case, F-FL Games and V-FL Games yield slightly lower accuracies of \(85\%\) and \(80\%\), implying they do not overfit to the statistical correlations in the channels. It is also interesting to observe the convergence rate of the different methods. We notice that V-FL Games converges faster than F-FL Games, requiring around \(500\) communication epochs in both cases. Fig. 4(b) demonstrates the impact of mixing data from different environments on model training. Here, we keep the total number samples to be \(D_{1}+D_{2}=3000\) and vary the amount of data from the two environments, in which, \(\alpha_{e}=\frac{D_{1}}{D_{2}}\) represents the fraction of samples of environment 1 compared to environment 2. We first notice that all algorithms reach their peak performance with balanced datasets, i.e. \(\alpha_{e}=0.5\). When the datatsets are biased towards one of the environments, FedAVG loses \(15\%\) of its performance. On the other hand, F-FL Games and V-FL Games maintain a steady accuracy with a slight degradation of about \(3\)-\(4\%\). Inspired by FedAVG's flexibility in allowing more local computations at each agent before sharing their models, we modify our proposed algorithm to study the effect of the number of local iterations on the model accuracy. Insofar, each agent performs a few SGD updates locally prior to model sharing. Fig. 4(c) shows the impact of the number of local steps on the testing accuracy. It can be noted that the accuracy drops when each RIS performs more local computations when using FL Games due to the fact that the models are overfitting to the local datasets. However, the performance of Fig. 4: Performance comparison of different algorithms in OoD testing datasets. F-FL Games is more consistent with the local steps, losing only \(2\%\) of accuracy with seven local iterations, while V-FL Games loses around \(8\%\) of accuracy with \(15\) local updates. The reason for this behavior is that by letting each agent run over more samples from its local dataset, the testing accuracy at equilibrium decreases, since the played strategies do not account for the opponents' actions. On the other hand, the accuracy of FedAVG slightly increases with more local iterations, but still performs poorly. The effect of the dataset size \(D_{r}\) per agent on the achievable spectral efficiency is illustrated in Fig. 4(d). Note that all agents use an equal amount of samples, i.e., \(\alpha_{e}=0.5\) is held. For the comparison, we additionally present the optimal rate given by the best configurations (indicated by Best) and the rates given by random phase decision making (indicated by Random). All algorithms reach their best performance with the highest number of samples \(D_{r}=1250\) with about \(21\%\), \(14\%\) and \(32\%\) losses compared to Best rates in V-FL Games, F-FL Games, and FedAVG, respectively. The advantage of learning invariant causal representations with minimal amount of data is highlighted when \(D_{r}\leq 750\). FedAVG looses its performance rapidly. Even with \(100\) samples per environment, FL Games algorithms lose only \(6\%\) of their performance, while FedAVG incurs more than \(15\%\) of its accuracy. FedAVG requires around \(750\) samples per agent to reach its best performance, that is more than \(10\%\) less than that given by V-FL Games, underscoring its weakness in OoD settings. Additionally, with \(1250\) samples per environment, the error variance of V-FL Games and F-FL Games is \(73\%\) and \(98\%\) less than that of FedAVG. 
Finally, we vary the number of agents per environment as shown in Fig. 4(e). For this experiment, \(1500\) samples from each environment are shared among all agents, so more agents having less data are involved. The achieved testing accuracies of the FL Games algorithms are still superior than the FedAVG benchmark. Surprisingly, doubling the number of collaborating RISs from \(8\) to \(16\) induces an \(82\%\) increase in the number of training epochs for convergence in F-FL Games. The same does not hold for V-FL Games that suffers from an \(8\%\) increase, while losing \(3\)-\(4\%\) in accuracy compared to F-FL Games. This implies that, with more agents owning fewer data, the training of a causal extractor and a predictor converges faster than training of only a predictor. ## V Conclusions This paper proposes a distributed phase configuration control for RIS-assisted communication systems. The rate maximization problem is formulated using federated IRM as opposed to a heterogeneity-unaware ERM approach. Our novel robust RIS phase-shifts controller leverages the underlying causal representations of the data that are invariant over different environments. A neural network based feature extractor first uncovers the causal structure of the CSI data, then feeds it to another neural network based configuration predictor. Both neural networks are trained in a distributed supervised learning fashion, and the results are compared with the environment-unaware FedAVG and an IRM-based predictor. The numerical results show that a phase predictor trained with the geometric properties of the environments demonstrated a better performance than a representation learner followed by a predictor. Moreover, the extractor-predictor network exhibits faster training convergence when using more RISs. The extensions for multiple users and multiple antennas at Tx and Rx are left for future works.
2305.18307
Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study
Auditing plays a pivotal role in the development of trustworthy AI. However, current research primarily focuses on creating auditable AI documentation, which is intended for regulators and experts rather than end-users affected by AI decisions. How to communicate to members of the public that an AI has been audited and considered trustworthy remains an open challenge. This study empirically investigated certification labels as a promising solution. Through interviews (N = 12) and a census-representative survey (N = 302), we investigated end-users' attitudes toward certification labels and their effectiveness in communicating trustworthiness in low- and high-stakes AI scenarios. Based on the survey results, we demonstrate that labels can significantly increase end-users' trust and willingness to use AI in both low- and high-stakes scenarios. However, end-users' preferences for certification labels and their effect on trust and willingness to use AI were more pronounced in high-stake scenarios. Qualitative content analysis of the interviews revealed opportunities and limitations of certification labels, as well as facilitators and inhibitors for the effective use of labels in the context of AI. For example, while certification labels can mitigate data-related concerns expressed by end-users (e.g., privacy and data protection), other concerns (e.g., model performance) are more challenging to address. Our study provides valuable insights and recommendations for designing and implementing certification labels as a promising constituent within the trustworthy AI ecosystem.
Nicolas Scharowski, Michaela Benk, Swen J. Kühne, Léane Wettstein, Florian Brühlmann
2023-05-15T09:51:10Z
http://arxiv.org/abs/2305.18307v1
# Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study ###### Abstract. Auditing plays a pivotal role in the development of trustworthy AI. However, current research primarily focuses on creating auditable AI documentation, which is intended for regulators and experts rather than end-users affected by AI decisions. How to communicate to members of the public that an AI has been audited and considered trustworthy remains an open challenge. This study empirically investigated _certification labels_ as a promising solution. Through interviews (\(N=12\)) and a census-representative survey (\(N=302\)), we investigated end-users' attitudes toward certification labels and their effectiveness in communicating trustworthiness in low- and high-stakes AI scenarios. Based on the survey results, we demonstrate that labels can significantly increase end-users' trust and willingness to use AI in both low- and high-stakes scenarios. However, end-users' preferences for certification labels and their effect on trust and willingness to use AI were more pronounced in high-stake scenarios. Qualitative content analysis of the interviews revealed opportunities and limitations of certification labels, as well as facilitators and inhibitors for the effective use of labels in the context of AI. For example, while certification labels can mitigate data-related concerns expressed by end-users (e.g., privacy and data protection), other concerns (e.g., model performance) are more challenging to address. Our study provides valuable insights and recommendations for designing and implementing certification labels as a promising constituent within the trustworthy AI ecosystem. AI, Audit, Documentation, Label, Seal, Certification, Trust, Trustworthy, User + Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*] †: 
[leftmargin=*]Footnote †: [leftmargin=*]Footnote †: [leftmargin=*] †: [leftmargin=*]Footnote †: [leftmargin=*] †: [leftmargin=*]Footnote †: [leftmargin=*] †: [leftmargin=*]Footnote †: [leftmargin=*] †: valuable artifacts to inform audit decisions, they are tailored to regulators and experts and not intended to certify and communicate to end-users that an AI has met the auditing criteria. For this reason, our work focuses on communicating the outcomes of auditing processes to end-users, a topic that has received little attention in previous work. Specifically, we investigate the use of _certification labels_, which are commonly used in other domains, such as food and energy (Golovolov et al., 2016; Golovolov et al., 2017; Golovolov et al., 2018). Certification labels are relevant in the context of trustworthy AI for three reasons. First, through the use of simple language, icons, or color-coding, they are usually designed to be accessible to various stakeholder groups, including end-users with limited knowledge and time (Golovolov et al., 2018). Second, if reflecting a genuine and credible auditing process, certification labels can communicate the criteria used in an audit, thereby serving as a "trustworthiness cue" for end-users (Golovolov et al., 2016; Golovolov et al., 2018). Third, labels have shown to promote trustworthiness of a product in other domains (Golov et al., 2016) facing similar challenges on how to certify that a product meets certain criteria, such as agricultural standards (e.g., organic foods (Golovolov et al., 2018)) or low ecological impact (e.g., sustainable hotels (Golovolov et al., 2018)). However, end-users' attitudes toward AI certification labels and their effectiveness in communicating the trustworthiness of AI remain to be explored. We addressed this gap by conducting a mixed-method study with both interviews (\(N=12\)) and a census-representative survey (\(N=302\)) with end-users. Our results provide evidence that certification labels can effectively communicate AI trustworthiness. Qualitative findings revealed that end-users have positive attitudes toward AI certification labels and that labels can increase perceived transparency and fairness and are regarded as an opportunity to establish standards for AI systems. Particularly, data-related concerns expressed by end-users, such as privacy and data protection, can be mitigated through the use of certification labels. However, labels may not be able to address all raised concerns, such as model performance, suggesting that they should be considered one promising constituent among others for trustworthy AI. Furthermore, our results provide insights into facilitators and inhibitors for the effective design of certification labels in the context of AI. For example, end-users expressed strong preferences for independent audits and highlighted the challenge of communicating subjective criteria such as "fairness," whose meaning can be ambiguous. Quantitative findings showed that a certification label significantly increases end-users' trust and willingness to use AI in both low- and high-stake AI scenarios. Nevertheless, end-users reported a higher preference for certification labels in high-stake scenarios (e.g., hiring procedure) than in low-stake scenarios (e.g., price comparison), and the positive effect of a label on trust and willingness to use AI was more pronounced in high-stake scenarios. 
This suggests that compliance with mandatory requirements for AI in high-stake scenarios could be effectively communicated to end-users through certification labels in addition to the proposed voluntary labeling for low-stake AI scenarios (Golovolov et al., 2018; Golovolov et al., 2018). To summarize, our study is the first to demonstrate the potential of certification labels as a promising approach for communicating to end-users that an audit has certified an AI to be trustworthy. We contribute to the trustworthy AI literature by highlighting opportunities and challenges for designing and effectively implementing certification labels. ## 2. Auditing for trustworthy AI A growing body of work recognizes the critical role of algorithmic or AI auditing in enabling the trustworthiness of AI systems (Golovolov et al., 2016; Golovolov et al., 2017; Golovolov et al., 2018). Prior work suggests that auditing improves fairness (Golov et al., 2018), accountability (Golovolov et al., 2018), and governance (Golovolov et al., 2018), among others. These elements are considered to contribute to trust in and acceptance of AI2. Moreover, audits have the ability to expose problematic behavior, such as algorithmic discrimination, distortion, exploitation, and misjudment (Bahdan et al., 2018). In safety-critical industries such as aerospace, medicine, and finance, audits are a long-standing practice (Golov et al., 2018). However, only recently have researchers recognized that these areas could inform AI auditing and acknowledged the importance of considering insights from the social sciences, where audits have emerged from efforts toward racial equity and social justice (Golov et al., 2018). Footnote 2: The definition of trust in AI and its operationalization is an ongoing debate (Golov et al., 2018; Golov et al., 2018; Golov et al., 2018). As an extensive theoretical discussion is out of scope of this work, we focus on trustworthiness, a property of the trustee, rather than on trust as a process that can be affected by numerous contextual and personal factors (Golov et al., 2018; Golov et al., 2018). While the importance of AI auditing has been identified, the development of common audit practices, standards, or regulatory guidance is ongoing (Bahdan et al., 2018; Golov et al., 2018) and efforts to create auditing frameworks throughout the AI development life-cycle are still in their early stages (Golov et al., 2018). Auditing can be defined as "an independent evaluation of conformance of software products and processes to applicable regulations, standards, guidelines, plans, specifications, and procedures." (Golov et al., 2018, p. 30). At least three types of AI auditing can be distinguished, including first-party internal auditing, secondary audits conducted by contractors, and independent third-party audits (Golov et al., 2018). However, whether auditing should be conducted by independent third-parties or internally within organizations is a topic of ongoing academic discussion (Golov et al., 2018; Golov et al., 2018; Golov et al., 2018), with both approaches having their advantages and drawbacks. Raji et al. argue that external auditing may be constrained by a lack of access to organizations' internal processes and information that are often subject to trade secrets. In contrast, Falco et al. point out that the outcomes of internal audits are typically not publicly disclosed and that it often remains unclear whether the auditor's recommendations are effectively implemented or not. 
The question of whether end-users prefer internal or external audits remains to be investigated. In addition to defining standards and best practices for AI auditing, it is crucial to consider how the outcomes of audits can be communicated to different stakeholders with varying knowledge and needs (Golov et al., 2018). Current research has mainly focused on approaches for documenting machine learning (ML) models and training datasets. These artifacts play an important role in the AI trustworthiness ecosystem by increasing transparency and allowing auditors and regulators to determine whether principles of trustworthy AI (e.g., fairness, robustness, privacy (Golov et al., 2018)) have been met (Golov et al., 2018). For example, "model cards" (Golov et al., 2018; Golov et al., 2018) disclose information about a model's purpose and design process, its underlying assumptions, and the model's performance characteristics. Similarly, Gebru et al. introduced "datasheets," which summarize the motivation, composition, collection process, and recommended uses for datasets, and Floridi et al. recommended the use of "summary datasheets" and "external scorecards." The former is aligned with the goals of "datasheets" and synthesizes key information about the AI, including its purpose, status, and contact information. The latter is conceptually closely related to "model cards" and evaluates the AI system along several dimensions to form an overall risk score (Krishnan et al., 2017). However, these documentations are tailored to AI practitioners, and regulators (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2019), rather than end-users affected by AI decisions. Often, end-users have neither the access nor the expertise to understand the technical information that AI documentation provides (Bahdan et al., 2017). It is unlikely that end-users can effectively utilize ML model documentation or data documentation to make informed judgments about trusting or using AI (Krishnan et al., 2019). For this reason, end-users depend on auditors and regulators who can use these artifacts to verify and ensure the trustworthiness of AI. Yet, it remains an open research question of how to effectively communicate to end-users that an audit has considered an AI trustworthy. End-users require accessible communication tailored to their specific values and concerns (Krishnan et al., 2019). A potentially effective way to provide such information is through the use of _certification labels_, which we will introduce in the following. ## 3. Certification labels for audited AI Labels are widely used for displaying specific product or service attributes to help consumers make more informed decisions. They are well-established in various fields, such as agriculture (Krishnan et al., 2019), food (Krishnan et al., 2019), energy (Krishnan et al., 2019), and e-commerce (Krishnan et al., 2019). Different kinds of labels exist, and various classification systems have been proposed (Krishnan et al., 2019; Krishnan et al., 2019; Krishnan et al., 2019). For example, in the food industry, "nutrition labels" provide consumers with simplified and easily understandable information to identify a product's nutritional content. While this information can also be found in detailed tables on the back of food packing, for many consumers, this information is too complex, revealing similar challenges end-users face with AI documentation. 
This is where labels can provide information in a clear and accessible manner, utilizing simple language, icons, and color coding, which makes labels accessible to individuals from different backgrounds (Krishnan et al., 2019; Krishnan et al., 2019). Prior work in consumer research has shown that labels can communicate the outcomes of audits and thereby enhance trust in a product (Krishnan et al., 2019). In this study, we focus on _certification labels_, which certify that a product or service meets one or several criteria and are thus suitable for the case of audited AI. Certification labels are exclusively awarded to products that have undergone an auditing process, typically conducted by a third-party organization (Krishnan et al., 2019). By communicating an institutional assurance of trustworthiness, third-party organizations can serve as "trust surrogates" for the consumer, shifting the trust relation from trust in the AI to trust in the institution that provides the certification (Krishnan et al., 2019). In this case, a certification label serves as a trustworthiness cue (Krishnan et al., 2019) that signals compliance with governance structures. Our work thus closely aligns with the proposal by Liao and Sundar, highlighting that the trustworthiness of AI is not inherently given but must be communicated and perceived as such by the user, for instance, through transparency affordances. According to the authors, people then use heuristics (i.e., mental rules of thumb) to evaluate these affordance cues to form judgments about the trustworthiness of AI. The authors further suggest that certifications from regulatory bodies that have audited the AI could serve as trustworthiness cues, invoking these heuristics. Therefore, certification labels in the context of AI are a promising approach to communicate that a regulatory body has audited an AI and considered it trustworthy. There have been several initiatives at national and international levels to introduce AI labels in both industry (e.g., (Krishnan et al., 2019; Krishnan et al., 2019; Krishnan et al., 2019)) and government (e.g., (Krishnan et al., 2019; Krishnan et al., 2019)). These initiatives vary in their intended scope but are mostly still at an early stage. Previous studies have also emphasized the potential of labels as a means of AI certification (Krishnan et al., 2019; Krishnan et al., 2019; Krishnan et al., 2019). Holland et al. proposed the concept of a "Data Set Nutrition Label," which would summarize key aspects of a dataset (e.g., metadata and the data source) prior to the development of ML models. Seifert et al. further suggested labels for trained ML models that independent reviewers have evaluated based on properties such as accuracy, fairness, and transparency. A recent study by Stuurman and Lachaud discussed various labels as a means to provide information to end-users affected by AI decisions. Drawing from the EU AI Act (Krishnan et al., 2019), the study distinguished between low-stake and high-stake AI systems and proposed a voluntary labeling system for AI not considered high-stake. This distinction aligns with recommendations from the EU's "white paper on artificial intelligence" (Bahdan et al., 2017), which encourages organizations to use labels to demonstrate the trustworthiness of their AI-based products and services.
A survey conducted with individuals and organizations directly or indirectly engaged in audits found that while respondents believed that AI audits should be mandatory, 53% supported mandating them only for high-stake systems (Bahdan et al., 2017). End-users' perceptions of certification labels in low- and high-stake AI scenarios have not yet been investigated. Despite this extensive theoretical work on labels in the context of AI and their gradual adoption in industry and government, there is currently a lack of empirical research exploring end-users' attitudes toward AI certification labels and their effectiveness in communicating trustworthiness in low- and high-stake AI scenarios. This study aims to address this research gap and inform current industry and government initiatives.

## 4. Research questions

Based on the aforementioned considerations, we investigated the following research questions:

* **RQ1:** What are end-users' attitudes toward certification labels in the context of AI?
* **RQ2:** How do certification labels affect end-users' trust and willingness to use AI in low- and high-stake scenarios?

## 5. Methods

To answer these research questions, we used a mixed-methods research approach consisting of semi-structured interviews and a subsequent survey to collect quantitative data as part of a within-subjects design study. For both the interviews and the survey, we used a scenario-based approach to investigate people's attitudes and the effects of a certification label, inspired by past research (Bahdan et al., 2017; Krishnan et al., 2019; Krishnan et al., 2019). In the interviews, we asked participants about their attitudes toward AI and certification labels. As a follow-up within-subjects study, we implemented a survey to investigate the effect of a certification label quantitatively. The semi-structured interviews served as a basis for the survey and a means to enrich the quantitative results. The quantitative survey complemented the qualitative interviews by extending our results to a larger census-representative sample. In the following, we will introduce the certification label used in our study before describing the procedures of each method in more detail.

### The certification label

To investigate labels in the context of AI, we used a certification label that has already been developed for the broader context of digital trust. Using an existing label had the advantage that it had undergone an extensive design process and thus did not need to be created from scratch. The non-profit foundation Swiss Digital Initiative laid the groundwork for developing this certification label. At the label's core lies a catalog of verifiable and auditable criteria, co-developed by an academic expert group based on a user study on digital trust. A panel of independent experts from academia, data and consumer protection, and digital ethics further developed the label catalog. Involving digital service providers and auditors in the designing process ensured that the criteria were auditable and verifiable. The catalog that forms the basis of the audit currently contains 35 criteria that are summarized into four categories:

1. Security (criteria 1-12): What is the security standard? The service provider shall, e.g., ensure that the data is encrypted as it transfers so that third parties cannot access it.
2. Data protection (criteria 13-20): How is the data protected? The service provider shall, e.g., assume responsibility for the appropriate management of the data.
3. Reliability (criteria 21-29): How reliable is the service or product? The service provider shall, e.g., take all actions required to safeguard the continuity of the service.
4. Fair user interaction (criteria 30-35): Is automated decision-making involved? The service provider shall, e.g., ensure that all users receive equal treatment and that there is no data-based service or price discrimination.

If an organization would like its digital product or service (e.g., a chatbot) to receive the certification label, it can voluntarily request an audit and thus participate in the certification process. After a scoping call with third-party auditors, an audit is performed along the criteria catalog. The audit leads to an audit report detailing the performance per criterion, which is double-checked by an independent label certification committee composed of auditing experts. If non-conformities are identified, the organization applying for the label must fix the identified issues, e.g., adjust its privacy policy. After a successful auditing report, the certification label is awarded for a period of three years, with two audits during that period.

### Scenario selection

Participants were presented with real-world examples of AI systems, adapted from Kapania et al., namely _medical diagnosis_, _loan approval_, _hiring procedure_, _music preference_, _route planning_, and _price comparison_ (see materials on OSF: [https://osf.io/gzp5k/](https://osf.io/gzp5k/)). One advantage of using hypothetical scenarios instead of real consumer applications is that differences in participants' prior experience with the applications can be controlled for. Kapania et al. and Woods et al. proposed that people's behavior in scenario-based experiments corresponds to their real-life behavior. To answer our second research question, and following Kapania et al., we explored both low-stake scenarios (music preference, route planning, price comparison) and high-stake scenarios (medical diagnosis, hiring procedure, loan approval). This distinction was crucial since other researchers (Kapania et al., 2017; Kapania et al., 2018) and the "EU AI Act" (Kapania et al., 2018) have discussed the use of AI labels for "low-stake" and "high-stake" scenarios. This classification was based on the AI's respective impact on affected parties and the involvement of significant risks, in particular with respect to safety, consumer rights, and the use of personal data.

### Interviews

#### 5.3.1. Participants

Initially, we invited 16 participants to an interview on-site at the university. The recruitment was carried out through a university-internal database and an online marketplace where scientific studies can be advertised. To ensure that our sample consisted of end-users (i.e., laypeople who may be affected directly or indirectly by the outcomes of AI systems), we used screening questions following Kapania et al. and asked potential participants about their knowledge of AI and experience working with AI-based systems. We selected participants who indicated that they had heard about AI but did not work with it and provided a comprehensible description or adequate example of what AI is, without overly restricting the valid responses (e.g., "robots" was valid, while obvious nonsense answers such as "E.T. the alien" were deemed invalid). In addition, we asked participants to indicate their age, gender, profession, and English language proficiency so that we could design the interviews as balanced as possible and present materials in English.
However, four interviews did not take place due to no-shows. We therefore conducted 12 interviews with end-users of different backgrounds, ages, and genders, each lasting 60-90 minutes. The interviews were conducted in German and recorded through field notes and audio recordings. Each participant received compensation in the form of a gift card worth CHF 10.00 from a Swiss retail company. The final sample (\(M_{age}=35.42\), \(SD_{age}=12.50\), \(Min_{age}=23\), \(Max_{age}=66\)) consisted of students (P2, P3, P4, P8, P11) enrolled in linguistics and literature (P2), fine arts (P3), and psychology (P4, P8, P11), as well as individuals who described their occupation as a bike messenger (P12), waitress (P1), dancer (P9), course manager (P7), management assistant (P6), intern (P10), and retired teacher (P5). The sample was predominantly female, with ten women and two men.

Figure 1. The "Digital Trust Label," which we adopted as a certification label for AI. ©2023 Swiss Digital Initiative

#### 5.3.2. Procedure

Before the interviews, participants had to read and sign a declaration of consent. In the declaration, we informed participants of the purpose and rationale of the study, the researcher affiliations, the voluntary nature of study participation, and how their data would be analyzed and shared. All personally identifiable information was deleted to ensure privacy, and the anonymized data was stored without actual reference to the participants. During the interviews, we asked attitudinal questions about AI, specifically where participants saw opportunities and challenges in using AI. We then presented the six scenarios to the participants without specifying the low- and high-stake categorization we had made in advance. Based on the respective headings of the scenarios (e.g., music preference), without further information, we asked participants to order the scenarios via drag and drop from "most impactful" (rank 1) to "least impactful" (rank 6). To ensure comparability, we defined "most impactful" for participants as "the scenario that would have the greatest impact on your personal life." This question aimed to validate our categorization into low- and high-stake scenarios. Next, we presented participants with one low-stake and one high-stake scenario and asked how they differed from one another. After this, participants were introduced to the certification label and asked how they perceived it, whether the label criteria were comprehensible or not, and where they saw opportunities and drawbacks of a certification label. The goal of the interviews was not only to gather qualitative data, but also to identify and determine which questions best suited the subsequent survey. We therefore made sure the questions were comprehensible and free of ambiguities. Any difficulties encountered during the interviews were discussed within the research team, and, if necessary, the respective questions were revised or removed. We refer to the digital repository for the complete interview manual.

### Survey

#### 5.4.1. Participants

To gain insights into how a general population perceives a label in the context of AI, we hired a market research agency ([https://www.bilendi.ch/](https://www.bilendi.ch/)) to provide us with a Swiss census-representative sample regarding age and gender (quota sampling). We used the same screening questions as in the interviews and initially recruited 395 participants who received CHF 3.00 for taking part in the 15-minute online survey.
Following a quality assessment using a self-reported single item as an indicator of careless responding (Bowman et al., 2017; D'Alessio et al., 2018), 302 participants remained for data analysis. The sample is census-representative regarding age (\(M_{age}=43.88\), \(SD_{age}=16.08\), \(Min_{age}=18\), \(Max_{age}=79\)) and gender distribution (150 women, 151 men, one non-binary person).

#### 5.4.2. Procedure and measures

The survey consisted of three parts. First, after providing informed consent and a brief introduction to the study, participants were free to select one scenario from the low-stake and one from the high-stake categorization. After making their choice, they received full descriptions of the two scenarios (see Appendix A) and were asked to rate their trust ("how much would you trust the AI in the scenario presented?") and willingness to use ("how much would you be willing to use the AI in the scenario presented?") on a scale from 0 (= not at all) to 100 (= absolutely). In addition, participants were asked in which scenario they would more readily accept the AI's decision/recommendation (i.e., "in which of the two scenarios would you be more willing to accept the decision/recommendation made by AI?"). Participants were introduced to the certification label in the second part of the survey. They were asked for their impression and rated the importance of each criterion (i.e., "how important are the label criteria for you in the context of AI?") on a scale from 0 (= not at all) to 100 (= absolutely). Participants were also asked what effect the certification label had on their acceptance (i.e., "would you be more likely to accept an AI's decision/recommendation if it had received a label?") and preference (i.e., "in which one of the two scenarios would you prefer the use of a label?"). To understand end-users' preferences regarding external and internal auditing, we included an open-ended question (i.e., "who do you think should be responsible for awarding such a label?"). Finally, in the third part, we again let participants rate the AI in the same low- and high-stake scenario on trust and willingness to use, this time with the information that the AI had been awarded a certification label. This second assessment allowed us to examine the certification label's effect on trust and willingness-to-use ratings. Similarly to the first assessment, we asked participants to justify their ratings and why a label led to increased/decreased or unchanged ratings. At the end of the survey, we asked the participants for feedback and the question, "_in your honest opinion, should we use your data in our analyses in this study? Do not worry, this will not affect your payment. You will receive the compensation either way_," as an additional quality check. The complete survey can be found on the digital repository.

### Analysis and coding procedure

We used the qualitative interview data to answer RQ1 and the quantitative survey data to answer RQ2. The interview data was evaluated using qualitative content analysis (Zhu et al., 2017), more specifically summarizing content analysis. We followed the procedure according to Mayring and Fenzl by determining the coding unit, paraphrasing, generalization to the level of abstraction, first reduction, and second reduction to form a cross-case category system. Coding was carried out by three researchers who independently went through four interviews each. To ensure consistency, one interview was evaluated by all researchers.
Any ambiguities and discrepancies were resolved through open discussions, and the final cross-case category system was formed in a group session. The quantitative data analysis was carried out in R (version 4.2.2 (Zhu et al., 2017)).

## 6. Results

### 6.1. End-users' attitudes toward certification labels

In the following, we report the categories of the content analysis that are relevant to our current research objective. Categories may consist of further subcategories. Table 1 contains the subcategories and corresponding example quotes from end-users' attitudes toward certification labels. The complete content analysis with all topics is available on the digital repository.

#### 6.1.1. Opportunities and facilitators

Participants in the interview study indicated that the label covered essential concerns. The content analysis revealed that the topic "concerns, risks, and problems" predominantly consisted of data-related concerns such as data privacy (i.e., protecting data from attack and malicious use), data storage (i.e., how data is handled and stored), and third-party involvement (i.e., unwanted and unknown disclosure of data). Regarding data-related concerns, a certification label for AI systems was perceived as an effective tool to convey compliance with these requirements and hold the certified parties more accountable. In particular, the security and data protection criteria were perceived as minimal standards that must be met for them to consider using AI. Participants emphasized that a certification label provides a certain level of transparency that removes the burden of examining these criteria from end-users. In addition, they viewed certification labels and the corresponding auditing process as an opportunity for more fairness and for establishing standards for AI systems, allowing them to compare products and services critically. The interviewed participants indicated that a certification label could increase their trust for all these reasons. For a label to be convincing, participants emphasized that additional information regarding the label is needed. This includes information about the label's criteria (i.e., how were they formed?), the auditing process itself (i.e., how were these criteria weighted?), and the auditors (i.e., who was responsible for awarding a label?). Participants also placed a strong emphasis on the independence of the auditing process, noting that the auditors should have no financial ties to, or other direct dependencies on, the organizations to whose products or services the label is awarded, in order not to undermine their credibility. Additionally, participants stressed the importance of widespread participation in the auditing and certification process, as this was deemed necessary for adopting AI standards and for the label's credibility. As a crucial factor for the effectiveness of a certification label, participants identified regular updates that align with industry standards and best practices to ensure that the label remains relevant and useful.
Table 1. End-users' attitudes toward certification labels.

| Category | Subcategory | Example quote |
| --- | --- | --- |
| Opportunities for certification labels | Increasing trust | _"Because if it is monitored and these various criteria have to be met in order to get the label, then I as a consumer can, of course, trust better and also know that there are perhaps controls and random checks, so I would definitely trust more."_ (P6) |
| | Increasing perceived transparency | _"I think that if there is such an established label, it will certainly help to increase transparency."_ (P6) |
| | Increasing perceived fairness | _"With the Fair User Interaction aspect, yes, probably so [fairness is increased]... if the AI is now checked for this, and it can be determined that one is not treated differently based on data."_ (P12) |
| | Auditing of AI systems | _"Because I'm not an expert in the field and the label... gives me proof... that it's tested by experts."_ (P4) |
| | Establishing standards for AI systems | _"So I could imagine that if it is a bit more standardized, so to speak, because you have to meet certain standards, that it could introduce a general level of fairness."_ (P9) |
| | Covering relevant concerns | _"The concern [responsibility] was covered and then just the general concern with all, just how our data is also used and hopefully not misused, or yes. That is also covered."_ (P10) |
| Facilitators for effective certification labels | Additional label information | _"[I would like to] find out what this 'Fair User Interaction' means, what it refers to, how my data is protected... how is it designed and who monitors this label. Exactly by whom was it created and by whom it is administered, awarded and so, that's what I would like to know."_ |
| | Independent party awarding the label | _"Ideally, it would be an overarching body that is, for example, also external and has the competences and the knowledge... ideally, an NGO that runs it without any vested interest."_ |
| | Recognition of label | _"If many companies get involved in using this label, then I think it could have an impact."_ (P9) |
| | Clarity of label criteria | _"The criteria are totally comprehensible to me, in any case. It's also something that would be important to me if we were to use a program."_ (P9) |
| | Actuality of label | _"You could say that the label guarantees that work on AI is ongoing."_ (P11) |
| Limitations of certification labels | Unaddressed concerns | _"What you could include is a criterion for the AI. That an AI has been used enough times and has, for example, been 99% correct and always had the right answers, rather than 90%."_ (P4) |
| | Lack of persuasiveness | _"I think there are still a lot of people, or some people, who will be critical of these systems even though it has a label."_ (P3) |
| Inhibitors for effective certification labels | Overabundance of labels | _"Because you can see that in the organic sector, there are now 20 labels and as a consumer you can almost no longer categorize them... there is also Bio-Suisse [an organic label] or something like that in Switzerland, they have established themselves well, but I think you always have to stick to that as a label."_ (P6) |
| | Vacuousness of label criteria | _"...maybe we don't really analyze what is written. Or don't even read it. I can't speak for everyone, I'm speaking of myself. I often just don't read messages."_ |
| | Subjectivity of label criteria | _"Yes, so what is complete transparency? That brings us back to fairness... what is fair? These are all such subjective terms that, in my eyes, you can't use in natural sciences – where you calculate and then there's a result – it's soft science where you're working in."_ (P8) |
| | Overlaps of label criteria | _"Overlap. I think it all goes a bit in a similar direction, except maybe the last point [Fair User Interaction], which is a bit different again."_ (P10) |
#### 6.1.2. Limitations and inhibitors

While participants acknowledged that a certification label covers essential issues, they also noted that it does not address all their AI-related concerns. These concerns included the lack of information about model performance (e.g., accuracy measures). Some participants noted that, without accuracy measures, a certification label alone could even lead to "blind trust" in AI systems. Additionally, participants noted that while a certification label provides some level of transparency, it does not provide complete documentation (e.g., source code) of the AI system, nor the ethical reasoning behind the auditors' decision to approve the use of AI in a particular application in the first place. As a result of these limitations, participants felt that a certification label might not be sufficiently persuasive to convey trustworthiness to critical individuals. Furthermore, participants identified several reasons why a certification label may not be effective. One reason was a potential overabundance of labels with different standards, diluting compliance with regulations and leading to confusion among end-users. In line with this, participants emphasized the importance of ensuring that the label's criteria are not just "empty promises" but are actually adhered to by organizations. They also pointed out the difficulty of measuring the label's criteria and the degree of subjectivity involved. Concepts such as security and fairness can mean different things to different people. Results showed that some criteria were more easily understood (e.g., security) than others (e.g., fair user interaction). For example, 11/12 participants indicated that the definition of the security criteria covered what they had in mind. For data protection, this was the case for 9/12 participants, followed by 8/12 participants for reliability. However, merely 2/12 participants indicated that the criterion "fair user interaction" captured what they thought it would encompass. In addition to these differences in comprehension, participants pointed out conceptual overlaps for some criteria (e.g., security and data protection) that were not readily understood without further clarification. All these factors might diminish the effectiveness of a certification label.

### 6.2. Effects of certification labels

Participants in the survey study were asked to select one case each from the high-stake (medical diagnosis, hiring procedure, loan approval) and one from the low-stake (music preference, route planning, price comparison) scenarios, without explicitly being informed of this distinction. Validation of this distinction between low- and high-stake scenarios was provided by participants' "impactfulness" rankings. Calculating the mode revealed that the three high-stake scenarios were perceived as the most impactful ones (i.e., 1 = medical diagnosis, 2 = hiring procedure, 3 = loan approval, 4 = price comparison, 5 = music preference, 6 = route planning). The majority of participants indicated that they would be more likely to accept the AI's decision/recommendation in low-stake scenarios (74.2%, \(n=224\)) than in high-stake scenarios (17.9%, \(n=54\)), with 7.9% (\(n=24\)) indicating no preference, which we considered an additional confirmation of the distinctiveness of the two scenario types.
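For transparency, the paired comparisons reported below (dependent Student's \(t\)-tests, with Hedges' \(g\) computed from the paired differences) can be sketched in R, the language used for our analysis. The data frame and column names below (`trust_t1`, `trust_t2`) are illustrative stand-ins, not the study's actual variables or data.

```r
# Minimal sketch (toy data, not the study's): paired t-test and Hedges' g
# for a within-subjects T1 (no label) vs. T2 (with label) comparison.
set.seed(42)
n <- 302
trust_t1 <- pmin(pmax(rnorm(n, mean = 50, sd = 30), 0), 100)            # ratings without label
trust_t2 <- pmin(pmax(trust_t1 + rnorm(n, mean = 9, sd = 18), 0), 100)  # ratings with label

t.test(trust_t2, trust_t1, paired = TRUE)   # dependent Student's t-test

diffs <- trust_t2 - trust_t1
d_z <- mean(diffs) / sd(diffs)              # standardized mean of paired differences
g <- d_z * (1 - 3 / (4 * (n - 1) - 1))      # Hedges' small-sample bias correction
g
```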
Participants in the interview study distinguished between low- and high-stake scenarios primarily based on the level of risk associated with the scenario. They reported that high-stake scenarios carry higher self-relevance and long-term consequences. Before being presented with the certification label, participants reported both higher trust (\(M=66.72\), \(SD=24.27\)) and willingness to use (\(M=71.54\), \(SD=25.54\)) ratings for the low-stake scenarios, compared to ratings in high-stake scenarios for trust (\(M=49.37\), \(SD=30.76\)) and willingness to use (\(M=52.89\), \(SD=32.63\)). After being presented with the certification label, participants' trust and willingness-to-use ratings revealed statistically significant increases in both low- and high-stake scenarios (see Figure 2). A dependent Student's \(t\)-test indicated that the presence of a certification label resulted in the highest increase for trust (\(M_{\Delta}=9.12\), \(SD=17.92\), \(t(301)=8.84\), \(p<.001\)) and willingness to use (\(M_{\Delta}=8.41\), \(SD=17.69\), \(t(301)=8.26\), \(p<.001\)) ratings in high-stake scenarios, followed by trust (\(M_{\Delta}=6.57\), \(SD=13.26\), \(t(301)=8.61\), \(p<.001\)) and willingness to use (\(M_{\Delta}=4.60\), \(SD=17.03\), \(t(301)=4.70\), \(p<.001\)) ratings in low-stake scenarios. Hedges' \(g\) effect sizes ranged between .27 and .51 and can thus be considered small (for low-stake scenarios) to medium (for high-stake scenarios).

Figure 2. Plots showing the individual scores for trust and willingness to use and their respective changes from T1 (without label) to T2 (with label). The plots also depict the medians, means, and distribution of the aggregated low- and high-stake scenarios. All comparisons revealed statistically significant differences.

A little over half of the participants (55.3%, \(n=167\)) indicated that a certification label addresses concerns that come with the use of AI, while 20.9% (\(n=63\)) stated "no" and 23.8% (\(n=72\)) indicated that no statement was possible. When asked who should be responsible for awarding such a label, the open-ended responses from the survey revealed that a majority of participants expressed a preference for external entities to conduct the auditing, with 48.7% (\(n=147\)) of the answers being coded as "government" and 37.4% (\(n=113\)) as "NGO." Only 5.3% (\(n=16\)) of the answers were coded as "company." Additionally, 8.6% (\(n=26\)) of the responses were coded as "other," which included mentions of entities such as "ethics committee," "consumer protection," or "citizens' association."

## 7. Discussion

The quantitative findings reveal that the presence of a certification label significantly increases participants' trust and willingness to use AI in _both_ low- and high-stake scenarios, thereby answering our second research question. Most participants (81%) of the census-representative survey preferred using AI with a certification label, and a large proportion of participants (71%) responded that they would be more likely to accept an AI's decision or recommendation if it had been awarded a certification label. The results further show that a majority of participants (63%) not only indicated a preference for certification labels in high-stake scenarios, but that certification labels also had a larger effect on trust and willingness to use AI in high-stake scenarios. For example, willingness-to-use ratings for the "hiring procedure" scenario increased from 36 to 64 points, compared to an increase from 75 to 80 points for the "price comparison" scenario.
While Stuurman and Lachaud and the EU's "white paper on artificial intelligence" distinguish between regulating high-stake AI through mandatory requirements and voluntary labeling for low-stake AI, our results demonstrate the relevance of certification labels for end-users, specifically in high-stake scenarios. Based on these findings, we argue that, parallel to voluntary labeling for low-stake AI scenarios, compliance with mandatory requirements for AI in high-stake scenarios could also be communicated through certification labels, potentially increasing end-users' trust in and willingness to use awarded AI systems. Qualitative findings allowed us to answer our first research question and provide a more nuanced picture of which aspects to consider for effective certification labels in the context of AI. The certification label we investigated in this study was designed for digital trust more generally. However, end-users' attitudes toward the certification label were primarily positive, and the label's criteria of security, data protection, reliability, and fair user interaction were also relevant to end-users in the context of AI. We derive this from survey participants' high "importance" ratings for the existing label criteria. Concerning _opportunities_ for AI labels, participants in the interview study indicated that a certification label could increase perceived transparency and fairness and serve as a means to establish standards for AI systems. It became apparent from the interviews that certification labels can especially cover end-users' data-related concerns (e.g., privacy, data protection, and third-party involvement) that map onto previous work [65]. However, our results also reveal that certification labels have _limitations_ and do not alleviate all issues end-users face regarding the use of AI. Only half of the participants in the survey indicated that a certification label addresses their AI-related concerns/challenges/risks, suggesting that end-users hold differentiated needs. For example, participants in our interviews pointed out that a certification label does not provide indicators of the AI's performance (e.g., accuracy measures). They remarked that performance indicators are essential for deciding in which cases the AI can be trusted and when it must be questioned. This led participants to remark that a label could inadvertently foster "blind trust" if performance indicators are absent. Thus, we suggest that certification labels should either include performance indicators as part of the label criteria or be supplemented with them. Based on these results, we argue that certification labels can more readily signal trustworthiness than untrustworthiness. This is because it is not possible to distinguish whether a digital product or service has not yet been audited or whether it has failed to meet specific audit criteria, particularly if certification labels remain voluntary. We regard certification labels as _one_ component of an "AI trustworthiness ecosystem" [2] that meets essential needs of end-users but which ideally should be combined with other transparency approaches to signal untrustworthiness (e.g., accuracy measures) and form a "chain of trust" [65]. As potential _inhibitors_ for effective certification labels, participants in our interviews pointed out certain overlaps and the subjective nature of the label's criteria.
Ultimately, "fairness" and "security" are subjective judgments that vary from one person to the next, and our results showed that the criterion "fair user interaction," in particular, did not reflect what study participants thought it encompassed. The challenge of defining and measuring concepts that are inherently difficult to quantify has been discussed in previous auditing research [37, 58, 66]. Our results indicate that this subjectivity is recognized by end-users and can impair the effectiveness of a label. To avoid a discrepancy between, for example, the auditors' definition of fairness and what people commonly associate with this term, auditors should be in dialogue with end-users so that their values are represented in a label. This is in line with Costanza-Chock et al., who criticized that the involvement of affected communities plays a minor role in AI audits. They argued that real-world harms and sociological phenomena can only be understood by engaging with people to inform auditing.

Figure 3. Plots showing the different distributions of trust and willingness-to-use ratings for the high-stake (hiring procedure, loan approval, medical diagnosis) and low-stake (music preference, price comparison, route planning) scenarios, without a label at T1 and with a label at T2.

Our interview results highlight that end-users request not only information on the label's criteria but also information regarding the criteria's content (i.e., how they were formed), the auditing process itself (i.e., how the criteria informed the audit), and particularly the auditors (i.e., who awarded the label). We identified this demand for additional information as a potential _facilitator_, indicating that an effective certification label is more than just a list of evaluation criteria. A large majority (86%) of survey participants responded that either the government (49%) or a non-governmental organization (37%) should ideally be responsible for awarding a label, with only 5.3% of responses indicating that a company should be responsible. Participants in the interview study emphasized the auditors' independence (e.g., financial independence, with no conflict of interest) as a prerequisite for the effectiveness of a certification label. These findings support the notion that auditing can only foster trust if the auditors themselves are trusted (Bordes and Seth, 2015) and are in line with results of label studies in other domains (Seth, 2015; Seth, 2015), which show that third-party certification positively affects trust in eco-labels. We contribute to the ongoing discussion regarding internal vs. external auditing by showing that end-users favor independent auditors. To account for this independence on the one hand and the structural advantages of internal audits on the other, "cooperative audits" (Seth, 2015) could be a way forward, balancing the advantages and challenges of the two approaches. In addition to these facilitators and inhibitors, auditors and regulators should also be mindful that an overabundance of labels with different standards can inhibit the persuasiveness and trustworthiness of their certification label. Such effects have been reported for eco-labels, where an extensive number of existing labels results in different standards that remain unclear to consumers (Seth, 2015). These findings argue for a certain harmonization and regulation of certification labels.
Moreover, organizational compliance with a label's criteria should be established so that end-users do not perceive them as "empty promises" but instead as a means for increased accountability for organizations and more trustworthy AI (Seth, 2015). A prominent instance of such a challenge is the case of the CE (conformité européenne) marking, in which some products use the mark without actually being manufactured to EU quality standards (Seth, 2015). This illegitimate use has led, among other things, to the introduction of supplementary certification labels to certify product quality, which unintentionally contribute to consumer confusion (Seth, 2015). To realize their full potential, certification labels should have a thorough auditing process, be regularly updated to reflect current industry standards, and, ideally, be used by a wide range of organizations to increase recognition.

## 8. Limitations and Future Work

We conducted a within-subjects survey study where participants were presented with the AI scenarios with and without a certification label. While this provided valuable insights into the general effectiveness of certification labels, future work could compare label classes or designs (e.g., nutrition labels vs. certification labels) in a between-subjects experimental design. Certification labels are limited in their ability to communicate untrustworthiness. While other kinds of labels have a more differentiated rating system (e.g., color codings or grades) that allows comparisons, certification labels only provide dichotomous information by either being present or not. Thus, it is not possible to differentiate whether a product without a certification label is untrustworthy because it failed to meet a label's criteria or has yet to be audited. A between-subjects design could provide evidence about the effectiveness of different kinds of labels and identify the factors that make labels more or less effective in communicating trustworthiness and untrustworthiness. Moreover, we used single-item questions to measure trust and willingness to use. Trust, in particular, is a complex psychological construct (Seth, 2015) and might not be adequately operationalized using single-item measures. However, a recent study has shown that single-item trust measures are equivalent to validated questionnaires regarding sensitivity to changes in trust and are a reliable tool in longer surveys where questionnaires are not feasible (Seth, 2015). Future work should confirm the effectiveness of certification labels in fostering trust with validated psychometric measures and explore their effect on trusting dynamics that emerge over time in real-world human-AI interactions.

## 9. Conclusion

This study empirically investigated certification labels as a means to communicate trustworthy AI to end-users. For this purpose, we explored end-users' attitudes toward certification labels in the context of AI and how labels affect trust and willingness to use AI in both low- and high-stake scenarios. We used a mixed-methods approach to collect both qualitative and quantitative data through interviews (\(N=12\)) and a census-representative survey (\(N=302\)) with end-users. The quantitative results of this study show that certification labels can be a promising way to communicate the outcome of audits to end-users, increasing both trust and willingness to use AI in low- and high-stake AI scenarios.
Based on the qualitative findings, we further identified opportunities and limitations of certification labels, as well as inhibitors and facilitators for the effective design and implementation of certification labels. Our work provides the first empirical evidence that labels may be a promising constituent in the more extensive "trustworthiness ecosystem" for AI. ## 10. Funding, Declaration of Conflicting Interests and Data Availability This research was primarily funded by an independent research group, but additional funding (CHF 2,500.00) was granted by the Swiss Digital Initiative, an independent non-profit foundation, to obtain a representative sample. The entire research process, including the development of the research design, data analysis, interpretation of the results, and the writing of this paper, was conducted exclusively by independent researchers with no other affiliations with the Swiss Digital Initiative Foundation than those mentioned here. All data, corresponding R-scripts, and supplementary materials are available on OSF: [https://osf.io/gp5k/](https://osf.io/gp5k/). ## Acknowledgments Special thanks to Ariane Haller and the Swiss Digital Initiative for the permission to use their label for the purpose of our study, especially Nicolas Zahn, who was our contact person at the foundation.
2306.11115
Jack Littlewood-Richardson Coefficients and the Nazarov-Sklyanin Lax Operator
We continue the work begun by Mickler-Moll investigating the properties of the polynomial eigenfunctions of the Nazarov-Sklyanin quantum Lax operator. By considering products of these eigenfunctions, we produce a novel generalization of a formula of Kerov relating Jack Littlewood-Richardson coefficients and residues of certain rational functions. Precisely, we derive a system of constraints on Jack Littlewood-Richardson coefficients in terms of a simple multiplication operation on partitions.
Ryan Mickler
2023-06-19T18:30:32Z
http://arxiv.org/abs/2306.11115v2
# Jack Littlewood-Richardson coefficients and the Nazarov-Sklyanin Lax operator

###### Abstract.

We continue the work begun by Mickler-Moll [10] investigating the properties of the polynomial eigenfunctions of the Nazarov-Sklyanin quantum Lax operator. By considering products of these eigenfunctions, we produce a novel generalization of a formula of Kerov relating Jack Littlewood-Richardson coefficients and residues of certain rational functions. Precisely, we derive a system of constraints on Jack Littlewood-Richardson coefficients in terms of a simple multiplication operation on partitions.

###### Contents

* 1 Introduction
* 2 Preliminaries
* 3 Three Decompositions
* 4 Distinguished Elements
* 5 The \(\mathrm{SH}^{c}\) algebra
* 6 Structure of twists
* A.

## 1. Introduction

Let \(\Lambda\) be the ring of symmetric functions and \(\mathbb{C}_{\varepsilon}=\mathbb{C}(\varepsilon_{1},\varepsilon_{2})\). For \(\lambda\) a partition, we let \(s\in\lambda\) be a box of its corresponding Young diagram. For such a box, we write \(s=(s_{1},s_{2})\in\mathbb{N}^{2}\), labelling the grid position of its bottom corner, and we define the content map \([s]:=s_{1}\varepsilon_{1}+s_{2}\varepsilon_{2}\). Let \(j_{\lambda}\) be the homogeneous versions (cf. (21)) of the integral Jack symmetric functions \(J_{\lambda}\) from [9], which we review in Section 2.1. The Jack Littlewood-Richardson (LR) coefficients \(c_{\mu,\nu}^{\lambda}\) are defined as the coefficients of a product of Jack functions expanded in the basis of Jacks:

\[j_{\mu}\cdot j_{\nu}=\sum_{\lambda}c_{\mu,\nu}^{\lambda}\,j_{\lambda}. \tag{1}\]

In this paper, we prove the following 'sum-product' combinatorial identity that captures deep structure of these Jack Littlewood-Richardson coefficients:

**Main Result** (Theorem 4.26). _For any partitions \(\mu,\nu\), the Jack Littlewood-Richardson coefficients \(c_{\mu\nu}^{\gamma}\) satisfy the following identity of rational functions in the variable \(u\),_

\[\sum_{\gamma\supset\mu\cup\nu}c_{\mu\nu}^{\gamma}\frac{\varpi_{\gamma}}{\varpi_{\mu}\varpi_{\nu}}\left(\sum_{s\in\gamma/(\mu\cup\nu)}\frac{1}{u-[s]}\right)=T_{\mu\star\nu}(u)-1, \tag{2}\]

_where \(\mu\cup\nu\) is the union as sets of boxes, \(\varpi_{\lambda}\coloneqq\prod_{s\in\lambda\setminus(0,0)}[s]\), and_

\[T_{\mu\star\nu}(u):=\prod_{x\in\mu,y\in\nu}N(u-[x+y]),\qquad N(u):=\frac{(u-[0,0])(u-[1,1])}{(u-[1,0])(u-[0,1])}. \tag{3}\]

In the case \(\mu=1\), this theorem recovers a well-known result of Kerov [8]:

\[\sum_{\nu+s\supset\nu}c_{1\nu}^{\nu+s}\left(\frac{1}{u-[s]}\right)=u^{-1}T_{\nu}(u). \tag{4}\]

By expanding at various poles in \(u\), the identity (2) gives us a family of relations amongst the \(c_{\mu\nu}^{\gamma}\). We provide a simple yet illustrative example in 4.27. Although these equations are underdetermined, they do provide an explicit closed-form expression for large families of Jack LR coefficients, which we investigate in a follow-up article with P. Alexandersson [1], along with connections to various conjectures of Stanley on the structure of these coefficients [15]. We repackage and interpret the above result in terms of a simple map:

**Interpretation** (Theorem 5.6). _Consider the following 'basic' evaluation map on symmetric functions \(\Delta:\Lambda\to\mathbb{C}_{\varepsilon}(u)\), defined on the basis of homogeneous Jack symmetric functions \(j_{\lambda}\) as_

\[\Delta(j_{\lambda}):=\varpi_{\lambda}\,\sum_{s\in\lambda}\frac{1}{u-[s]}. \tag{5}\]
_For two partitions \(\mu,\nu\) of arbitrary size, this evaluation map satisfies_

\[\Delta(j_{\mu}\cdot j_{\nu})=\varpi_{\mu}\varpi_{\nu}\left(T_{\mu\star\nu}(u)-1\right). \tag{6}\]

Note that the map \(\Delta\) is _not_ a ring homomorphism, and furthermore it degenerates in the Schur case (\(\varepsilon_{1}+\varepsilon_{2}=0\)), as in this case it vanishes on all non-hook partitions. This paper is the sequel to [10], where a spectral theorem was proven for the quantum Lax operator \(\mathcal{L}\) introduced by Nazarov-Sklyanin [11]. The polynomial eigenfunctions \(\psi_{\lambda}^{s}\in\Lambda[w]\) of \(\mathcal{L}\) depend on a partition \(\lambda\) and a choice of location \(s\) where a box can be added to \(\lambda\). The central idea of this second paper is to consider product expansions of these Lax eigenfunctions

\[\psi^{s}_{\lambda}\cdot\psi^{t}_{\nu}=\sum_{\gamma,u}c^{s,t;\gamma}_{\lambda,\nu;u}\psi^{u}_{\gamma}, \tag{7}\]

and analyse their structure. Here, we introduce a new object, the **Jack-Lax Littlewood-Richardson coefficients** \(c^{s,t;\gamma}_{\lambda,\nu;u}\), the structure of which will be illuminated throughout this paper. Indeed, these Jack-Lax LR coefficients reproduce the Jack LR coefficients (1) under summation,

\[c^{\gamma}_{\lambda,\nu}=\sum_{u}c^{s,t;\gamma}_{\lambda,\nu;u}. \tag{8}\]

We will demonstrate that in many ways this refined algebra of eigenfunctions is _easier to understand_ than the algebra of Jack functions, due to the action of \(\mathcal{L}\), and that it leads ultimately to a proof of the Main Result 4.26.

### Organization of the paper

In Section 2, we review some preliminary material and recall the main results of the previous paper in this series [10]. We introduce the Nazarov-Sklyanin Lax operator and describe its spectrum. In Section 3, we begin the task of understanding the structure of the basis of Lax eigenfunctions. Here, we lay out the central new focus of this work, which is the algebra of products of these Lax eigenfunctions. We introduce a family of linear maps, the Trace functionals, that help us to explore the properties of the Lax eigenfunction products. These traces are associated with three different decompositions of the Hilbert space. We describe a cohomological approach to the understanding of the combined Trace map and compute its kernel and cokernel. In Section 4, we produce special elements of the algebra, the \(\beta\) and \(\theta\) elements, and show a key relation between their traces. We then use this relation to compute the traces of these elements, demonstrating a connection to the Jack Littlewood-Richardson coefficients. We conclude this section with the main theorem (4.26). In Section 5, we reinterpret the main results in terms of the language of double affine Hecke algebras, motivated by the results of Bourgine-Matsuo-Zhang [4]. In Section 6, we close out this work with some conjectures on the deeper structure of the algebra of Lax eigenfunctions. These conjectures would give a more direct explanation of the central results of this article.

### Acknowledgements

The author would like to thank Alexander Moll for introducing him to the key concepts in the work of Nazarov-Sklyanin over five years ago, for many long and helpful discussions, and for his contributions to this work and feedback on drafts of this paper.
The author also wants to thank Jean-Emile Bourgine for illuminating discussions on the \(\mathrm{SH}^{c}\) algebra and its holomorphic presentation, and Per Alexandersson for helpful discussions on matters of combinatorics.

## 2. Preliminaries

### Combinatorics

#### 2.1.1. Partitions

A partition \(\lambda=(\lambda_{1},\lambda_{2},...)\) is a sequence of non-increasing non-negative integers with a finite number of nonzero terms. The _size_ of a partition is denoted \(|\lambda|\). We often use condensed partition notation, e.g. \((1,1,2,3,3)=\{1^{2},2,3^{2}\}\). For a partition \(\lambda\), we write \(b\in\lambda\) to index the _boxes_ of the Young diagram of \(\lambda\). We represent boxes by their lower left corner \(b=(i,j)\in\mathbb{Z}^{2}\), where \(0\leq j\leq\lambda_{i}-1\). We denote by \(\lambda^{\times}\) the collection of boxes \(\lambda^{\times}:=\{b\in\lambda:b\neq(0,0)\}\). Let \(\varepsilon_{1}<0<\varepsilon_{2}\in\mathbb{R}\) be parameters.¹ For \(s=(s_{1},s_{2})\in\mathbb{Z}^{2}\) a box, we denote the _box content_ by

Footnote 1: These are the equivariant Omega background parameters, cf. [13].

\[[s]=[s_{1},s_{2}]:=s_{1}\varepsilon_{1}+s_{2}\varepsilon_{2} \tag{9}\]

For a partition \(\lambda\), we use the following standard conventions. For a box \(b\in\lambda\), let \(h^{U}(b)\) (resp. \(h^{L}(b)\)) be the upper (resp. lower) hook length of the box (see e.g. Stanley [15]). For example, \(h^{U}_{1^{2}2^{2}3}((2,0))=[2,-2]\). Let \(\lambda^{\prime}\) denote the transposed partition to \(\lambda\). Let \(\mathcal{A}_{\lambda}\) (resp. \(\mathcal{R}_{\lambda}\)) be the collection of boxes that can be added to \(\lambda\), the _add-set_ (resp. removed from \(\lambda\), the _rem-set_). We use the notation \(\mathcal{R}_{\lambda}^{+}=\{b+[1,1]:b\in\mathcal{R}_{\lambda}\}\) to indicate the _outer_ corners of the boxes that can be removed. In this paper, we draw partition _diagrams_ (and their associated partition _profiles_) in the Russian form, following the notation of [13]. In this way, the elements of \(\mathcal{A}_{\lambda}\) (resp. \(\mathcal{R}_{\lambda}^{+}\)) correspond to minima (resp. maxima) of the partition profile, as illustrated in Figure 1.

Figure 1. The basic objects of our notation for partitions.

#### 2.1.2. Symmetric Functions

We refer to the canonical source [9] for all foundational results. Consider the ring of symmetric functions \(\Lambda:=\mathbb{C}[x_{i}]^{\mathfrak{S}}\) in infinitely many variables \(x_{i}\). Define the power sum symmetric functions as \(p_{k}=\sum_{i}x_{i}^{k}\). For \(\mu=(\mu_{1},\mu_{2},\ldots)\) a partition, we write \(p_{\mu}=\prod_{k}p_{\mu_{k}}\), and denote the monomial symmetric functions \(m_{\mu}=x^{\mu}+\ldots\). For \(\alpha\in\mathbb{R}\), we define the \(\alpha\)_-deformed Hall inner product_ by

\[\langle p_{\mu},p_{\nu}\rangle_{\alpha}:=\delta_{\mu,\nu}z_{\mu}\alpha^{|\mu|},\text{ where }z_{\mu}=\prod_{k}(\mu_{k}!\,k^{\mu_{k}}). \tag{10}\]

#### 2.1.3. Jack Functions

Define the real _deformation parameter_ \(\alpha=-\varepsilon_{2}/\varepsilon_{1}\). The (integral form) Jack symmetric functions \(J_{\lambda}^{(\alpha)}\), indexed by partitions \(\lambda\), are a family of symmetric functions depending on the deformation parameter \(\alpha\), introduced in [7]. When \(\alpha=1\), these reduce to the standard Schur symmetric functions \(s_{\lambda}\).
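More precisely, at \(\alpha=1\) the integral normalization produces the Schur function scaled by the product of the hook lengths of \(\lambda\) (a standard fact, cf. [9]): \(J_{\lambda}^{(1)}=\big(\prod_{b\in\lambda}h(b)\big)\,s_{\lambda}\). For example,

\[J_{(2,1)}^{(1)}=3\,s_{(2,1)}=p_{1}^{3}-p_{3},\]

which agrees with setting \(\alpha=1\) in (16) below.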
**Proposition 2.1** (Jack Functions [9] Chap. VI, (4.5)). _There exists a unique family of symmetric functions \(J_{\lambda}\in\Lambda[\alpha]\), indexed by partitions \(\lambda\), which satisfy the following three properties:_

* _Orthogonality:_
\[\langle J_{\lambda},J_{\mu}\rangle_{\alpha}=0\text{ when }\lambda\neq\mu. \tag{11}\]
* _Triangularity:_
\[J_{\lambda}=\sum_{\mu\leq_{d}\lambda}c_{\lambda\mu}m_{\mu}, \tag{12}\]
_where \(\leq_{d}\) indicates dominance order on partitions._
* _Normalization:_
\[[m_{1^{n}}]J_{\lambda}=n!. \tag{13}\]

This normalization is known as the _integral_ form of the Jack symmetric functions, as they have the property that \(J_{\lambda}^{(\alpha)}\in\mathbb{Z}[\alpha,p_{1},p_{2},\ldots]\) in the basis of the power-sum symmetric functions, with the expansion

\[J_{\lambda}^{(\alpha)}=1\cdot p_{1}^{|\lambda|}+\ldots \tag{14}\]

For example, for partitions of size \(n=3\), the Jack functions are

\[J_{\{1^{3}\}}^{(\alpha)}(p)=p_{1}^{3}-3p_{2}p_{1}+2p_{3}=6m_{1^{3}}, \tag{15}\]
\[J_{\{1,2\}}^{(\alpha)}(p)=p_{1}^{3}+(\alpha-1)p_{2}p_{1}-\alpha p_{3}=6m_{1^{3}}+(\alpha+2)m_{1,2}, \tag{16}\]
\[J_{\{3\}}^{(\alpha)}(p)=p_{1}^{3}+3\alpha p_{2}p_{1}+2\alpha^{2}p_{3}=6m_{1^{3}}+3(\alpha+1)m_{1,2}+(\alpha+1)(2\alpha+1)m_{3}. \tag{17}\]

The triangularity of the Jack functions in the monomial basis is evident.

#### 2.1.4. Fock Module

We use the ring of coefficients \(\mathbb{C}_{\varepsilon}=\mathbb{C}(\varepsilon_{1},\varepsilon_{2})\). Out of the two deformation parameters, we build two secondary parameters: the _quantum_ parameter \(\hbar=-\varepsilon_{1}\varepsilon_{2}=-[(1,0)][(0,1)]\), and the _dispersion_ parameter \(\overline{\varepsilon}=\varepsilon_{1}+\varepsilon_{2}=[(1,1)]\). We consider a \(\hat{\mathfrak{gl}}_{1}\) Heisenberg current \(V(z)=\sum_{k}V_{k}z^{k}\), with \(V_{0}=0\) and \([V_{n},V_{m}]=\hbar n\delta_{n+m,0}\). This current acts on the Fock module \(\mathcal{F}=\mathbb{C}_{\varepsilon}[V_{1},V_{2},\cdots]\) via

\[V_{-k}=\hbar k\partial_{V_{k}},\quad k>0. \tag{18}\]

In this paper, we work with an alternate presentation of the ring of symmetric functions by embedding them into the Fock module via \(p_{k}\to(-\varepsilon_{1})^{-1}V_{k}\). In this basis, the Hall inner product becomes

\[||V_{1}^{d_{1}}V_{2}^{d_{2}}\cdots||_{\hbar}^{2}=\prod_{k=1}^{\infty}(\hbar k)^{d_{k}}d_{k}!. \tag{19}\]

With this, we have

\[V_{-k}=V_{k}^{\dagger}. \tag{20}\]

This ring has the natural grading operator \(\mathcal{N}\), where \(V_{k}\) has degree \(k\).

#### 2.1.5. Homogeneous Integral Normalization

We will use the _homogeneous_ normalization of the integral Jack functions (henceforth denoted with a lowercase \(j\)), considered as elements in the Fock module \(\mathcal{F}\), given by:

\[j_{\lambda}(V|\varepsilon_{1},\varepsilon_{2}):=(-\varepsilon_{1})^{|\lambda|}\cdot J_{\lambda}^{(\alpha=-\varepsilon_{2}/\varepsilon_{1})}(p=(-\varepsilon_{1})^{-1}V)\in\mathcal{F}. \tag{21}\]

With this normalization, the three homogeneous Jack functions for \(n=3\) are given by

\[j_{\{1^{3}\}}=V_{1}^{3}+3\varepsilon_{1}V_{1}V_{2}+2\varepsilon_{1}^{2}V_{3}, \tag{22}\]
\[j_{\{1,2\}}=V_{1}^{3}+(\varepsilon_{1}+\varepsilon_{2})V_{1}V_{2}+\varepsilon_{1}\varepsilon_{2}V_{3}, \tag{23}\]
\[j_{\{3\}}=V_{1}^{3}+3\varepsilon_{2}V_{1}V_{2}+2\varepsilon_{2}^{2}V_{3}. \tag{24}\]
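As a quick consistency check of (21), the substitution \(p_{k}=(-\varepsilon_{1})^{-1}V_{k}\), \(\alpha=-\varepsilon_{2}/\varepsilon_{1}\) applied to (16) recovers (23):

\[(-\varepsilon_{1})^{3}\left[\frac{V_{1}^{3}}{(-\varepsilon_{1})^{3}}+\frac{(\alpha-1)V_{2}V_{1}}{(-\varepsilon_{1})^{2}}-\frac{\alpha V_{3}}{(-\varepsilon_{1})}\right]=V_{1}^{3}+(\varepsilon_{1}+\varepsilon_{2})V_{1}V_{2}+\varepsilon_{1}\varepsilon_{2}V_{3},\]

using \((-\varepsilon_{1})(\alpha-1)=\varepsilon_{1}+\varepsilon_{2}\) and \(\varepsilon_{1}^{2}\alpha=-\varepsilon_{1}\varepsilon_{2}\).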
\tag{22}\] Note that these are all homogenous integral polynomials in \(\mathbb{Z}[\varepsilon_{1},\varepsilon_{2},V_{1},V_{2},\ldots]\), and that in this normalization we have the explicit transpositional symmetry: \[j_{\lambda}(V|\varepsilon_{1},\varepsilon_{2})=j_{\lambda^{\prime}}(V| \varepsilon_{2},\varepsilon_{1}). \tag{25}\] **Lemma 2.2** (Principal Specialization [15] Thm 5.4).: _For a partition \(\lambda\), we have_ \[j_{\lambda}(V_{i}=z)=\prod_{b\in\lambda}(z+[b]), \tag{26}\] _and hence_ \[\varpi_{\lambda}:=[V_{n}]j_{\lambda}=\prod_{b\in\lambda^{\times}}[b]. \tag{27}\] #### 2.1.6. Jack Littlewood-Richardson (LR) Coefficients Much of the work in this paper will be concerning the _Jack Littlewood-Richardson coefficients_, \(c_{\mu,\nu}^{\lambda}\), defined as the expansion coefficients in the product of Jack functions, \[j_{\mu}\cdot j_{\nu}=\sum_{\lambda}c_{\mu,\nu}^{\lambda}\,j_{\lambda}. \tag{28}\] In the literature, these are often denoted as \(c_{\mu,\nu}^{\lambda}(\alpha)\) to indicate the dependence on the deformation parameter \(\alpha\). In general, it is very difficult to find explicit closed-form expressions for these coefficients, instead most formulas involving them are recursive in nature. In this paper, we find new families of relations between these coefficients. In the sequel paper, [1], we make progress towards finding explicit closed form expressions. The most well known explicit formula for Jack LR coefficients is given by **Proposition 2.3** (Pieri Rule - Stanley '89 [15] Thm 6.1).: _Let \(\mu\subset\lambda\), and \(\lambda/\mu\) be a horizontal \(r\)-strip, i.e. no two boxes in the quotient shape \(\lambda/\mu\) are adjacent in a row. Then_ \[c_{1^{r},\mu}^{\lambda}=\frac{\left(\prod_{s\in\{1^{r}\}}h_{1^{r}}^{L}(s) \right)\left(\prod_{t\in\mu}A_{\mu}(t)\right)}{(\prod_{v\in\lambda}A_{\lambda} (v))}, \tag{29}\] _where_ \[A_{\sigma}(b)=h_{\sigma}^{U}(b)\text{ if }\lambda/\mu\text{ does not intersect the row of }b\text{, }h_{\sigma}^{L}(b)\text{ otherwise.} \tag{30}\] ### The Nazarov-Skylanin Lax Operator In this section, we recall the work of the previous paper in this series, [10], which explores the extraordinary quantum Lax operator introduced by Nazarov-Skylanin in [11]. #### 2.2.1. Preliminaries We enlarge our Hilbert space and work in the extended Fock module \(\mathcal{H}=\mathcal{F}\otimes\mathbb{C}[w]\), where \(w\) is of degree \(1\). The inner product is \[\langle V_{\mu}w^{m},V_{\nu}w^{n}\rangle=\delta_{n,m}\langle V_{\mu},V_{\nu} \rangle_{\hbar}. \tag{31}\] The total grading operator for \(\mathcal{H}\) is \[\mathcal{N}^{*}:=\mathcal{N}+w\partial_{w}=\hbar^{-1}\sum_{k>0}V_{k}V_{-k}+w \partial_{w}. \tag{32}\] This gives us the graded decomposition \[\mathcal{H}=\bigoplus_{n\geq 0}\mathcal{H}_{n}. \tag{33}\] On this space, we have several important projections on \(\mathcal{H}\). Firstly, \(\pi_{0}:\mathcal{H}\to\mathcal{F}\subset\mathcal{H}\) projects just onto the \(w^{0}\) component. \(\pi_{+}\) is its complement, projecting only onto positive powers of \(w\). \(\pi_{w}:F[w,w^{-1}]\to F[w]\) is the map that projects onto non-negative powers of \(w\). #### 2.2.2. Lax Operator We at last come to introducing the main actor in our story. 
**Definition 2.4**.: _The_ **Nazarov-Sklyanin Lax Operator** _[11] is the linear operator on \(\mathcal{H}=\mathcal{F}[w]\) given by_ \[\mathcal{L}=\pi_{w}\sum_{k>0}\left(w^{-k}V_{k}+w^{k}V_{-k}\right)+\overline{ \varepsilon}w\partial_{w} \tag{34}\] In the basis \(\mathcal{H}=\oplus_{k}(w^{k}\mathcal{F})\), we can express \(\mathcal{L}\) as the semi-infinite matrix operator, with coefficients in \(\operatorname{End}(\mathcal{F})\), \[\mathcal{L}:=\begin{pmatrix}0&V_{1}&V_{2}&V_{3}&\cdots\\ V_{-1}&\overline{\varepsilon}&V_{1}&V_{2}&\cdots\\ V_{-2}&V_{-1}&2\overline{\varepsilon}&V_{2}&\cdots\\ V_{-3}&V_{-2}&V_{-1}&3\overline{\varepsilon}&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix} \tag{35}\] One can check that \(\mathcal{L}\) commutes with grading operator 32, so let \(\mathcal{L}_{n}=\mathcal{L}|_{\mathcal{H}_{n}}\). Furthermore, let \(\mathcal{L}^{+}=\pi_{+}\mathcal{L}|_{\mathcal{H}_{+}}\) be the restrictions of \(\mathcal{L}\) to only the positive powers of \(w\). **Corollary 2.5** (Shift property).: (36) \[w^{-1}\mathcal{L}^{+}_{n+1}w=\mathcal{L}_{n}+\overline{\varepsilon}\] Note that from the definition 34 its clear that \(\mathcal{L}\) acts as derivation if either of the factors is in \(\mathcal{H}^{0}:=\pi_{0}\mathcal{H}=\mathcal{F}\), i.e. \[\mathcal{L}(\zeta\cdot\xi)=\mathcal{L}(\zeta)\cdot\xi+\zeta\cdot\mathcal{L}( \xi),\qquad\text{ if }\zeta\in\mathcal{H}^{0}. \tag{37}\] #### 2.2.3. Spectral Factors I We will make extended use of the following rational functions that are associated to partitions \(\lambda\). \[T_{\lambda}(u):=\prod_{s\in\lambda}N(u-[s]),\hskip 28.452756pt\text{where }N(u)=\frac{(u-[0,0])(u-[1,1])}{(u-[1,0])(u-[0,1])}. \tag{38}\] For example, in the simplest case \(T_{1}(u)\) has a zero at the top and bottom corners and poles at each of the side corners, illustrated in figure 2. Note that we have the cancellations of poles and zeros on the internal corners of the partition, and we are left with poles at the 'inner' corners, and zeros at the 'outer' corners, that is, \[T_{\lambda}(u)=u\cdot\frac{\prod_{t\in\mathcal{R}^{+}_{\lambda}}(u-[t])}{\prod _{s\in\mathcal{A}_{\lambda}}(u-[s])}. \tag{39}\] #### 2.2.4. Integrable Hierarchy We consider the following 'transfer' operator for \(\mathcal{L}\), \[\mathcal{T}(u):=\pi_{0}(u-\mathcal{L})^{-1}:\operatorname{End}(\mathcal{F}) \otimes\mathbb{C}(u). \tag{40}\] The motivating result for most of this work is the following remarkable property of the Lax operator \(\mathcal{L}\). **Theorem 2.6** ( Nazarov-Sklyanin (2013) [11] ).: _The transfer operator \(\mathcal{T}(u)\) is diagonalized on the homogenous Jack functions \(j_{\lambda}\in\mathcal{F}\),_ \[\mathcal{T}(u)\,j_{\lambda}=u^{-1}T_{\lambda}(u)\cdot j_{\lambda} \tag{41}\] _where \(T_{\lambda}(u)\) is the spectral factor 38._ #### 2.2.5. Transition Measures The influential work of Kerov [8] introduces the following objects. **Definition 2.7** ([8] eq. (3.1)).: _For \(s\in\mathcal{A}_{\lambda}\), define the co-transition measure_ \[\tau_{\lambda}^{s}:=\operatorname*{Res}_{u=[s]}u^{-1}T_{\lambda}(u)=\frac{\prod _{t\in\mathcal{R}_{\lambda}^{+}}[s-t]}{\prod_{s^{\prime}\in\mathcal{A}_{\lambda },s^{\prime}\neq s}[s-s^{\prime}]}. \tag{42}\] Note that for \(\varepsilon_{2}<0<\varepsilon_{1}\), it can be shown that \(\tau_{\lambda}^{s}>0\) and \(\sum_{s\in\mathcal{A}_{\lambda}}\tau_{\lambda}^{s}=1\), hence these coefficients define a probability measure on the add-set of \(\lambda\). A connection between these measures and Jack LR coefficients was shown by Kerov. 
**Lemma 2.8** (Kerov '97 [8] thm. (6.7)).: _The simplest Jack LR coefficient coincides with the co-transition measure,_ \[c_{1,\lambda}^{\lambda+s}=\tau_{\lambda}^{s}. \tag{43}\] _That is, Jack functions satisfy the following simple multiplication formula ('Pieri' rule)_ \[j_{1}\cdot j_{\lambda}=\sum_{s\in\mathcal{A}_{\lambda}}\tau_{\lambda}^{s}\,j _{\lambda+s}. \tag{44}\] From this, we can write \[u^{-1}T_{\lambda}(u)=\sum_{s\in\mathcal{A}_{\lambda}}c_{1,\lambda}^{\lambda+s }\frac{1}{u-[s]}. \tag{45}\] One of the main results (Theorem 4.26) of this paper is a generalization this Kerov relation between Jack LR coefficients and residues of certain'spectral' rational functions. Kerov also introduces the _transition measures_, for \(t\in\mathcal{R}_{\lambda}^{+}\), \[\tilde{\tau}_{\lambda}^{t}=\operatorname*{Res}_{u=[t]}u\,T_{\lambda}(u)^{-1}. \tag{46}\] ### Spectral Theorem Here we recall the results of Mickler-Moll [10] on the spectrum of the Nazarov-Skylanin Lax operator. Fix \(n\in\mathbb{N}\). Consider the \(\mathcal{L}\)-cyclic subspaces of \(\mathcal{H}_{n}\) generated by the Jack polynomials \(j_{\lambda}\in\mathcal{H}_{n}^{0}\). **Definition 2.9**.: _The "Jack-Lax" cyclic subspaces of \(j_{\lambda}\) under \(\mathcal{L}_{n}\) are denoted by_ \[z_{\lambda}:=Z(j_{\lambda},\mathcal{L}_{n})\subset\mathcal{H}_{n}. \tag{47}\] One immediate corollary of the NS theorem (41) in this language is **Corollary 2.10**.: \(\pi_{0}z_{\lambda}=j_{\lambda}\)_._ We can now state the main result of the first paper in this series. **Theorem 2.11** (Spectral Decomposition [10]).: _Under the action of the Nazarov-Skylanin Lax operator_ \(\mathcal{L}\)_, the space_ \(\mathcal{H}_{n}\) _has the following cyclic decomposition_ (48) \[\mathcal{H}_{n}=\bigoplus_{\lambda\vdash\,n}\mathpzc{z}_{\lambda}.\] _under which_ \(\mathcal{L}\) _acts in block diagonal form_ \(\mathcal{L}=\oplus\mathcal{L}_{\lambda}\)_, where_ \(\mathcal{L}_{\lambda}=\mathcal{L}_{n}|\mathpzc{z}_{\lambda}\)_._ * _The eigenfunctions of_ \(\mathcal{L}_{\lambda}\) _on_ \(\mathpzc{z}_{\lambda}\) _are labelled_ \(\{\psi_{\lambda}^{s}:s\in\mathcal{A}_{\lambda}\}\) _with eigenvalues given by the corresponding box content_ (49) \[\mathcal{L}\,\psi_{\lambda}^{s}=[s]\cdot\psi_{\lambda}^{s}.\] _Thus, the cyclic subspace_ \(\mathpzc{z}_{\lambda}\) _is given as_ (50) \[\mathpzc{z}_{\lambda}=\operatorname*{Span}_{s\in\mathcal{A}_{\lambda}}\{\psi_ {\lambda}^{s}\}\subset\mathcal{H}_{n}.\] \(\bullet\) _The eigenfunctions of_ \(\mathcal{L}_{\lambda}^{+}\) _on_ \(\mathpzc{z}_{\lambda}^{+}:=\pi_{+}\mathpzc{z}_{\lambda}\) _are labelled_ \(\{\tilde{\psi}_{\lambda}^{t}:t\in\mathcal{R}_{\lambda}^{+}\}\) _with eigenvalues given by the corresponding box content_ (51) \[\mathcal{L}^{+}\,\tilde{\psi}_{\lambda}^{t}=[t]\cdot\tilde{\psi}_{\lambda}^{t}.\] \(\bullet\) _These eigenfunctions can be normalized to satisfy_ (52) \[\pi_{0}\psi_{\lambda}^{s}=j_{\lambda},\qquad\tilde{\psi}_{\lambda}^{t}=\frac{1 }{[t]-\mathcal{L}}j_{\lambda}.\] _With this normalization, we have_ (53) \[j_{\lambda}=\sum_{s\in\mathcal{A}_{\lambda}}\tau_{\lambda}^{s}\psi_{\lambda}^ {s}\in\mathpzc{z}_{\lambda},\quad\text{ where }\quad\tau_{\lambda}^{s}:= \operatorname*{Res}_{u=[s]}\left(u^{-1}T_{\lambda}(u)\right).\] (54) \[\mathcal{L}_{n}j_{\lambda}=\sum_{t\in\mathcal{R}_{\lambda}^{+}}\tilde{\tau}_{ \lambda}^{t}\tilde{\psi}_{\lambda}^{t}\in\mathpzc{z}_{\lambda}^{+},\quad\text { where }\quad\tilde{\tau}_{\lambda}^{t}:=\operatorname*{Res}_{u=[t]}\left(uT_{ \lambda}(u)^{-1}\right).\] We also recall the 
following useful result: **Lemma 2.12** (Principal Specialization [10]).: (55) \[\psi_{\lambda}^{s}(z,1)=\prod_{b\in(\lambda+s)^{\times}}(z+[b]),\] _and hence_ \[[w^{|\lambda|}]\psi_{\lambda}^{s}(V,w)=[s]\varpi_{\lambda}. \tag{56}\] ## 3. Three Decompositions In this section, we begin the new work of this paper. As mentioned in the introduction, our primary objective will be to provide a new approach the classical problem of understanding the structure coefficients of products of Jack functions \[j_{\lambda}\cdot j_{\nu}=\sum_{\gamma}c_{\lambda,\nu}^{\gamma}j_{\gamma}. \tag{57}\] The central claim of this paper is that by considering the structure of the algebra of Lax eigenfunctions \[\psi_{\lambda}^{s}\cdot\psi_{\nu}^{t}=\sum_{\gamma,u}c_{\lambda,\nu;u}^{s,t; \gamma}\psi_{\gamma}^{u}, \tag{58}\] we will gain insight into the products of Jack functions. The _Jack-Lax Littlewood-Richardson_ coefficients, \(c_{\lambda,\nu;u}^{s,t;\gamma}\), will be illuminated throughout this paper. This algebra reproduces the Jack LR coefficients under the projection to \(\pi_{0}\), and hence \[c_{\lambda,\nu}^{\gamma}=\sum_{u\in\mathcal{A}_{\gamma}}c_{\lambda,\nu;u}^{s, t;\gamma}. \tag{59}\] We will build up towards goal of understanding these Jack-Lax LR coefficients, by first developing a structural theory for \(\psi_{\lambda}^{s}\). ### Norm Formulae We recall a famous formula of Stanley for the norm squared of Jack functions. **Proposition 3.1** (Stanley [15] Thm 5.8).: (60) \[|j_{\lambda}|^{2}=\prod_{b\in\lambda}h_{\lambda}^{U}(b)h_{\lambda}^{L}(b)\] We next prove a similar formula for Lax eigenfunctions. Let \(c_{b}(\lambda)\) (resp. \(r_{b}(\lambda)\)) be the subset of boxes of \(\lambda\) that are in the same column (resp. row) as \(b\). **Lemma 3.2** (\(\psi\) Norm Formula).: (61) \[|\psi_{\lambda}^{s}|^{2}=\prod_{b\in\lambda}C_{\lambda,s}^{U}(b)C_{\lambda,s}^ {L}(b)\] _where_ \[C_{\lambda,s}^{U}(b)=h_{\lambda}^{U}(b)\text{ if }b\notin c_{s}(\lambda)\text{, }h_{ \lambda+s}^{U}(b)\text{ otherwise.} \tag{62}\] \[C_{\lambda,s}^{L}(b)=h_{\lambda}^{L}(b)\text{ if }b\notin r_{s}(\lambda)\text{, }h_{ \lambda+s}^{L}(b)\text{ otherwise.} \tag{63}\] Proof.: As observed in the first paper in this series, the two ways of expanding the expression \(\langle\psi_{\lambda}^{s},j_{\lambda}\rangle\) lead to the formula \[|\psi_{\lambda}^{s}|^{2}=\frac{|j_{\lambda}|^{2}}{\tau_{\lambda}^{s}}. \tag{64}\] Using Kerov's identity 43 and Stanley's Pieri formula 29, we get \[\tau_{\lambda}^{s}=c_{1,\lambda}^{\lambda+s}=\frac{h_{1}^{L}((0,0))\left(\prod_{b \in\lambda}A_{\lambda}(b)\right)}{(\prod_{b\in\lambda+s}A_{\lambda+s}(b))}. \tag{65}\] Expanding this out, the factors not in the row-column shared with \(s\) cancel, and by using \(h_{1}^{L}((0,0))=h_{\lambda+s}^{L}(s)\), we get \[\tau_{\lambda}^{s}=\frac{\left(\prod_{b\in r_{s}(\lambda)}h_{\lambda}^{L}(b) \right)}{\left(\prod_{b\in r_{s}(\lambda)}h_{\lambda+s}^{L}(b)\right)}\frac{ \left(\prod_{b\in c_{s}(\lambda)}h_{\lambda}^{U}(b)\right)}{\left(\prod_{b\in c _{s}(\lambda)}h_{\lambda+s}^{U}(b)\right)}, \tag{66}\] and the result follows from 60. 
**Example 3.3**.: _We can compute (using \((\varepsilon_{1},\varepsilon_{2})=(X,Y)\) for readability)_ \[\psi_{1,2^{2}}^{(2,1)} = w^{0}(V_{1}^{5}+2(2X+Y)V_{1}^{3}V_{2}+(3X^{2}+XY+Y^{2})V_{2}^{2} V_{1}+2X(X+3Y)V_{1}^{2}V_{3}\] \[\quad+2X(X^{2}+Y^{2})V_{2}V_{3}+XY(7X+Y)V_{1}V_{4}+2X^{2}Y(X+Y)V_ {5})+\] \[w^{1}((2X+Y)V_{1}^{4}+2(3X^{2}+XY+Y^{2})V_{1}^{2}V_{2}+Y(5X^{2}-3 XY+Y^{2})V_{2}^{2}\] \[\quad+4X(X^{2}+Y^{2})V_{1}V_{3}+XY(4X^{2}-XY+Y^{2})V_{4})+\] \[w^{2}(2X(X+3Y)V_{1}^{3}+6X(X^{2}+Y^{2})V_{1}V_{2}+2X(2X^{2}-3XY+3 Y^{2})V_{3})+\] \[w^{3}(2XY(7X+Y)V_{1}^{2}+2XY(4X2-XY+Y^{2})V_{2})+\] \[w^{4}(10X^{2}y(X+Y)V_{1})+\] \[w^{5}(2X^{2}Y(2X^{2}+3XY+Y^{2})).\] _We also use the Stanley Pieri formula (29) to compute,_ \[|j_{1,2^{2}}|^{2}=[1,0][2,-1][3,-1][1,0][2,0]\cdot[0,-1][1,-2][2,-2][0,-1][1,-1], \tag{67}\] _where we have grouped as L/U hooks. Then (using red to highlight the hooks that have changed), we have_ \[|\psi_{1,2^{2}}^{(2,1)}|^{2}=[1,-1][2,-1][3,-1][1,0][2,0]\cdot[0,-1][1,-2][2,-2 ][1,-1][2,-1]. \tag{68}\] ### Action of \(w\) Next, we investigate the relationship between eigenvalues of different degrees via the action of multiplication by \(w:\mathcal{H}_{n}\to\mathcal{H}_{n+1}\) in terms of the basis of Lax-eigenfunctions \(\psi\). We begin with a \(\mathcal{L}_{n}\) eigenfunction \(\psi_{\gamma}^{t}\), for \(\gamma\,\vdash\,n\). We note that the shift property \(w^{-1}\mathcal{L}_{n+1}^{+}w=\mathcal{L}_{n}+\overline{\varepsilon}\) (2.5), yields \[\mathcal{L}_{n+1}^{+}\left(w\psi_{\gamma}^{t}\right)=w(\mathcal{L}_{n}+ \overline{\varepsilon})\psi_{\gamma}^{t}=[t+(1,1)]w\psi_{\gamma}^{t}. \tag{69}\] That is, \(w\psi_{\gamma}^{t}\in\mathcal{H}_{n+1}\) is in the \([t+(1,1)]\) eigenspace of \(\mathcal{L}_{n+1}^{+}\). First, we determine precisely what eigenfunction this is, as this eigenspace is generically greater than one dimensional. The spectrum of \(\mathcal{L}_{n+1}^{+}\) was provided in the spectral theorem 2.11. **Theorem 3.4** (Shift Theorem).: _We have followed equality of eigenfunctions of \(\mathcal{L}_{n+1}^{+}\),_ \[w\,\psi_{\gamma}^{t}=\tilde{\psi}_{\gamma+t}^{t+(1,1)}. \tag{70}\] Proof.: We prove by induction on size of the partition \(\gamma\). For the base case, we check \(\tilde{\psi}_{1}^{(1,1)}=w=w\psi_{\emptyset}^{(0,0)}\). For the inductive step, assume equation (70) holds for all \(\gamma\) with \(|\gamma|\leq n\). We begin with the Pieri rule (44), for \(\lambda\,\vdash\,n\): \[j_{1}j_{\lambda}=\sum_{s\in\mathcal{A}_{\lambda}}\tau_{\lambda}^{s}\,j_{ \lambda+s}. \tag{71}\] We first act with \(\mathcal{L}\) on both sides of this equation. In general \(\mathcal{L}\) is not a derivation. However, when acting on terms of degree zero, it is (37). After dividing both sides by \(w\), this yields \[q_{1}j_{\lambda}+j_{1}q_{\lambda}=\sum_{s\in\mathcal{A}_{\lambda}}\tau_{ \lambda}^{s}\,q_{\lambda+s}, \tag{72}\] where we have used the following definition for \(\gamma\,\vdash\,(n+1)\), \[q_{\gamma}:=w^{-1}\mathcal{L}_{n+1}j_{\gamma}\in\mathcal{H}_{n}. \tag{73}\] Note, from 54 we have \[q_{\gamma}=\sum_{t\in\mathcal{R}_{\gamma}}\tilde{\tau}_{\gamma}^{t+(1,1)}w^{- 1}\tilde{\psi}_{\gamma}^{t+(1,1)}\in\pi^{+}\mathcal{H}_{n+1}. \tag{74}\] The strategy will be to hit both sides with the projector \(P_{[s]}\) onto the \([s]\) eigenspace of \(\mathcal{L}\), for a choice of \(s\in\mathcal{A}_{\lambda}\), and equate the results. 
On the right hand side of (72), the only term in the sum is not annihilated by the spectral projector \(P_{[s]}\) is \(q_{\lambda+s}\), since by (69) the eigenvalues appearing in the \(\psi\) decomposition of some \(q_{\gamma}\) are precisely \(\{[t]:t\in\mathcal{R}_{\gamma}\}\), and only \(\lambda+s\) has a maximum at \(s+(1,1)\). Thus, only term that survives the projection is \[P_{[s]}q_{\lambda+s}=\tilde{\tau}_{\lambda+s}^{s+(1,1)}w^{-1}\tilde{\psi}_{ \lambda+s}^{s+(1,1)}. \tag{75}\] Now for the left hand side of (72), we expand \[q_{1}j_{\lambda}+j_{1}q_{\lambda}=\hbar\sum_{u\in\mathcal{A}_{\lambda}}\tau_{ \lambda}^{u}\psi_{\lambda}^{u}+j_{1}w^{-1}\sum_{t\in\mathcal{R}_{\lambda}} \tilde{\tau}_{\lambda}^{t}\tilde{\psi}_{\lambda}^{t+(1,1)}. \tag{76}\] We then use the inductive hypothesis, \(w^{-1}\tilde{\psi}_{\lambda}^{t+(1,1)}=\psi_{\lambda-t}^{t}\), to rewrite the last term in this expression \[\hbar\sum_{u\in\mathcal{A}_{\lambda}}\tau_{\lambda}^{u}\psi_{\lambda}^{u}+j_{ 1}\sum_{t\in\mathcal{R}_{\lambda}}\tilde{\tau}_{\lambda}^{t}\psi_{\lambda-t}^ {t}. \tag{77}\] To expand this further, we need to explicitly expand out \(j_{1}\cdot\psi_{\lambda-t}^{t}\). We first note that from the derivation property 37, we have \[(\mathcal{L}-[t])\left(j_{1}\cdot\psi_{\lambda-t}^{t}\right)=\hbar\,w\,\psi_{ \lambda-t}^{t}. \tag{78}\] This tells us that \[j_{1}\cdot\psi^{t}_{\lambda-t}=(\mathcal{L}-[t])^{-1}\hbar\,w\,\psi^{t}_{\lambda- t}+r^{t}, \tag{79}\] where \(r^{t}\) is the \([t]\) eigenspace of \(\mathcal{L}\). We use the inductive hypothesis a second time, in conjunction with the formula (52), to write \[w\,\psi^{t}_{\lambda-t}=\tilde{\psi}^{t+(1,1)}_{\lambda}=\frac{1}{[t+(1,1)]- \mathcal{L}}j_{\lambda}=\sum_{s\in\mathcal{A}_{\lambda}}\frac{1}{[t+(1,1)-s]} \tau^{s}_{\lambda}\psi^{s}_{\lambda} \tag{80}\] So we find \[j_{1}\cdot\psi^{t}_{\lambda-t}=\sum_{s\in\mathcal{A}_{\lambda}}\frac{\hbar\, \tau^{s}_{\lambda}}{[t-s][t+(1,1)-s]}\psi^{s}_{\lambda}+r^{t} \tag{81}\] But we are not yet done. We decompose the term \(r^{t}\) it into its \(\mathpzc{z}_{\gamma}\) components, \[r^{t}=\sum_{\gamma\,:\,t\in\mathcal{A}_{\gamma}}r_{\gamma}\psi^{t}_{\gamma}, \quad r_{\gamma}\in\mathbb{C}. \tag{82}\] We know only a single \(\psi\) can appear in each \(\mathpzc{z}_{\gamma}\), since the spectrum of \(\mathcal{L}_{\gamma}:=\mathcal{L}|_{\mathpzc{z}_{\gamma}}\) is multiplicity free. Note that no \(\mathpzc{z}_{\lambda}\) component appears in this decomposition of \(r^{t}\), since \(t\) is not in \(\mathcal{A}_{\lambda}\). Rather, all the \(\mathpzc{z}_{\lambda}\) components in equation (81) are in the first term on the right hand side. We now apply \(\pi_{0}\) to (81) after using the decomposition (82), to get \[j_{1}\cdot j_{\lambda-t} = \left(\sum_{s\in\mathcal{A}_{\lambda}}\frac{\hbar\,\tau^{s}_{ \lambda}}{[t-s][t+(1,1)-s]}\right)j_{\lambda}+\sum_{\gamma\,:\,t\in\mathcal{A }_{\gamma}}r_{\gamma}j_{\gamma} \tag{84}\] \[= \tau^{t}_{\lambda-t}j_{\lambda}+\sum_{\gamma\,:\,t\in\mathcal{A} _{\gamma}}r_{\gamma}j_{\gamma} \tag{83}\] where we have used the identity (448) to evaluate the coefficient of \(j_{\lambda}\). Comparing this result with the Pieri rule (44), we can read off the \(r_{\gamma}\) coefficients to give us \[r^{t}=\sum_{u\in\mathcal{A}_{\lambda-t},u\neq t}\tau^{u}_{\lambda-t}\psi^{t}_ {\lambda-t+u}. 
\tag{85}\] After all this we find the explicit expression we sought \[j_{1}\cdot\psi^{t}_{\lambda-t} = \sum_{v\in\mathcal{A}_{\lambda}}\frac{\hbar\,\tau^{v}_{\lambda}} {[t-v][t+(1,1)-v]}\psi^{v}_{\lambda} \tag{87}\] \[+\sum_{u\in\mathcal{A}_{\lambda-t},u\neq t}\tau^{u}_{\lambda-t} \psi^{t}_{\lambda-t+u}. \tag{86}\] Plugging all this back into (77) we find \[q_{1}j_{\lambda}+j_{1}q_{\lambda} = \hbar\sum_{u\in\mathcal{A}_{\lambda}}\tau^{u}_{\lambda}\psi^{u}_{\lambda} \tag{89}\] \[+\sum_{t\in\mathcal{R}_{\lambda}}\bar{\tau}^{t+(1,1)}_{\lambda} \sum_{v\in\mathcal{A}_{\lambda}}\frac{\hbar\,\tau^{v}_{\lambda}}{[t-v][t-v]} \psi^{v}_{\lambda} \tag{88}\] \[+\sum_{t\in\mathcal{R}_{\lambda}}\tilde{\tau}_{\lambda}^{t+(1,1)}\sum_{u\in \mathcal{A}_{\lambda-t},u\neq t}\tau_{\lambda-t}^{u}\psi_{\lambda-t+u}^{t}. \tag{90}\] Because \(s\in\mathcal{A}_{\lambda}\) can not be of the form \(t\) for any \(t\in\mathcal{R}_{\lambda}\), we can see that the only eigenfunction with eigenvalue \([s]\) for our specific choice of \(s\in\mathcal{A}_{\lambda}\) that can appear on the left hand side is \(\psi_{\lambda}^{s}\), with coefficient \[P_{[s]}\left(q_{1}j_{\lambda}+j_{1}q_{\lambda}\right) = \tau_{\lambda}^{s}\left(\hbar+\sum_{t\in\mathcal{R}_{\lambda}} \tilde{\tau}_{\lambda}^{t+(1,1)}\frac{\hbar}{[t-s][t+(1,1)-s]}\right)\psi_{ \lambda}^{s} \tag{92}\] \[= \tau_{\lambda}^{s}\left(\tilde{\tau}_{\lambda+s}^{s+(1,1)}\right) \psi_{\lambda}^{s} \tag{91}\] where we have used the identity (449) to evaluate the sum. Thus, the result of hitting equation (72) with \(P_{[s]}\) is the equality, \[\tau_{\lambda}^{s}\tilde{\tau}_{\lambda+s}^{s+(1,1)}\psi_{\lambda}^{s}=\tau_ {\lambda}^{s}\tilde{\tau}_{\lambda+s}^{s+(1,1)}w^{-1}\tilde{\psi}_{\lambda+s} ^{s+(1,1)}, \tag{93}\] That is, \(w\cdot\psi_{\lambda}^{s}=\tilde{\psi}_{\lambda+s}^{s+(1,1)}\). Seeing as \(s\in\mathcal{A}_{\lambda}\) was arbitrary, we find that equation (70) holds for all \(\gamma\,\vdash\,(n+1)\), and so we have completed the inductive step. With this result, we no longer need to mention the eigenvalues of \(\mathcal{L}^{+}\), as they are determined by \(w\) and the eigenvalues of \(\mathcal{L}\). In this spirit, we use the result (80) to determine the action of \(w\) in the \(\psi\) basis, **Corollary 3.5**.: _The action of \(w\) in the basis of Lax eigenfunctions \(\psi\) is given by_ \[w\cdot\psi_{\lambda}^{t}=\sum_{s\in\mathcal{A}_{\lambda+t}}\frac{\tau_{ \lambda+t}^{s}}{[s-t-(1,1)]}\psi_{\lambda+t}^{s}. \tag{94}\] In the remainder of this paper, we will make use of the following elements that appeared in the above proof, **Definition 3.6**.: (95) \[q_{\gamma}:=w^{-1}\mathcal{L}_{n+1}j_{\gamma}\in\mathcal{H}_{n}.\] ### Littlewood-Richardson Coefficients Here, we can state the first example of the Jack-Lax Littlewood-Richardson coefficients (58), by providing a refinement of the simplest Pieri rule (44), **Lemma 3.7**.: (96) \[\psi_{1}^{v}\cdot\psi_{\lambda}^{s} = \sum_{u\in\mathcal{A}_{\lambda+s}}\frac{[-v][s-u-v+(1,1)]}{[s-u][s -u+(1,1)]}\,\tau_{\lambda+s}^{u}\psi_{\lambda+s}^{u}\] (97) \[+\sum_{t\in\mathcal{A}_{\lambda},t\neq s}\tau_{\lambda}^{t}\psi_ {\lambda+t}^{s}.\] Proof.: We note that \(\psi_{1}^{v}=j_{1}+[v]w\). 
We combine the formula (86) \[j_{1}\cdot\psi_{\lambda}^{s} = \sum_{b\in\mathcal{A}_{\lambda+s}}\frac{\hbar\,\tau_{\lambda+s}^{b} }{[s-b][s+(1,1)-b]}\psi_{\lambda+s}^{b} \tag{99}\] \[+\sum_{t\in\mathcal{A}_{\lambda},t\neq s}\tau_{\lambda}^{t}\psi_{ \lambda+t}^{s}, \tag{98}\] with (94) \[[v]w\cdot\psi_{\lambda}^{s} = \sum_{b\in\mathcal{A}_{\lambda+s}}\frac{[v]\tau_{\lambda+s}^{b}} {[b-s-(1,1)]}\psi_{\lambda+s}^{b}, \tag{100}\] to see that the coefficient of \(\tau_{\lambda+s}^{b}\psi_{\lambda+s}^{b}\) is \[\frac{\hbar}{[s-b][s+(1,1)-b]}+\frac{[v]}{[b-s-(1,1)]}=\frac{[v][(1,1)-v]-[v ][s-b]}{[s-b][s+(1,1)-b]}, \tag{101}\] and the result follows. If we apply \(\pi_{0}\) to this formula, we recover the usual Pieri rule (44). ### \(\mathcal{Y}\!y\!z\) Decomposition One of the ways we will gain insight into the structure of the algebra 58 is through various decompositions of the space \(\mathcal{H}\). The first of these is given by the subspaces \(\mathpzc{z}_{\lambda}:=\operatorname{Span}_{s\in\mathcal{A}_{\lambda}}\{\psi_ {\lambda}^{s}\}\subset\mathcal{H}_{|\lambda|}\), that is, \[\mathcal{H}_{n}=\bigoplus_{\lambda\vdash\,n}\mathpzc{z}_{\lambda}. \tag{102}\] **Example 3.8**.: The second decomposition of \(\mathcal{H}\) is the eigen-decomposition under the Lax operator \(\mathcal{L}\). Denote the \([s]\) eigenspace of \(\mathcal{L}_{n}\) on \(\mathcal{H}_{n}\) as \(\mathpzc{y}_{n}^{s}\). We then have \[\mathcal{H}_{n}=\bigoplus_{s:\,[s]\in\operatorname{Spec}\mathcal{L}_{n}}\mathpzc {y}_{n}^{s}. \tag{103}\] Motivated by the results of the previous section, we define the third decomposition into subspaces **Definition 3.9**.: (104) \[\mathpzc{\Upsilon}_{\gamma}:=\operatorname{Span}_{t\in\mathcal{R}_{\gamma}}\{ \psi_{\gamma-t}^{t}\}\subset\mathcal{H}_{|\gamma|-1}\] These spaces also give us a decomposition of \(\mathcal{H}\). **Corollary 3.10**.: (105) \[\mathcal{H}_{n}=\bigoplus_{\gamma^{\perp}\,(n+1)}\mathcal{X}_{\gamma}.\] With this definition, the results of the previous section can be re-stated in the following simple way. **Corollary 3.11**.: _For \(\gamma\,\vdash\,(n+1)\), the \(\mathfrak{X}\) spaces are also Lax orbits_ \[\mathfrak{X}_{\gamma}=Z(q_{\gamma},\mathcal{L}_{n})\subset\mathcal{H}_{n}. \tag{106}\] _or equivalently,_ \[q_{\gamma}=\sum_{t\in\mathcal{R}_{\gamma}}\tilde{\tau}_{\gamma}^{t+(1,1)}\psi _{\gamma-t}^{t}\in\mathfrak{X}_{\gamma}. \tag{107}\] _Furthermore, multiplication by \(w\),_ \[w:\mathfrak{X}_{\lambda}\to\mathfrak{Z}_{\lambda}^{+}, \tag{108}\] _is an isomorphism._ If we let \(\Pi=w^{-1}\pi_{+}:\mathcal{H}_{n}\to\mathcal{H}_{n-1}\), we have the inverse statement to 94 **Lemma 3.12**.: (109) \[\Pi\psi_{\lambda}^{s}=\sum_{t\in\mathcal{R}_{\lambda}}\frac{\tilde{\tau}_{ \lambda}^{t+(1,1)}}{[t+(1,1)-s]}\psi_{\lambda-t}^{t}\in\mathfrak{X}_{\lambda}.\] Proof.: We note from [10] (A.5.3) we have \(\psi_{\lambda}^{s}=j_{\lambda}+\frac{1}{\mathcal{L}^{+}-[s]}wq_{\lambda}\), from which we find \(\pi_{+}\psi_{\lambda}^{s}=w\frac{1}{\mathcal{L}+\overline{\varepsilon}-[s]}q_{\lambda}\). With these definitions, we can provide a lifting of the obvious identity \(|\mathcal{A}_{\lambda}|=1+|\mathcal{R}_{\lambda}|\) to the level of vector spaces. 
**Corollary 3.13** (Structural Theorem).: (110) \[\mathbb{z}_{\lambda}=\mathbb{C}\,j_{\lambda}\oplus w\cdot\mathbb{x}_{\lambda}.\] We have shown that we have three decompositions of \(\mathcal{H}\), \[\mathcal{H}_{n}=\bigoplus_{\gamma\,\vdash\,n+1}\mathbb{x}_{\gamma}=\bigoplus_{ [s]\in\operatorname{Spec}\mathcal{L}_{n}}\mathbb{y}^{s}=\bigoplus_{\lambda \vdash\,n}\mathbb{z}_{\lambda}. \tag{111}\] We note that intersection of any two of \(\mathbb{z}_{\lambda},\mathbb{x}_{\lambda+s}\) or \(\mathbb{y}^{s}\) is the line \(\mathbb{C}\cdot\psi_{\lambda}^{s}\subset\mathcal{H}_{n}\), and that the intersection of any two of any type of these subspaces is at most one dimensional. #### 3.4.1. \(\pi_{\circ}\) operator For a brief interlude, we introduce another projection operator. **Lemma 3.14**.: _Let \(A:=\pi_{0}\mathcal{L}_{n+1}w:\mathcal{H}_{n}\to\pi^{0}H_{n+1}\), and \(B:=w^{-1}\mathcal{L}_{n+1}:\mathcal{H}_{n+1}^{0}\to\mathcal{H}_{n}\), then we have_ \[AB=n\hbar\operatorname{Id}_{\pi^{0}\mathcal{H}_{n+1}}. \tag{112}\] Let \(\pi_{\circ}\in\operatorname{End}\mathcal{H}_{n}\) be defined by \(\pi_{\circ}=(n\hbar)^{-1}BA\). Then \(\pi_{\circ}{}^{2}=\pi_{\circ}\), that is, \(\pi_{\circ}\) is a projection operator of rank \(\dim\pi^{0}\mathcal{H}_{n+1}=p(n+1)\). We have shown that \(Bj_{\lambda}=q_{\lambda}\) and \(Aq_{\lambda}=n\hbar j_{\lambda}\), indeed \(A\psi_{\gamma-s}^{s}=j_{\gamma}\). Hence \(\pi_{\circ}\) preserves the \(\mathfrak{X}\) decomposition of \(\mathcal{H}_{n}\), and \[\pi_{\circ}|_{\mathbb{X}_{\gamma}}=\pi_{q_{\gamma}}, \tag{113}\] that is, \(\pi_{\circ}\) restricted to \(\mathfrak{X}_{\lambda}\) is rank one and is equal to the projection operator onto \(q_{\lambda}\). Compare this to the companion statement \[\pi_{0}|_{\mathbb{Z}_{\lambda}}=\pi_{j_{\lambda}}. \tag{114}\] ### Traces #### 3.5.1. Top Powers The results so far concern looking at \(\pi_{0}\) of a resolvent, that is, in the lowest powers of \(w\). In this section, we rather look at the top component in powers of \(w\). For any \(\zeta\in\mathcal{H}_{n}\), define \[\pi_{*}\zeta:=[w^{n}]\zeta=\langle w^{n},\zeta\rangle\in\mathbb{C}_{\varepsilon}. \tag{115}\] **Lemma 3.15**.: _The highest and lowest \(w\)-components of the Lax resolvent acting on a Jack function are given by_ \[\frac{1}{u-\mathcal{L}}j_{\lambda}=u^{-1}T_{\lambda}(u)\cdot j_{\lambda}w^{0}+ \cdots+\varpi_{\lambda}\left(T_{\lambda}(u)-1\right)\cdot w^{|\lambda|}, \tag{116}\] _where \(\varpi_{\lambda}\coloneqq[V_{|\lambda|}]j_{\lambda}\) as before (27)._ Proof.: Recall from 41, we have \(\pi_{0}\frac{1}{u-\mathcal{L}}j_{\lambda}=u^{-1}T_{\lambda}(u)\cdot j_{\lambda}\). We use \(j_{\lambda}=\sum_{s\in\mathcal{A}_{\lambda}}\tau_{\lambda}^{s}\psi_{\lambda}^{s}\), and compute the top component \[\pi_{*}\frac{1}{u-\mathcal{L}}j_{\lambda} = \sum_{s\in\mathcal{A}_{\lambda}}\tau_{\lambda}^{s}\frac{1}{u-[s] }\pi_{*}\psi_{\lambda}^{s} \tag{118}\] \[= \varpi_{\lambda}\sum_{s\in\mathcal{A}_{\lambda}}\frac{[s]\tau_{ \lambda}^{s}}{u-[s]}\] (119) \[= \varpi_{\lambda}\left(\sum_{s\in\mathcal{A}_{\lambda}}\frac{u\, \tau_{\lambda}^{s}}{u-[s]}-\sum_{s\in\mathcal{A}_{\lambda}}\tau_{\lambda}^{s}\right)\] (120) \[= \varpi_{\lambda}\left(T_{\lambda}(u)-1\right). \tag{117}\] On the second line we used the principal specialization result (56) \(\pi_{*}\psi_{\lambda}^{s}=[s]\varpi_{\lambda}\). To reorient the rest of our work towards working with these top powers of \(w\), we introduce new conventions. 
**Definition 3.16**.: _Define the rescaled 'hatted' elements_ \[\hat{\psi}_{\lambda}^{s}:=\psi_{\lambda}^{s}/\pi_{*}(\psi_{\lambda}^{s})=\psi_ {\lambda}^{s}/[s]\varpi_{\lambda},\quad\hat{j}_{\lambda}:=j_{\lambda}/\varpi_ {\lambda},\quad\hat{q}_{\lambda}:=q_{\lambda}/\varpi_{\lambda}, \tag{121}\] \[\hat{\tau}_{\lambda}^{s}:=[s]\tau_{\lambda}^{s}. \tag{122}\] With these redefinitions we have \[\hat{j}_{\lambda}=\sum_{s\in\mathcal{A}_{\lambda}}\hat{\tau}_{\lambda}^{s} \hat{\psi}_{\lambda}^{s},\qquad\sum_{s\in\mathcal{A}_{\lambda}}\hat{\tau}_{ \lambda}^{s}=0,\qquad\hat{q}_{\gamma}=\sum_{t\in\mathcal{R}_{\gamma}}\bar{ \tau}_{\gamma}^{t+(1,1)}\hat{\psi}_{\gamma-t}^{t}. \tag{123}\] **Example 3.17**.: (124) \[\hat{j}_{\{2^{2}\}}=\frac{1}{\varepsilon_{1}\varepsilon_{2}(\varepsilon_{1}+ \varepsilon_{2})}V_{1}4+\frac{2}{\varepsilon_{1}\varepsilon_{2}}V_{1}^{2}V_{2} +\frac{\varepsilon_{1}^{2}-\varepsilon_{1}\varepsilon_{2}+\varepsilon_{2}^{2}} {\varepsilon_{1}\varepsilon_{2}(\varepsilon_{1}+\varepsilon_{2})}V_{2}^{2}+ \frac{4}{\varepsilon_{1}+\varepsilon_{2}}V_{1}V_{3}+V_{4}.\] It is important to note that this normalization does **not** work in the Schur case, where \((\varepsilon_{2},\varepsilon_{1})=(1,-1)\), since for any partition \(\mu\) containing the box \((1,1)\), the factor \([1,1]=\varepsilon_{1}+\varepsilon_{2}\) in \(\varpi_{\mu}=\prod_{s\in\mu^{\times}}[s]\) vanishes. For the remainder of this article, we will assume that \(\varepsilon_{1}+\varepsilon_{2}\neq 0\). We leave the investigation of Schur polynomials and ordinary Littlewood-Richardson coefficients via the \(\varepsilon_{1}+\varepsilon_{2}\to 0\) degeneration of our methods and results to future research. #### 3.5.2. Trace functionals We continue the shifting of emphasis in our investigation to the coefficients of top powers of \(w\) by introducing the first of three _trace_ functionals. **Definition 3.18**.: _The \(\boldsymbol{y}\)-trace functional \(\boldsymbol{y}_{u}:\mathcal{H}_{n}\to\mathbb{C}_{\boldsymbol{\varepsilon}}(u)\), is given by_ \[\boldsymbol{y}_{u}(\zeta):=\pi_{*}\frac{1}{u-\mathcal{L}}\zeta. \tag{125}\] With the above redefinitions, we can restate Lemma 3.15 concisely **Corollary 3.19**.: (126) \[\boldsymbol{y}_{u}(\hat{j}_{\lambda})=T_{\lambda}(u)-1.\] We note that this trace is closely associated with the \(\mathscr{Y}\) subspaces (the \(\mathcal{L}\)-eigenspaces). We extend this definition the following three families of linear functionals \(\mathcal{H}_{n}\to\mathbb{C}_{\boldsymbol{\varepsilon}}\) associated to each of the three decompositions (111) of \(\mathcal{H}\) **Definition 3.20**.: (127) \[\boldsymbol{x}_{\gamma}(\zeta)=\pi_{*}P_{\chi_{\gamma}}\zeta,\qquad \boldsymbol{y}^{s}(\zeta)=\pi_{*}P_{\mathscr{Y}^{s}}\zeta,\qquad\boldsymbol{z }_{\lambda}(\zeta)=\pi_{*}P_{\mathscr{Z}_{\lambda}}\zeta.\] Note that we have \(\boldsymbol{y}^{s}(\zeta)=\operatorname{Res}_{u=[s]}\boldsymbol{y}_{u}(\zeta)\). **Lemma 3.21**.: _The traces are given by the following expressions,_ \[\boldsymbol{x}_{\gamma}(\zeta)=\langle\frac{\hat{q}_{\gamma}}{|\hat{j}_{ \gamma}|^{2}},\zeta\rangle,\qquad\boldsymbol{y}_{u}(\zeta)=\langle\frac{1}{u- \mathcal{L}}w^{n},\zeta\rangle,\qquad\boldsymbol{z}_{\lambda}(\zeta)=\langle \frac{w\hat{q}_{\lambda}}{|\hat{j}_{\lambda}|^{2}},\zeta\rangle. 
\tag{128}\] _Equivalently,_ \[\frac{\hat{q}_{\gamma}}{|\hat{j}_{\gamma}|^{2}}=\sum_{t\in\mathcal{R}_{\gamma }}\frac{\hat{\psi}_{\gamma-t}^{t}}{|\hat{\psi}_{\gamma-t}^{t}|^{2}},\qquad w^ {n}=\sum_{\lambda}\frac{w\hat{q}_{\lambda}}{|\hat{j}_{\lambda}|^{2}},\qquad \frac{w\hat{q}_{\lambda}}{|\hat{j}_{\lambda}|^{2}}=\sum_{s\in\mathcal{A}_{ \lambda}}\frac{\hat{\psi}_{\lambda}^{s}}{|\hat{\psi}_{\lambda}^{s}|^{2}}. \tag{129}\] Proof.: The middle result of 128 is just a restating of 125. By definition, \(\boldsymbol{z}_{\lambda}\) is the linear functional that takes the value \(1\) on every \(\hat{\psi}_{\lambda}^{s}\in\mathcal{Z}_{\lambda}\), and vanishes for every other basis element in \(\mathcal{H}\). Similarly for \(\boldsymbol{x}_{\gamma}\) and \(\hat{\psi}_{\gamma-t}^{t}\in\mathcal{X}_{\gamma}\). Hence the equivalence between the first and last entries of each of 128 and 129. To check 129, we start with \(\mathcal{L}V_{n}=n\hbar w^{n}\), so \[w^{n}=(n\hbar)^{-1}\mathcal{L}V_{n}=(n\hbar)^{-1}\sum_{\lambda}\langle V_{n}, \hat{j}_{\lambda}\rangle\frac{\mathcal{L}\hat{j}_{\lambda}}{|\hat{j}_{\lambda} |^{2}}=\sum_{\lambda}\frac{w\hat{q}_{\lambda}}{|\hat{j}_{\lambda}|^{2}}. \tag{130}\] Next, we expand \[\frac{\mathcal{L}\hat{j}_{\lambda}}{|\hat{j}_{\lambda}|^{2}}=\sum_{s\in \mathcal{A}_{\lambda}}\frac{[s]\hat{\tau}_{s}^{s}\hat{\psi}_{\lambda}^{s}}{| \hat{j}_{\lambda}|^{2}}=\sum_{s\in\mathcal{A}_{\lambda}}\frac{\hat{\psi}_{ \lambda}^{s}}{|\hat{\psi}_{\lambda}^{s}|^{2}}. \tag{131}\] Here we've used \(|\hat{\psi}_{\gamma}^{s}|^{2}=|\hat{j}_{\gamma}|^{2}/[s]\hat{\tau}_{\gamma}^{s}\), from 61. For the first of 129, we compute \[\langle\hat{\psi}^{t}_{\gamma-t},\hat{q}_{\gamma}\rangle=\langle w\hat{\psi}^{t}_ {\gamma-t},w\hat{q}_{\gamma}\rangle \tag{132}\] Next we use 94, which still holds with the new conventions, \[w\cdot\hat{\psi}^{t}_{\gamma-t}=\sum_{s\in\mathcal{A}_{\gamma}}\frac{\hat{ \tau}^{s}_{\gamma}}{[s-t-(1,1)]}\hat{\psi}^{s}_{\gamma}. \tag{133}\] To get \[\sum_{s\in\mathcal{A}_{\gamma}}\frac{\hat{\tau}^{s}_{\gamma}}{[s-t-(1,1)]} \langle\hat{\psi}^{s}_{\gamma},\mathcal{L}\hat{j}_{\gamma}\rangle=\sum_{s\in \mathcal{A}_{\gamma}}\frac{\hat{\tau}^{s}_{\gamma}}{[s-t-(1,1)]}[s]\hat{\tau}^ {s}_{\gamma}|\hat{\psi}^{s}_{\gamma}|^{2}=\sum_{s\in\mathcal{A}_{\gamma}} \frac{\hat{\tau}^{s}_{\gamma}}{[s-t-(1,1)]}|\hat{j}_{\gamma}|^{2} \tag{134}\] Then we use \[\sum_{s\in\mathcal{A}_{\gamma}}\frac{\hat{\tau}^{s}_{\gamma}}{u-[s]}=T_{ \gamma}(u)-1 \tag{135}\] To find \[\langle\hat{\psi}^{t}_{\gamma-t},\hat{q}_{\gamma}\rangle=|\hat{j}_{\gamma}|^ {2}\left(1-T_{\gamma}([t+(1,1)])\right)=|\hat{j}_{\gamma}|^{2}, \tag{136}\] since \(t\in\mathcal{R}_{\gamma}\implies T_{\gamma}([t+(1,1)])=0\). Thus we have \[\hat{q}_{\gamma}=\hat{\psi}^{t}_{\gamma-t}\frac{|\hat{j}_{\gamma}|^{2}}{|\hat {\psi}^{t}_{\gamma-t}|^{2}}+\cdots, \tag{137}\] and the result follows. Comparing the two formulas for \(\hat{q}_{\gamma}\), 123 and 129, we find the following striking identity **Corollary 3.22**.: _For \(s\in\mathcal{A}_{\lambda}\), we have_ \[\frac{|j_{\lambda+s}|^{2}}{|j_{\lambda}|^{2}}=\frac{\tilde{\tau}^{s+(1,1)}_{ \lambda+s}}{\tau^{s}_{\lambda}}. \tag{138}\] Proof.: By looking at the \(\hat{\psi}^{t}_{\gamma-t}\) components of the two expressions for \(\hat{q}_{\gamma}\), we get \(\tilde{\tau}^{t+(1,1)}_{\gamma}=|\hat{j}_{\gamma}|^{2}|\hat{\psi}^{t}_{\gamma- t}|^{-2}\). Then we use 61 again, i.e. 
\(|\hat{\psi}^{t}_{\gamma-t}|^{2}=|\hat{j}_{\gamma-t}|^{2}/[t]\hat{\tau}^{t}_{ \gamma-t}\), to find \(\tilde{\tau}^{t+(1,1)}_{\gamma}/[t]\hat{\tau}^{t}_{\gamma-t}=|\hat{j}_{\gamma }|^{2}/|\hat{j}_{\gamma-t}|^{2}\). We call these \(\boldsymbol{x},\boldsymbol{y}\) and \(\boldsymbol{z}\)_trace_ maps because of the following important properties, **Corollary 3.23**.: _For any \(\zeta\in\mathcal{H}\), we have_ \[\boldsymbol{x}_{\gamma}(\zeta)=\boldsymbol{x}_{\gamma}(\pi_{\circ}\zeta), \qquad\boldsymbol{z}_{\lambda}(\zeta)=\boldsymbol{z}_{\lambda}(\pi_{+}\zeta), \qquad\boldsymbol{z}_{\sigma}(w\zeta)=\boldsymbol{x}_{\sigma}(\zeta). \tag{139}\] \[\boldsymbol{y}_{u}\left(\mathcal{L}\zeta\right)=u\cdot\boldsymbol{y}_{u}(\zeta )-\pi_{*}\zeta. \tag{140}\] \[\boldsymbol{x}_{\sigma}(\Pi\zeta)=\boldsymbol{z}_{\sigma}(\zeta). \tag{141}\] ### The Full Trace Next, we investigate the combined trace map, given by taking the direct sum of the three traces defined so far. **Definition 3.24**.: (142) \[\operatorname{Tr}_{n}=\oplus_{\gamma}\boldsymbol{x}_{\gamma}\oplus_{s} \boldsymbol{y}^{s}\oplus_{\lambda}\boldsymbol{z}_{\lambda}:\mathcal{H}_{n} \to\mathcal{W}_{n},\] _where the range \(\mathcal{W}_{n}\) of the trace map is_ \[\mathcal{W}_{n}:=\mathbb{C}^{p(n+1)}\oplus\mathbb{C}^{q(n)-1}\oplus\mathbb{C} ^{p(n)}. \tag{143}\] Here \(p(n)\) are the partition numbers (c.f. A.1), and \(q(n)=|\Omega(n)|\) is the size of the set of points \(\Omega(n)\subset\mathbb{Z}^{2}\) that are inner-corners for partitions of size \(n\). Equivalently, it is the number of integral points beneath the hyperbola \[q(n)=|\{(m,n):(m+1)(n+1)\leq n+1\}|\sim(n+1)\log(n+1). \tag{144}\] These numbers are given by the generating function2, Footnote 2: c.f. Sloane’s \({}^{*}\)On-Line Encyclopedia of Integer Sequences’ [https://oeis.org/A006218](https://oeis.org/A006218) \[Q(x)\coloneqq\sum q(n)x^{n}=1+\frac{1}{1-x}\left(-1+\frac{1}{x}\sum_{k\geq 1 }\frac{x^{k}}{1-x^{k}}\right). \tag{145}\] **Lemma 3.25**.: _The graded dimension of the extended Fock module \(\mathcal{H}=\mathcal{F}[w]\) is given by_ \[\sum(\dim\mathcal{H}_{n})x^{n}=\frac{1}{1-x}P(x), \tag{146}\] _where \(P(x)\) is the generating function for the partition numbers (c.f. A.1)._ Proof.: This follows from \(\dim\mathcal{H}_{n}=\dim\bigoplus_{k\leq n}w^{n-k}\mathcal{F}_{k}=\sum_{k\leq n }p(k)\). When working with traces, we often use the condensed notation: \[\operatorname{Tr}(\zeta)=(\{x_{\gamma}\},\{y^{s}\},\{z_{\lambda}\}). \tag{147}\] We will omit indices if they can be inferred. For example, \[\operatorname{Tr}_{n}(\hat{\psi}_{\lambda}^{s})=(\{\delta_{\lambda+s}\},\{ \delta^{s}\},\{\delta_{\lambda}\}), \tag{148}\] where \(\delta_{\lambda}\) indicates the \(p(|\lambda|)\)-vector of values that has \(1\) only in the \(\lambda\) position, etc. Our goal for the remainder of this section will be to determine the kernel and cokernel of the full trace, \[\ker\operatorname{Tr}_{n}\to\mathcal{H}_{n}\stackrel{{ \operatorname{Tr}_{n}}}{{\longrightarrow}}\mathcal{W}_{n}\to\operatorname{ coker}\operatorname{Tr}_{n}. \tag{149}\] #### 3.6.1. 
Cokernel **Theorem 3.26**.: _The cokernel of \(\mathrm{Tr}_{n}\) contains the following relations_ \[\mathrm{coker}\ \mathrm{Tr}_{n}\supset Span\{R_{n}^{s}:s\in\Omega(n)\}, \tag{150}\] _where_ \[R_{n}^{s}:=\boldsymbol{y}^{s}-\sum_{\gamma^{\vdash}\,n+1:s\in\gamma} \boldsymbol{x}_{\gamma}+\sum_{\lambda^{\vdash}\,n:s\in\lambda}\boldsymbol{z}_ {\lambda}, \tag{151}\] _are the residues of the function_ \[R_{n}(u):=\boldsymbol{y}_{u}-\sum_{\gamma^{\vdash}\,n+1}\boldsymbol{x}_{ \gamma}\left(\sum_{t\in\gamma}\frac{1}{u-[t]}\right)+\sum_{\lambda^{\vdash}\,n }\boldsymbol{z}_{\lambda}\left(\sum_{v\in\lambda}\frac{1}{u-[v]}\right), \tag{152}\] _i.e. \(R_{n}^{s}:=\mathrm{Res}_{u=[s]}\,R_{n}(u)\)._ Proof.: We evaluate the generating relation \(R_{n}(u)\) 152 on the traces of the basis elements \(\zeta=\hat{\psi}_{\lambda}^{s}\). We have \(\mathrm{Tr}_{n}(\hat{\psi}_{\lambda}^{s})=(\{\delta_{\lambda+s}\},\{\delta_{s }\},\{\delta_{\lambda}\})\) (148), and so \[R_{n}(u)\mathrm{Tr}_{n}(\hat{\psi}_{\lambda}^{s})=\frac{1}{u-[s]}-\left(\sum_ {t\in\lambda+s}\frac{1}{u-[t]}\right)+\left(\sum_{v\in\lambda}\frac{1}{u-[v]} \right)=0. \tag{153}\] Thus the relations hold on the traces of all basis elements in \(\mathcal{H}\), i.e they hold on \(\mathcal{W}\). Clearly the relations 151 are all linearly independent, because \(R_{n}^{s}\) is the only such relation that contains \(\boldsymbol{y}^{s}\). **Example 3.27**.: _In \(n=4\) we have the 10 relations given by_ \[R_{4}^{(4,0)} = \boldsymbol{y}^{(4,0)}-\boldsymbol{x}_{1^{5}}, \tag{155}\] \[R_{4}^{(3,0)} = \boldsymbol{y}^{(3,0)}+\boldsymbol{z}_{1^{4}}-\boldsymbol{x}_{1^ {5}}-\boldsymbol{x}_{1^{3},2},\] (156) \[R_{4}^{(1,1)} = \boldsymbol{y}^{(1,1)}+\boldsymbol{z}_{2^{2}}-\boldsymbol{x}_{1^ {3}2}-\boldsymbol{x}_{1,4},\] (157) \[R_{4}^{(2,0)} = \boldsymbol{y}^{(2,0)}+\boldsymbol{z}_{1^{4}}+\boldsymbol{z}_{1^ {2},2}-\boldsymbol{x}_{1,4}-\boldsymbol{x}_{1^{5}}-\boldsymbol{x}_{1,2^{2}}- \boldsymbol{x}_{1^{2}3},\] (158) \[R_{4}^{(1,0)} = \boldsymbol{y}^{(1,0)}+\boldsymbol{z}_{1^{4}}+\boldsymbol{z}_{1^ {2}2}+\boldsymbol{z}_{2^{2}}+\boldsymbol{z}_{1,3}-\boldsymbol{x}_{1^{5}}- \boldsymbol{x}_{1^{3}2}-\boldsymbol{x}_{12^{2}}-\boldsymbol{x}_{1^{2}3}- \boldsymbol{x}_{2,3}-\boldsymbol{x}_{1,4},\] (159) \[R_{4}^{(0,0)} = \boldsymbol{y}^{(0,0)}+\sum_{\lambda}\boldsymbol{z}_{\lambda}- \sum_{\gamma}\boldsymbol{x}_{\gamma}, \tag{154}\] _and their transposes (i.e. transpose all partitions and boxes appearing in the relation)._ Note, for \(n>0\), we know that we also have the cokernel relation \(\boldsymbol{y}^{(0,0)}=0\), since the only partition with minima at \((0,0)\) is the empty partition. So, for \(n>0\) we redefine \(R_{n}^{(0,0)}=\sum_{\lambda}\boldsymbol{z}_{\lambda}-\sum_{\gamma}\boldsymbol {x}_{\gamma}\). For \(n=0\), we have only the single relation \(R_{0}^{(0,0)}=\boldsymbol{y}^{(0,0)}-\boldsymbol{x}_{\{1\}}\), since \(\boldsymbol{z}_{\emptyset}\) does not appear in \(R_{0}(u)\). Later, in corollary 3.33, we show that these relations exhaust the cokernel. If we write the generating relation 152 using the formulas for the traces given by Lemma 3.21, we recover a key identity in \(\mathcal{H}\). **Corollary 3.28**.: (160) \[\frac{1}{u-\mathcal{L}}w^{n}=\sum_{\gamma\vdash(n+1)}\left(\sum_{t\in\gamma} \frac{1}{u-[t]}\right)\frac{\hat{q}_{\gamma}}{|\hat{j}_{\gamma}|^{2}}-\sum_{ \lambda\vdash n}\left(\sum_{s\in\lambda}\frac{1}{u-[s]}\right)\frac{w\hat{q}_ {\lambda}}{|\hat{j}_{\lambda}|^{2}}.\] Later, in section 5, we explore the algebraic significance of this relation. 
#### 3.6.2. The bicomplex To continue our study of the traces and their kernels, we need to introduce a homological construction. Let \(K_{n}^{\ell}\), for \(n,\ell\geq 0\) be the \(\mathbb{C}\)-linear span on the space of symbols \(\Psi_{\gamma}^{a_{1},\cdots,a_{\ell}}\) with \(\gamma\,\vdash\,n\) and \(\ell\) distinct points \(a_{i}\in\mathcal{A}_{\gamma}\) in the addset of \(\gamma\)\(2.1.1\) and which are antisymmetric in the \(a_{i}\). We have \(K_{n}^{0}=\operatorname{Span}_{\gamma\vdash n}\{\Psi_{\gamma}\}\cong\mathbb{C }^{p(n)}\). The spaces \(K_{n}^{\ell}\) sit in a double complex \((K,d,\partial)\), with first differential \(d:K_{n}^{\ell}\to K_{n}^{\ell-1}\), given by \[d:\Psi_{\sigma}^{a_{1},\ldots,a_{\ell}}\mapsto\sum_{i=1}^{\ell}(-1)^{i}\Psi_{ \sigma}^{a_{1},\ldots,a_{\ell}}. \tag{161}\] The second is \(\partial:K_{n-1}^{\ell}\to K_{n}^{\ell-1}\), given by \[\partial:\Psi_{\sigma}^{a_{1},\ldots,a_{\ell}}\rightarrow\sum_{i=1}^{\ell}(- 1)^{i}\Psi_{\sigma+a_{i}}^{a_{1},\ldots,a_{\ell}}. \tag{162}\] One can easily check that the following digram commutes \[\begin{CD}K_{n+1}^{\ell-1}@>{d_{\ell-1}}>{}>K_{n+1}^{\ell-2}\\ @V{\partial_{\ell}}V{}V@V{}V{\partial_{\ell-1}}V\\ K_{n}^{\ell}@>{d_{\ell}}>{}>K_{n}^{\ell-1}\end{CD} \tag{163}\] This construction is motivated by the bijection \(\iota:K_{n}^{1}\rightarrow\mathcal{H}_{n}\) given by \[\iota:\Psi_{\eta}^{a}\mapsto\hat{\psi}_{\eta}^{a}. \tag{164}\] The following result gives a cohomological interpretation of the \(\mathpzc{z}\) and \(\mathpzc{x}\) traces. **Corollary 3.29**.: _Under the bijection \(\iota\), we have \(\mathpzc{z}=d_{1}\circ\iota^{-1}\), where we identify \(K_{n}^{0}=\oplus_{\eta}\mathbb{C}\{\Psi_{\eta}\}\cong\mathbb{C}^{p(n)}\) with the image of the trace \(\mathpzc{z}=\oplus_{\eta}\mathpzc{z}_{\eta}\). Similarly, we have \(\mathpzc{x}=\partial_{1}\circ\iota^{-1}\)._ After setting \(r=1\) and applying \(\iota\), the diagram 163 becomes (165) #### 3.6.3. Kernel We now determine the kernel of the full trace map \(\operatorname{Tr}_{n}\). **Theorem 3.30**.: _The kernel of \(\operatorname{Tr}_{n}\) is given by_ \[\ker\operatorname{Tr}_{n}=\iota(\operatorname{Im}d\partial:K_{n-1}^{3}\to K_{n }^{1})=\operatorname{Span}\{\Gamma_{\eta}^{s_{1},s_{2},s_{3}}:\eta\vdash(n-1), s_{i}\in\mathcal{A}_{\eta}\}, \tag{166}\] _where \(\Gamma\) are the 'hexagon' elements_ \[\Gamma_{\eta}^{a,b,c}:=\iota d\partial\Psi_{\eta}^{a,b,c}=\hat{\psi}_{\eta+a}^{ c}-\hat{\psi}_{\eta+a}^{b}+\hat{\psi}_{\eta+b}^{a}-\hat{\psi}_{\eta+b}^{c}+ \hat{\psi}_{\eta+c}^{b}-\hat{\psi}_{\eta+c}^{a}\in\mathcal{H}_{n}, \tag{167}\] _for \(\eta\vdash(n-1),a,b,c\in\mathcal{A}_{\eta}\)._ Proof.: From the bijection, we know that \(\mathpzc{z}_{n}\sim d_{1}:K_{n}^{1}\to K_{n}^{0}\), and \(\mathpzc{x}_{n}\sim\partial_{1}:K_{n}^{1}\to K_{n+1}^{0}\). Note that we know from 151 that \(\ker\mathpzc{x}\cap\ker\mathpzc{z}\subset\ker\mathpzc{y}\). Thus, \(\ker\operatorname{Tr}=\ker\mathpzc{x}\cap\ker\mathpzc{y}\cap\ker\mathpzc{z}= \ker\mathpzc{x}\cap\ker\mathpzc{z}=\iota(\ker\partial_{1}\cap\ker d_{1})\). Since the bicomplex is acyclic, we have \(\ker\partial_{1}\cap\ker d_{1}=\operatorname{Im}d_{2}\partial_{3}\). 
**Example 3.31**.: \(\ker\operatorname{Tr}_{4}=\mathbb{C}\cdot\Gamma_{1,2}^{(2,0),(1,1),(0,2)}\)_, and \(\ker\operatorname{Tr}_{5}=\operatorname{Span}\{\Gamma_{1,3}^{(2,0),(1,1),(0,3 )},\Gamma_{1^{2},2}^{(3,0),(1,1),(0,2)}\}\)._ **Proposition 3.32**.: _The dimension the kernel of the full trace is given by the generating function_ \[\sum_{n\geq 0}(\dim\ker\operatorname{Tr}_{n})x^{n}=\frac{1+(x^{2}+x-1)P(x)}{x(1- x)}=x^{4}+2x^{5}+5x^{6}+\ldots. \tag{168}\] Proof.: The bicomplex provides a resolution of the kernel (169) We use this to compute its dimension. We do this by looking at the horizontal sub-complexes: \[K_{k}^{\ell+\bullet}:=\cdots\to K_{k}^{\ell+2}\overset{d}{\to}K_{k}^{\ell+1} \overset{d}{\to}K_{k}^{\ell}. \tag{170}\] In terms of these sub-complexes, we have \[\sum_{n\geq 0}\dim(\ker{\rm Tr}_{n})x^{n} = \sum_{n\geq 0}\left(\sum_{i=0}^{\infty}(-1)^{i}\chi(K_{n-(i+1)}^{3+i+ \bullet})\right)x^{n}\] \[= \sum_{i=0}^{\infty}(-1)^{i}\left(\sum_{n\geq 0}\chi(K_{n}^{3+i+ \bullet})x^{n}\right)x^{i+1}.\] Consider just the sub-complex contribution of a partition \(\eta\,\vdash\,k\) with \(r\) minima. At each stage of the complex we choose the appropriate number of those corners \(S\subset\mathcal{A}(\eta)\) to include in the symbol \(\Psi_{\eta}^{S}\), and thus the contribution from \(\eta\) is \[K_{k,r}^{\ell+\bullet}:=\cdots\to\mathbb{C}^{\binom{r}{\ell+2}}\stackrel{{ d}}{{\to}}\mathbb{C}^{\binom{r}{\ell+1}}\stackrel{{ d}}{{\to}}\mathbb{C}^{\binom{r}{\ell}}. \tag{171}\] The dimension of this sub-complex is \[\chi(K_{k,r}^{\ell+\bullet})=\sum_{s=\ell}^{r}(-1)^{s-\ell}\binom{r}{s}= \binom{r-1}{\ell-1}=\left[\left(\frac{1}{(\ell-1)!}(\partial_{t})^{\ell-1}t^ {-1}\right)t^{r}\right]_{t=1}. \tag{172}\] We then have \[K_{k}^{\ell+\bullet}\cong\bigoplus_{r}\left(K_{k,r}^{\ell+\bullet}\right)^{p(k,r)}. \tag{173}\] Thus, the dimension of the horizontal complex is \[\chi(K_{k}^{\ell+\bullet})=\sum_{r>0}p(k,r)\chi(K_{k,r}^{\ell+\bullet})=\sum_{ r>0}p(k,r)\binom{r-1}{\ell-1}=\left[\frac{1}{(\ell-1)!}(\partial_{t})^{\ell-1}t^ {-1}\sum_{r>0}p(k,r)t^{r}\right]_{t=1} \tag{174}\] where \(p(k,r)\) is the number of partitions of \(k\) with \(r\) minima. Thus, the generating function for the dimension of the kernel is \[\sum_{n\geq 0}\dim(\ker{\rm Tr}_{n})x^{n} = \sum_{i=0}^{\infty}(-1)^{i}\left(\sum_{n\geq 0}\chi(K_{n}^{3+i+ \bullet})x^{n}\right)x^{i+1}\] \[= \sum_{i=0}^{\infty}(-1)^{i}x^{i+1}\left[\frac{1}{(3+i-1)!}( \partial_{t})^{3+i-1}(t^{-1}P(x,t))\right]_{t=1}\] \[= x^{-1}\left[\sum_{i=2}^{\infty}\frac{(-x)^{i}}{i!}(\partial_{t}) ^{i}(t^{-1}P(x,t))\right]_{t=1}\] \[= x^{-1}\left(\left[\sum_{i=0}^{\infty}\frac{(-x)^{i}}{i!}( \partial_{t})^{i}(t^{-1}P(x,t))\right]_{t=1}-\left[-x\partial_{t}(t^{-1}P(x,t ))\right]_{t=1}-P(x,1)\right).\] Using Taylor's formula, and the properties A.2 of \(P(x,t)\), we find \[= x^{-1}\left((1-x)^{-1}P(x,1-x)+(1-x)P(x,1)+P_{t}(x,1)\right)\] \[= x^{-1}(1-x)^{-1}+x^{-1}(1-x)P(x)+x^{-1}(1-x)^{-1}P(x)\] \[= \frac{1+(x^{2}+x-1)\,P(x)}{x(1-x)}.\] #### 3.6.4. Back to the Cokernel Using the calculation of the dimension of the kernel (proposition 3.32), we can conclude that: **Lemma 3.33**.: _The set of relations 151 exhaust the cokernel of the total trace, i.e._ \[\mathrm{coker\,\,Tr}_{n}=Span\{R_{n}^{s}:s\in\Omega(n)\}. \tag{175}\] Proof.: The vanishing Euler characteristic of the exact sequence 149 is \[0=\chi\left(\ker\mathrm{Tr}_{n}\to\mathcal{H}_{n}\stackrel{{ \mathrm{Tr}_{n}}}{{\longrightarrow}}\mathcal{W}_{n}\to\mathrm{coker\,Tr}_{n} \right). 
\tag{176}\] The generating function of this yields \[\sum_{n\geq 0}(\dim\mathrm{cokerTr}_{n})x^{n} = \sum_{n\geq 0}(\dim\ker\mathrm{Tr}_{n}-\dim\mathcal{H}_{n}+\dim \mathcal{W}_{n})x^{n} \tag{178}\] \[= \left(\frac{1+\left(x^{2}+x-1\right)P(x)}{x(1-x)}\right)-\left( \frac{P(x)}{1-x}\right)\] (179) \[+\left(\frac{P(x)-1}{x}+\left(Q(x)-\frac{1}{1-x}\right)+P(x) \right). \tag{177}\] With this we confirm that the dimension of the cokernel is given by \[\sum_{n\geq 0}(\dim\mathrm{cokerTr}_{n})x^{n}=Q(x). \tag{180}\] ## 4. Distinguished Elements The goal of this section will be to produce interesting algebra elements \(\zeta\in\mathcal{H}_{n}\) whose traces we can determine explicitly, and we'll then relate these traces through the fundamental cokernel relation (152), \[R_{n}(u)\mathrm{Tr}_{n}(\zeta)=0. \tag{181}\] To work towards the construction of these interesting elements, we provide more results on the structure of the algebra \(\mathcal{H}\). First, we'll construct various maps that are \(\mathcal{F}=\mathbb{C}[V_{1},V_{2},\ldots]\) linear, which will allow us to focus on \(\mathcal{H}=\mathcal{F}[w]\) as a free \(\mathcal{F}\) module. ### Homological algebra Let \(\mathcal{A}\) be a graded associative algebra. We recall the Hochschild cochain complex \[\mathcal{C}^{k}=\operatorname{Hom}(\otimes^{k}\mathcal{A},\mathcal{A})[-k+1] \tag{182}\] Equipped with the Gerstenhaber bracket \(\{,\}\), c.f. [6]. In particular let \(\mu:\otimes^{2}\mathcal{A}\to\mathcal{A}\) be the multiplication map \(\mu(a,b)=ab\). The differential \(\partial=\{\cdot,\mu\}\), which satisfies \(\partial^{2}=0\), turns \((C^{\bullet}(\mathcal{A}),\partial)\) into a differential graded (dg) algebra - the Hochschild cochain complex. In particular, for any linear map \(T:\mathcal{A}\to\mathcal{A}\), we call the map \(\partial T:\otimes^{2}\mathcal{A}\to\mathcal{A}\) the _derivator_ of \(T\), since \[(\partial T)(\zeta,\xi)=T(\zeta\cdot\xi)-(T\zeta)\cdot\xi-\zeta\cdot(T\xi), \tag{183}\] clearly vanishes if \(T\) is a derivation of the algebra, i.e. \(\ker\partial_{1}=\{\text{Derivations of }\mathcal{A}\}\). For the next section, the following expression of \(\partial^{2}T=0\) will be important: **Corollary 4.1**.: _The derivator \(\partial T\) satisfies the following identity_ \[\partial T(a\cdot b,c)-\partial T(a,b\cdot c)=a\cdot\partial T(b,c)-\partial T (a,b)\cdot c. \tag{184}\] ### \(\beta\) Elements We know that the NS Lax operator \(\mathcal{L}\) is not a derivation, however its derivator will play a fundamental role in the analysis to follow. We use the notation \[\beta(\xi,\zeta):=\partial\mathcal{L}(\xi,\zeta)=\mathcal{L}(\zeta\cdot\xi)-( \mathcal{L}\zeta)\cdot\xi-\zeta\cdot(\mathcal{L}\xi). \tag{185}\] **Lemma 4.2**.: _The derivator \(\beta:=\partial\mathcal{L}\) of the NS Lax operator has the following properties_ 1. \(\beta\) _is_ \(\mathcal{F}\)_-bilinear, that is, for_ \(\eta\in\pi_{0}H\) _we have_ (186) \[\beta(\zeta,\eta\cdot\xi)=\eta\cdot\beta(\zeta,\xi).\] _In particular,_ \(\beta\) _factors through_ \(\pi_{+}\)__ (187) \[\beta(\zeta,\xi)=\beta(\zeta,\pi_{+}\xi).\] 2. \(\beta\) _is Hochschild exact, that is,_ \(\beta=\partial(\mathcal{L}^{\prime})\) _where_ \(\mathcal{L}^{\prime}(w^{n}):=\sum_{0<k\leq n}w^{n-k}V_{k}\) _is_ \(\mathcal{F}\)_-linear. Note that this means that_ \(\beta\) _does not depend on_ \(\overline{\varepsilon}\)_,_ \(\hbar\) _(i.e._ \(\varepsilon_{i}\)_)._ Proof.: (1) follows because \(\mathcal{L}\) is a derivation on \(\pi_{0}\). 
(2) holds because \(\mathcal{L}=\mathcal{L}^{\prime}+(\text{derivation})\). **Lemma 4.3**.: _For any \(\zeta\in\mathcal{H}\),_ \[\beta(w,\zeta)=\pi_{0}\mathcal{L}w\zeta-V_{1}\zeta. \tag{188}\] Proof.: (189) \[\beta(w,\zeta) = \mathcal{L}(w\zeta)-w\mathcal{L}\zeta-\mathcal{L}(w)\zeta\] (190) \[= \pi_{0}\mathcal{L}(w\zeta)+\pi_{+}\mathcal{L}(w\zeta)-w\mathcal{L} \zeta-(V_{1}+\overline{\varepsilon})\zeta\] (191) \[= \pi_{0}\mathcal{L}(w\zeta)-V_{1}\zeta+\left(\pi_{+}\mathcal{L}^{+ }w\zeta-w(\mathcal{L}+\overline{\varepsilon})\zeta\right).\] Using \(\mathcal{L}^{+}w=w(\mathcal{L}+\overline{\varepsilon})\) we are done. A key lemma for our inductive proofs will be **Lemma 4.4**.: (192) \[\beta(\xi,\zeta)=\beta(\Pi\xi,w\zeta)+(\Pi\xi)\cdot(\pi_{0}\mathcal{L}w\zeta)- (\pi_{0}\mathcal{L}\xi)\cdot(\zeta).\] Proof.: Using 184 \(\partial\beta(\xi,w\Pi\zeta)=0\), we have \[\beta(\xi,\zeta)=\beta(\Pi\xi,w\zeta)+(\Pi\xi)\cdot\beta(w,\zeta)-\beta(\Pi \xi,w)\cdot(\zeta). \tag{193}\] Using 4.3, and noting that \(\pi_{0}\mathcal{L}\xi=\pi_{0}\mathcal{L}\pi_{+}\xi\) we easily recover the result. Because of the \(\mathcal{F}=\mathbb{C}_{\varepsilon}[V]\) linearity, we will often only need to perform manipulations with the basic elements, \[\beta^{n,m}:=\beta(w^{n},w^{m}). \tag{194}\] We note that \(\beta^{1,m}=V_{m+1}-w^{m}V_{1}\). ### Null sub-modules We explicitly construct two important \(\mathcal{F}\) sub-modules of the free \(\mathcal{F}\)-module \(\mathcal{H}=\mathcal{F}[w]\). **Lemma 4.5**.: _For \(\zeta\) in \(\mathcal{H}_{n}\), we have_ \[\boldsymbol{z}_{\gamma}(\hat{j}_{\lambda}\zeta)=\sum_{\mu}\hat{c}_{\lambda\mu }^{\gamma}\boldsymbol{z}_{\mu}(\zeta),\qquad\boldsymbol{x}_{\gamma}(\hat{j}_ {\lambda}\zeta)=\sum_{\mu}\hat{c}_{\lambda\mu}^{\gamma}\boldsymbol{x}_{\mu}( \zeta). \tag{195}\] Proof.: Consider the map \[\Omega(\zeta)\coloneqq\pi_{0}\mathcal{L}(w\cdot\zeta)=\sum_{\gamma\vdash n+1} \boldsymbol{x}_{\gamma}(\zeta)\hat{j}_{\gamma}. \tag{196}\] This map satisfies \[\Omega(\hat{j}_{\lambda}\zeta) = \pi_{0}\mathcal{L}(\hat{j}_{\lambda}\cdot w\zeta) \tag{198}\] \[= \pi_{0}(\mathcal{L}\hat{j}_{\lambda})(w\zeta)+\pi_{0}\hat{j}_{ \lambda}\mathcal{L}(w\zeta)\] (199) \[= \pi_{0}w(\mathcal{L}\hat{j}_{\lambda})(\zeta)+\hat{j}_{\lambda} \pi_{0}\mathcal{L}(w\zeta)\] (200) \[= 0+\hat{j}_{\lambda}\Omega(\zeta) \tag{197}\] So \(\Omega(\hat{j}_{\lambda}\zeta)=\hat{j}_{\lambda}\Omega(\zeta)\), i.e. \(\Omega\) is \(\mathbb{C}[V]\)-linear. Expanding this out, we have \[\sum_{\gamma}\boldsymbol{x}_{\gamma}(\hat{j}_{\lambda}\zeta)\hat{j }_{\gamma} = \hat{j}_{\lambda}\sum_{\mu}\boldsymbol{x}_{\mu}(\zeta)\hat{j}_{\mu} \tag{202}\] \[= \sum_{\mu}\sum_{\gamma}\hat{c}_{\mu\lambda}^{\gamma}\boldsymbol{x }_{\mu}(\zeta)\hat{j}_{\gamma}. \tag{201}\] Upon extracting the coefficients of \(\hat{j}_{\gamma}\) we are done. Using relation 139 the result follows for the other trace \(\boldsymbol{z}\). Define the _Null sub-modules_\(\mathcal{I}^{0},\mathcal{I}^{0}\subset\mathcal{F}[w]\) as the subspaces of all elements with vanishing \(\boldsymbol{z}\) and \(\boldsymbol{x}\) respectively. One can easily show that it follows from lemma 4.5 that: **Corollary 4.6**.: \(\mathcal{I}^{0}\) _and \(\mathcal{I}^{0}\) are \(\mathcal{F}\)-submodules of the free \(\mathcal{F}\)-module \(\mathcal{H}=\mathcal{F}[w]\). Furthermore, multiplication by \(w\), i.e. 
\(w\cdot:\mathfrak{x}^{0}\to\mathfrak{z}^{0}\), is a morphism of \(\mathcal{F}\)-modules._ ### Spectral Factors II For the next set of results, we will need an extension of the 'spectral' factors that appeared earlier in section 2.2.3. Let \(\Gamma\) be a collection of boxes (possibly with multiplicities). We extend the previous definition 38 to this case, \[T_{\Gamma}(u)\coloneqq\prod_{b\in\Gamma}N(u-[b]), \tag{203}\] \[T_{\emptyset}(u)\coloneqq 1. \tag{204}\] Furthermore, we define a generalized Kerov (co-)transition measure \[\hat{\tau}_{\Gamma}^{s}=\operatorname*{Res}_{u=[s]}T_{\Gamma}(u). \tag{205}\] One of the novel constructions that appears in this work is a simple product on the space of partitions. We will show in the next section that this product is deeply related to the structure of Jack LR coefficients. **Definition 4.7**.: _The_ **star product** _of two partitions is the collection of boxes (with multiplicities) given by_ \[\lambda\star\nu\coloneqq\bigsqcup_{\begin{subarray}{c}s\in\lambda\\ t\in\nu\end{subarray}}s+t. \tag{206}\] With this, we have \(\mu\star\{1\}=\mu\). For example, \(\{1,3\}\star\{1,2^{2}\}\) is computed as in (207), where the number inside a box indicates its multiplicity (if \(>1\)). The spectral factors3 of star products will appear in the next section, Footnote 3: Note the similarity with the work of Bourgine et al [2] and the so-called ‘Nekrasov factors’ that appear therein, e.g. the first of the three terms on the RHS of their equation (2.33). There, however, the object that appears is \(\prod_{a\in\lambda,b\in\nu}S(\chi_{a}/\chi_{b})\sim\prod_{a\in\lambda,b\in\nu}N([a-b])\); we are yet to see a correspondence with this type of factor that involves a _difference_ of the summed boxes. \[T_{\lambda\star\nu}(u)=\prod_{c\in\lambda\star\nu}N(u-[c])=\prod_{a\in\lambda,b\in\nu}N(u-[a+b]). \tag{208}\] ### \(\boldsymbol{y}\)-trace Formulae We arrive at one of the substantial results of this work, hinting at a surprising amount of structure in the products of Lax eigenfunctions. This is the first appearance of the novel star product 4.7 introduced in the last section. **Theorem 4.8**.: _The \(\boldsymbol{y}\)-trace of a product of Lax eigenfunctions is given by the formula_ \[\boldsymbol{y}_{u}(\hat{\psi}_{\lambda}^{s}\cdot\hat{\psi}_{\nu}^{t})=\frac{T_{\lambda\star\nu}(u)}{u-[s+t]}. \tag{209}\] We will prove this momentarily. First we note that the numerator only depends on the partitions, and the denominator only depends on the minima. For example, \[\boldsymbol{y}_{u}(\hat{\psi}_{1,2}^{(1,1)}\hat{\psi}_{1,2}^{(2,0)})=\frac{T_{\{1,2\}\star\{1,2\}}(u)}{(u-[3,1])}. \tag{210}\] Back to the proof. First, we'll need a small lemma. **Lemma 4.9**.: _The \(\boldsymbol{y}\)-trace of a product of Lax eigenfunctions can be expressed in terms of the \(\boldsymbol{y}\)-trace of \(\beta\) applied to them,_ \[\boldsymbol{y}_{u}(\hat{\psi}_{\lambda}^{s}\cdot\hat{\psi}_{\nu}^{t})=\frac{\boldsymbol{y}_{u}\left(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t})\right)+1}{u-[s+t]}. 
\tag{211}\] Proof.: (212) \[\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t})=\mathcal{L}(\hat{\psi}_{ \lambda}^{s}\hat{\psi}_{\nu}^{t})-\mathcal{L}(\hat{\psi}_{\lambda}^{s})\hat{ \psi}_{\nu}^{t}-\hat{\psi}_{\lambda}^{s}\mathcal{L}(\hat{\psi}_{\nu}^{t})=( \mathcal{L}-[s+t])(\hat{\psi}_{\lambda}^{s}\hat{\psi}_{\nu}^{t}).\] Now, using (140), we find \[\boldsymbol{y}_{u}\left(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t}) \right)=\boldsymbol{y}_{u}\left((\mathcal{L}-[s+t])(\hat{\psi}_{\lambda}^{s} \hat{\psi}_{\nu}^{t})\right)=(u-[s+t])\boldsymbol{y}_{u}(\hat{\psi}_{\lambda}^ {s}\hat{\psi}_{\nu}^{t})-1. \tag{213}\] With this lemma we state and prove a result clearly equivalent to Theorem 4.8: **Theorem 4.10**.: _The \(\boldsymbol{y}\)-trace of \(\beta\) of a pair of Lax eigenfunctions is given by the simple formula_ \[\boldsymbol{y}_{u}\left(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t}) \right)=T_{\lambda\star\nu}(u)-1, \tag{214}\] _in particular, it is independent of the choices of corners \(s,t\)._ Proof.: We prove this inductively on \(k=\min(|\lambda|,|\nu|)\), the minimum of the degrees of the two arguments. Without loss of generality, we assume \(|\lambda|\leq|\nu|\). For the base case, \(k=0\), we have \(\hat{\psi}_{\emptyset}^{(0,0)}=1\), and we have \(\beta(1,\hat{\psi}_{\lambda}^{s})=0\), thus \[\boldsymbol{y}_{u}(\beta(\hat{\psi}_{\emptyset}^{(0,0)},\hat{\psi}_{\lambda}^ {s}))=0=T_{\emptyset}(u)-1. \tag{215}\] For the inductive step, let us assume that 209 (and hence 214) holds for all \(|\lambda|=k\leq K\). We will show that this implies 214 holds for \(\beta(\hat{\psi}_{\lambda^{\prime}}^{s},\hat{\psi}_{\nu}^{t})\) with \(|\lambda^{\prime}|=K+1\). We use 187, with \(\pi_{+}\hat{\psi}_{\lambda}^{s}=w\Pi\hat{\psi}_{\lambda}^{s}\), and 184 to get \[\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t}) = \beta(\Pi\hat{\psi}_{\lambda}^{s},w\hat{\psi}_{\nu}^{t})+\Pi\hat{ \psi}_{\lambda}^{s}\cdot\hat{j}_{\nu+t}-\hat{j}_{\lambda}\cdot\hat{\psi}_{\nu }^{t} \tag{217}\] \[= \sum_{q\in\mathcal{R}_{\lambda}}c_{\lambda}^{q,s}\left(\beta(\hat{ \psi}_{\lambda-q}^{q},w\hat{\psi}_{\nu}^{t})+\hat{\psi}_{\lambda-q}^{q}\cdot \hat{j}_{\nu+t}\right)-\hat{j}_{\lambda}\cdot\hat{\psi}_{\nu}^{t}, \tag{216}\] where we have used the expansion 109 \[\Pi\hat{\psi}_{\lambda}^{s}=\sum_{q\in\mathcal{R}_{\lambda}}a_{\lambda}^{q,s} \hat{\psi}_{\lambda-q}^{q},\text{ with }\quad\sum_{q\in\mathcal{R}_{\lambda}}a_{\lambda}^{q,s}=1. \tag{218}\] Thus we have \[\boldsymbol{y}_{u}(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t}))=\sum _{q\in\mathcal{R}_{\lambda}}a_{\lambda}^{q,s}\left(\boldsymbol{y}_{u}(\beta( \hat{\psi}_{\lambda-q}^{q},w\hat{\psi}_{\nu}^{t}))+\boldsymbol{y}_{u}(\hat{ \psi}_{\lambda-q}^{q}\cdot\hat{j}_{\nu+t})\right)-\boldsymbol{y}_{u}(\hat{j}_ {\lambda}\cdot\hat{\psi}_{\nu}^{t}). \tag{219}\] For the two terms inside the sum, we can use the inductive hypothesis on each, since \(|\lambda-q|=K<K+1\). For the first of these two terms, \[\boldsymbol{y}_{u}(\beta(\hat{\psi}_{\lambda-q}^{q},w\hat{\psi}_{ \nu}^{t})) = \sum_{v\in\mathcal{A}_{\nu}+t}d_{\nu}^{t,v}\boldsymbol{y}_{u}( \beta(\hat{\psi}_{\lambda-q}^{q},\hat{\psi}_{\nu+t}^{v})) \tag{221}\] \[= \left(T_{(\lambda-q)\star(\nu+t)}-1\right)\sum_{v\in\mathcal{A}_{ \nu}+t}d_{\nu}^{t,v} \tag{220}\] \[= T_{(\lambda-q)\star(\nu+t)}-1, \tag{222}\] where we have used the expansion 94 \[w\hat{\psi}_{\nu}^{t}=\sum_{v\in\mathcal{A}_{\nu+t}}d_{\nu}^{t,v}\hat{\psi}_{ \nu+t}^{v},\mbox{ with }\quad\sum_{v\in\mathcal{A}_{\nu+t}}d_{\nu}^{t,v}=1. 
\tag{223}\] For the second of the two terms inside the sum of (219), we use the inductive hypothesis again \[\boldsymbol{y}_{u}(\hat{\psi}_{\lambda-q}^{q}\cdot\hat{j}_{\nu+t}) = \sum_{v\in\mathcal{A}_{\nu+t}}\hat{\tau}_{\nu+t}^{v}\boldsymbol{y }_{u}(\hat{\psi}_{\lambda-q}^{q}\cdot\hat{\psi}_{\nu+t}^{v}) \tag{225}\] \[= T_{(\lambda-q)\star(\nu+t)}(u)\sum_{v\in\mathcal{A}_{\nu}}\frac{ \hat{\tau}_{\nu+t}^{v}}{u-[q+v]}\] (226) \[= T_{(\lambda-q)\star(\nu+t)}(u)\sum_{v\in\mathcal{A}_{\nu}}\frac{ \hat{\tau}_{\nu+t}^{t}}{(u-[q])-[v]}\] (227) \[= T_{(\lambda-q)\star(\nu+t)}(u)(T_{\nu+t}(u-[q])-1)\] (228) \[= T_{(\lambda-q)\star(\nu+t)}(u)(T_{q\star(\nu+t)}(u)-1). \tag{224}\] That is \[\boldsymbol{y}_{u}(\hat{\psi}_{\lambda-q}^{q}\cdot\hat{j}_{\nu+t})=T_{\lambda \star(\nu+t)}(u)-T_{(\lambda-q)\star(\nu+t)}(u). \tag{229}\] Putting these two together, we get \[\boldsymbol{y}_{u}(\beta(\hat{\psi}_{\lambda-q}^{q},w\hat{\psi}_{\nu}^{t})+ \hat{\psi}_{\lambda-q}^{q}\cdot\hat{j}_{\nu+t})=T_{\lambda\star(\nu+t)}-1. \tag{230}\] With this, we can write 219 as \[\boldsymbol{y}_{u}(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{ \nu}^{t})) = \sum_{q\in\mathcal{R}_{\lambda}^{+}}a_{\lambda}^{q,s}\left(T_{ \lambda\star(\nu+t)}-1\right)-\boldsymbol{y}_{u}(\hat{j}_{\lambda}\cdot\hat{ \psi}_{\nu}^{t}) \tag{232}\] \[= \left(T_{\lambda\star(\nu+t)}-1\right)-\boldsymbol{y}_{u}(\hat{j }_{\lambda}\cdot\hat{\psi}_{\nu}^{t}). \tag{231}\] Note that from this formula we can see that all the \(s\)-dependence falls out in the RHS. We can't use the inductive hypothesis on the second term in this expression, as it has lowest degree \(|\lambda|=K+1\). However, we can continue by expanding \[\boldsymbol{y}_{u}\left(\hat{j}_{\lambda}\cdot\hat{\psi}_{\nu}^{t}\right) = \sum_{s\in\mathcal{A}_{\lambda}}\hat{\tau}_{\lambda}^{s}\, \boldsymbol{y}_{u}\left(\hat{\psi}_{\lambda}^{s}\cdot\hat{\psi}_{\nu}^{t}\right) \tag{234}\] \[= \sum_{s\in\mathcal{A}_{\lambda}}\hat{\tau}_{\lambda}^{s}\,\frac{ \boldsymbol{y}_{u}\left(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t}) \right)+1}{u-[s+t]}\] (235) \[= \left(\boldsymbol{y}_{u}\left(\beta(\hat{\psi}_{\lambda}^{s}, \hat{\psi}_{\nu}^{t})\right)+1\right)\sum_{s\in\mathcal{A}_{\lambda}}\hat{ \tau}_{\lambda}^{s}\,\frac{1}{u-[s+t]}\] (236) \[= \left(\boldsymbol{y}_{u}\left(\beta(\hat{\psi}_{\lambda}^{s}, \hat{\psi}_{\nu}^{t})\right)+1\right)(T_{\lambda\star t}(u)-1). \tag{233}\] On the second line we have used 4.9, and in the third line we have used the \(s-\)independence of \(\boldsymbol{y}_{u}\left(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t})\right)\). Combining the above with the expression 232, we get \[\boldsymbol{y}_{u}(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t}))=\left( T_{\lambda\star(\nu+t)}-1\right)-\left(\boldsymbol{y}_{u}\left(\beta(\hat{\psi}_{ \lambda}^{s},\hat{\psi}_{\nu}^{t})\right)+1\right).(T_{\lambda\star t}(u)-1) \tag{237}\] Rearranged, this yields \[\boldsymbol{y}_{u}(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t}))T_{ \lambda\star t}(u)=T_{\lambda\star(\nu+t)}(u)-T_{\lambda\star t}(u), \tag{238}\] and completes the inductive step: \[\boldsymbol{y}_{u}(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t}))=T_{ \lambda\star\nu}(u)-1. \tag{239}\] Thus Theorems 4.9 and 4.10 are proven. ### Back to Jack-Lax LR Coefficients Using the \(\boldsymbol{y}\)-trace formula 4.9, we can determine certain Jack-Lax Littlewood-Richardson coefficients. First, we state a straightforward result: **Lemma 4.11**.: _Let \(v_{*}=(m-1,n-1)\). The partition \(m^{n}-v_{*}\) (i.e. 
a rectangle less a box) is the only partition of its size with a corner at \(v_{*}\)._ With this Lemma, we can show a simple form that Jack-Lax LR coefficients can take. **Proposition 4.12**.: _We have_ \[\hat{\psi}_{\mu}^{s}\hat{\psi}_{\nu}^{t}=\operatorname*{Res}_{u=[v_{*}]}\left(\frac{T_{\mu\star\nu}(u)}{u-[s+t]}\right)\hat{\psi}_{m^{n}-v_{*}}^{v_{*}}+\dots, \tag{240}\] _that is,_ \[\hat{c}_{\mu,\nu;v_{*}}^{s,t;m^{n}-v_{*}}=\operatorname*{Res}_{u=[v_{*}]}\left(\frac{T_{\mu\star\nu}(u)}{u-[s+t]}\right). \tag{241}\] Proof.: In the general Lax eigenfunction expansion, we have \[\hat{\psi}_{\mu}^{s}\hat{\psi}_{\nu}^{t}=a\hat{\psi}_{m^{n}-v_{*}}^{v_{*}}+\cdots, \tag{242}\] where the coefficient \(a\) is to be determined. The \(\boldsymbol{y}\)-trace 4.9 of both sides is \[\frac{T_{\mu\star\nu}(u)}{u-[s+t]}=\frac{a}{u-[v_{*}]}+\cdots, \tag{243}\] and we know by Lemma 4.11 that \(a\) is the only residue at \(u=[v_{*}]\) on the RHS. We investigate and develop further these kinds of explicit formulae for Jack-Lax LR coefficients in the sequel [1]. ### Conjecture on Selection Rules Recall the well-known selection rules for multiplication of Jack functions (c.f. [9]), \[j_{\mu}\cdot j_{\nu}\subset\bigoplus_{\gamma:\mu\sqcup\nu\subset\gamma}\mathbb{C}j_{\gamma}. \tag{244}\] We conjecture that this rule extends to the multiplication of Lax eigenfunctions, **Conjecture 4.13**.: \[\mathbb{z}_{\mu}\cdot\mathbb{z}_{\nu}\subset\bigoplus_{\gamma:\mu\sqcup\nu\subset\gamma}\mathbb{z}_{\gamma}. \tag{245}\] One can easily show that conjecture 4.13 is equivalent to \[\mathbb{x}_{\nu}\cdot\mathbb{z}_{\gamma}\subset\bigoplus_{\lambda\geq\nu,\gamma}\mathbb{x}_{\lambda}. \tag{246}\] We offer the following computations as evidence for this conjecture: \[\hat{\psi}_{1^{r}}^{(r,0)}\hat{\psi}_{m}^{(0,m)}=\frac{[0,-m]}{[r,-m]}\hat{\psi}_{\{1^{r},m\}}^{(0,m)}+\frac{[r,0]}{[r,-m]}\hat{\psi}_{\{1^{r-1},m+1\}}^{(r,0)}. \tag{247}\] \[\hat{\psi}_{1^{r}}^{(r,0)}\hat{\psi}_{m}^{(1,0)}=\frac{[0,-m][r,0]}{[r,-m][r+1,-m]}\hat{\psi}_{\{1^{r},m\}}^{(0,m)}+\frac{[r+1,0][1,-m]}{[1,0][r+1,-m]}\hat{\psi}_{\{1^{r},m\}}^{(r+1,0)}+\frac{[r,0][0,-m]}{[r,-m][-1,0]}\hat{\psi}_{\{1^{r-1},m+1\}}^{(r,0)}. \tag{248}\] \[\hat{\psi}_{1^{r}}^{(0,1)}\hat{\psi}_{m}^{(1,0)}=a\hat{\psi}_{\{1^{r},m\}}^{(0,m)}+b\hat{\psi}_{\{1^{r},m\}}^{(1,1)}+c\hat{\psi}_{\{1^{r-1},m+1\}}^{(1,1)}+d\hat{\psi}_{\{1^{r-1},m+1\}}^{(r,0)}. \tag{249}\] ### Trace twist Comparing 214 and 126, we notice a seemingly coincidental equality of \(\boldsymbol{y}\)-traces, \[\boldsymbol{y}_{u}(\beta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s}))=T_{\lambda}(u)-1=\boldsymbol{y}_{u}(\hat{j}_{\lambda}). \tag{250}\] This is peculiar, as the left hand side is the trace of an element of degree \(|\lambda|+1\), while the right hand side is that of an element of degree \(|\lambda|\). Here we begin to explore this connection further, showing that it is in fact not a coincidence, but rather hints at deeper structure of the algebra of Lax eigenfunctions. 
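Before proceeding, we note that both sides of (250), and the star product behind Theorem 4.8, are easy to experiment with on a computer. The following sympy sketch is ours, not part of the paper: it assumes the box-coordinate convention \([i,j]=i\varepsilon_{1}+j\varepsilon_{2}\), and it reads the corner formula for \(T_{\lambda}(u)\) off the eigenvalues (332), namely \(T_{\lambda}(u)=u\prod_{t\in\mathcal{R}^{+}_{\lambda}}(u-[t])/\prod_{s\in\mathcal{A}_{\lambda}}(u-[s])\). Under those assumptions it checks Definition 4.7 on a small case and the partial-fraction identity \(T_{\lambda}(u)-1=\sum_{s\in\mathcal{A}_{\lambda}}\hat{\tau}^{s}_{\lambda}/(u-[s])\) underlying \(\boldsymbol{y}_{u}(\hat{j}_{\lambda})=T_{\lambda}(u)-1\).

```python
# Our sketch (assumptions: boxes (i, j) with content [i, j] = i*e1 + j*e2,
# partitions as weakly decreasing lists; T_lam via the corner formula).
import sympy as sp
from collections import Counter

e1, e2, u = sp.symbols('e1 e2 u')

def boxes(parts):
    return [(i, j) for i, p in enumerate(parts) for j in range(p)]

def content(b):
    return b[0] * e1 + b[1] * e2

def addable(parts):
    ext = list(parts) + [0]
    return [(i, ext[i]) for i in range(len(ext)) if i == 0 or ext[i] < ext[i - 1]]

def removable_plus(parts):
    # removable corners shifted by (1, 1) -- our reading of R^+_lam
    return [(i + 1, parts[i]) for i in range(len(parts))
            if parts[i] > 0 and (i + 1 == len(parts) or parts[i + 1] < parts[i])]

def T(parts):
    num = u * sp.prod([u - content(t) for t in removable_plus(parts)])
    return num / sp.prod([u - content(s) for s in addable(parts)])

def star(lam, nu):
    # Definition 4.7: multiset of box-wise sums
    return Counter((a[0] + b[0], a[1] + b[1]) for a in boxes(lam) for b in boxes(nu))

# 1^2 * 2 is exactly the box set of the 2x2 square, so T_{1^2 * 2} = T_{2^2}:
assert star([1, 1], [2]) == Counter(boxes([2, 2]))

# Kerov-type pole expansion: T_lam(u) - 1 equals the sum over addable boxes s
# of Res_{u=[s]} T_lam(u) / (u - [s]); this is y_u(j-hat_lam) = T_lam(u) - 1.
for lam in ([1], [2, 1], [3, 1]):
    t = sp.cancel(T(lam))
    expansion = sum(sp.residue(t, u, content(s)) / (u - content(s))
                    for s in addable(lam))
    assert sp.simplify(t - 1 - expansion) == 0
```

The second check is simply the statement that \(T_{\lambda}(u)\) is a rational function with value \(1\) at infinity and simple poles exactly at the addable corners, so its partial-fraction expansion is determined by the residues \(\hat{\tau}^{s}_{\lambda}\).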
**Proposition 4.15**.: _The following relations between traces hold,_ * \(\boldsymbol{y}_{u}(\beta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s}))= \boldsymbol{y}_{u}(\hat{j}_{\lambda})\)_,_ * \(\boldsymbol{x}_{\nu}(\beta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s}))=0\)_,_ \(\forall\nu\)_,_ * \(\boldsymbol{z}_{\mu}(\hat{j}_{\lambda})=0\)_,_ \(\forall\mu\)_,_ * \(\boldsymbol{z}_{\gamma}(\beta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s}))=- \boldsymbol{x}_{\gamma}(\hat{j}_{\lambda})\)_,_ \(\forall\gamma\)_._ _In other words, we have \(\beta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s})\in\mathfrak{x}_{n+1}^{0}\), and \(\hat{j}_{\lambda}\in\mathfrak{z}_{n}^{0}\), and the full traces are related by_ \[\operatorname{Tr}_{n+1}(\beta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s}))= \rho_{*}\circ\operatorname{Tr}_{n}(\hat{j}_{\lambda}). \tag{251}\] _where_ \[\rho_{*}:(\{x_{\gamma}\},\{y^{s}\},\{0\})\mapsto(\{0\},\{y^{s}\},\{-x_{\gamma}\}). \tag{252}\] Proof.: We note that since \(\beta\) factors through \(\pi_{+}\), so we have \(\beta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s})=\beta(\pi_{+}\hat{\psi}_{1}^ {v},\hat{\psi}_{\lambda}^{s})=\beta(w,\hat{\psi}_{\lambda}^{s})\). Next, we know that (1) has already been observed. For (2), we start with 4.3, \(\beta(w,\hat{\psi}_{\lambda}^{s})=\pi_{0}\mathcal{L}w\hat{\psi}_{\lambda}^{s}-V _{1}\hat{\psi}_{\lambda}^{s}=\hat{j}_{\lambda+s}-\hat{j}_{1}\hat{\psi}_{\lambda} ^{s}\). We then have \[\boldsymbol{x}_{\gamma}(\beta(w,\hat{\psi}_{\lambda}^{s}))=\boldsymbol{x}_{ \gamma}(\hat{j}_{\lambda+s}-\hat{j}_{1}\hat{\psi}_{\lambda}^{s})=\sum_{t} \delta_{\lambda+s+t}^{\gamma}\hat{\tau}_{\lambda+s}^{t}-\hat{c}_{1\sigma}^{ \gamma}\boldsymbol{x}_{\sigma}(\hat{\psi}_{\lambda}^{s})=0. \tag{253}\] where we have used the Kerov relation \(\hat{\tau}_{\lambda+s}^{t}=\hat{c}_{1,\lambda+s}^{\lambda+s+t}\) (43). Next, (3) holds because \(j_{\lambda}\) has vanishing top power in \(w\). For (4), we have \[\boldsymbol{z}_{\gamma}(\beta(w,\hat{\psi}_{\lambda}^{s}))=\boldsymbol{z}_{ \gamma}(\hat{j}_{\lambda+s}-\hat{j}_{1}\hat{\psi}_{\lambda}^{s})=0-\hat{c}_{1 \sigma}^{\gamma}\boldsymbol{z}_{\sigma}(\hat{\psi}_{\lambda}^{s})=-\hat{c}_{ 1\lambda}^{\gamma}=-\sum_{s}\delta_{\lambda+s}^{\gamma}\boldsymbol{z}_{\gamma} (\hat{\tau}_{\lambda}^{s}\hat{\psi}_{\lambda}^{s}). \tag{254}\] In the next section 4.9 we will demonstrate a general version of this phenomena relating traces of certain elements of different degrees, which will be crucial for our work. We will make continual use of the following map: **Definition 4.16**.: _(Trace twist) \(\rho_{*}:\operatorname{Im}\operatorname{Tr}\mathcal{Z}_{n}^{0}\to\operatorname {Im}\operatorname{Tr}\mathcal{X}_{n+1}^{0}\) given by 252._ Note: The range and domains are vector spaces of different dimensions, and it is not clear yet that this map lands in the image of the trace. ### \(\theta\) elements In the previous section, we found in 251 that there was an element \(\theta:=\hat{j}_{\lambda}\) one degree lower than \(\beta\) whose trace was related by the twist 252. We will show that such an element can always be constructed for \(\theta(\hat{\psi}_{\mu}^{t},\hat{\psi}_{\lambda}^{s})\) of any choices of \(\mu,t,\lambda,s\). 
That is, there exists a canonical \(\theta\equiv\theta(\hat{\psi}_{\mu}^{t},\hat{\psi}_{\lambda}^{s})\), with \(\theta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s})=\hat{j}_{\lambda}\), such that \[\operatorname{Tr}(\beta(\hat{\psi}_{\mu}^{t},\hat{\psi}_{\lambda}^{s}))=\rho_ {*}\circ\operatorname{Tr}(\theta(\hat{\psi}_{\mu}^{t},\hat{\psi}_{\lambda}^{s} )). \tag{255}\] In this case, we'd say that the elements \(\beta\) and \(\theta\) have _twisted traces_. **Definition 4.17**.: _Let \(\theta:\mathcal{H}_{n}\times\mathcal{H}_{m}\to\mathcal{H}_{n+m-1}\) be the degree \(-1\) symmetric map defined by_ \[\theta:=\{\beta,\Pi\}, \tag{256}\] _where \(\{,\}\) is the Hochschild bracket, i.e._ \[\theta(\zeta,\xi)=\beta(\Pi\zeta,\xi)+\beta(\zeta,\Pi\xi)-\Pi\beta(\zeta,\xi). \tag{257}\] We can easily verify the following properties, in parallel with those for \(\beta\) (4.2). **Lemma 4.18**.: \(\theta\) _has the following properties,_ 1. \(\theta\) _is_ \(\mathbb{C}[V]\)_-linear,_ (258) \[\theta(\zeta,V_{k}\cdot\xi)=V_{k}\cdot\theta(\zeta,\xi).\] _In particular,_ \(\theta\) _factors through_ \(\pi_{+}\)__ (259) \[\theta(\zeta,\xi)=\theta(\zeta,\pi_{+}\xi).\] 2. \(\theta\) _is an exact operator,_ (260) \[\theta=\partial(\mathcal{L}^{\prime}\circ\Pi).\] _Note that_ \(\theta\) _is independent of_ \(\overline{\varepsilon},\hbar\)_._ 3. _The simplest case is given by the formula_ (261) \[\theta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s})=\hat{j}_{\lambda}.\] Because of the \(\mathbb{C}[V]\) linearity, we often work with the basic elements (compare with 194) \[\theta^{n,m}:=\theta(w^{n},w^{m}). \tag{262}\] One can easily check that these \(\theta\) elements include the symmetric function generators: \[\theta^{1,m}=V_{m}. \tag{263}\] **Lemma 4.19**.: _The following relation holds,_ \[\beta(b,\Pi c)-\Pi\beta(b,c)=(\Pi b)\cdot(\pi_{0}\mathcal{L}c), \tag{264}\] _and hence \(\theta\) can alternately be given by_ \[\theta(\zeta,\xi)=\Pi\zeta\cdot\pi_{0}\mathcal{L}\xi+\beta(\Pi\zeta,\xi). \tag{265}\] Proof.: By inspection, we see that both sides of 264 are \(\mathbb{C}[V]\) bilinear. Thus we reduce the statement to the case where \(b=w^{n}\) and \(c=w^{m}\). Let \(T_{n}^{m}\) be the quantity \[T_{n}^{m} = \beta(w^{n},\Pi w^{m})-\Pi\beta(w^{n},w^{m})-\Pi w^{n}.\pi_{0} \mathcal{L}w^{m} \tag{267}\] \[= \beta^{n,m-1}-\Pi\beta^{n,m}-w^{n-1}V_{m}. \tag{266}\] We prove by induction that \(T_{n}^{m}=0\). For the base case, \(T_{0}^{m}\) it is obviously true, as both sides are zero. We also prove \(T_{1}^{m}\) directly, \[\beta(w,w^{m-1})-\Pi\beta(w,w^{m}) = (V_{m}-w^{m-1}V_{1})-\Pi(V_{m+1}-w^{m}V_{1}) \tag{269}\] \[= V_{m}\] (270) \[= \pi_{0}\mathcal{L}w^{m}. \tag{268}\] Now, for \(T_{n}\) \[\beta(w^{n},\Pi w^{m})-\Pi\beta(w^{n},w^{m}) = \beta(w^{n-1}.w,w^{m-1})-\Pi\beta(w^{n-1}.w,w^{m})\] \[= \beta(w^{n-1},w.w^{m-1})-\Pi\beta(w^{n-1},w.w^{m})\] \[+w^{n-1}\beta(w,w^{m-1})-\Pi w^{n-1}.\beta(w,w^{m})\] \[-\beta(w^{n-1}.w)w^{m-1}+\Pi\beta(w^{n-1},w)w^{m}\] \[= \beta(w^{n-1},w^{m})-\Pi\beta(w^{n-1},w^{m+1})\] \[+w^{n-1}\beta(w,w^{m-1})-w^{n-2}\Pi w.\beta(w,w^{m})\] \[-\beta(w^{n-1}.w)w^{m-1}+w^{m-1}\Pi w\beta(w^{n-1},w)\] \[= \beta(w^{n-1},w^{m})-\Pi\beta(w^{n-1},w^{m+1})\] \[+w^{n-1}\beta(w,w^{m-1})-w^{n-2}(w\Pi+\pi_{0})\beta(w,w^{m})\] \[-\beta(w^{n-1},w)w^{m-1}+w^{m-1}\beta(w^{n-1},w)\] \[= \beta(w^{n-1},w^{m})-\Pi\beta(w^{n-1},w^{m+1})\] \[+w^{n-1}\left(\beta(w,w^{m-1})-\Pi\beta(w,w^{m})\right)\] \[-w^{n-2}\pi_{0}\beta(w,w^{m}),\] where we have used \(\Pi w-w\Pi=\pi_{0}\). 
Using \(T_{1}^{m}=0\) and that \[w^{n-2}\pi_{0}\beta(w,w^{m})=w^{n-2}\pi_{0}\mathcal{L}w^{m+1}, \tag{271}\] we find that the big sequence of equalities above reduces to \[T_{n}^{m}=T_{n-1}^{m+1}. \tag{272}\] By induction, all \(T_{n}^{m}=0\), and we are done. Next, we inspect the structure of these \(\theta\) basis elements, finding expressions for their projection onto \(\mathcal{F}\subset\mathcal{H}\) and its complement. **Proposition 4.20**.: \[\pi_{0}\theta^{n,m}=\theta^{1,m+n-1}=V_{m+n-1}, \tag{273}\] \[\pi_{+}\theta^{n,m}=w\beta^{n-1,m-1}. \tag{274}\] Proof.: From 265, we have \[\theta^{n,m}=w^{n-1}V_{m}+\beta^{n-1,m}. \tag{275}\] From \(\partial\beta(w^{n-1},w^{m-1},w)=0\), we have \[\beta^{n-1,m} = \beta^{n+m-2,1}-w^{n-1}\beta^{m-1,1}+\beta^{n-1,m-1}w \tag{276}\] \[= V_{n+m-1}-w^{n+m-2}V_{1}-w^{n-1}\beta^{m-1,1}+\beta^{n-1,m-1}w. \tag{277}\] So, we have \[\pi_{+}\theta^{n,m} = w^{n-1}V_{m}-w^{n+m-2}V_{1}-w^{n-1}\beta^{m-1,1}+\beta^{n-1,m-1}w \tag{278}\] \[= w^{n-1}(V_{m}-w^{m-1}V_{1})-w^{n-1}\beta^{m-1,1}+\beta^{n-1,m-1}w \tag{279}\] \[= \beta^{n-1,m-1}w. \tag{280}\] By expanding elements in powers of \(w\), i.e. \(\zeta=\sum_{i=0}^{n}\zeta_{i}w^{i}\), we easily find the following extension of proposition 4.20: **Corollary 4.21**.: \[\pi_{0}\theta(\zeta,\xi)=\theta(w,\partial\Pi(\zeta,\xi))=\pi_{0}\mathcal{L}\partial\Pi(\zeta,\xi), \tag{281}\] \[\pi_{+}\theta(\zeta,\xi)=w\beta(\Pi\zeta,\Pi\xi). \tag{282}\] The operator \(\partial\Pi\) will make an important appearance in later results, see 6.14. ### Relation between traces of \(\beta\) and \(\theta\) Next, we show a generalization of the results of Proposition 4.15 and equation 251 to the case of arbitrary parameters \(\beta(\zeta,\xi)\). We make the first steps towards a deeper understanding of this property later in section 6. **Proposition 4.22**.: _The traces of \(\beta\) and \(\theta\) are related by the twist \(\rho_{*}\), that is for all \(\zeta\in\mathcal{H}_{k},\xi\in\mathcal{H}_{n-k}\), we have_ \[\mathrm{Tr}_{n}(\beta(\zeta,\xi))=\rho_{*}\circ\mathrm{Tr}_{n-1}(\theta(\zeta,\xi)). \tag{283}\] _Equivalently, the following hold_ * \(\boldsymbol{y}_{u}(\theta(\zeta,\xi))=\boldsymbol{y}_{u}(\beta(\zeta,\xi))\)_,_ * \(\boldsymbol{x}_{\nu}\left(\beta(\zeta,\xi)\right)=0\)_,_ \(\forall\nu\)_,_ * \(\boldsymbol{z}_{\sigma}\left(\theta(\zeta,\xi)\right)=0\)_,_ \(\forall\sigma\)__ * \(\boldsymbol{z}_{\gamma}(\beta(\zeta,\xi))=-\boldsymbol{x}_{\gamma}(\theta(\zeta,\xi))\)_,_ \(\forall\gamma\)_._ Proof.: The first statement, following from formula 214, is equivalent to \[\boldsymbol{y}_{u}(\theta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t}))=T_{\lambda\star\nu}(u)-1. \tag{284}\] Using 265 along with 109, we write \(\Pi\hat{\psi}_{\lambda}^{s}=\sum_{q\in\mathcal{R}_{\lambda}}c_{\lambda,s}^{q}\hat{\psi}_{\lambda-q}^{q}\), to find \[\theta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t})=\sum_{q\in\mathcal{R}_{\lambda}}c_{\lambda,s}^{q}\left(\hat{\psi}_{\lambda-q}^{q}.\hat{j}_{\nu}+\beta(\hat{\psi}_{\lambda-q}^{q},\hat{\psi}_{\nu}^{t})\right). \tag{285}\] Then, using 229 and 214, we have \[\boldsymbol{y}_{u}(\theta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t})) = \sum_{q\in\mathcal{R}_{\lambda}}c_{\lambda,s}^{q}\left((T_{\lambda\star\nu}(u)-T_{(\lambda-q)\star\nu}(u))+(T_{(\lambda-q)\star\nu}(u)-1)\right) \tag{286}\] \[= \left(\sum_{q\in\mathcal{R}_{\lambda}}c_{\lambda,s}^{q}\right)(T_{\lambda\star\nu}(u)-1) \tag{287}\] \[= T_{\lambda\star\nu}(u)-1. \tag{288}\] For the next three statements, we work just for powers of \(w\), since the statements have \(\mathbb{C}[V]\) linearity by property 195. Let \(T_{\sigma}^{n,m}\) be the statement that \(\boldsymbol{z}_{\sigma}\theta^{n,m}=0\), let \(B_{\nu}^{n,m}\) be the statement that \(\boldsymbol{x}_{\nu}\beta^{n,m}=0\), and \(N_{\lambda}^{n,m}\) be the statement \(\boldsymbol{x}_{\lambda}\theta^{n,m}+\boldsymbol{z}_{\lambda}\beta^{n,m}=0\). The base case \(T_{\sigma}^{n-1,1}\) is true, since \(\boldsymbol{z}_{\sigma}(\theta^{n-1,1})=\boldsymbol{z}_{\sigma}(V_{n-1})=0\). From the formula 265, we have \[\boldsymbol{z}_{\sigma}\theta^{n,m}=\boldsymbol{z}_{\sigma}(w^{n-1}V_{m})+\boldsymbol{z}_{\sigma}\beta^{n-1,m}. \tag{289}\] On the other hand, from \(\partial\theta(w,w^{n-1},w^{m})=0\), we have \[\theta^{n,m}-\theta^{1,n+m-1}=w\theta^{n-1,m}-\theta^{1,n-1}w^{m}. \tag{290}\] Taking the \(\boldsymbol{z}\)-trace of this, then using the base case and 263 (\(\theta^{1,n+m-1}=V_{n+m-1}\), so \(\boldsymbol{z}_{\sigma}\theta^{1,n+m-1}=0\)), we find \[\boldsymbol{z}_{\sigma}\theta^{n,m}=\boldsymbol{x}_{\sigma}\theta^{n-1,m}-\boldsymbol{z}_{\sigma}(w^{m}V_{n-1}). \tag{291}\] Adding 289 and 291 together we get \[2\boldsymbol{z}_{\sigma}\theta^{n,m}=\boldsymbol{z}_{\sigma}(w^{n-1}V_{m})+\boldsymbol{z}_{\sigma}\beta^{n-1,m}+\boldsymbol{x}_{\sigma}\theta^{n-1,m}-\boldsymbol{z}_{\sigma}(w^{m}V_{n-1}). \tag{292}\] Next we show that \(\boldsymbol{z}_{\sigma}(w^{n}V_{m})\) is symmetric under \(n,m\). \[\boldsymbol{z}_{\sigma}(w^{n}V_{m}) = m\hbar\sum_{\lambda\vdash m}\boldsymbol{z}_{\sigma}(w^{n}\hat{j}_{\lambda})/|\hat{j}_{\lambda}|^{2} \tag{293}\] \[= m\hbar\sum_{\lambda\vdash m}\sum_{\mu\vdash n}\hat{c}_{\mu\lambda}^{\sigma}\boldsymbol{z}_{\mu}(w^{n})/|\hat{j}_{\lambda}|^{2} \tag{294}\] \[= m\hbar\sum_{\lambda}\sum_{\mu}\hat{c}_{\mu\lambda}^{\sigma}\boldsymbol{z}_{\mu}(w\hat{q}_{\mu})/|\hat{j}_{\lambda}|^{2}|\hat{j}_{\mu}|^{2} \tag{295}\] \[= (m\hbar)(n\hbar)\sum_{\lambda}\sum_{\mu}\hat{c}_{\mu\lambda}^{\sigma}/|\hat{j}_{\lambda}|^{2}|\hat{j}_{\mu}|^{2}. \tag{296}\] Following from the symmetry of the Jack LR coefficients, this is symmetric. Using this, equation 292 becomes \[2\boldsymbol{z}_{\sigma}\theta^{n,m}=\boldsymbol{z}_{\sigma}\beta^{n-1,m}+\boldsymbol{x}_{\sigma}\theta^{n-1,m}. \tag{297}\] So we see that \(T_{\sigma}^{n,m}\Leftrightarrow N_{\sigma}^{n-1,m}\). If we assume \(T_{\sigma}^{n,m}\), then we have \[0 = \boldsymbol{z}_{\sigma}(\theta^{n,m}) \tag{298}\] \[= \boldsymbol{z}_{\sigma}(w^{n-1}V_{m})+\boldsymbol{z}_{\sigma}(\beta^{n-1,m}) \tag{299}\] \[= \boldsymbol{z}_{\sigma}(w^{n-1}V_{m})-\boldsymbol{x}_{\sigma}(\theta^{n-1,m})\ \operatorname{mod}\ N_{\sigma}^{n-1,m} \tag{300}\] \[= \boldsymbol{z}_{\sigma}(w^{n-1}V_{m})-\boldsymbol{x}_{\sigma}(\Pi(w^{n-1})V_{m})-\boldsymbol{x}_{\sigma}(\beta^{n-2,m}) \tag{301}\] \[= \boldsymbol{z}_{\sigma}(w^{n-1}V_{m})-\boldsymbol{x}_{\sigma}(\Pi(w^{n-1}V_{m}))-\boldsymbol{x}_{\sigma}(\beta^{n-2,m}) \tag{302}\] \[= -\boldsymbol{x}_{\sigma}(\beta^{n-2,m}). \tag{303}\] Next, we use 274 and the properties (139) to show for all \(n,m\), we have \[\boldsymbol{z}_{\lambda}\theta^{n,m}=\boldsymbol{x}_{\lambda}\beta^{n-1,m-1}. \tag{304}\] Thus \(\boldsymbol{x}_{\sigma}(\beta^{n-2,m})=\boldsymbol{z}_{\sigma}(\theta^{n-1,m+1})=0\). And so we find the chain of implications \[T_{\sigma}^{n,m}\implies N_{\sigma}^{n-1,m}\implies B_{\sigma}^{n-2,m}\implies T_{\sigma}^{n-1,m+1}. \tag{305}\] So \(T_{\sigma}^{n,m}\) is true for all \(n,m\), and thus so are \(N_{\sigma}^{n,m}\) and \(B_{\sigma}^{n,m}\). 
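The inductive manipulations above only use the \(\mathcal{F}\)-linear skeleton of \(\mathcal{H}\), so they can be spot-checked mechanically in low degrees. The sketch below is ours: it takes \(\mathcal{L}'(w^{n})=\sum_{0<k\leq n}w^{n-k}V_{k}\) from Lemma 4.2, and assumes \(\Pi(w^{n})=w^{n-1}\), \(\Pi(1)=0\) (consistent with \(\Pi w-w\Pi=\pi_{0}\)); the traces themselves are not modelled. It verifies \(\beta^{1,m}=V_{m+1}-w^{m}V_{1}\), \(\theta^{1,m}=V_{m}\), and both parts of Proposition 4.20.

```python
# Our minimal model of the F-linear skeleton: H = C[V1, V2, ...][w] with
# L'(w^n) = V_1 w^{n-1} + ... + V_n (Lemma 4.2) and Pi(w^n) = w^{n-1}
# (an assumption consistent with Pi w - w Pi = pi0). beta and theta are
# the derivators of L' and of L' o Pi respectively.
import sympy as sp

N = 7                                   # degree cutoff
w = sp.symbols('w')
V = sp.symbols('V1:%d' % (2 * N))       # V[k-1] stands for V_k

def Lp(f):
    f = sp.expand(f)
    return sp.expand(sum(f.coeff(w, n) * V[k - 1] * w**(n - k)
                         for n in range(1, 2 * N) for k in range(1, n + 1)))

def Pi(f):
    f = sp.expand(f)
    return sp.expand(sum(f.coeff(w, n) * w**(n - 1) for n in range(1, 2 * N)))

def derivator(T, a, b):                 # (dT)(a, b) = T(ab) - T(a) b - a T(b)
    return sp.expand(T(a * b) - T(a) * b - a * T(b))

beta = lambda a, b: derivator(Lp, a, b)
theta = lambda a, b: derivator(lambda f: Lp(Pi(f)), a, b)

for n in range(1, N):
    assert beta(w, w**n) == sp.expand(V[n] - w**n * V[0])   # beta^{1,n}
    assert theta(w, w**n) == V[n - 1]                       # theta^{1,n} = V_n
    for m in range(1, N):
        t = theta(w**n, w**m)
        assert t.coeff(w, 0) == V[n + m - 2]                # pi0 theta^{n,m}
        assert sp.expand(t - t.coeff(w, 0)
                         - w * beta(w**(n - 1), w**(m - 1))) == 0  # pi+ theta^{n,m}
```

Since \(\beta\) and \(\theta\) are \(\mathcal{F}\)-bilinear, checking them on the monomials \(w^{n}\) is enough to exercise all the identities used in the proof above.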
Lastly, we conjecture that the basic \(\beta\) and \(\theta\) elements generate the null submodules, **Conjecture 4.23**.: _The null module \(\mathfrak{z}^{0}\subset\mathcal{F}[w]\) is generated as an \(\mathcal{F}\)-module by the elements_ \[\theta^{n,m}\in\mathfrak{z}^{0}_{m+n-1}. \tag{306}\] _The null module \(\mathfrak{x}^{0}\subset\mathcal{F}[w]\) is generated as an \(\mathcal{F}\)-module by the elements_ \[\beta^{n,m}\in\mathfrak{x}^{0}_{m+n}. \tag{307}\] ### Traces and Jack Littlewood-Richardson Coefficients Now that we have computed the relationship between the traces of \(\theta\) and \(\beta\), we are finally ready to compute the value of these traces. It is at this point that we discover the connection with Jack LR coefficients that was promised in the introduction. **Lemma 4.24**.: _The \(\boldsymbol{x}\)-trace of a \(\theta\) element in the \(\hat{\psi}\) basis is given by_ \[\boldsymbol{x}_{\sigma}(\theta(\hat{\psi}^{s}_{\lambda},\hat{\psi}^{t}_{\nu}))=\hat{c}^{\sigma}_{\lambda\nu}, \tag{308}\] _where \(\hat{c}^{\sigma}_{\lambda\nu}\) are the hatted Jack Littlewood-Richardson coefficients, given by_ \[\hat{\jmath}_{\lambda}\hat{\jmath}_{\nu}=\sum_{\sigma}\hat{c}^{\sigma}_{\lambda\nu}\hat{\jmath}_{\sigma}. \tag{309}\] Proof.: First, we show that the \(\boldsymbol{x}\)-trace of \(\theta(\hat{\psi}^{s}_{\lambda},\hat{\psi}^{t}_{\nu})\) is independent of \(t\). Using 265 we have \[\boldsymbol{x}_{\sigma}(\theta(\hat{\psi}^{s}_{\lambda},\hat{\psi}^{t}_{\nu})) = \boldsymbol{x}_{\sigma}\left(\Pi\hat{\psi}^{s}_{\lambda}(\pi_{0}\mathcal{L}(\hat{\psi}^{t}_{\nu}))+\beta(\Pi\hat{\psi}^{s}_{\lambda},\hat{\psi}^{t}_{\nu})\right) \tag{310}\] \[= \boldsymbol{x}_{\sigma}\left(\Pi\hat{\psi}^{s}_{\lambda}\hat{\jmath}_{\nu}\right), \tag{311}\] where we have used \(\boldsymbol{x}_{\sigma}\beta=0\). This is explicitly \(t\)-independent, and due to the symmetry of \(\theta\), it is therefore independent of both \(s\) and \(t\). Next, recall \(\mathcal{L}\hat{\jmath}_{\lambda}=w\hat{q}_{\lambda}=\sum_{s}[s]\hat{\tau}^{s}_{\lambda}\hat{\psi}^{s}_{\lambda}\), where \(\sum_{s^{\prime}}[s^{\prime}]\hat{\tau}^{s^{\prime}}_{\lambda}=\hbar|\lambda|\). Following from this \(s\)-independence of the trace, we have \[\sum_{s}[s]\hat{\tau}^{s}_{\lambda}\boldsymbol{x}_{\sigma}(\theta(\hat{\psi}^{s}_{\lambda},\hat{\psi}^{t}_{\nu}))=\Big{(}\sum_{s^{\prime}}[s^{\prime}]\hat{\tau}^{s^{\prime}}_{\lambda}\Big{)}\boldsymbol{x}_{\sigma}(\theta(\hat{\psi}^{s}_{\lambda},\hat{\psi}^{t}_{\nu})). \tag{312}\] This equality is thus rewritten as \[\boldsymbol{x}_{\sigma}(\theta(w\hat{q}_{\lambda},\hat{\psi}^{t}_{\nu}))=\hbar|\lambda|\cdot\boldsymbol{x}_{\sigma}(\theta(\hat{\psi}^{s}_{\lambda},\hat{\psi}^{t}_{\nu})). \tag{313}\] Using 265, we work on the left hand side to get \[\boldsymbol{x}_{\sigma}(\theta(w\hat{q}_{\lambda},\hat{\psi}^{t}_{\nu})) = \boldsymbol{x}_{\sigma}(\Pi(w\hat{q}_{\lambda})\hat{\jmath}_{\nu}+\beta(\Pi(w\hat{q}_{\lambda}),\hat{\psi}^{t}_{\nu})) \tag{314}\] \[= \boldsymbol{x}_{\sigma}(\hat{q}_{\lambda}\hat{\jmath}_{\nu}). \tag{315}\] Then, from 195, we have \[\boldsymbol{x}_{\sigma}(\hat{q}_{\lambda}\hat{\jmath}_{\nu})=\sum_{\mu}\hat{c}^{\sigma}_{\nu\mu}\boldsymbol{x}_{\mu}(\hat{q}_{\lambda})=\hat{c}^{\sigma}_{\nu\lambda}|\lambda|\hbar. \tag{316}\] Stringing the equalities 313, 315 and 316 together we recover the result. #### 4.11.1. Trace formula We summarize the results of these computations of traces. 
**Theorem 4.25** (Trace formula).: _The full trace of the \(\theta\) element is given by_ \[\mathrm{Tr}(\theta(\hat{\psi}^{s}_{\lambda},\hat{\psi}^{t}_{\nu}))=(\{\hat{c}^{\sigma}_{\lambda,\nu}\},\{\hat{\tau}^{v}_{\lambda\star\nu}\},\{0\}), \tag{317}\] _or, equivalently,_ \[\mathrm{Tr}(\beta(\hat{\psi}^{s}_{\lambda},\hat{\psi}^{t}_{\nu}))=(\{0\},\{\hat{\tau}^{v}_{\lambda\star\nu}\},\{-\hat{c}^{\sigma}_{\lambda,\nu}\}), \tag{318}\] _where we use the notation 205, \(\hat{\tau}^{v}_{\lambda\star\nu}=\mathrm{Res}_{u=[v]}\,T_{\lambda\star\nu}(u)\)._ Proof.: By 4.22, the \(\boldsymbol{y}\)-trace of \(\theta\) is equal to the \(\boldsymbol{y}\)-trace of \(\beta\), which was computed in 4.10. The \(\boldsymbol{x}\)-trace of \(\theta\) is given by Lemma 4.24. The \(\boldsymbol{z}\)-trace of \(\theta\) is zero by 4.22. The traces of \(\beta\) are similarly determined by 4.22. ### Main Theorem From the trace formula 4.25 of the previous section, we produce one of the main results of this work, which reveals a striking structure to the Jack LR coefficients. **Theorem 4.26**.: _For any partitions \(\mu,\nu\neq\emptyset\), the hatted Jack Littlewood-Richardson coefficients \(\hat{c}_{\mu\nu}^{\gamma}\) satisfy the following equality of rational functions of \(u\),_ \[\sum_{\gamma\vdash\,n:\mu,\nu\subseteq\gamma}\hat{c}_{\mu\nu}^{\gamma}\left(\sum_{s\in\gamma/(\mu\cup\nu)}\frac{1}{u-[s]}\right)=T_{\mu\star\nu}(u)-1. \tag{319}\] Proof.: We apply the fundamental cokernel relation \(R(u)\) (152) to the total trace of a \(\theta\) element (4.25) (or, equivalently, a \(\beta\) element), \[R(u)\mathrm{Tr}\,\theta(\hat{\psi}_{\mu}^{s},\hat{\psi}_{\nu}^{t})=0, \tag{320}\] for any choice of \(s,t\). This is \[(T_{\mu\star\nu}(u)-1)-\sum_{\gamma}\hat{c}_{\mu\nu}^{\gamma}\left(\sum_{s\in\gamma}\frac{1}{u-[s]}\right)+(0)=0. \tag{321}\] Since \(\mu,\nu\neq\emptyset\) we have \(\sum_{\gamma}\hat{c}_{\mu\nu}^{\gamma}=0\), and \(\hat{c}_{\mu\nu}^{\gamma}=0\) unless \(\mu,\nu\subseteq\gamma\), so \[\sum_{\gamma\vdash\,n:\mu,\nu\subseteq\gamma}\hat{c}_{\mu\nu}^{\gamma}\left(\sum_{s\in(\mu\cup\nu)}\frac{1}{u-[s]}\right)=\left(\sum_{\gamma\vdash\,n:\mu,\nu\subseteq\gamma}\hat{c}_{\mu\nu}^{\gamma}\right)\left(\sum_{s\in(\mu\cup\nu)}\frac{1}{u-[s]}\right)=0, \tag{322}\] and so the inner sum reduces to \[\sum_{\gamma\vdash\,n:\mu,\nu\subseteq\gamma}\hat{c}_{\mu\nu}^{\gamma}\left(\sum_{s\in\gamma/(\mu\cup\nu)}\frac{1}{u-[s]}\right)=T_{\mu\star\nu}(u)-1. \tag{323}\] This result is a generalization of the relationship between Jack LR coefficients and residues of spectral resolvent factors as described by Kerov (45), which is clearly equivalent to the \(\nu=1\) case, \[\sum_{s\in\mathcal{A}_{\mu}}\hat{c}_{\mu,1}^{\mu+s}\left(\frac{1}{u-[s]}\right)=T_{\mu}(u)-1. \tag{324}\] It can be checked that Theorem 4.26 determines all Jack LR coefficients \(c_{\mu\nu}^{\gamma}\) for \(|\gamma|<8\). However, the number of poles on the right-hand side grows like \(n\log n\), whereas the number of LR coefficients on the left grows as the partition number \(p(n)\), so in general this system of equations for the Jack LR coefficients is underdetermined. **Example 4.27**.: _Let \(\mu=1^{2},\nu=2\). We find_ \[T_{\mu\star\nu}=T_{2^{2}}=\frac{(u-[(0,0)])(u-[(2,2)])}{(u-[(2,0)])(u-[(0,2)])}. \tag{325}\] _Expanding this in poles we find_ \[T_{\mu\star\nu}-1=\frac{[2,0][0,-2]}{[2,-2]}\frac{1}{u-[2,0]}+\frac{[0,2][-2,0]}{[-2,2]}\frac{1}{u-[0,2]}. 
\tag{326}\] _On the other hand, by theorem 4.26, we know this must be equal to_ \[\hat{c}_{1^{2},2}^{1^{4}}\left(\frac{1}{u-[3,0]}+\ldots\right)+\hat{c}_{1^{2}, 2}^{1^{2}2}\left(\frac{1}{u-[2,0]}+\ldots\right)+\hat{c}_{1^{2},2}^{2^{2}} \left(\frac{1}{u-[1,1]}+\ldots\right) \tag{327}\] \[+\hat{c}_{1^{2},2}^{13}\left(\frac{1}{u-[0,2]}+\ldots\right)+\hat{c}_{1^{2}, 2}^{4}\left(\frac{1}{u-[0,3]}+\ldots\right). \tag{328}\] _We can read off the only non-zero (up to transposition) hatted LR coefficient_ \[\hat{c}_{1^{2},2}^{1^{2}2}=\frac{[2,0][0,-2]}{[2,-2]}, \tag{329}\] _which gives us the regular Jack LR coefficient_ \[c_{1^{2},2}^{1^{2}2}=\hat{c}_{1^{2},2}^{1^{2}2}\frac{\varpi_{1^{2}}\varpi_{2} }{\varpi_{1^{2}2}}=\frac{[2,0][0,-2]}{[2,-2]}\frac{[1,0]\cdot[0,1]}{[2,0][1,0 ][0,1]}=\frac{[0,-2]}{[2,-2]}=\frac{-\varepsilon_{2}}{\varepsilon_{1}- \varepsilon_{2}}. \tag{330}\] _This can be checked to agree with the Pieri formula._ ## 5. The SH\({}^{c}\) algebra We now lay the groundwork for a re-writing of the main result of the previous section 4.26 in a radically different language, in the hope of elucidating its significance. In [14], Schiffman-Vasserot introduce an algebra denoted SH\({}^{c}\), described as a centrally extended spherical degenerate double affine Hecke algebra (DAHA). This algebra provides a systematic method to analyse the instanton partition functions of N = 2 supersymmetric gauge theories. ### Definitions Here we won't describe the full SH\({}^{c}\) algebra, but rather a special presentation of it relevant for our purposes. The rank \(r=1\) (with central charge \(a=0\)) _Holomorphic field presentation_ was described in [4], and we use the notation from [5]. This presentation consists of currents \(X^{\pm},\mathcal{Y}^{\pm}\) acting on the Fock module \(\mathcal{F}\). The operator \(\mathcal{Y}(z)\) (c.f. [12] eq. (125)), and its inverse \(\mathcal{Y}^{-1}\), act diagonally on the Jack basis states \(\hat{\jmath}_{\lambda}=|\lambda\rangle\), \[\mathcal{Y}(z)|\lambda\rangle=\mathcal{Y}_{\lambda}(z)|\lambda\rangle, \tag{331}\] with eigenvalues given by the familiar spectral functions \(T_{\lambda}\) from 38, \[\mathcal{Y}_{\lambda}(z)\coloneqq z\,T_{\lambda}(z)^{-1}=\frac{\prod_{s\in \mathcal{A}_{\lambda}}(z-[s])}{\prod_{t\in\mathcal{R}_{\lambda}^{+}}(z-[t])}. \tag{332}\] We note that this is equal to (the inverse of) the Nazarov-Skylanin transfer operator (40), \[\mathcal{Y}(z)=\mathcal{T}(z)^{-1}. \tag{333}\] The other 'box creation' operators \(X^{\pm}(z)\) are defined (in the notation of [5] equation 2.18, where \(\phi_{x}\) denotes the content of a box \(x\)) by their action on the Jack basis \[X^{+}(z)|\lambda\rangle:=\sum_{x\in\mathcal{A}_{\lambda}}\frac{1}{z-\phi_{x}} \left(\operatorname*{Res}_{u=\phi_{x}}\mathcal{Y}_{\lambda}(u)^{-1}\right)| \lambda+x\rangle, \tag{334}\] \[X^{-}(z)|\lambda\rangle:=\sum_{x\in\mathcal{R}_{\lambda}}\frac{1}{z-\phi_{x}} \left(\operatorname*{Res}_{u=\phi_{x}}\mathcal{Y}_{\lambda}(u+\overline{ \varepsilon})\right)|\lambda-x\rangle. \tag{335}\] These operators satisfy the commutation relation \[[X^{+}(z),X^{-}(w)]=\frac{\Psi(z)-\Psi(w)}{z-w}, \tag{336}\] where \(\Psi(z)\) is the so called _chiral ring generating operator_ (c.f. [5] (2.13)) \[\Psi(z)=\mathcal{Y}(z+\overline{\varepsilon})\mathcal{Y}(z)^{-1}. \tag{337}\] ### New construction We show that the algebra \(\{\mathcal{Y}^{\pm},X^{\pm}\}\) can be easily constructed out of the action of \(\mathcal{L}\). 
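As a concrete illustration of the operators just defined (our sketch, under the assumed conventions of partitions as weakly decreasing lists and boxes \((i,j)\) with content \([i,j]=i\varepsilon_{1}+j\varepsilon_{2}\)), the box-creation action (334) can be tabulated directly from the corner data of a partition, using only the eigenvalue formula (332) for \(\mathcal{Y}_{\lambda}\); this is exactly the data that the proposition below extracts from \(\mathcal{L}\).

```python
# Our sketch of the box-creation action (334) on the Jack basis, using the
# eigenvalue formula (332) for Y_lam. Assumed conventions: partitions as
# weakly decreasing lists, boxes (i, j), content [i, j] = i*e1 + j*e2.
import sympy as sp

e1, e2, z, u = sp.symbols('e1 e2 z u')

def content(b):
    return b[0] * e1 + b[1] * e2

def addable(parts):
    ext = list(parts) + [0]
    return [(i, ext[i]) for i in range(len(ext)) if i == 0 or ext[i] < ext[i - 1]]

def removable_plus(parts):
    return [(i + 1, parts[i]) for i in range(len(parts))
            if parts[i] > 0 and (i + 1 == len(parts) or parts[i + 1] < parts[i])]

def Y(parts, var):
    return (sp.prod([var - content(s) for s in addable(parts)]) /
            sp.prod([var - content(t) for t in removable_plus(parts)]))

def x_plus(parts):
    """X^+(z)|parts> as {new partition: tau^x_lam / (z - [x])}, cf. (334)."""
    out = {}
    for (i, j) in addable(parts):
        tau = sp.simplify(sp.residue(1 / Y(parts, u), u, content((i, j))))
        new = list(parts) + ([0] if i == len(parts) else [])
        new[i] += 1
        out[tuple(new)] = tau / (z - content((i, j)))
    return out

print(x_plus((1,)))  # creation on |1>: contributions to |2> and |1,1>

# Whittaker/Kerov check (352): the residues of Y^{-1} rebuild it exactly,
# i.e. sum_s tau^s_lam / (u - [s]) = Y_lam(u)^{-1} = u^{-1} T_lam(u).
lam = (2, 1)
expansion = sum(sp.residue(1 / Y(lam, u), u, content(s)) / (u - content(s))
                for s in addable(lam))
assert sp.simplify(expansion - 1 / Y(lam, u)) == 0
```

The final assertion is the first Whittaker condition (351) written componentwise: \(\mathcal{Y}_{\lambda}(u)^{-1}\) vanishes at infinity with simple poles exactly at the addable boxes, so it equals the sum of its residue terms.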
**Proposition 5.1**.: _The following operators \(\mathcal{F}\to\mathcal{F}(z)\)_ \[X^{+}(z)=A\frac{1}{z-\mathcal{L}}\pi_{0},\qquad X^{-}(z)=\pi_{0}\frac{1}{z- \mathcal{L}}A^{\dagger} \tag{338}\] \[\mathcal{Y}(z)=(\hbar\mathcal{N})^{-1}A\frac{1}{z-\overline{\varepsilon}- \mathcal{L}}A^{\dagger},\qquad\mathcal{Y}(z)^{-1}=\pi_{0}\frac{1}{z-\mathcal{ L}}, \tag{339}\] _where \(A=\pi_{0}\mathcal{L}w,\,A^{\dagger}=w^{-1}\mathcal{L}\pi_{0}\), reproduce the rank \(r=1\) Bourgine-Matsuo-Zhang Holomorphic field realization of \(\operatorname{SH}^{c}\) in terms of the action of the Nazarov-Skylanin quantum Lax operator \(\mathcal{L}\)._ Proof.: (340) \[X^{+}(z)j_{\lambda} = (\pi_{0}\mathcal{L}w)\frac{1}{z-\mathcal{L}}\sum_{s\in\mathcal{A }_{\lambda}}\tau_{\lambda}^{s}\psi_{\lambda}^{s}\] (341) \[= \sum_{s\in\mathcal{A}_{\lambda}}\frac{1}{z-[s]}\tau_{\lambda}^{s} (\pi_{0}\mathcal{L}w)\psi_{\lambda}^{s}\] \[= \sum_{s\in\mathcal{A}_{\lambda}}\frac{1}{z-[s]}\tau^{s}_{\lambda}j_{ \lambda+s}, \tag{342}\] which agrees with 334, using the formulae: \[\tau^{s}_{\lambda}:=\operatorname*{Res}_{u=[s]}\mathcal{Y}_{\lambda}(u)^{-1}, \qquad s\in\mathcal{A}_{\lambda}, \tag{343}\] \[\tilde{\tau}^{t}_{\lambda}:=\operatorname*{Res}_{u=[t]}\mathcal{Y}_{\lambda}(u ),\qquad t\in\mathcal{R}^{+}_{\lambda}. \tag{344}\] Secondly, we have \[X^{-}(z)j_{\lambda} = \pi_{0}\frac{1}{z-\mathcal{L}}(w^{-1}\mathcal{L})j_{\lambda} \tag{346}\] \[= \pi_{0}\frac{1}{z-\mathcal{L}}q_{\lambda}\] (347) \[= \pi_{0}\frac{1}{z-\mathcal{L}}\sum_{t\in\mathcal{R}^{+}_{\lambda }}\tilde{\tau}^{t}_{\lambda}\psi^{t}_{\lambda-t}\] (348) \[= \pi_{0}\sum_{t\in\mathcal{R}_{\lambda}}\frac{1}{z-([t]-\overline {\varepsilon})}\tilde{\tau}^{t}_{\lambda}\psi^{t}_{\lambda-t}\] (349) \[= \sum_{t\in\mathcal{R}_{\lambda}}\frac{1}{z-([t]-\overline{ \varepsilon})}\tilde{\tau}^{t}_{\lambda}j_{\lambda-t}, \tag{345}\] which agrees with 335. #### 5.2.1. The Gaiotto State Consider the following so called Gaiotto state in \(\mathcal{F}\), \[G=e^{V_{1}/\hbar}=\sum_{n=0}^{\infty}V_{1}^{n}/|V_{1}^{n}|^{2}=\sum_{\lambda} j_{\lambda}/|j_{\lambda}|^{2} \tag{350}\] The Gaiotto state is characterized as a Whittaker vector, which in the holomorphic presentation is the statement ([4] 3.12/13 with \(X^{+}=D_{+1}\)) \[X^{-}(u)G=\mathcal{Y}(u)^{-1}G,\qquad X^{+}(u)G=P_{u}^{-}\mathcal{Y}(u+ \overline{\varepsilon})G. \tag{351}\] The \(j_{\lambda}\) component of these two equations reproduce the pole expansions \[\sum_{s}\frac{\tau^{s}_{\lambda}}{u-[s]}=u^{-1}T_{\lambda}(u),\qquad\sum_{s} \frac{\kappa^{s}_{\lambda}}{u-[s]}=(u+\overline{\varepsilon})\,T_{\lambda}(u +\overline{\varepsilon})^{-1} \tag{352}\] In other words, the Whittaker condition for \(G\) is equivalent to the Kerov identities 43. #### 5.2.2. The Half-Boson The Holomorphic presentation can be alternately described (Bourgine [4] 1809 2.20) in terms of the following _half-boson4_\(\Phi(z)\), Footnote 4: The name refers to that face that only one half of the Fourier modes of a usual free boson are present. \[\Phi(z)=\log(z)\Phi_{0}-\sum_{n=1}^{\infty}\frac{1}{nz^{n}}\Phi_{n}, \tag{353}\] which acts diagonally on the Jack states \(j_{\lambda}=|\lambda\rangle\) as \[\Phi_{n}|\lambda\rangle\coloneqq\left(\sum_{s\in\lambda}[s]^{n}\right)| \lambda\rangle,\qquad\partial\Phi(z)|\lambda\rangle=\left(\sum_{s\in\lambda} \frac{1}{z-[s]}\right)|\lambda\rangle. 
\tag{354}\] The eigenvalues of the half-boson current \(\partial\Phi\) are recognisable in our earlier formulae for the cokernel relation (152), \[\partial\Phi_{\lambda}(z)\coloneqq\sum_{s\in\lambda}\frac{1}{z-[s]}, \tag{355}\] which was the original motivation for the re-interpretation of the earlier work in this paper in the language of the SH\({}^{c}\). The half-boson current has the following commutation relation with the SH\({}^{c}\) operators \[[\partial\Phi(x),X^{\pm}(w)]=\pm\frac{X^{\pm}(w)}{z-w}. \tag{356}\] #### 5.2.3. Flavor vertex The _flavor vertex operator_\(\mathcal{U}\) (c.f. [3] 3.18 D.2, [5] 2.14) is given by \[\mathcal{U}|\lambda\rangle=\left(\prod_{s\in\lambda^{\times}}[s]\right)| \lambda\rangle,\quad\mathcal{U}|\emptyset\rangle=0. \tag{357}\] We notice that the eigenvalue of this operator is the familiar constant \(\varpi_{\lambda}\) (c.f. 27). Equivalently, we find its action as implementing the switch between the hatted and unhatted Jacks, i.e. \(\mathcal{U}\cdot\hat{j}_{\lambda}=j_{\lambda}\). The action on the Gaiotto state is given simply by: **Lemma 5.2**.: _We have \(\mathcal{U}|G\rangle=|H\rangle\), where \(|H\rangle\in\mathcal{F}\) is given by_ \[|H\rangle\coloneqq\sum_{n=1}^{\infty}V_{n}/|V_{n}|^{2}=\mathcal{L}^{-1}\sum_{ n\geq 1}w^{n}=\sum_{\lambda\neq\emptyset}\hat{j}_{\lambda}/|\hat{j}_{\lambda}|^{2}. \tag{358}\] ### Generalized Whittaker condition In this section, we are going to show that the main result (Theorem 4.26) of this paper can be understood as a generalization of the Whittaker condition for the Gaiotto state \(G\) (351). **Corollary 5.3**.: _The first Whittaker condition for \(|G\rangle\) (351) is equivalent to the following condition for \(|H\rangle:=\mathcal{U}|G\rangle\) (c.f. 5.2)_ \[X^{-}(z)|H\rangle=P_{z}^{-}\left(\mathcal{T}(z)\right)|H\rangle+z^{-1}, \tag{359}\] _where, \(\mathcal{T}(z)=z\mathcal{Y}(z)^{-1}\), and \(P_{z}^{-}\) denotes projection onto only negative powers of \(z\)._ Corollary 5.3 can be proved easily in the language of [3], which we omit here. We note that by expanding out in components, we find that the \(|\lambda\rangle\) component of this relation gives the following familiar pole expansion \[\sum_{s}\frac{\hat{\tau}_{\lambda}^{s}}{z-[s]}=T_{\lambda}(z)-1. \tag{360}\] We note the following expansion \[X^{\pm}(z)=V_{1}^{\pm}z^{-1}+O(z^{-2}) \tag{361}\] where \(V_{1}^{+}:=V_{1}\cdot\) and \(V_{1}^{-}:=V_{1}^{\dagger}=\hbar\partial_{V_{1}}\) and hence from 356 we have \[[\partial\Phi(x),V_{1}^{\pm}]=\pm X^{\pm}(z). \tag{362}\] Motivated by this, we define the generalized box creation operators: \[X_{\lambda}^{\pm}(z):=\pm[\partial\Phi(x),\hat{j}_{\lambda}^{\pm}]. \tag{363}\] We now show the extension of 359 to these generalized operators, **Proposition 5.4** (Generalized Whittaker Condition).: _The main theorem 4.26 is equivalent to the following condition for \(H\),_ \[X_{\lambda}^{-}(z)|H\rangle=P_{z}^{-}\left(\prod_{s\in\lambda}\mathcal{T}(z-[ s])\right)|H\rangle+z^{-1}. \tag{364}\] Proof.: The \(|\hat{\mu}\rangle\) component of the left hand side is given by \[P_{z}^{-}\left(\prod_{s\in\mu}\mathcal{T}(z-[s])\right)|\hat{\mu}\rangle/| \hat{\mu}|^{2}=P_{z}^{-}\left(T_{\mu\star\lambda}(z)\right)|\hat{\mu}\rangle/ |\hat{\mu}|^{2}=\left(T_{\mu\star\lambda}(z)-1\right)|\hat{\mu}\rangle/|\hat{ \mu}|^{2}. 
\tag{365}\] On the other hand the right hand side is \[-[\partial\Phi(z),\hat{j}_{\lambda}^{\dagger}]|H\rangle = \sum_{\sigma}[\hat{j}_{\lambda}^{\dagger},\partial\Phi(z)]|\hat{ \sigma}\rangle/|\hat{\sigma}|^{2} \tag{367}\] \[= \sum_{\sigma}\sum_{\mu}\hat{c}_{\lambda\mu}^{\sigma}\left( \partial\Phi_{\sigma}(z)-\partial\Phi_{\mu}(z)\right)|\hat{\mu}\rangle/|\hat{ \mu}|^{2}, \tag{366}\] where we have used \[\hat{j}_{\lambda}^{\dagger}\frac{|\hat{\sigma}\rangle}{|\hat{\sigma}|^{2}}= \sum_{\mu}\hat{c}_{\lambda\mu}^{\sigma}\frac{|\hat{\mu}\rangle}{|\hat{\mu}|^{ 2}}. \tag{368}\] Equating the \(|\hat{\mu}\rangle\) components from both sides we recover the formula 4.26. Finally, we write the relation 160 in the language of the holomorphic representation. **Lemma 5.5**.: (369) \[(w^{-1}-1)\mathcal{L}\partial\Phi(z)|H)=\frac{\mathcal{L}}{z-\mathcal{L}}|H)+z^{-1}.\] Proof.: We start with the formula 160 \[\frac{1}{z-\mathcal{L}}w^{n}=\sum_{\gamma^{+}(n+1)}\left(\sum_{t\in\gamma} \frac{1}{z-[t]}\right)\frac{\hat{q}_{\gamma}}{|\hat{j}_{\gamma}|^{2}}-\sum_{ \lambda\vdash n}\left(\sum_{s\in\lambda}\frac{1}{z-[s]}\right)\frac{w\hat{q}_ {\lambda}}{|\hat{j}_{\lambda}|^{2}}. \tag{370}\] which we sum over \(n\) to and use the formulae 358 to write this as \[\frac{\mathcal{L}}{z-\mathcal{L}}|H)=w^{-1}\mathcal{L}\partial\Phi(z)(|H)-V_ {1}/\hbar)-\mathcal{L}\partial\Phi(u)|H). \tag{371}\] Upon rearranging we recover the result. We see that equation 369 is a lifting of the Whittaker condition 359, which is recovered under \(\pi_{0}\). #### 5.3.1. Basic Evaluation Map We can now re-express our Main Theorem 4.26 in terms of objects of the SH\({}^{c}\): **Proposition 5.6**.: _The 'basic' evaluation map \(\Delta:\mathcal{F}\to\mathbb{C}_{\varepsilon}(u)\), defined by_ \[\Delta:\zeta\mapsto\left\langle\zeta\right|\partial\Phi(u)\,\mathcal{U}\,|G), \tag{372}\] _i.e. on the hatted Jack basis it is given by_ \[\Delta(\hat{j}_{\lambda})\coloneqq\sum_{b\in\lambda}\frac{1}{u-[b]}, \tag{373}\] _satisfies the following multiplicative property_ \[\Delta(\hat{j}_{\mu}\cdot\hat{j}_{\nu})=T_{\mu\star\nu}(u)-1. \tag{374}\] Proof.: We compute \[\Delta(\hat{j}_{\mu}\cdot\hat{j}_{\nu}) = \left\langle\hat{j}_{\mu}\cdot\hat{j}_{\nu}\right|\partial\Phi(u )\,\mathcal{U}\,|G) \tag{376}\] \[= \left\langle\hat{j}_{\nu}|\hat{j}_{\mu}^{\dagger}\partial\Phi(u) |H\right\rangle\] (377) \[= \left\langle\hat{j}_{\nu}|X_{\mu}^{-}(u)|H)+\left\langle\hat{j}_ {\nu}|\partial\Phi(u)\hat{j}_{\mu}^{\dagger}|H\right\rangle\] (378) \[= \left\langle\hat{j}_{\nu}|P_{z}^{-}\left(\prod_{s\in\lambda} \mathcal{T}(z-[s])\right)|H)+z^{-1}\langle\hat{j}_{\nu}|1\rangle+\langle\hat{ j}_{\nu}|\partial\Phi(u)|1\rangle\] (379) \[= \left(T_{\mu\star\nu}(u)-1\right)\langle\hat{j}_{\nu}|H\rangle, \tag{375}\] where we have used the generalized Whittaker condition 364 and \(\hat{j}_{\mu}^{\dagger}|H\rangle=1\) We call this the _basic_ evaluation because for \(\zeta\in\mathcal{F}\), we have \(\boldsymbol{y}_{u}(\zeta)=\Delta(j_{1}\cdot\zeta)\). In terms of the usual normalization on Jack functions, we have \[\Delta(j_{\lambda}):=\varpi_{\lambda}\sum_{b\in\lambda}\frac{1}{u-[b]}. \tag{380}\] **Example 5.7**.: _We re-express the previous example 4.27 in terms of this new operator. Let \(\mu=1^{2},\nu=2\). We have_ \[\Delta(j_{1^{2}})=[1,0]\left(\frac{1}{u-[1,0]}+\frac{1}{u-[0,0]}\right),\quad \Delta(j_{2})=[0,1]\left(\frac{1}{u-[0,1]}+\frac{1}{u-[0,0]}\right).\] _We know (e.g. 
from Stanley's Pieri rule 29) that_ \[j_{1^{2}}\cdot j_{2}=\frac{[0,-2]}{[2,-2]}j_{1^{2}2}+\frac{[2,0]}{[2,-2]}j_{3,1}.\] _We then have_ \[\Delta(j_{1^{2}}\cdot j_{2}) = \frac{[0,-2]}{[2,-2]}\Delta(j_{1^{2}2})+\frac{[2,0]}{[2,-2]} \Delta(j_{3,1})\] \[= \frac{[0,-2]}{[2,-2]}[2,0][1,0][0,1]\left(\frac{1}{u-[2,0]}+\frac{ 1}{u-[1,0]}+\frac{1}{u-[0,1]}+\frac{1}{u-[0,0]}\right)\] \[+\frac{[2,0]}{[2,-2]}[0,2][0,1][1,0]\left(\frac{1}{u-[0,2]}+\frac{ 1}{u-[1,0]}+\frac{1}{u-[0,1]}+\frac{1}{u-[0,0]}\right)\] \[= \frac{[2,0]}{[2,-2]}[0,2][0,1][1,0]\left(\frac{1}{u-[0,2]}-\frac{ 1}{u-[2,0]}\right).\] _On the other hand, we have_ \[T_{1^{2}\star 2}(u)=T_{2^{2}}(u)=\frac{(u-[0,0])(u-[2,2])}{(u-[2,0])(u-[0,2])}.\] _Expanding this in poles we find_ \[T_{1^{2}\star 2}(u)=\frac{[2,0][0,-2]}{[2,-2]}\frac{1}{u-[2,0]}+\frac{[0,2][-2,0]}{ [-2,2]}\frac{1}{u-[0,2]}+1.\] _Thus we see that_ \[\Delta(j_{1^{2}}\cdot j_{2})=\varpi_{1^{2}}\varpi_{2}\left(T_{1^{2}\star 2}(u)- 1\right).\] #### 5.3.2. Kernel The map \(\Delta\) is not injective. The following result follows from the definition of \(\Delta\) (373) by direct calculation. **Proposition 5.8**.: _For any collection of partitions \(\Lambda=\{\lambda_{i}\}_{i=1}^{N}\), where \(N\) is even and where \(\lambda_{i}\) and \(\lambda_{i+1\,\mathrm{mod}\,N}\) differ by moving a single box, and where every box appears an even number of times (in total), the element_ \[k_{\Lambda}=\sum_{i}^{N}(-1)^{i}\hat{j}_{\lambda_{i}} \tag{381}\] _is in the kernel of \(\Delta\)._ We call such a collection of partitions an \(N\)-_cycle_, motivated by the following examples. **Example 5.9**.: _For example, in degree 8, the 4-cycle of partitions (with markers to track the moving boxes)_ \[\begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD} \tag{382}\] _gives the kernel element,_ \[k_{\Lambda}:=\hat{j}_{1,3,4}-\hat{j}_{2^{2},4}+\hat{j}_{1,2^{2},3}-\hat{j}_{1 ^{2},3^{2}}\in\ker\Delta_{8}. 
\tag{383}\] **Example 5.10**.: _In degree 7, the 6-cycle of partitions (with markers to track the moving boxes)_ \[\begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD}\to \begin{CD}\includegraphics[width=142.26378pt]{images/1-crop.pdf}\end{CD} \tag{384}\] _gives the kernel element,_ \[k_{\Lambda}:=\hat{j}_{2,5}-\hat{j}_{3,4}+\hat{j}_{1,3^{2}}-\hat{j}_{1^{2},2,3}+ \hat{j}_{1^{3},4}-\hat{j}_{1^{2},5}\in\ker\Delta_{7}. \tag{385}\] _Note that \(\ker\Delta_{7}=\operatorname{Span}\{k_{\Lambda},k_{\Lambda^{\prime}}\}\) is the first non-empty kernel._ ## 6. Structure of twists In this final section, we outline some conjectures that hope to further illuminate the structure of the algebra of Lax eigenfunctions, in particular we seek to provide a natural explanation of the trace twist property (283), that is, \[\operatorname{Tr}_{n}(\beta(\zeta,\xi))=\rho_{*}\circ\operatorname{Tr}_{n-1} (\theta(\zeta,\xi)),\] where as before \(\rho_{*}:(\{x_{\gamma}\},\{y^{s}\},\{0\})\mapsto(\{0\},\{y^{s}\},\{-x_{\gamma}\})\). ### The \(\rho\) operator We begin by introducing a peculiar operator. **Definition 6.1**.: _Consider the following operator, defined for \(s\in\mathcal{A}_{\lambda}\),_ \[\rho_{\lambda}^{s}:\mathbb{z}_{\lambda}^{0}\to\bigoplus_{\gamma\nearrow(\lambda+ s)}\mathbb{x}_{\gamma}^{0}\subset\mathcal{H}_{|\lambda|+1},\qquad\rho_{\lambda}^{s}: \hat{\psi}_{\lambda}^{t}-\hat{\psi}_{\lambda}^{s}\mapsto\hat{\psi}_{\lambda+ s}^{t}-\hat{\psi}_{\lambda+t}^{s}. \tag{386}\] _The operator \(\rho_{\lambda}^{s}\) is expressed in the basis of \(\mathcal{Z}_{\lambda}^{0}\) given by \(\{\hat{\psi}_{\lambda}^{t}-\hat{\psi}_{\lambda}^{s}\}_{t\in\mathcal{A}_{\lambda },t\neq s}\)._ The next result follows directly from the definition. **Corollary 6.2**.: _For \(\zeta\in\mathcal{Z}_{\lambda}^{0}\), i.e. \(\zeta\in\mathcal{Z}_{\lambda}\) and \(\boldsymbol{z}_{\lambda}(\zeta)=0\), the map \(\rho_{\lambda}^{s}\) has the properties_ \[\boldsymbol{y}_{u}(\rho_{\lambda}^{s}\zeta)=\boldsymbol{y}_{u}(\zeta),\qquad \boldsymbol{z}_{\gamma}(\rho_{\lambda}^{s}\zeta)=-\boldsymbol{x}_{\gamma}( \zeta),\qquad\boldsymbol{x}_{\gamma}(\rho_{\lambda}^{s}\zeta)=0, \tag{387}\] _for all \(\gamma\,\vdash\,|\lambda|+1\). In other words, \(\rho_{\lambda}^{s}\) induces the trace twist (252)_ \[\mathrm{Tr}_{n+1}(\rho_{\lambda}^{s}\zeta)=\rho_{*}\circ\mathrm{Tr}_{n}(\zeta). \tag{388}\] Using corollary 6.2, the following result gives a direct explanation of the first example (251) of the twisted trace property. 
**Proposition 6.3**.: (389) \[\beta(\hat{\psi}_{1}^{v},\hat{\psi}_{\lambda}^{s})=\rho_{\lambda}^{s}(\hat{j }_{\lambda}).\] Proof.: Starting from the lifted Pieri formula 3.7, we find \[(\mathcal{L}-[v+s])\,(\psi_{1}^{v}\cdot\psi_{\lambda}^{s}) = \sum_{u\in\mathcal{A}_{\lambda+s}}[v]\frac{[s-u+\overline{v}][s-u +v]}{[s-u][s-u+v+\overline{v}]}\,\tau_{\lambda+s}^{u}\psi_{\lambda+s}^{u} \tag{391}\] \[+\sum_{t\in\mathcal{A}_{\lambda},t\neq s}[-v]\tau_{\lambda}^{t} \psi_{\lambda+t}^{s} \tag{390}\] Note that if \(u=s+v\) or \(u=s+\overline{v}\) are minima of \(\lambda+s\), then their coefficients in the first sum vanish. Then, using 445, we have \[(\mathcal{L}-[v+s])(\hat{\psi}_{1}^{v}\cdot\hat{\psi}_{\lambda}^{s}) = \sum_{t\in\mathcal{A}_{\lambda},t\neq s}\hat{\tau}_{\lambda}^{t} \left(\hat{\psi}_{\lambda+s}^{t}-\hat{\psi}_{\lambda+t}^{s}\right) \tag{393}\] \[= \sum_{t\in\mathcal{A}_{\lambda},t\neq s}\hat{\tau}_{\lambda}^{t} \,\rho_{\lambda}^{s}\left(\hat{\psi}_{\lambda}^{t}-\hat{\psi}_{\lambda}^{s}\right)\] (394) \[= \rho_{\lambda}^{s}\sum_{t\in\mathcal{A}_{\lambda},t\neq s}\hat{ \tau}_{\lambda}^{t}\,\left(\hat{\psi}_{\lambda}^{t}-\hat{\psi}_{\lambda}^{s}\right)\] (395) \[= \rho_{\lambda}^{s}\sum_{t\in\mathcal{A}_{\lambda}}\hat{\tau}_{ \lambda}^{t}\,\hat{\psi}_{\lambda}^{t}. \tag{392}\] Since \(\sum_{t\in\mathcal{A}_{\lambda}}\hat{\tau}_{\lambda}^{t}=0\) implies that \(\sum_{t\in\mathcal{A}_{\lambda};t\neq s}\hat{\tau}_{\lambda}^{t}=-\hat{\tau}_ {\lambda}^{s}\). #### 6.1.1. Generalization Next, we look at extending the relation 6.3 to all higher degrees to understand the relationship between the total traces of \(\beta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t})\) and \(\theta(\hat{\psi}_{\lambda}^{s},\hat{\psi}_{\nu}^{t})\) for all \(\lambda,\nu\). **Definition 6.4**.: _For \(\zeta=\sum_{\lambda}\zeta_{\lambda}\in\mathcal{Z}^{0}\), i.e. \(\zeta_{\lambda}:=P_{\mathcal{Z}_{\lambda}}(\zeta)\) and \(\boldsymbol{z}_{\lambda}(\zeta_{\lambda})=0,\forall\lambda\), we define_ \[\rho(\xi)\zeta:=\sum_{\lambda,s}\xi_{\lambda}^{s}\rho_{\lambda}^{s}\zeta_{ \lambda},\text{ where }\xi=\sum_{\lambda,s}\xi_{\lambda}^{s}\hat{\psi}_{\lambda}^{s}. \tag{396}\] Note that we have \(\rho(\hat{\psi}^{s}_{\lambda})=\rho^{s}_{\lambda}\). With this, we can rewrite 6.3 more suggestively as \[\beta(\hat{\psi}^{v}_{1},\hat{\psi}^{s}_{\lambda})=\rho(\hat{\psi}^{s}_{\lambda}) \theta(\hat{\psi}^{v}_{1},\hat{\psi}^{s}_{\lambda}). 
\tag{397}\] **Lemma 6.5**.: * _For_ \(\xi,\zeta\in\mathbb{z}^{0}_{\lambda}\subset\mathbb{z}_{\lambda}\)_, we have_ \(\rho(\xi)\zeta\in\ker\operatorname{Tr}\)_._ * _For_ \(\zeta\in\mathbb{z}^{0}_{\lambda}\)_, we have_ \(\rho(\zeta)\zeta=0\)_, and hence_ \(\rho(\xi)\zeta=-\rho(\zeta)\xi\)_._ Proof.: We check \[\rho^{a}_{\lambda}-\rho^{b}_{\lambda}(\hat{\psi}^{c}_{\lambda}- \hat{\psi}^{a}_{\lambda}) = \hat{\psi}^{c}_{\lambda+a}-\hat{\psi}^{a}_{\lambda+c}-\rho^{b}_{ \lambda}\left(\hat{\psi}^{c}_{\lambda}-\hat{\psi}^{b}_{\lambda}+\hat{\psi}^{b} _{\lambda}-\hat{\psi}^{a}_{\lambda}\right) \tag{399}\] \[= \hat{\psi}^{c}_{\lambda+a}-\hat{\psi}^{c}_{\lambda+b}+\hat{\psi}^ {b}_{\lambda+c}-\hat{\psi}^{b}_{\lambda+a}+\hat{\psi}^{a}_{\lambda+b}-\hat{ \psi}^{a}_{\lambda+c}=\Gamma^{a,b,c}_{\lambda} \tag{398}\] so \[\rho^{a}_{\lambda}-\rho^{b}_{\lambda}:\hat{\psi}^{c}_{\lambda}-\hat{\psi}^{d}_ {\lambda}\mapsto\Gamma^{a,b,c}_{\lambda}-\Gamma^{a,b,d}_{\lambda}\in\ker W_{| \lambda|+1} \tag{400}\] Thus we have (where \(u=\sum_{a\neq b}u_{a}(\hat{\psi}^{a}_{\lambda}-\hat{\psi}^{b}_{\lambda})\)) \[\rho(u)u = \left(\sum_{a\neq b}u_{a}\rho(\hat{\psi}^{a}_{\lambda}-\hat{\psi} ^{b}_{\lambda})\right)\left(\sum_{c\neq b}u_{c}(\hat{\psi}^{c}_{\lambda}-\hat {\psi}^{b}_{\lambda})\right) \tag{402}\] \[= \sum_{a\neq b}\sum_{c\neq b}u_{a}u_{c}(\Gamma^{a,b,c}_{\lambda}- \Gamma^{a,b,b}_{\lambda})\] (403) \[= \sum_{a\neq b}\sum_{c\neq b}u_{a}u_{c}(\Gamma^{a,b,c}_{\lambda})\] (404) \[= 0. \tag{401}\] As this is a symmetric sum over an asymmetric symbol, it vanishes. We then have a generalization of proposition 6.3, **Corollary 6.6**.: _For general \(\zeta\in\mathcal{H}\), we have_ \[\rho(\zeta)\hat{j}_{\lambda}=\beta(w,\zeta). \tag{405}\] ### Conjectures Here we extend the result (387) to further understand the relationship between the traces and the more general \(\rho\) map (6.4). **Corollary 6.7**.: _For \(\zeta\in\mathbb{z}^{0}\), i.e. \(\mathbb{z}_{\lambda}(\zeta)=0,\forall\lambda\), the map \(\rho\) has the properties_ \[\mathbb{z}_{\gamma}(\rho(\xi)\zeta)=-\sum_{t\in\mathcal{R}_{\gamma}}\mathbb{z }_{\gamma-t}(\xi)\mathbb{z}_{\gamma}(P_{\mathbb{z}_{\gamma-t}}\zeta), \tag{406}\] \[\mathbb{z}_{\gamma}(\rho(\xi)\zeta)=-\sum_{t\in\mathcal{R}_{\gamma}}\mathbb{z }_{\gamma-t}(\xi)\mathbb{z}_{\gamma}(P_{\mathbb{z}_{\gamma-t}}\zeta), \tag{407}\] \[\boldsymbol{x}_{\gamma}(\rho(\xi)\zeta)=0, \tag{408}\] _for all \(\gamma\,\vdash\,|\lambda|+1\)._ **Definition 6.8**.: _Let \(\xi=\sum_{\lambda}\xi_{\lambda}\in\mathcal{H}\), where \(\xi_{\lambda}:=P_{\mathbb{Z}_{\lambda}}(\xi)\). We say \(\xi\) is "\(\mathrm{good}\)" if for each \(\lambda\), if \(\xi_{\lambda}\neq 0\) then \(\boldsymbol{z}_{\lambda}(\xi_{\lambda})\neq 0\). For good \(\xi\) we define_ \[F(\xi)\coloneqq\sum_{\lambda\,:\,\xi_{\lambda}\neq 0}\frac{\xi_{\lambda}}{ \boldsymbol{z}_{\lambda}(\xi_{\lambda})}. \tag{409}\] With this, we can construct a prototype explanation for the trace twisting property 283: **Corollary 6.9**.: _If \(\xi=\sum_{\lambda}\xi_{\lambda}\in\mathcal{H}_{n}\) satisfies \(\boldsymbol{z}_{\lambda}(\xi)\neq 0\) for all \(\lambda\), (which implies that \(\lambda\) is "good"), then_ \[\mathrm{Tr}_{n+1}(\rho(F(\xi))\zeta)=\rho_{*}\circ\mathrm{Tr}_{n}(\zeta). \tag{410}\] There is a canonical element in each degree \(n\) that satisfies the properties of this corollary: \(w^{n}\). It's easy to see: \[F(w^{n})=\sum_{\lambda}\frac{w\hat{q}_{\lambda}}{n\hbar}=\frac{1}{n\hbar} \mathcal{L}\sum_{\lambda}\hat{j}_{\lambda}. 
\tag{411}\] **Lemma 6.10**.: _For \(\zeta\in\mathcal{F}\subset\mathcal{Z}_{n}^{0}\), we have_ \[\rho(F(w^{n}))\zeta=\tfrac{1}{n\hbar}\beta(w,\mathcal{L}\zeta)=\left(\tfrac{1} {n\hbar}\sum_{k\geq 1}\beta(w,w^{k})V_{-k}\right)\zeta. \tag{412}\] Proof.: let \(\zeta=\sum_{\lambda}\zeta_{\lambda}\hat{j}_{\lambda}\). then \[\rho(F(w^{n}))\zeta = \tfrac{1}{n\hbar}\sum_{\lambda,s}v_{\lambda}[s]\hat{\tau}_{ \lambda}^{s}\rho_{\lambda}^{s}\hat{j}_{\lambda} \tag{414}\] \[= \tfrac{1}{n\hbar}\sum_{\lambda,s}\zeta_{\lambda}[s]\hat{\tau}_{ \lambda}^{s}\beta(w,\hat{\psi}_{\lambda}^{s})\] (415) \[= \tfrac{1}{n\hbar}\beta(w,\mathcal{L}\sum_{\lambda}\zeta_{\lambda }\hat{j}_{\lambda})\] (416) \[= \tfrac{1}{n\hbar}\beta(w,\mathcal{L}\zeta)\] (417) \[= \tfrac{1}{n\hbar}\beta(w,\sum_{k}w^{k}V_{-k}\zeta). \tag{413}\] **Corollary 6.11**.: (418) \[\beta^{1,n}=\rho(F(w^{n}))\theta^{1,n}\] Proof.: We know \(\theta^{1,n}=V_{n}\), so \(\rho(F(w^{n}))\theta^{1,n}=\frac{1}{n\hbar}\beta(w,\mathcal{L}V_{n})=\beta(w,w ^{n})\). **Conjecture 6.12**.: _For \(\zeta\in\mathrm{z}_{n}^{0}\)_ \[\rho(F(w^{n}))\zeta=\left(\tfrac{1}{n\hbar}\sum_{k\geq 1}\beta(w,w^{k})V_{-k} \right)\zeta. \tag{419}\] Note that by expanding out \(\beta^{m,n}\) (and \(\theta^{m,n}\)) in terms of \(w^{k}\) and \(\beta^{1,k^{\prime}}\) (\(\theta^{1,k^{\prime}}\in\mathcal{F}\)) for \(k,k^{\prime}\) using the fact that they are Hochschild closed, this conjecture would imply that \[\beta^{n,m}=\rho(F(w^{n+m-1}))\theta^{n,m}. \tag{420}\] We define \[\tilde{\rho}\coloneqq\rho(F(w^{n})). \tag{421}\] Following from the trace theorem, we know that **Corollary 6.13**.: _In general, \(\theta\) and \(\beta\) are related by_ \[\beta(\zeta,\xi)=\tilde{\rho}\cdot\theta(\zeta,\xi)+K(\zeta,\xi) \tag{422}\] _where \(K(\zeta,\xi)\in\ker\mathrm{Tr}\)._ Although we have shown that \(\theta(\zeta,\xi)\) and \(\beta(\zeta,\xi)\) have twisted traces, we might be tempted to claim that there exists some general elements \(\gamma(\zeta,\xi)\) such that, \[\beta(\zeta,\xi)=\rho(F(\gamma(\zeta,\xi)))\theta(\zeta,\xi), \tag{423}\] which would give a direct explanation of (283). However, this is impossible in general since \(\theta(w,\hat{\psi}_{\lambda}^{s}-\hat{\psi}_{\lambda}^{t})=0\), however \(\beta(w,\hat{\psi}_{\lambda}^{s}-\hat{\psi}_{\lambda}^{t})\neq 0\). **Conjecture 6.14**.: _For any \(\zeta,\xi\) such that \(\partial\Pi(\zeta,\xi)\) is "good" (c.f. 6.8), then the following holds_ \[\beta(\zeta,\xi)=\rho(F(\partial\Pi(\zeta,\xi))\theta(\zeta,\xi). \tag{424}\] A proof of this conjecture would immediately yield the twisted trace property 283 for all good pairs \(\zeta,\xi\), which is a generic condition. We notice that the earlier conjectured relation 420 is a special case of conjecture 6.14. We have also checked it computationally up to degree \(|\zeta|+|\xi|=6\). To explore this conjecture, we return to the non-'good' case above of \(\zeta=w,\xi=\hat{\psi}_{\lambda}^{s}-\hat{\psi}_{\lambda}^{t}\), rather now we perturb by small \(a\neq 1\), to \(\xi_{a}=\hat{\psi}_{\lambda}^{s}-a\hat{\psi}_{\lambda}^{t}\), where we find the conjecture holds \[\rho(F(\partial\Pi(w,\hat{\psi}_{\lambda}^{s}-a\hat{\psi}_{\lambda}^{t}))) \theta(w,\hat{\psi}_{\lambda}^{s}-a\hat{\psi}_{\lambda}^{t}) = \rho\left(\frac{\pi_{+}(\hat{\psi}_{\lambda}^{s}-a\hat{\psi}_{ \lambda}^{t})}{1-a}\right)(1-a)\hat{j}_{\lambda} \tag{425}\] \[= \rho(\hat{\psi}_{\lambda}^{s}-a\hat{\psi}_{\lambda}^{t})\hat{j}_{\lambda} \tag{427}\] \[= \beta(w,\hat{\psi}_{\lambda}^{s}-a\hat{\psi}_{\lambda}^{t}). 
\tag{426}\] We see directly that the denominator of \(F(\partial\Pi(\zeta,\xi))\) cancels out the vanishing coefficient \((1-a)\) of \(\hat{j}_{\lambda}\) in \(\theta(\zeta,\xi)\). In general, the following result shows this cancellation always happens. **Lemma 6.15**.: (428) \[\pi_{0}P_{\mathbb{Z}_{\lambda}}\theta(\xi,\zeta)=\boldsymbol{z}_{\lambda} \left(\partial\Pi(\xi,\zeta)\right)\hat{j}_{\lambda}.\] Proof.: From equation 4.21 that we have \(\pi_{0}\theta(\xi,\zeta)=\pi_{0}\mathcal{L}\partial\Pi(\xi,\zeta)\). Then, we know from equation 196 that for any \(\Gamma\), we have \(\pi_{0}\mathcal{L}\Gamma=\sum_{\lambda}\boldsymbol{z}_{\lambda}(\Gamma)\hat{ j}_{\lambda}\). There is reason to doubt that conjecture 6.14 holds in higher degrees due to the behaviour of the term 282. However, we show a substantial case in which conjecture 6.14 holds. **Proposition 6.16**.: _Conjecture 6.14 holds in the case where \(\xi=\hat{\psi}_{\lambda}^{t}\) and \(\zeta=\Gamma w\), where \(\Gamma\in\mathcal{F}\). In particular, that is,_ \[\beta(w\Gamma,\hat{\psi}_{\lambda}^{t})=\rho(F(\partial\Pi(w\Gamma,\hat{\psi} _{\lambda}^{t})))\theta(w\Gamma,\hat{\psi}_{\lambda}^{t}). \tag{429}\] Proof.: We start by expanding \(\Gamma=\sum_{\sigma}\Gamma_{\sigma}\hat{j}_{\sigma}\), to find \[\theta(w\Gamma,\hat{\psi}_{\lambda}^{t})=\Gamma\cdot\theta(w,\hat{\psi}_{ \lambda}^{t})=\Gamma\cdot\hat{j}_{\lambda}=\sum_{\mu,\sigma}\Gamma_{\sigma} \hat{c}_{\sigma\lambda}^{\mu}\hat{j}_{\mu}. \tag{430}\] For the next step, we start with \[\partial\Pi(w\Gamma,\hat{\psi}_{\lambda}^{t})=\Gamma\cdot\pi_{+}\hat{\psi}_{ \lambda}^{t}. \tag{431}\] We then use formula 195 to show \[\boldsymbol{z}_{\mu}(\Gamma\cdot\pi_{+}\hat{\psi}_{\lambda}^{t})=\sum_{\sigma }\Gamma_{\sigma}\boldsymbol{z}_{\mu}(\hat{j}_{\sigma}\cdot\pi_{+}\hat{\psi}_{ \lambda}^{t})=\sum_{\sigma}\Gamma_{\sigma}\sum_{\nu}\hat{c}_{\sigma\nu}^{\mu} \boldsymbol{z}_{\nu}(\pi_{+}\hat{\psi}_{\lambda}^{t})=\sum_{\sigma}\Gamma_{ \sigma}\hat{c}_{\sigma\lambda}^{\mu}. \tag{432}\] Putting all these statements together, we have \[\rho(F(\partial\Pi(w\Gamma,\hat{\psi}_{\lambda}^{t})))\theta(w \Gamma,\hat{\psi}_{\lambda}^{t}) = \rho\left(\sum_{\mu}\frac{P_{\mathbb{Z}_{\mu}}\Gamma\pi_{+}\hat{ \psi}_{\lambda}^{t}}{\sum_{\sigma}\Gamma_{\sigma}\hat{c}_{\sigma\lambda}^{\mu }}\right)\sum_{\mu,\sigma}\Gamma_{\sigma}\hat{c}_{\sigma\lambda}^{\mu}\hat{j}_ {\mu} \tag{434}\] \[= \sum_{\mu}\rho\left(P_{\mathbb{Z}_{\mu}}\Gamma\pi_{+}\hat{\psi}_{ \lambda}^{t}\right)\hat{j}_{\mu}\] (435) \[= \sum_{\mu}\beta(w,P_{\mathbb{Z}_{\mu}}\Gamma\pi_{+}\hat{\psi}_{ \lambda}^{t})\] (436) \[= \beta(w\Gamma,\hat{\psi}_{\lambda}^{t}), \tag{433}\] in which we have used corollary 6.6. ## Appendix A ### Partition Counting Let \(p(n)\) be the number of integer partitions of \(n\). The partition counting function is given by \(P(x):=\sum_{n=0}^{\infty}p(n)x^{n}=\frac{1}{(x;x)_{\infty}}=1+x+2x^{2}+3x^{3}+5x ^{4}+\ldots\). The _corner counting_ numbers \(p(n,r)\) count partitions of size \(n\) with \(r\) outer corners, and are represented by the generating function \[P(x,t)=\sum_{n,r\geq 0}p(n,r)x^{n}t^{r}=t+t^{2}x+2t^{2}x^{2}+(2t^{2}+t^{3})x^{ 3}+(3t^{2}+2t^{3})x^{4}+\ldots \tag{437}\] **Lemma A.1**.: _The corner counting generating function is given by_ \[P(x,t)=\prod_{k=1}^{\infty}\frac{1-x^{k-1}(1-t)}{1-x^{k}}=\frac{(1-t;x)_{ \infty}}{(x;x)_{\infty}}. 
\tag{438}\] Proof.: 5 Note that the number of corners is equal to the number of distinct parts plus one (you always have a corner at the 'top' of a series of repeated parts, plus one at the very bottom of the partition). So the term of Footnote 5: The proof is due to Sam Hopkins ([https://www.samuelfhopkins.com/](https://www.samuelfhopkins.com/)). \[\frac{1-x^{k}(1-t)}{1-x^{k}}=1+t\frac{x^{k}}{1-x^{k}}=1+t(x^{k}+x^{2k}+\cdots) \tag{439}\] correspond to choosing how many parts equal to \(k\) n your partition you want. There is an extra factor of \(1-x^{0}(1-t)=t\) for the extra corner at the bottom of every partition. **Corollary A.2**.: _The following properties of the corner counting generating function hold,_ \[P(x,1)=P(x), \tag{440}\] \[P(x,1-x)=1, \tag{441}\] \[[\partial_{t}P(x,t)]_{t=1}=\frac{1}{1-x}P(x). \tag{442}\] Proof.: The first two follow directly6 from formula 438. For the final statement, we note that Footnote 6: Sam Hopkins provides an alternate probabilistic proof of the result 441. Namely, that weighting each partition \(\lambda\) by \(x^{|\lambda|}(1-x)^{\#corners(\lambda)}\), gives a probability distribution on the set of all partitions. Imagine constructing a partition as follows. We start with the empty partition. Then we focus on its unique corner. We flip a coin that is heads with probability \(x\) and tails with probability \(1-x\). If we get heads, we add a box in that corner, and then move on to consider the "next" corner of the partition we’ve built so far, moving left to right and top to bottom. When we flip a tails at a corner, we leave that box empty, but we still move on to the next corner. Unless we flipped tails at the bottom corner (i.e., in a row with no boxes in it), in which case we stop and output the partition we’ve made. It is not hard to see that we produce each \(\lambda\) with probability \(x^{|\lambda|}(1-x)^{\#corners(\lambda)}\). \[[\partial_{t}P(x,t)]_{t=1}=\sum_{r,n\geq 0}r\,p(n,r)\,x^{n} \tag{443}\] and by the decomposition (48) and the formula (146), we know that counting the total number of minima of all partitions is given by \((1-x)^{-1}P(x)\). ### \(\tau\) identities Let \(\{v,\overline{v}\}=\{(1,0),(0,1)\}=\mathcal{A}_{\{1\}}\), and the Spectral Factor \[T_{1}([s])=\frac{[s][s+v+\overline{v}]}{[s+v][s+\overline{v}]} \tag{444}\] **Lemma A.3**.: _For \(s\neq b\in\mathcal{A}_{\lambda}\), we have_ \[\tau_{\lambda+s}^{b}=T_{1}([s-b])\,\tau_{\lambda}^{b}. \tag{445}\] _For \(s\in\mathcal{A}_{\lambda}\), \(t\in\mathcal{R}_{\lambda}^{+}\), we have_ \[\bar{\tau}_{\lambda+s}^{t}=T_{1}([s-t])^{-1}\,\bar{\tau}_{\lambda}^{t}. \tag{446}\] **Lemma A.4**.: (447) \[\frac{\hbar}{[s][s+v+\overline{v}]}=1-\frac{[s+v][s+\overline{v}]}{[s][s+v+ \overline{v}]}=1-T_{1}([s])^{-1}\] Proof.: Note that for any \(s\), we have \(\hbar=[s][s+v+\overline{v}]-[s+v][s+\overline{v}]\). 
**Lemma A.5**.: _For \(s+(1,1)\in\mathcal{R}_{\lambda}^{+}\), the following identities hold_ \[\sum_{q\in\mathcal{A}\lambda}\frac{\hbar\,\tau_{\lambda}^{q}}{[s-q][s-q+(1,1) ]}=\tau_{\lambda-s}^{s} \tag{448}\] _and_ \[\sum_{t\in\mathcal{R}_{\lambda}^{+}}\frac{\hbar\,\bar{\tau}_{\lambda}^{t}}{[s -t][s-t+(1,1)]}=-\hbar+\bar{\tau}_{\lambda+s}^{s+(1,1)} \tag{449}\] Proof.: For the first, we use (447) to find the left hand side is \[\sum_{q\in\mathcal{A}\lambda}\tau_{\lambda}^{q}\left(1-T_{1}([s-q])^{-1}\right) =1-\sum_{q\in\mathcal{A}\lambda}\tau_{\lambda}^{q}T_{1}([s-q])^{-1} \tag{450}\] Now the possible minima \(q=s+v\) and \(q=s+\overline{v}\) of \(\lambda\) cant contribute to the sum, since their coefficients vanish in the numerator of \(T_{1}([s-q])^{-1}\). Thus the sum reduces to minima of \(\lambda-s\), except \(q=s\), and then we use (445) to recover \[\sum_{q\in\mathcal{A}\lambda-s,q\neq s}\tau_{\lambda}^{q}T_{1}([s-q])^{-1}=\sum _{q\in\mathcal{A}\lambda-s,q\neq s}\tau_{\lambda-s}^{q}=1-\tau_{\lambda-s}^{s}. \tag{451}\] For the second identity, \[\sum_{t\in\mathcal{R}_{\lambda}^{+}}\tilde{\tau}_{\lambda}^{t} \frac{\hbar}{[t-(1,1)-s][t-s]} = \sum_{t\in\mathcal{R}_{\lambda}^{+}}\tilde{\tau}_{\lambda}^{t}(1 -T_{1}([s-t])^{-1}) \tag{453}\] \[= |\lambda|\hbar-\sum_{t\in\mathcal{R}_{\lambda}^{+}}\tilde{\tau} _{\lambda}^{t}T_{1}([s-t])^{-1}\] (454) \[= |\lambda|\hbar-\sum_{t\in\mathcal{R}_{\lambda}^{+}+s,t\neq s+(1, 1)}\tilde{\tau}_{\lambda+s}^{t}\] (455) \[= |\lambda|\hbar-|\lambda+s|\hbar+\tilde{\tau}_{\lambda+s}^{s+(1, 1)} \tag{452}\] On the first line we used (447), and on the second line we used (445).
2307.06740
Finite Algebras with Hom-Sets of Polynomial Size
We provide an internal characterization of those finite algebras (i.e., algebraic structures) $\mathbf A$ such that the number of homomorphisms from any finite algebra $\mathbf X$ to $\mathbf A$ is bounded from above by a polynomial in the size of $\mathbf X$. Namely, an algebra $\mathbf A$ has this property if, and only if, no subalgebra of $\mathbf A$ has a nontrivial strongly abelian congruence. We also show that the property can be decided in polynomial time for algebras in finite signatures. Moreover, if $\mathbf A$ is such an algebra, the set of all homomorphisms from $\mathbf X$ to $\mathbf A$ can be computed in polynomial time given $\mathbf X$ as input. As an application of our results to the field of computational complexity, we characterize inherently tractable constraint satisfaction problems over fixed finite structures, i.e., those that are tractable and remain tractable after expanding the fixed structure by arbitrary relations or functions.
Libor Barto, Antoine Mottet
2023-07-13T13:26:00Z
http://arxiv.org/abs/2307.06740v1
# Finite algebras with hom-sets of polynomial size ###### Abstract. We provide an internal characterization of those finite algebras (i.e., algebraic structures) \(\mathbf{A}\) such that the number of homomorphisms from any finite algebra \(\mathbf{X}\) to \(\mathbf{A}\) is bounded from above by a polynomial in the size of \(\mathbf{X}\). Namely, an algebra \(\mathbf{A}\) has this property if, and only if, no subalgebra of \(\mathbf{A}\) has a nontrivial strongly abelian congruence. We also show that the property can be decided in polynomial time for algebras in finite signatures. Moreover, if \(\mathbf{A}\) is such an algebra, the set of all homomorphisms from \(\mathbf{X}\) to \(\mathbf{A}\) can be computed in polynomial time given \(\mathbf{X}\) as input. As an application of our results to the field of computational complexity, we characterize inherently tractable constraint satisfaction problems over fixed finite structures, i.e., those that are tractable and remain tractable after expanding the fixed structure by arbitrary relations or functions. Both authors have been supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement 771005). Libor Barto was also funded by the European Union (ERC, POCOCOP, 101071674). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. Abelian congruences are among the basic concepts of _commutator theory_[14] which emerged in the 1970s and provided useful generalizations of concepts such as commutator, solvability, or nilpotence from groups to general algebras. Strong abelianness is a substantially stronger form of abelianness, e.g., no group has a nontrivial strongly abelian congruence. The concept orginated in McKenzie's investigation of representing finite lattices as congruence lattices of finite algebras [24] and was further developed and applied e.g. in [18, 21, 22]. Crucial for us is the connection of strong abelianness to _tame congruence theory_, a structure theory of finite algebras initiated in [18]. This theory is an essential tool for our proof. We introduce strongly abelian congruences in Section 2.5, where we also show that the property in item (2) of Theorem 1 is decidable in polynomial time for algebras in finite signatures. The negative part of Theorem 1 is to provide a superpolynomial lower bound on \(c_{\mathbf{A}}\) in case that some subalgebra of \(\mathbf{A}\) has a nontrivial strongly abelian congruence. The proof presented in Section 3 gives a nearly exponential lower bound \(2^{\Omega(n^{1/k})}\) for \(k=\lfloor\log_{2}|A|\rfloor\) and we show that the bound is essentially optimal up to a constant in the exponent. Section 4 contains the proof of the more involved, positive part of Theorem 1, where we employ the tame congruence theory. The polynomial upper bound is effective in that if the condition in item (2) is met and \(\mathbf{A}\) is in finite signature, then homomorphisms from \(\mathbf{X}\) to \(\mathbf{A}\) can be algorithmically enumerated in polynomial time. We state these refinements in Section 5 and discuss further research directions. The paper is meant to be understandable by every mathematician. 
It is largely self-contained except from some very basic results in universal algebra (reviewed in Section 2.1) and results from tame congruence theory (Theorem 13, Section 4.1). ### Constraint satisfaction problems over fixed structures Our characterization of finite algebras with polynomially-sized hom-sets is originally motivated by the investigation of the complexity of constraint satisfaction problems over fixed finite structures. In this discussion, a signature may contain, apart from function symbols, also relation symbols with associated arities. A _structure_\(\mathcal{A}\) consists of a universe and interpretations of function symbols as above, and additionally interpretations of relation symbols: for each relation symbol \(R\) of arity \(k\), its interpretation in \(\mathbf{A}\) is a \(k\)-ary relation \(R^{\mathcal{A}}\subseteq A^{k}\). Given two structures \(\mathcal{X}\) and \(\mathcal{A}\) in the same signature, a homomorphism \(h\colon\mathcal{X}\to\mathcal{A}\) is a mapping \(h\colon\mathcal{X}\to A\) that preserves all function symbols as well as every relation symbol \(R\), i.e., \((h(x_{1}),\ldots,h(x_{k}))\in R^{\mathcal{A}}\) whenever \((x_{1},\ldots,x_{k})\in R^{\mathcal{X}}\), where \(k\) is the arity of \(R\). A structure \(\mathcal{A}\) is a _reduct_ of a structure \(\mathcal{B}\) if they have the same universe \(A=B\), the signature of \(\mathcal{A}\) is a subset of the signature of \(\mathcal{B}\) and symbols are interpreted in \(\mathcal{A}\) in the same way as in \(\mathcal{B}\). We also say that \(\mathcal{B}\) is an _expansion_ of \(\mathcal{A}\). The _constraint satisfaction problem (CSP) over \(\mathcal{A}\)_, written CSP(\(\mathcal{A}\)), is the computational problem to decide whether a given input finite structure \(\mathcal{X}\) of the same signature as \(\mathcal{A}\) admits a homomorphism to \(\mathcal{A}\). The most investigated special case is when \(\mathcal{A}\) is a finite structure of finite, purely relational signature. One of the main goals [28, 12], to obtain a characterization of the computational complexity of such CSPs, motivated much of developments in universal algebra in the last 25 years. These efforts culminated in a celebrated dichotomy theorem obtained independently in [9] and [30, 31] that provides a complete complexity classification of these CSPs assuming \(P\neq NP\): for every finite structure \(\mathcal{A}\) of finite purely relational signature, the problem CSP(\(\mathcal{A}\)) is solvable in polynomial time whenever a specific condition on \(\mathcal{A}\) is satisfied, and otherwise CSP(\(\mathcal{A}\)) is \(NP\)-complete (informally, it is "a hardest" problem solvable in nondeterministic polynomial time). We refer the reader to [1] for a short introduction to this area and to [2] for a more comprehensive survey. The CSP over general finite structures, not necessarily purely relational, was investigated in [13]. It is e.g. shown that, interestingly, even the purely algebraic setting extends the relational: for every finite relational structure \(\mathcal{A}\), there exists a finite algebra \(\mathbf{B}\) (both of finite signature) such that CSP(\(\mathcal{A}\)) is equivalent to CSP(\(\mathbf{B}\)) modulo polynomial-time Turing reductions. The CSP over general finite structures, in particular algebras, is thus one of the natural classes of computational problems to be systematically investigated after the dichotomy theorem for relational structures. 
Our paper with DeMeo [4] provided some general results and partial classifications in this direction. In this work we have encountered an interesting phenomenon that the CSP over an algebra \(\mathbf{A}\) is often solvable in polynomial time for a particularly strong reason: there are only polynomially many homomorphisms from the input algebra to \(\mathbf{A}\) and they can be even enumerated in polynomial time. This development has driven us to investigating the class of algebras with polynomially bounded \(c_{\mathbf{A}}\) and eventually led to the characterization in Theorem 1. Notice that for a relational structure \(\mathcal{A}\) it is never the case that \(c_{\mathcal{A}}\) is polynomially bounded unless \(|A|\leq 1\), as witnessed by structures \(\mathcal{X}\) with symbols interpreted as empty relations. Even counting the number of homomorphisms to \(\mathcal{A}\) is typically a hard computational problem [10]. If the homomorphisms from \(\mathbf{X}\) to a fixed \(\mathbf{A}\) can be enumerated in polynomial time, then \(\operatorname{CSP}(\mathbf{A})\) is clearly solvable in polynomial time and so is \(\operatorname{CSP}(\mathcal{B})\) for any expansion \(\mathcal{B}\) of \(\mathbf{A}\). In this sense, such algebras \(\mathbf{A}\) have _inherently tractable_ CSPs. A consequence of the proof of Theorem 1 is that having polynomially bounded \(c_{\mathbf{A}}\) in fact characterizes inherently tractable CSPs. In the statement, the _algebraic reduct_ of \(\mathcal{A}\) is the reduct obtained by taking all the algebraic symbols in the signature of \(\mathcal{A}\). **Theorem 2**.: _Assuming \(P\neq NP\), the following are equivalent for a finite structure \(\mathcal{A}\) of finite signature._ 1. _For every finite-signature expansion_ \(\mathcal{B}\) _of_ \(\mathcal{A}\)_, the problem_ \(\operatorname{CSP}(\mathcal{B})\) _is solvable in polynomial time._ 2. _For no finite-signature expansion_ \(\mathcal{B}\) _of_ \(\mathcal{A}\)_, the problem_ \(\operatorname{CSP}(\mathcal{B})\) _is NP-complete._ 3. \(c_{\mathbf{A}}(n)\in O(n^{k})\) _for some integer_ \(k\)_, where_ \(\mathbf{A}\) _is the algebraic reduct of_ \(\mathcal{A}\)_._ A full classification of the complexity of CSPs over finite algebras remains an open problem. ### Related numerical invariants Theorem 1 relates a polynomial upper bound on the sequence \(c_{\mathbf{A}}\) to an internal property of \(\mathbf{A}\) for general finite algebras. We now mention a few similar results for other natural counting sequences associated to finite algebras. In the following, \(\mathcal{V}(\mathbf{A})\) denotes the class of those algebras in the signature of \(\mathbf{A}\) that satisfy all the identities satisfied in \(\mathbf{A}\), see Section 2.1. We remark that a reflection of an algebra \(\mathbf{X}\) to \(\mathcal{V}(\mathbf{A})\) given in Proposition 6 is a useful step in the proof of the main theorem. A result quite closely related to ours concerns the _free spectrum_ sequence, where the \(n\)th element is the cardinality of the \(n\)-generated free algebra in \(\mathcal{V}(\mathbf{A})\); equivalently, the number of \(n\)-ary term operations of \(\mathbf{A}\). It is proved in [21] that, for a finite \(\mathbf{A}\) of finite signature, its free spectrum is bounded by a polynomial if, and only if, \(\mathbf{A}\) is strongly nilpotent. As we explain in Section 3.1, a polynomial upper bound on the free spectrum gives us automatically a superpolynomial lower bound on \(c_{\mathbf{A}}\). 
In fact, the negative part of Theorem 1 is based on a simple and seemingly novel modification of free algebras. The _G-spectrum_ sequence [6] counts the number of at most \(n\)-generated algebras in \(\mathcal{V}(\mathbf{A})\) up to isomorphism; equivalently, the number of isomorphic types of homomorphic images of the \(n\)-generated free algebra in \(\mathcal{V}(\mathbf{A})\). The effort to characterize finite algebras with polynomially bounded G-spectrum culminated in [19], which proves that all such algebras can be constructed in a specific way from modules over finite rings of finite representation type and so-called _matrix powers_ of permutation groups. In Section 3.2, matrix powers will provide examples of algebras with superpolynomial but subexponential growth of \(c_{\mathbf{A}}\). Another intersection of [19] with this paper is the crucial role of commutator theory and tame congruence theory. Finally, another numerical measure of the complexity of an algebra \(\mathbf{A}\) comes from the investigation of CSPs over relational structures. It is obtained by counting for \(n\geq 0\) the largest size of a minimal generating set for subalgebras of the \(n\)th power of \(\mathbf{A}\). The class of algebras for which this sequence is polynomially bounded was characterized in [7] by means of the existence of term operations of \(\mathbf{A}\) satisfying specific identities. There is no obvious relation between small generating sets of subalgebras of powers of \(\mathbf{A}\) and small generating sets of algebras \(\mathbf{X}\in\mathcal{V}(\mathbf{A})\); nevertheless, it follows from Theorem 1 that \(c_{\mathbf{A}}\in O(n^{k})\) whenever \(\mathbf{A}\) has the former property. ## 2. Basics We start this section by reviewing the very basic universal algebraic concepts and results in Section 2.1. This part can be skimmed over by readers acquainted with the subject; introductory books include [11, 5] and a comprehensive resource is the book-series [25, 15, 16]. In Section 2.2 we introduce and discuss the class of algebras admitting polynomially-many homomorphisms and their surjective and efficient variants. Section 2.3 provides several simple, but useful observations, which are then applied in Section 2.4 to show that homomorphisms to groups and semilattices can be efficiently computed. Finally, in Section 2.5 we introduce strongly abelian congruences and show that their existence is efficiently decidable. ### Terms and basic constructions For the entire subsection we fix a purely functional signature \(S\). A _term_ over a set of variables \(\{x_{1}\dots,x_{k}\}\) is a formal meaningful expression that involves variables \(x_{1}\),..., \(x_{k}\) and function symbols in \(S\). Given an algebra \(\mathbf{A}\), every term, say \(s\), over \(\{x_{1},\dots,x_{k}\}\) has a natural interpretation in \(\mathbf{A}\) as a \(k\)-ary operation on \(A\) which we denote by \(s^{\mathbf{A}}\). Each such an operation is called a _term operation_ of \(\mathbf{A}\). Note that each \(f^{\mathbf{A}}\), \(f\in S\) is also a term operation, these are called _basic operations_. Also notice that homomorphisms not only preserve \(f\in S\) but also all terms. The set of all term operations of \(\mathbf{A}\) is denoted \(\operatorname{Clo}(\mathbf{A})\) and the set of \(n\)-ary term operation of \(\mathbf{A}\) is denoted \(\operatorname{Clo}_{n}(\mathbf{A})\). 
Two algebras \(\mathbf{A}\) and \(\mathbf{B}\) with the same universes \(A=B\) are _term-equivalent_ if \(\operatorname{Clo}(\mathbf{A})=\operatorname{Clo}(\mathbf{B})\). An algebra \(\mathbf{A}\) is a _term-reduct_ of an algebra \(\mathbf{B}\) if \(\operatorname{Clo}(\mathbf{A})\subseteq\operatorname{Clo}(\mathbf{B})\). An _identity_ is a pair \((s,t)\) of terms over the same set of variables, also written \(s\approx t\). An algebra \(\mathbf{A}\)_satisfies_ an identity \(s\approx t\) if \(s^{\mathbf{A}}=t^{\mathbf{A}}\). A _variety_ is the class of all algebras satisfying every identity from a fixed set \(\Sigma\). For instance, the class of groups is the variety determined in this way by the following \(\Sigma_{grp}\) (here the signature \(S\) consists consists of nullary symbol \(1\), unary symbol \(\ {}^{-1}\) and binary symbol \(\cdot\)): ( \[\star\] ) \[\Sigma_{grp}=\{1\cdot x\approx 1,\ x\cdot 1\approx x,\ x\cdot x^{-1}\approx 1,\ x^{-1}\cdot x\approx 1,\ (x\cdot y)\cdot z\approx x\cdot(y\cdot z)\}.\] For an algebra \(\mathbf{A}\), the variety _generated_ by \(\mathbf{A}\) is the variety determined by \(\Sigma\) consisting of all identities satisfied by \(\mathbf{A}\). By Birkhoff's HSP theorem [8], varieties are exactly the classes closed under forming isomorphic copies, subalgebras, products, and quotients (or homomorphic images). We now review the three constructions. An algebra \(\mathbf{B}\) is a _subalgebra_ of \(\mathbf{A}\), written \(\mathbf{B}\leq\mathbf{A}\), if \(B\subseteq A\) and \(f^{\mathbf{B}}=f^{\mathbf{A}}|_{B^{k}}\) for every symbol \(f\in S\) (where \(k\) is its arity). In particular, the definition implies that the universe of \(\mathbf{B}\) must be closed under every operation of \(\mathbf{A}\) - it must be a _subuniverse_. The smallest subuniverse (or subalgebra) of \(\mathbf{A}\) containing a set \(X\subseteq A\) is called the subuniverse (subalgebra) of \(\mathbf{A}\)_generated_ by \(X\). It is equal to the one-step closure of \(X\) under term operations of \(\mathbf{A}\) and, if \(S\) is finite, it can be computed from \(\mathbf{A}\) and \(X\) in polynomial time by iteratively closing \(X\) under the basic operations. The _product_ of algebras \(\mathbf{A}_{i}\), \(i\in I\) has as its universe the Cartesian product of the sets \(A_{i}\) and the basic operations are defined coordinate-wise. Of particular importance for us is the case when \(\mathbf{A}_{i}=\mathbf{A}\). The product is then called the \(I\)th power and is denoted \(\mathbf{A}^{I}\). We identify its universe with \(A^{I}\), the set of all mappings from \(I\) to \(A\). Given an equivalence relation \(\alpha\) on \(A\), we write \(a/\alpha\) to denote the \(\alpha\)-equivalence class of \(a\), that is, \(a/\alpha=\{b\in A\mid(a,b)\in\alpha\}\). The set of equivalence classes is denoted \(A/\alpha=\{a/\alpha\mid a\in A\}\). We say that \(\alpha\) is a _congruence_ of an algebra \(\mathbf{A}\) if it is preserved by all (term or basic) operations of \(\mathbf{A}\). It then makes sense to define the quotient algebra \(\mathbf{A}/\alpha\) with universe \(A/\alpha\) by \(f^{\mathbf{A}/\alpha}(a_{1}/\alpha,\ldots,a_{k}/\alpha)=f^{\mathbf{A}}(a_{1}, \ldots,a_{k})/\alpha\) for a \(k\)-ary \(f\in S\). For every algebra \(\mathbf{A}\), the equality relation, denoted by \(0_{A}\), is a congruence of \(\mathbf{A}\), and so is the full relation \(A\times A\), denoted \(1_{A}\). If \(\mathbf{A}\) has only those congruence, then it is called _simple_. 
A congruence \(\alpha\) of \(\mathbf{A}\) is _minimal_ if it is not equal to \(0_{A}\) and there is no congruence \(\beta\) of \(\mathbf{A}\) such that \(0_{A}\subsetneq\beta\subsetneq\alpha\). The smallest congruence of \(\mathbf{A}\) containing a given \(R\subseteq A^{2}\) is called the congruence _generated_ by \(R\). It can be obtained by first forming the reflexive and symmetric closure, then closing under term operations, and finally forming the transitive closure (this takes polynomial time if the signature \(S\) is finite). The _kernel_ of a mapping \(h:A\to B\) is the equivalence relation on \(A\) defined by \[\ker(h)=\{(a,a^{\prime})\in A^{2}\mid h(a)=h(a^{\prime})\}.\] The kernel of a homomorphism \(h\colon\mathbf{A}\to\mathbf{B}\) is a congruence of \(\mathbf{A}\). The quotient \(\mathbf{A}/\ker(h)\) is isomorphic to the image of \(h\) (i.e., the subalgebra of \(\mathbf{B}\) with universe \(h(A)\)). Congruences of a quotient algebra \(\mathbf{A}/\alpha\) bijectively correspond to congruences of \(\mathbf{A}\) containing \(\alpha\). Given a congruence \(\beta\supseteq\alpha\), we denote by \(\beta/\alpha\) the corresponding congruence of \(\mathbf{A}/\alpha\). ### Polynomial classes and efficient variants Recall that \(c_{\mathbf{A}}(n)\) is the sequence counting the maximum number of homomorphisms from an at most \(n\)-element algebra \(\mathbf{X}\) (in the same signature as \(\mathbf{A}\)) to \(\mathbf{A}\). We similarly define \(c_{\mathbf{A}}^{*}(n)\) counting the maximum number of surjective homomorphisms. We define \[\mathcal{K}_{\text{poly}}=\{\mathbf{A}\mid c_{\mathbf{A}}(n)\in O(n^{k})\text { for some }k\}\] and \[\mathcal{K}_{\text{poly}}^{s}=\{\mathbf{A}\mid c_{\mathbf{A}}^{*}(n)\in O(n^{k}) \text{ for some }k\}\] The following observation explains the relationship between \(\mathcal{K}_{\text{poly}}\) and \(\mathcal{K}_{\text{poly}}^{s}\). **Proposition 3**.: _Let \(\mathbf{A}\) be a finite algebra. The following are equivalent._ 1. \(\mathbf{A}\) _is in_ \(\mathcal{K}_{\text{poly}}\)_._ 2. _Every subalgebra_ \(\mathbf{B}\) _of_ \(\mathbf{A}\) _(including_ \(\mathbf{A}\)_) is in_ \(\mathcal{K}_{\text{poly}}^{s}\)_._ Proof.: (1) implies (2) follows from the fact that every surjective homomorphism \(\mathbf{X}\to\mathbf{B}\) is in particular a homomorphism \(\mathbf{X}\to\mathbf{A}\), giving \(c_{\mathbf{B}}^{*}\leq c_{\mathbf{A}}\). For (2) implies (1), observe that every homomorphism \(\mathbf{X}\to\mathbf{A}\) is a surjective homomorphism onto a subalgebra \(\mathbf{B}\) of \(\mathbf{A}\). In particular, \(c_{\mathbf{A}}\leq\sum_{\mathbf{B}\leq\mathbf{A}}c_{\mathbf{B}}^{s}\), which is polynomially bounded by assumption. For an algebra \(\mathbf{A}\) in a finite signature, a stronger requirement than \(\mathbf{A}\in\mathcal{K}_{\mathrm{poly}}\) is that the homomorphisms \(\mathbf{X}\to\mathbf{A}\) can be _efficiently enumerated_: namely, that there exists a polynomial-time algorithm which, given an input algebra \(\mathbf{X}\) in the same signature as \(\mathbf{A}\), enumerates all the homomorphisms from \(\mathbf{X}\to\mathbf{A}\). Note that the concrete representation of the input algebra \(\mathbf{X}\) is irrelevant -- any reasonable encoding will do, e.g., for each symbol \(f\in S\), the operation \(f^{\mathbf{X}}\) can be represented by a natural encoding of its table. We extend the concept of efficient enumerability to algebras with infinite signatures by defining the class \(\mathcal{K}_{\mathrm{eff}}\) as follows. 
\[\mathcal{K}_{\mathrm{eff}}=\{\mathbf{A}\ |\ \exists\mathbf{B}\ \text{ finite-signature reduct of }\mathbf{A}\text{ such that}\] \[\text{homomorphisms }\mathbf{X}\to\mathbf{B}\text{ can be efficiently enumerated}\}\] The efficient variant \(\mathcal{K}_{\mathrm{eff}}^{s}\) of \(\mathcal{K}_{\mathrm{poly}}^{s}\) is introduced analogously. Observe that the efficient variant of Proposition 3 holds as well and that we have the trivial inclusions \[\mathcal{K}_{\mathrm{eff}}\subseteq\mathcal{K}_{\mathrm{poly}}\subseteq \mathcal{K}_{\mathrm{poly}}^{s}\supseteq\mathcal{K}_{\mathrm{eff}}^{s}\supseteq \mathcal{K}_{\mathrm{eff}}.\] ### General facts Most reasonable properties algebras only depend on their term operations rather than the concrete choice of basic operations. The following observation shows that membership in the polynomial classes is among such properties. **Proposition 4**.: _Let \(\mathbf{A},\mathbf{B}\) be finite algebras such that \(\mathrm{Clo}(\mathbf{A})\subseteq\mathrm{Clo}(\mathbf{B})\). If \(\mathbf{A}\) is in one of the classes \(\mathcal{K}_{\mathrm{poly}},\mathcal{K}_{\mathrm{poly}}^{s},\mathcal{K}_{ \mathrm{eff}},\mathcal{K}_{\mathrm{eff}}^{s}\), then so is \(\mathbf{B}\)._ Proof.: Consider first the classes \(\mathcal{K}_{\mathrm{poly}},\mathcal{K}_{\mathrm{poly}}^{s}\). For an algebra \(\mathbf{Y}\) in the signature of \(\mathbf{B}\), we define an algebra \(\mathbf{X}\) in the signature of \(\mathbf{A}\) with the same universe as \(\mathbf{Y}\) as follows. For every symbol \(f\) in the signature of \(\mathbf{A}\), there exists a term \(t_{f}\) in the signature of \(\mathbf{B}\) such that \(f^{\mathbf{A}}=t_{f}^{\mathbf{B}}\) (as \(\mathrm{Clo}(\mathbf{A})\subseteq\mathrm{Clo}(\mathbf{B})\)). We set \(f^{\mathbf{X}}=t_{f}^{\mathbf{Y}}\). Since homomorphisms preserve terms, we have \(\mathrm{Hom}(\mathbf{Y},\mathbf{B})\subseteq\mathrm{Hom}(\mathbf{X},\mathbf{A})\) and the claim follows. For the classes \(\mathcal{K}_{\mathrm{eff}}\) and \(\mathcal{K}_{\mathrm{eff}}^{s}\), we take a reduct \(\mathbf{A}^{\prime}\) of \(\mathbf{A}\) in a finite signature \(S\) witnessing membership of \(\mathbf{A}\) in the class. Then we take a finite signature \(T\) containing all the symbols from the signature of \(\mathbf{B}\) that appear in terms \(t_{f}\), \(f\in S\), and let \(\mathbf{B}^{\prime}\) be the corresponding reduct of \(\mathbf{B}\). Given \(\mathbf{Y}\) (in signature \(T\)) we compute \(\mathbf{X}\) (in signature \(S\)) as above and get \(\mathrm{Hom}(\mathbf{Y},\mathbf{B}^{\prime})\subseteq\mathrm{Hom}(\mathbf{X},\mathbf{A}^{\prime})\). We can then enumerate homomorphisms from \(\mathbf{Y}\) to \(\mathbf{B}^{\prime}\) by enumerating homomorphism from \(\mathbf{X}\) to \(\mathbf{A}^{\prime}\) and discarding the extra ones. For an algebra \(\mathbf{A}\), let \(\mathbf{A}^{*}\) denote the expansion of \(\mathbf{A}\) by all the constant operations. Formally, for each \(a\in A\), we add to the signature of \(\mathbf{A}\) a fresh nullary symbol \(a\) and interpret it in the obvious way \(a^{\mathbf{A}}=a\). Informally, \[\mathbf{A}^{*}=\mathbf{A}\text{ plus constants}.\] Notice that every homomorphism to \(\mathbf{A}^{*}\) is surjective. Moreover, \(\mathbf{A}\) and \(\mathbf{A}^{*}\) have the same congruences, and the congruences of \(\mathbf{A}^{*}\) are exactly those equivalence relations that are closed under applications of _unary_ term operations. The next proposition shows that adding constants does not significantly decrease the surjective counting function. 
The proof is simple, the fact is nevertheless crucial: in combination with the previous proposition, it allows us to concentrate on term operations of \(\mathbf{A}^{*}\) rather than \(\mathbf{A}\) and makes the tame congruence theory directly applicable. We note that the standard name in universal algebra for a term operation of \(\mathbf{A}^{*}\) is polynomial operation of \(\mathbf{A}\). We refrain from using this terminology to avoid a potential confusion with polynomial functions. **Proposition 5**.: _A finite algebra \(\mathbf{A}\) is in \(\mathcal{K}_{\mathrm{poly}}^{s}\) if, and only if \(\mathbf{A}^{*}\) is in \(\mathcal{K}_{\mathrm{poly}}^{s}\). The same holds with \(\mathcal{K}_{\mathrm{eff}}^{s}\) in place of \(\mathcal{K}_{\mathrm{poly}}^{s}\)._ Proof.: The "only if" direction is clear. For the other direction, we concentrate on the efficient variant and assume for simplicity that \(\mathbf{A}\) has a finite signature; the rest is an easy exercise. Let \(A=\{a_{1},\ldots,a_{k}\}\). Consider an input \(\mathbf{X}\) in the signature of \(\mathbf{A}\). For every choice of \(\mathbf{x}=(x_{1},\ldots,x_{k})\in X^{k}\), we create an expansion \(\mathbf{X}_{\mathbf{x}}\) of \(\mathbf{X}\) to the signature of \(\mathbf{A}^{*}\) by defining \(a_{i}^{\mathbf{X}_{\mathbf{x}}}=x_{i}\). Since every homomorphism \(h\colon\mathbf{X}\to\mathbf{A}\) such that \(h(x_{i})=a_{i}\) for all \(i\) is a homomorphism \(\mathbf{X}_{\mathbf{x}}\to\mathbf{A}^{*}\), and every surjective homomorphism \(h\) has this property for some \(\mathbf{x}\), we can enumerate all surjective homomorphisms \(\mathbf{X}\to\mathbf{A}\) by enumerating all homomorphism \(\mathbf{X}_{\mathbf{x}}\to\mathbf{A}^{*}\) for every \(\mathbf{x}\). The number of choices for \(\mathbf{x}\) is \(|X|^{k}\), so this procedure takes polynomial time. The final proposition helps us to tame the input algebras \(\mathbf{X}\). **Proposition 6**.: _Let \(\mathbf{A}\) be a finite algebra of finite signature and \(\Sigma\) be a finite set of identities satisfied in \(\mathbf{A}\). For every finite algebra \(\mathbf{X}\) in the signature of \(\mathbf{A}\), there exists an algebra \(\mathbf{Y}\) in the same signature and a surjective homomorphism \(q\colon\mathbf{X}\to\mathbf{Y}\) such that \(\mathbf{Y}\) satisfies \(\Sigma\) and_ \[\operatorname{Hom}(\mathbf{X},\mathbf{A})=\operatorname{Hom}(\mathbf{Y}, \mathbf{A})\circ q=\{h\circ q\mid h\in\operatorname{Hom}(\mathbf{Y},\mathbf{A })\}.\] _Moreover, \(\mathbf{Y}\) and \(q\) can be computed from \(\mathbf{X}\) in polynomial time._ Proof.: Let \(\alpha\) be the congruence of \(\mathbf{X}\) generated by \[R=\{(s^{\mathbf{X}}(\mathbf{z}),t^{\mathbf{X}}(\mathbf{z}))\mid s(x_{1}, \dots,x_{k})\approx t(x_{1},\dots,x_{k})\in\Sigma,\ \mathbf{z}\in X^{k}\},\] let \(\mathbf{Y}=\mathbf{X}/\alpha\), and let \(q\colon\mathbf{X}\to\mathbf{Y}\) be the quotient homomorphism. Note that \(R\) and \(\alpha\), and then \(\mathbf{Y}\) and \(q\) can all be computed from \(\mathbf{X}\) in polynomial time. From the definition of \(R\) it follows that \(\mathbf{Y}\) satisfies \(\Sigma\). Clearly \(\operatorname{Hom}(\mathbf{X},\mathbf{A})\supseteq\operatorname{Hom}(\mathbf{ Y},\mathbf{A})\circ q\), so it only remains to verify the other inclusion. Consider a homomorphism \(h\colon\mathbf{X}\to\mathbf{A}\). 
For every \((s^{\mathbf{X}}(\mathbf{z}),t^{\mathbf{X}}(\mathbf{z}))\in R\), we have \(h(s^{\mathbf{X}}(\mathbf{z}))=s^{\mathbf{A}}(h(\mathbf{z}))=t^{\mathbf{A}}(h (\mathbf{z}))=h(t^{\mathbf{X}}(\mathbf{z}))\), therefore \(R\subseteq\ker(h)\). Since \(\ker(h)\) is a congruence of \(\mathbf{A}\), we also have \(\alpha\subseteq\ker(h)\), so \(h\) indeed factorizes as \(h=h^{\prime}\circ q\) for the homomorphism \(h^{\prime}\colon\mathbf{Y}\to\mathbf{A}\) correctly defined by \(h^{\prime}(x/\alpha)=h(x)\). In order to show that \(\mathbf{A}\in\mathcal{K}_{\mathrm{eff}}\) for a finite finite-signature algebra \(\mathbf{A}\), we can now without loss of generality assume that the input algebra \(\mathbf{X}\) satisfies any fixed finite set of identities satisfied by \(\mathbf{A}\). Indeed, we compute \(\mathbf{Y}\) and \(q\) from the proposition, enumerate homomorphisms from \(\mathbf{Y}\) to \(\mathbf{A}\), and compose them with \(q\). The algebra \(\mathbf{Y}=\mathbf{X}/\alpha\) produced in the proof makes sense for the infinite \(\Sigma\) consisting of all identities satisfied by \(\mathbf{A}\). However, we do not know whether it can be computed in polynomial time in general. **Question 7**.: _For which finite algebras \(\mathbf{A}\) of finite signature is there a polynomial-time algorithm that, given an input algebra \(\mathbf{X}\) in the signature of \(\mathbf{A}\), computes the smallest congruence \(\alpha\) such that \(\mathbf{X}/\alpha\) is in the variety generated by \(\mathbf{A}\)?_ The question is only interesting for algebras that are not _finitely based_, i.e., for which there is no finite subset of identities satisfied by the algebra from which all the other satisfied identities follow. We refer to [15] for a discussion about finite bases. ### Examples The last proposition enables us to place any finite group to \(\mathcal{K}_{\mathrm{eff}}\). We remark that for our proof of the main result it would be sufficient to consider commutative groups of prime power order. **Proposition 8**.: _Every finite group is in \(\mathcal{K}_{\mathrm{eff}}\)._ Proof.: Let \(\mathbf{A}\) be a group and let \(\mathbf{X}\) be a finite algebra in the same signature. We can assume by Proposition 6 (applied with \(\Sigma_{grp}\) from (\(\star\))) that \(\mathbf{X}\) is a group. We start by greedily computing a generating set \(Y\) of \(\mathbf{X}\), i.e., we add a new element to \(Y\) when it is not in the subgroup generated by the elements we already have. By Lagrange's theorem, \(|Y|\leq\log_{2}|X|\). There is \(|A|^{|Y|}\leq|A|^{\log_{2}|X|}=|X|^{\log_{2}|A|}\) mappings from \(Y\to A\). For each of them, we try to extend it to a homomorphism \(\mathbf{X}\to\mathbf{A}\). The extension may not exist but if it does, it is unique since \(Y\) generates \(\mathbf{X}\) and can be computed in polynomial time. This algorithm clearly enumerates all homomorphisms \(\mathbf{X}\to\mathbf{A}\). As we shall see in the second example, membership in \(\mathcal{K}_{\mathrm{eff}}\) does not always come from small generating sets. Nevertheless, the following question is of independent interest. **Question 9**.: _For which varieties does every finite member \(\mathbf{X}\) have a generating set of size \(O(\log|X|)\)?_ We now turn to the second example. 
A _semilattice_ is an algebra in the signature consisting of a single binary symbol \(\wedge\), which satisfies the identities \[\Sigma=\{x\wedge x\approx x,x\wedge y\approx y\wedge x,(x\wedge y)\wedge z \approx x\wedge(y\wedge z)\}.\] For any semilattice \(\mathbf{X}\), the binary relation \(\leq_{\mathbf{X}}\) on \(S\) defined by \(x\leq_{\mathbf{X}}y\) if \(x=x\wedge^{\mathbf{X}}y\) is a partial order on \(X\). It is not hard to directly show that any finite semilattice is in \(\mathcal{K}_{\mathrm{eff}}\); however, we only prove the result for two-element semilattices since this is what we will need; the result for arbitrary semilattices follows from Corollary 39. **Proposition 10**.: _Every two-element semilattice is in \(\mathcal{K}_{\mathrm{eff}}\)._ Proof.: Let \(\mathbf{A}\) be a two-element semilattice. Without loss of generality, we assume that \(A=\{0,1\}\) and \(0\leq_{\mathbf{A}}1\), i.e., \(0\wedge^{\mathbf{A}}0=0\wedge^{\mathbf{A}}1=1\wedge^{\mathbf{A}}0=0\) and \(1\wedge^{\mathbf{A}}1=1\). Let \(\mathbf{X}\) be a finite algebra in the same signature. We can assume by Proposition 6 that \(\mathbf{X}\) is a semilattice. For every homomorphism \(h\colon\mathbf{X}\to\mathbf{A}\), the set \(P=h^{-1}(\{1\})\) is upward closed and is closed under \(\wedge^{\mathbf{X}}\). Indeed, if \(x\in P\) and \(x\leq_{\mathbf{X}}y\), then \[h(y)=1\wedge^{\mathbf{A}}h(y)=h(x)\wedge^{\mathbf{A}}h(y)=h(x\wedge^{\mathbf{ X}}y)=h(x)=1.\] Similarly, if \(x,y\in P\), then \(h(x\wedge^{\mathbf{X}}y)=h(x)\wedge^{\mathbf{A}}h(y)=1\). It follows that \(P=\emptyset\) or \(P\) is the principal filter \(p\upd=\{x\in X\mid x\geq_{\mathbf{X}}p\}\) where \(p=\bigwedge P\). Homomorphisms from \(\mathbf{X}\) to \(\mathbf{A}\) can thus be enumerated by first listing the constant \(0\) mapping, and then iterating over the elements \(p\) of \(X\), building the corresponding principal filter \(p\upd\) and checking whether the mapping that sends \(x\) to \(1\) iff \(x\in p\upd\) is a homomorphism. Observe that the proof shows that \(c_{\mathbf{A}}(n)\leq n+1\). In fact, \(c_{\mathbf{A}}(n)=n+1\), as witnessed by an \(n\)-element semilattice \(\mathbf{X}\) such that \(\leq_{\mathbf{X}}\) is a linear order. Note also that semilattices in general do not have generating set of logarithmic size since the \((n+1)\)-element semilattice \(\mathbf{X}\) with universe \(\{0,1,\dots,n\}\) such that \(i\wedge^{\mathbf{X}}j=0\) whenever \(i\neq j\) cannot be generated by \(n-1\) elements. We also remark that using known results, the general facts, and the two examples, it is now quite easy to characterize two-element algebras in \(\mathcal{K}_{\mathrm{poly}}\) (or the other three variants). Indeed, for any two-element algebra \(\mathbf{A}\), the algebra \(\mathbf{A}^{*}\) is term-equivalent to one of \(7\) specific algebras by, e.g., Lemma 4.8. in [18]. Five of them have a term-reduct which is a semilattice or a group, so they are in \(\mathcal{K}_{\mathrm{eff}}\) by Propositions 4, 5, 8 and 10. The remaining two contain only unary operations. It is a nice exercise to show that \(c_{\mathbf{A}}^{*}\) grows exponentially for every unary algebra \(\mathbf{A}\); the algebras one would construct to witness the exponential growth will likely be isomorphic, or at least similar, to those constructed in Section 3.1. ### Strongly abelian congruences It is time to define strongly abelian congruences. **Definition 11**.: Let \(\mathbf{A}\) be an algebra. 
A congruence \(\alpha\) of \(\mathbf{A}\) is _strongly abelian_ if for all \(t\in\mathrm{Clo}_{k}(\mathbf{A})\) (\(k\geq 1\)) and all \(x_{1},\dots,x_{k}\), \(y_{1},\dots,y_{k}\), and \(z_{2},\dots,z_{k}\) in \(A\) such that \((x_{i},y_{i})\in\alpha\) for all \(i\geq 1\) (and \(i\leq k\)) and \((y_{i},z_{i})\in\alpha\) for all \(i\geq 2\), we have \[t(x_{1},x_{2},\dots,x_{k})=t(y_{1},y_{2},\dots,y_{k})\ \ \text{implies}\ \ t(x_{1},z_{2},\dots,z_{k})=t(y_{1},z_{2},\dots,z_{k}).\] We make several simple observations. First, \(0_{A}\) is trivially a strongly abelian congruence of any algebra \(\mathbf{A}\). Other strongly abelian congruences are called _nontrivial_. Second, if \(\alpha\) is a strongly abelian congruence of \(\mathbf{A}\), then so is any congruence \(\beta\subseteq\alpha\). In particular, if \(\mathbf{A}\) has a nontrivial strongly abelian congruence, then it has a minimal one. Third, if \(\alpha\) is a strongly abelian congruence of \(\mathbf{A}\) and \(\mathbf{B}\) is a subalgebra of \(\mathbf{A}\), then \(\alpha\cap(B\times B)\) is a strongly abelian congruence of \(\mathbf{B}\). Fourth, a strongly abelian congruence of \(\mathbf{A}\) is also a strongly abelian congruence of \(\mathbf{A}^{*}\). An algebra is _strongly abelian_ if \(1_{A}\) is strongly abelian, equivalently, all congruences are strongly abelian. Examples of strongly abelian algebras include essentially unary algebras. The definitions are as follows. We say that an operation \(t\) on \(A\), or more generally a mapping \(t:A^{k}\to B\), _depends_ on the \(i\)th coordinate (where \(1\leq i\leq k\)) if there exists \(a_{1},\dots a_{k}\) and \(a^{\prime}\in A\) such that \(t(a_{1},\dots,a_{k})\neq t(a_{1},\dots,a_{i-1},a^{\prime},a_{i+1},\dots,a_{k})\). We call \(t\)_essentially unary_ if it depends on at most one coordinate. An algebra is _essentially unary_ if so is each of its basic (term) operation. Note that, indeed, every essentially unary algebra is strongly abelian; different examples are provided in Section 3.2. On the other hand, having a strongly abelian congruence can be seen as "being locally close to unary" by reading the contrapositive of the implication in Definition 11: if \(t\) depends on the first coordinate and this is witnessed by suitable \(x_{1}\), \(y_{1}\) and the \(z_{i}\), then \(t(x_{1},\dots)\) is "often" different from \(t(y_{1},\dots)\) (which would always be the case if \(t\) depended only on the first coordinate). Having a nontrivial strongly abelian congruence can be regarded as a rather pathological situation, e.g., no group or semilattice (nor an expansion thereof, e.g., a ring, a module, or a lattice) have this property, and the only such two-element algebras are essentially unary. In the mentioned examples, even no quotients have nontrivial strongly abelian congruences. The following example shows that latter property is strictly stronger and shows that the classes \(\mathcal{K}_{\mathrm{poly}}\), etc. are not closed under quotients. _Example 12_.: Let \(\mathbf{A}\) be the algebra on \(\{0,1,2\}\) with a single binary operation \(\cdot^{\mathbf{A}}=\cdot\) defined by the following table. \[\begin{array}{c|cccc}\cdot&0&1&2\\ \hline 0&0&0&2\\ 1&0&1&2\\ 2&1&0&2\end{array}\] Let \(\alpha\) be \(\{0,1\}^{2}\cup\{2\}^{2}\). 
One can see that no proper subalgebra of \(\mathbf{A}\) has a nontrivial strongly abelian congruence, that \(\alpha\) is the only congruence of \(\mathbf{A}\) different from \(0_{A}\) and \(1_{A}\), and that \(\mathbf{A}/\alpha\) is an essentially unary \(2\)-element algebra, so \(1_{A/\alpha}\) is a strongly abelian congruence of \(\mathbf{A}/\alpha\). However, note that \(\alpha\) is not strongly abelian since \(0\cdot 1=1\cdot 0\), but \(0\cdot 1\neq 1\cdot 1\). It follows from Corollaries 38 and 39 that \(\mathbf{A}\) is in \(\mathcal{K}_{\mathrm{eff}}\) while \(\mathbf{A}/\alpha\) is not even in \(\mathcal{K}_{\mathrm{poly}}^{s}\). On the other hand, it can be directly verified that \(\mathcal{K}_{\mathrm{poly}}\) and \(\mathcal{K}_{\mathrm{eff}}\) are closed under finite products of algebras with the same signature and under subalgebras, and that \(\mathcal{K}_{\mathrm{poly}}^{s}\) and \(\mathcal{K}_{\mathrm{eff}}^{s}\) are closed under finite products. The latter two classes are not closed under subalgebras - consider the algebra \(\mathbf{B}\) with the same universe \(B=\{0,1,2\}\) and a binary operation defined as follows. \[\begin{array}{c|cccc}\cdot&0&1&2\\ \hline 0&0&1&0\\ 1&0&1&0\\ 2&0&2&2\end{array}\] The subalgebra \(\mathbf{C}\) of \(\mathbf{B}\) with universe \(\{0,1\}\) is essentially unary, so \(c_{\mathbf{C}}^{s}\) grows exponentially. On the other hand, \(\mathbf{B}\) is simple and is not strongly abelian (e.g., \(0\cdot 2=2\cdot 0\) while \(0\cdot 2\neq 2\cdot 2\)), therefore \(\mathbf{B}\in\mathcal{K}_{\mathrm{eff}}^{s}\) by Corollary 38. The final goal of this section is to show that the existence of a nontrivial strongly abelian congruence in a finite-signature algebra \(\mathbf{A}\) (or some if its subalgebras) can be decided in polynomial time. In particular, the condition in item (2) of Theorem 1 can be efficiently checked. We employ the following consequence of Theorem 7.2 in [18]. **Theorem 13** ([18]).: _Let \(\alpha\) be a congruence of a finite algebra \(\mathbf{A}\). The following are equivalent._ 1. _The congruence_ \(\alpha\) _is strongly abelian,_ 2. _There do not exist distinct_ \(a,b\in A\) _and_ \(t\in\mathrm{Clo}_{2}(\mathbf{A}^{*})\) _such that_ \((a,b)\in\alpha\)_,_ \(t(a,b)=t(b,a)=a\)_, and_ \(t(b,b)=b\) _(cf. Example_ 12_)._ **Proposition 14**.: _There exist polynomial-time algorithms that, given a finite algebra \(\mathbf{A}\) of finite signature, decide whether_ 1. \(\mathbf{A}\) _has a nontrivial strongly abelian congruence._ 2. _some subalgebra of_ \(\mathbf{A}\) _has a nontrivial strongly abelian congruence._ Proof.: We first observe that, for every \(a,b\in A\), the following conditions are equivalent. * There exists \(t\in\mathrm{Clo}_{2}(\mathbf{A}^{*})\) such that \((a,b)\in\alpha\), \(t(a,b)=t(b,a)=a\), and \(t(b,b)=b\). * The subalgebra of \((\mathbf{A}^{*})^{3}\) generated by \(X=\{(a,b,b),(b,a,b)\}\) contains \((a,a,b)\). The reason is that the universe of a subalgebra generated by \(X\) is equal to the one-step closure of \(X\) under term operations. Therefore, \((a,a,b)\) is in the subalgebra appearing in the second item if, and only if, there exists \(t\in\mathrm{Clo}_{2}(\mathbf{B})\), where \(\mathbf{B}=(\mathbf{A}^{*})^{3}\), such that \(t^{\mathbf{B}}((a,b,b),(b,a,b))=(a,a,b)\), which is equivalent to the first item. Moreover, the second condition can be verified in polynomial time because that universe can also be obtained as a (many-step) closure under basic operations. 
By Theorem 13, we can thus check in polynomial-time whether a given congruence is strongly abelian by going over all pairs of distinct elements \((a,b)\in\alpha\). Suppose that some subalgebra \(\mathbf{B}\) of \(\mathbf{A}\) has a nontrivial strongly abelian congruence \(\alpha\). Let \((c,d)\in\alpha\) and \(c\neq d\). By the remarks following Definition 11, in the subalgebra of \(\mathbf{A}\) generated by \(\{c,d\}\) (which is contained in \(B\)), the congruence generated by \(\{(c,d)\}\) is strongly abelian. Therefore, in order to verify whether some subalgebra \(\mathbf{A}\) has a nontrivial strongly abelian congruence, it is enough to concentrate on these subalgebras and congruences. Since there are fewer than \(|A|^{2}\) such situations and generating subalgebras or congruences can be done in polynomial time, item (2) follows. Item (1) is similar: it is enough to check whether congruences generated by pairs of distinct elements are strongly abelian. ## 3. Non-membership This section is devoted to the simpler, negative part of our main result. In Section 3.1, we provide a superpolynomial lower bound on \(c_{\mathbf{A}}^{*}\) in case that \(\mathbf{A}\) has a nontrivial strongly abelian congruence. Section 3.2 shows that the obtained bound is essentially optimal. ### Superpolynomial lower bound Let \(\mathbf{A}\) be a finite algebra. If some algebra \(\mathbf{F}\) in the signature of \(\mathbf{A}\) and a subset \(X\subseteq F\) have the property that every mapping \(X\to A\) can be extended to a homomorphism \(\mathbf{F}\to\mathbf{A}\), then we get \(c_{\mathbf{A}}(|F|)\geq|A|^{|X|}\). Therefore, if we have such an \(\mathbf{F}\) for every finite \(X\) and \(|F|\) is upper bounded by a polynomial in \(|X|\), then we get a superpolynomial lower bound on \(c_{\mathbf{A}}\). Examples of algebras \(\mathbf{F}\) with the above extension property with respect to \(X\subseteq F\) include the _free algebra over \(X\) in the variety generated by \(\mathbf{A}\)_. This algebra, denote it \(\mathbf{F_{A}}(X)\), can be defined as the quotient of the algebra of all terms over \(X\) by the congruence consisting of identities satisfied in \(\mathbf{A}\). A convenient alternative description for finite \(X\) is as follows. Identify \(X\) with the set \(\{\pi_{1}^{n},\ldots,\pi_{n}^{n}\}\), where \(n=|X|\) and \(\pi_{i}^{n}:A^{n}\to A\) denotes the \(n\)-ary projection to the \(i\)th variable, i.e., \(\pi_{i}^{n}(a_{1},\ldots,a_{n})=a_{i}\). The free algebra \(\mathbf{F_{A}}(X)\) can be defined as the subalgebra of \(\mathbf{A}^{A^{n}}\) with universe \(\operatorname{Clo}_{n}(\mathbf{A})\) (recall here that the universe of \(\mathbf{A}^{A^{n}}\) is \(A^{A^{n}}\), the set of mappings \(A^{n}\to A\), so the definition makes formal sense). Corollary 4.8 in [21] characterizes those finite algebras \(\mathbf{A}\) for which \(|F_{\mathbf{A}}(X)|\) depends polynomially on \(|X|\). By the discussion above, such algebras have a superpolynomial lower bound on \(c_{\mathbf{A}}\). The following example shows that these considerations are not sufficient for our purposes. _Example 15_.: The algebra \(\mathbf{A}\) with universe \(\{0,1,2\}\) and a single binary operation defined by \[\begin{array}{c|cccc}\cdot&0&1&2\\ \hline 0&0&1&0\\ 1&0&1&0\\ 2&0&1&2\end{array}\] has a nontrivial strongly abelian congruence, namely \(\{0,1\}^{2}\cup\{2\}^{2}\) (this follows e.g. from a simple calculation of term operations), but also a semilatice quotient modulo the same congruence. 
Therefore, \(c_{\mathbf{A}}\) has a superpolynomial growth by Theorem 1, but \(|F_{\mathbf{A}}(X)|\) is not bounded by a polynomial since even the two-element semilattice quotient has exponentially many term operations. However, a natural modification of free algebras turns out to work for our purposes. **Definition 16**.: Let \(\mathbf{A}\) be an algebra and \(a,b\in A\). Let \(\mathbf{F}\) be an algebra in the same signature and \(X\subseteq F\). We say that \(\mathbf{F}\) is _\(ab\)-free for \(\mathbf{A}\) with \(ab\)-free set \(X\)_ if every mapping \(X\to\{a,b\}\) can be extended to a homomorphism \(\mathbf{F}\to\mathbf{A}\). The following proposition shows that \(ab\)-free algebras can be constructed similarly to free algebras, with the difference that we restrict all term operations to \(\{a,b\}\). (Naturally, one can similarly define \(B\)-free for any subset \(B\) of \(A\), not just \(B=\{a,b\}\), and prove a similar result.) **Proposition 17**.: _Let \(\mathbf{A}\) be an algebra and \(a,b\in A\). Let \(\mathbf{F_{A,ab}(n)}\) be the subalgebra of \(\mathbf{A}^{\{a,b\}^{n}}\) with universe_ \[F_{\mathbf{A},ab}(n)=\{t|_{\{a,b\}^{n}}\colon\{a,b\}^{n}\to A\mid t\in \operatorname{Clo}_{n}(\mathbf{A})\}.\] _The algebra \(\mathbf{F_{A,ab}(n)}\) is \(ab\)-free for \(\mathbf{A}\) with \(ab\)-free set \(X=\{\pi_{1}^{n}|_{\{a,b\}^{n}},\ldots,\pi_{n}^{n}|_{\{a,b\}^{n}}\}\)._ Proof.: First observe that \(F_{\mathbf{A},ab}(n)\) is preserved by every basic operation of \(\mathbf{A}^{\{a,b\}^{n}}\): for every \(f\) in the signature of \(\mathbf{A}\) of arity \(k\), and every \(t_{1},\ldots,t_{k}\in\operatorname{Clo}_{n}(\mathbf{A})\), we have that \[f^{\mathbf{A}^{\{a,b\}^{n}}}(t_{1}|_{\{a,b\}^{n}},\ldots,t_{k}|_{\{a,b\}^{n}} )=\left(f^{\mathbf{A}}(t_{1},\ldots,t_{k})\right)|_{\{a,b\}^{n}},\] where \[\left(f^{\mathbf{A}}(t_{1},\ldots,t_{k})\right)\left(a_{1},\ldots,a_{n}\right) =f^{\mathbf{A}}\left(t_{1}(a_{1},\ldots,a_{n}),\ldots,t_{k}(a_{1},\ldots,a_{n} )\right).\] Therefore, the definition of \(\mathbf{F_{A,ab}(n)}\) makes sense. Denote in the following \(\pi_{i}^{n}|_{\{a,b\}^{n}}\) by \(x_{i}\). Given any mapping \(h:X\to\{a,b\}\), we define \(h^{\prime}:F_{\mathbf{A},ab}(n)\to A\) by \[h^{\prime}(t|_{\{a,b\}^{n}})=t(h(x_{1}),\ldots,h(x_{n})),\] which is well-defined: if \(t|_{\{a,b\}^{n}}=s|_{\{a,b\}^{n}}\), then \(t(h(x_{1}),\ldots,h(x_{n}))=s(h(x_{1}),\ldots,h(x_{n}))\). It clearly extends \(h\), and it remains to show that \(h^{\prime}\) is a homomorphism from \(\mathbf{F}_{\mathbf{A},ab}(n)\) to \(\mathbf{A}\). Let \(f\) be a symbol of arity \(k\) in the signature of \(\mathbf{A}\), and let \(t_{1},\ldots,t_{k}\in\operatorname{Clo}_{n}(\mathbf{A})\). Then \[h^{\prime}\left(f^{\mathbf{F}_{\mathbf{A},ab}(n)}(t_{1}|_{\{a,b \}^{n}},\ldots,t_{k}|_{\{a,b\}^{n}})\right) =h^{\prime}\left(\left(f^{\mathbf{A}}(t_{1},\ldots,t_{k})\right)| _{\{a,b\}^{n}}\right)\] \[=\left(f^{\mathbf{A}}(t_{1},\ldots,t_{k})\right)(h(x_{1}),\ldots,h(x_{n}))\] \[=f^{\mathbf{A}}(t_{1}(h(x_{1}),\ldots,h(x_{n})),\ldots,t_{k}(h(x _{1}),\ldots,h(x_{n})))\] \[=f^{\mathbf{A}}(h^{\prime}(t_{1}|_{\{a,b\}^{n}}),\ldots,h^{\prime }(t_{k}|_{\{a,b\}^{n}})),\] so that \(h^{\prime}\) is indeed a homomorphism \(\mathbf{F}_{\mathbf{A},ab}(n)\to\mathbf{A}\). We now show that the \(ab\)-free algebras we just constructed have polynomial size whenever \((a,b)\) is in a strongly abelian congruence.
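Before moving to the size bound, note that the construction in Proposition 17 is directly computable; the following sketch (ours, purely illustrative, with the operations of \(\mathbf{A}\) encoded as Python functions in `ops`) represents each restricted term operation by its table of values over \(\{a,b\}^{n}\). This representation is exponential in \(n\), so it is only usable for tiny \(n\); the point of Proposition 18 below is that the number of _distinct_ restrictions is in fact polynomial.

```python
from itertools import product

def ab_free_universe(ops, a, b, n):
    """Universe of F_{A,ab}(n): restrictions of n-ary term operations
    of A to {a,b}^n, each stored as a tuple of values indexed by the
    2^n points of {a,b}^n."""
    domain = list(product((a, b), repeat=n))
    # The restricted projections generate everything.
    closed = {tuple(x[i] for x in domain) for i in range(n)}
    changed = True
    while changed:
        changed = False
        for arity, f in ops:
            for args in product(list(closed), repeat=arity):
                # Apply f pointwise over the domain {a,b}^n.
                val = tuple(f(*(t[j] for t in args))
                            for j in range(len(domain)))
                if val not in closed:
                    closed.add(val)
                    changed = True
    return closed
```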
**Proposition 18**.: _Let \(\mathbf{A}\) be an algebra with strongly abelian congruence \(\alpha\) and let \(a,b\in A\) be such that \((a,b)\in\alpha\). Then \(|F_{\mathbf{A},ab}(n)|\in O(n^{k})\) where \(k=\lfloor\log_{2}|A|\rfloor\)._ Proof.: Consider \(t\in\operatorname{Clo}_{n}(\mathbf{A})\) and its restriction \(s=t|_{\{a,b\}^{n}}\). Let \(I\) be the set of coordinates on which \(s\) depends. We show that \(|I|\leq k\). If \(1\in I\), then there exist \(z_{2},\ldots,z_{n}\in\{a,b\}\) such that \(s(a,z_{2},\ldots,z_{n})\neq s(b,z_{2},\ldots,z_{n})\). Since \(\alpha\) is strongly abelian, we must then have \(s(a,x_{2},\ldots,x_{n})\neq s(b,y_{2},\ldots,y_{n})\) for any \(x_{2},\ldots,x_{n},y_{2},\ldots,y_{n}\in\{a,b\}\). In other words, whenever two tuples \(\mathbf{x},\mathbf{y}\in\{a,b\}^{n}\) differ on the first coordinate, the results \(s(\mathbf{x})\) and \(s(\mathbf{y})\) are different as well. A similar conclusion can be derived for any coordinate \(i\in I\) by applying the property in Definition 11 to \(s\) with permuted coordinates. Applying \(s\) to any \(2^{|I|}\)-element set of arguments that are pairwise different on \(I\), we get that \(s\) attains at least \(2^{|I|}\) values. So \(2^{|I|}\leq|A|\) and, indeed, \(|I|\leq k\). Every element \(s\in F_{\mathbf{A},ab}(n)\) is thus given by the choice of an at most \(k\)-element subset of coordinates \(I\) and a mapping from \(\{a,b\}^{I}\) to \(A\). Therefore, \(|F_{\mathbf{A},ab}(n)|\) is at most \(n^{k}|A|^{2^{k}}\in O(n^{k})\). The main result of this section now follows by a simple calculation. **Corollary 19**.: _Let \(\mathbf{A}\) be a finite algebra. If \(\mathbf{A}\) has a nontrivial strongly abelian congruence, then \(c_{\mathbf{A}}^{s}(n)\in 2^{\Omega(n^{1/k})}\) where \(k=\lfloor\log_{2}|A|\rfloor\)._ Proof.: Let \(\mathbf{B}=\mathbf{A}^{*}\), let \(\alpha\) be a nontrivial strongly abelian congruence of \(\mathbf{A}\), and let \(a,b\) be distinct elements such that \((a,b)\in\alpha\). Denote \(d(m)=|F_{\mathbf{B},ab}(m)|\). By Proposition 17, \(\mathbf{F}_{\mathbf{B},ab}(m)\) is \(ab\)-free for \(\mathbf{B}\) with \(ab\)-free set of size \(m\), therefore \(c_{\mathbf{B}}(d(m))\geq 2^{m}\). Since \(\alpha\) is still a strongly abelian congruence of \(\mathbf{B}\), Proposition 18 implies that \(d(m)\leq Cm^{k}\) for some constant \(C\). Let \(n\) be a sufficiently large integer and let \(m\) be the largest integer such that \(Cm^{k}\leq n\). We have \(m\geq C^{\prime}n^{1/k}\) for a suitable positive constant \(C^{\prime}\), so \[c_{\mathbf{A}}^{s}(n)\geq c_{\mathbf{B}}^{s}(n)=c_{\mathbf{B}}(n)\geq c_{ \mathbf{B}}(Cm^{k})\geq c_{\mathbf{B}}(d(m))\geq 2^{m}\geq 2^{C^{\prime}n^{1/k}}\] and the claim follows. The proof in this section is based on a modification of the standard and useful free algebra concept. We wonder whether these "somewhat free" algebras (or other structures) naturally occur elsewhere as well. ### Matrix powers of sets We have just shown that \(c_{\mathbf{A}}^{s}(n)\geq 2^{Cn^{1/k}}\), where \(k=\lfloor\log_{2}|A|\rfloor\) and \(C\) is a positive constant, whenever \(\mathbf{A}\) has a nontrivial strongly abelian congruence. In this section we show that, for each positive integer \(k\), there is a strongly abelian algebra \(\mathbf{A}\) of size \(|A|=2^{k}\) such that \(c_{\mathbf{A}}(n)\leq 2^{n^{1/k}}\). The construction is a special case of the so-called matrix power of an algebra. The algebras we use are matrix powers of algebras with no operations -- sets.
The concept of a matrix power emerged independently in various contexts; we refer to Section 10.6 of [16] where the general construction is discussed. Fix a positive integer \(k\) and let \(S\) be the signature consisting of a unary symbol \(s\) (for \(s\)hift) and a \(k\)-ary symbol \(d\) (for \(d\)iagonal). Let \(\Sigma\) be the following set of identities, where \(s^{k}(x)\) should be read as \(s(s(\dots(s(x))\dots))\) with \(k\) occurrences of \(s\). \[d(x,\dots,x) \approx x\] \[s^{k}(x) \approx x\] \[s(d(x_{1},\dots,x_{k})) \approx d(s(x_{2}),\dots,s(x_{k}),s(x_{1}))\] \[d(d(x_{11},\dots,x_{1k}),d(x_{21},\dots,x_{2k}),\dots,d(x_{k1}, \dots,x_{kk})) \approx d(x_{11},x_{22},\dots,x_{kk})\] For a set \(Y\), we denote by \(Y^{[k]}\) the algebra with universe \(Y^{k}\) and basic operations defined as follows. \[s^{Y^{[k]}}((y_{1},y_{2},\dots,y_{k})) =(y_{2},\dots,y_{k},y_{1})\] \[d^{Y^{[k]}}((y_{1}^{1},y_{2}^{1},\dots,y_{k}^{1}),\dots,(y_{1}^{ k},\dots,y_{k}^{k})) =(y_{1}^{1},y_{2}^{2},\dots,y_{k}^{k})\] The following two facts imply that the algebras \(Y^{[k]}\) are fully axiomatized by \(\Sigma\) and give us a full understanding of homomorphisms between them. We do not attribute them to any specific set of authors for the reason above. We give brief sketches of proofs, full proofs are given in [16, Theorem 10.92, Theorem 10.98]. **Proposition 20**.: _An algebra \(\mathbf{A}\) in signature \(S\) satisfies \(\Sigma\) if, and only if, \(\mathbf{A}\) is isomorphic to \(Y^{[k]}\) for some set \(Y\)._ Proof sketch.: The backward implication amounts to checking that every \(Y^{[k]}\) satisfies \(\Sigma\), which is straightforward. Assume now that \(\mathbf{A}\) satisfies \(\Sigma\). Let \(Y=\{a\in A\mid s^{\mathbf{A}}(a)=a\}\) and define mappings \(h:Y^{k}\to A\) and \(h^{\prime}:A\to Y^{k}\) as follows (omitting the superscripts \(\mathbf{A}\)). \[h(y_{1},\dots,y_{k}) =d(y_{1},\dots,y_{k})\] \[h^{\prime}(a) =(d(a,s(a),\dots,s^{k-1}(a)),\] \[\qquad d(s^{k-1}(a),a,\dots,s^{k-2}(a)),\dots,\] \[\qquad d(s(a),\dots,s^{k-1}(a),a))\] Using the identities in \(\Sigma\) it can be verified that the mapping \(h^{\prime}\) is correctly defined, that the composition \(hh^{\prime}\) is the identity on \(A\), that \(h^{\prime}h\) is the identity on \(Y^{k}\), and that \(h\) preserves \(d\) and \(s\), so \(h\) is an isomorphism \(Y^{[k]}\to\mathbf{A}\). **Proposition 21**.: _Let \(Y,Z\) be sets. For any mapping \(h:Z\to Y\), the induced mapping \(h^{k}:Z^{k}\to Y^{k}\) is a homomorphism from \(Z^{[k]}\) to \(Y^{[k]}\). Conversely, every homomorphism from \(Z^{[k]}\) to \(Y^{[k]}\) has this form._ Proof sketch.: The first part is straightforward. For the second part, observe that the diagonal elements \(\bar{z}=(z,\dots,z)\) in \(Z^{[k]}\) are characterized by the property \(s(\bar{z})=\bar{z}\). A homomorphism \(h^{\prime}\) from \(Z^{[k]}\) to \(Y^{[k]}\) therefore maps the diagonal elements of \(Z^{[k]}\) to diagonal elements of \(Y^{[k]}\), and thus induces a mapping \(h\colon Z\to Y\). That \(h^{\prime}=h^{k}\) then follows from preservation of \(d\). The algebra promised in the beginning of the subsection is \(\mathbf{A}=\{1,2\}^{[k]}\). It essentially only remains to observe that matrix powers of sets are strongly abelian.
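As a quick sanity check (an illustrative sketch of ours, not from the text), the operations of \(Y^{[k]}\) and the identities of \(\Sigma\) can be verified by brute force for small \(k\); note that the shift-diagonal identity is checked in the form matching the left cyclic shift used in the definition of \(s^{Y^{[k]}}\).

```python
import itertools

def shift(t):
    # s on Y^[k]: (y1, ..., yk) -> (y2, ..., yk, y1)
    return t[1:] + t[:1]

def diag(*ts):
    # d on Y^[k]: the diagonal (y^1_1, y^2_2, ..., y^k_k)
    return tuple(ts[i][i] for i in range(len(ts)))

k, Y = 3, (0, 1)
cube = list(itertools.product(Y, repeat=k))
for x in cube:
    assert diag(*([x] * k)) == x          # d(x,...,x) = x
    t = x
    for _ in range(k):
        t = shift(t)
    assert t == x                         # s^k(x) = x
for xs in itertools.product(cube, repeat=k):
    # s(d(x1,...,xk)) = d(s(x2),...,s(xk),s(x1))
    assert shift(diag(*xs)) == diag(*(shift(x) for x in xs[1:] + xs[:1]))
```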
**Corollary 22**.: _For any finite \(Y\), the algebra \(\mathbf{A}=Y^{[k]}\) is strongly abelian and \(c_{\mathbf{A}}(n)\leq|Y|^{n^{1/k}}\) for every \(n\)._ Proof.: Strong abelianness follows from the definition and a description of term operations of \(\mathbf{A}\): they are exactly operations of the form \[t((x_{1}^{1},\dots,x_{k}^{1}),(x_{1}^{2},\dots,x_{k}^{2}),\dots,(x_{1}^{n}, \dots,x_{k}^{n}))=(x_{j_{1}}^{i_{1}},x_{j_{2}}^{i_{2}},\dots,x_{j_{k}}^{i_{k}})\] for some \(n\), \(i_{1},\dots,i_{k}\in\{1,\dots,n\}\), and \(j_{1},\dots,j_{k}\in\{1,\dots,k\}\). Let \(\mathbf{X}\) be any algebra in the signature \(S\) and let \(\mathbf{X}^{\prime}\) be the algebra from Proposition 6 such that \(\mathbf{X}^{\prime}\) satisfies \(\Sigma\), \(|X^{\prime}|\leq|X|\), and \(|\operatorname{Hom}(\mathbf{X},\mathbf{A})|\leq|\operatorname{Hom}(\mathbf{X}^ {\prime},\mathbf{A})|\). By Proposition 20, the algebra \(\mathbf{X}^{\prime}\) is isomorphic to \(Z^{[k]}\) for some \(Z\). Additionally applying Proposition 21, we obtain \[|\operatorname{Hom}(\mathbf{X},\mathbf{A})|\leq|\operatorname{Hom}(Z^{[k]},Y^{[ k]})|=|Y|^{|Z|}\leq|Y|^{|X|^{1/k}}\] and the claim follows. ## 4. Membership This section provides the implication from (2) to (1) in Theorem 1; more precisely, we show that \(\mathbf{A}\in\mathcal{K}_{\mathrm{eff}}^{s}\) whenever \(\mathbf{A}\) has no nontrivial strongly abelian congruence. The necessary prerequisites from tame congruence theory are reviewed in Section 4.1. The proof is given in Section 4.2. ### Tame congruence theory For an algebra \(\mathbf{A}\) and a subset \(N\subseteq A\) we denote by \(\mathbf{A}|_{N}\) the _algebra induced by \(\mathbf{A}\) on \(N\)_, that is, the algebra with universe \(N\) whose operations are those term operations of \(\mathbf{A}^{*}\) that preserve \(N\) (the signature of this algebra can be chosen arbitrarily so that the operation symbols are in bijective correspondence with the operations of \(\mathbf{A}|_{N}\)). One of the core components of the theory is the classification of so-called minimal algebras. An algebra \(\mathbf{M}\) is _minimal_ if \(|M|\geq 2\) and every unary term operation of \(\mathbf{M}^{*}\) is either a constant or a permutation. **Theorem 23** ([27]).: _Let \(\mathbf{M}\) be a finite minimal algebra. Then \(\mathbf{M}^{*}\) is term-equivalent to \(\mathbf{A}^{*}\), where \(\mathbf{A}\) is exactly one of the following:_ 1. _An algebra with only unary operations, all of which are permutations._ 2. _A vector space over a finite field._ 3. _A two-element Boolean algebra._ 4. _A two-element lattice._ 5. _A two-element semilattice._ A minimal algebra is said to have type \(i\), for \(i\in\{1,\ldots,5\}\), if item \((i)\) in Theorem 23 takes place. Note that if \(\mathbf{M}\) is a finite minimal algebra of type 2-5, then there exists an algebra \(\mathbf{A}\) such that \(\mathrm{Clo}(\mathbf{A})\subseteq\mathrm{Clo}(\mathbf{M}^{*})\) and \(\mathbf{A}\) is either a group or a 2-element semilattice. Consider now an arbitrary finite algebra \(\mathbf{A}\), not necessarily minimal. A pair \((\gamma,\delta)\) of congruences of \(\mathbf{A}\) is called a _cover_ if \(\gamma\subsetneq\delta\) and no congruence \(\theta\) of \(\mathbf{A}\) lies strictly between \(\gamma\) and \(\delta\). By using a construction explained below, for each cover \((\gamma,\delta)\) one obtains a minimal algebra that belongs to one of the five classes in the previous theorem and can therefore be given a type \(i\in\{1,\ldots,5\}\).
We then assign this type to the pair \((\gamma,\delta)\), which we write \(\mathrm{typ}_{\mathbf{A}}(\gamma,\delta)=i\) or just \(\mathrm{typ}(\gamma,\delta)\) if \(\mathbf{A}\) is clear from the context. The construction goes as follows. Let \(U\) be a minimal set with the property that there exists a unary term operation \(p\in\mathrm{Clo}_{1}(\mathbf{A}^{*})\) such that \(p(A)=U\) and \(p(\delta)\not\subseteq\gamma\); such a set is called a _\((\gamma,\delta)\)-minimal set_ of \(\mathbf{A}\). Then, for every \(a\in U\) such that \(a/\delta\cap U\not\subseteq a/\gamma\), the induced algebra \(\mathbf{A}|_{N}\) on the set \(N=(a/\delta\cap U)\) is such that \((\mathbf{A}|_{N})/\gamma\) is a minimal algebra [18, Lemma 2.16 (3)] that is of one of the 5 types given in Theorem 23. Such a set \(N\) is called a _\((\gamma,\delta)\)-trace_. All the \((\gamma,\delta)\)-traces have the same type independently of the chosen \(U\) and \(N\) [18, Theorem 4.23], and therefore \(\mathrm{typ}(\gamma,\delta)\) is well-defined. Corollary 5.3 in [18] shows that the types behave well with respect to quotients. **Theorem 24** ([18]).: _Let \(\mathbf{A}\) be a finite algebra and \(\beta\subseteq\gamma\subseteq\delta\) be congruences such that \((\gamma,\delta)\) is a cover. Then_ \[\mathrm{typ}_{\mathbf{A}}(\gamma,\delta)=\mathrm{typ}_{\mathbf{A}/\beta}( \gamma/\beta,\delta/\beta).\] Chapter 9 of [18] provides several results characterizing omitting types in all finite algebras in a variety by means of identities. We will only need a very special case of [18, Theorem 9.6]. Two concepts are required to formulate it: an operation \(t:A^{n}\to A\) is _idempotent_ if \(t(a,a,\ldots,a)=a\) for every \(a\in A\); a ternary operation \(t:A^{3}\to A\) is _Mal'cev_ if \(t(b,a,a)=b=t(a,a,b)\) for every \(a,b\in A\). **Theorem 25** ([18]).: _Let \(\mathbf{A}\) be a finite algebra that has a ternary Mal'cev term operation or a binary commutative idempotent term operation. Then \(\mathrm{typ}(\gamma,\delta)\neq 1\) for every cover of congruences \((\gamma,\delta)\) in \(\mathbf{A}\)._ We now concentrate only on the case \((\gamma,\delta)=(0_{A},\alpha)\) for a minimal congruence \(\alpha\). Note that traces in a \((0_{A},\alpha)\)-minimal set \(U\) are exactly those intersections of \(\alpha\)-equivalence classes with \(U\) that have at least 2 elements. The union of traces, denote it \(B\), is referred to as the _body_ of \(U\) and the complement \(T=U\setminus B\) is the _tail_ of \(U\). Theorem 2.8 in [18] shows various density and separation properties of minimal sets. Two of them (items (2) and (4) in that theorem) are crucial for us. **Theorem 26** ([18]).: _Let \(\mathbf{A}\) be a finite algebra, \(\alpha\) be a minimal congruence, and \(U\) be a \((0_{A},\alpha)\)-minimal set._ 1. _There exists_ \(e\in\mathrm{Clo}_{1}(\mathbf{A}^{*})\) _such that_ \(e|_{U}=\mathrm{id}_{U}\) _and_ \(e(A)=U\)_._ 2. _For every_ \((a,b)\in\alpha\) _with_ \(a\neq b\)_, there exists_ \(e\in\operatorname{Clo}_{1}(\mathbf{A}^{*})\) _such that_ \(e(a)\neq e(b)\) _and_ \(e(A)=U\)_._ The following direct relation of types to strong abelianness is a consequence of Theorem 5.6 in [18]. **Theorem 27** ([18]).: _A minimal congruence \(\alpha\) of a finite algebra \(\mathbf{A}\) is strongly abelian if, and only if, \(\operatorname{typ}(0_{A},\alpha)=1\)._ Therefore, we are interested in algebras such that \(\operatorname{typ}(0_{A},\alpha)\neq 1\) for every minimal congruence \(\alpha\).
Strong structure theorems are available for \((0_{A},\alpha)\)-minimal sets in this case. We will only need several pieces of information that follow from [18, 23] and some additional work. **Theorem 28**.: _Let \(\mathbf{A}\) be a finite algebra, let \(\alpha\) be a minimal congruence such that \(\operatorname{typ}(0_{A},\alpha)\neq 1\), and let \(U\) be a \((0_{A},\alpha)\)-minimal set with body \(B\) and tail \(T\). Then_ 1. \(\mathbf{A}|_{B}\) _has a Mal'cev term operation or a binary commutative idempotent term operation, and_ 2. \(\mathbf{A}^{*}\) _has a binary term operation_ \(p\) _such that_ \[p(B,B)\subseteq B,\ p(B,T)\subseteq T,\ p(T,B)\subseteq T,\text{ and }p(T,T)\subseteq T.\] Proof.: If \(\operatorname{typ}(0_{A},\alpha)\in\{3,4,5\}\), then Lemmas 4.15 and 4.17 in [18] (or Lemma 3.2 in [23]) imply that \(B\) is a two-element set, say \(B=\{0,1\}\), and there exists \(p\in\operatorname{Clo}_{2}(\mathbf{A}|_{U})\) such that * \(p(u,1)=p(1,u)=p(u,u)=u\) for every \(u\in U\), * \(p(u,0)=p(0,u)=u\) for every \(u\in U\setminus\{1\}\), and * \(p(u,p(u,v))=p(u,v)\) for every \(u,v\in U\). It follows that the restriction of \(p\) to \(B\) is commutative and idempotent (indeed, it is a semilattice operation on the two-element set \(B\)) and \(p\) satisfies the first three inclusions in the second item. It also satisfies the last one, since if \(p(u,v)\not\in T\) for some \(u,v\in T\), then \(p(u,v)\in B\) and we get \(p(u,v)=p(u,p(u,v))\in p(T,B)\subseteq T\), a contradiction. Assume now that \(\operatorname{typ}(0_{A},\alpha)=2\). By Lemma 4.20 in [18] (or Lemma 3.5 in [23]) and Lemma 3.6 in [23], there exists a ternary operation \(d\in\operatorname{Clo}_{3}(\mathbf{A}|_{U})\) such that * \(d(u,u,u)=u\) for every \(u\in U\), * \(d(b,b,u)=u=d(u,b,b)\) for every \(b\in B\), \(u\in U\), * \(d(t,t,b)\in T\) for every \(t\in T\), \(b\in B\), and * \(B\) is closed under \(d\). The restriction of \(d\) to \(B\) is thus a Mal'cev operation in \(\operatorname{Clo}(\mathbf{A}|_{B})\). In order to find \(p\) satisfying the second item, we start with \(p_{0}\) defined by \[p_{0}(u,v)=d(u,u,v)\] and inductively define \[p_{i+1}(u,v)=p_{0}(p_{i}(u,v),v).\] Observe that \(p_{0}(U,U)\subseteq U\), \(p_{0}(B,B)\subseteq B\), \(p_{0}(T,B)\subseteq T\) and, by induction, these inclusions hold for any \(p_{i}\). It is thus enough to find \(i\) such that \(p_{i}(U,T)\subseteq T\). Fix \(u\in U\) and \(t\in T\) and consider the sequence \[p_{0}(u,t),\ p_{1}(u,t),\ p_{2}(u,t),\ldots\] If some member of this sequence, say \(b=p_{j}(u,t)\), is not in \(T\), then it is in \(B\), thus \(p_{j+1}(u,t)=p_{0}(b,t)=d(b,b,t)=t\) and then, for any \(k>j\), \(p_{k+1}(u,t)=p_{0}(t,t)=t\). It follows that all but finitely many members of the sequence belong to \(T\). For a sufficiently large \(i\) we thus have \(p_{i}(u,t)\in T\) for every \(u\in U\), \(t\in T\), as required. ### Polynomial upper bound We prove here that \(\mathbf{A}\) is in \(\mathcal{K}_{\mathrm{eff}}^{\mathrm{s}}\) whenever \(\mathbf{A}\) has no nontrivial strongly abelian congruence. The proof is done by studying "tractable pieces" of \(\mathbf{A}\). The exact definition is somewhat convoluted; the following example is intended to provide some intuition behind the concept and the proof. _Example 29_.: Let \(\mathbf{B}\) be the algebra with universe \(B=\{0,1,2\}\) with a single binary operation defined as follows.
\begin{tabular}{c|c c c} \(\cdot^{\mathbf{B}}\) & 0 & 1 & 2 \\ \hline 0 & 0 & 0 & 2 \\ 1 & 0 & 1 & 1 \\ 2 & 2 & 1 & 2 \\ \end{tabular} This algebra is sometimes called the _rock-paper-scissors_ algebra, since it outputs the winner in that game (where, e.g., rock=0, scissors=1, paper=2). Let \(\mathbf{A}=\mathbf{B}^{*}\). We sketch the reason why \(\mathbf{A}\in\mathcal{K}_{\mathrm{eff}}\). Consider a homomorphism \(h:\mathbf{X}\to\mathbf{A}\). Let \(e\) be the term \(e=x_{1}\cdot 1\) and note that \(e^{\mathbf{A}}(0)=0\) and \(e^{\mathbf{A}}(1)=e^{\mathbf{A}}(2)=1\). Since \(h\) preserves all term operations, we in particular have \(h(e^{\mathbf{X}}(x))=e^{\mathbf{A}}(h(x))\) for every \(x\in X\). This has two consequences. First, it follows that \(h\) maps \(Y=e^{\mathbf{X}}(X)\) into \(e^{\mathbf{A}}(A)=\{0,1\}\). Since \(\{0,1\}\) together with the appropriate restriction of \(\cdot^{\mathbf{A}}\) is a semilattice, and homomorphisms to semilattices can be efficiently enumerated by Proposition 10, it can be deduced that we can efficiently enumerate a set of functions \(Y\to\{0,1\}\) that contains all the restrictions of homomorphisms \(\mathbf{X}\to\mathbf{A}\) to the set \(Y\). A more general version of this fact is item (2) in Lemma 31. The second consequence of \(e^{\mathbf{A}}(h(x))=h(e^{\mathbf{X}}(x))\) is that \(e^{\mathbf{A}}\circ h\) is determined by \(h|_{Y}\), i.e., \(h|_{Y}\) determines \(h\) "modulo" the kernel of \(e^{\mathbf{A}}\), which is the equivalence \(\alpha=\{1,2\}^{2}\cup\{0\}^{2}\). We can thus efficiently compute candidate homomorphisms modulo \(\alpha\). A generalization of this fact is item (4) in Lemma 32. The proof now can be finished by similarly computing candidate homomorphisms modulo the equivalence \(\beta=\{0,1\}^{2}\cup\{2\}^{2}\), combining the information modulo \(\alpha\) and \(\beta\) to get a list of candidate homomorphisms modulo \(\alpha\cap\beta=0_{A}\) (cf. item (2) in Lemma 32), and removing non-homomorphisms from the list (cf. item (1) in Lemma 31). Another option to finish the proof is to use only the candidates modulo \(\alpha\) and, for each such candidate \(g\), to create a list containing all possible \(f\) corresponding to \(g\), cf. item (3) in Lemma 32. A _piece_ of \(\mathbf{A}\) is a pair \([P,\mu]\), where \(P\subseteq A\) and \(\mu\) is an equivalence relation on \(P\). **Definition 30**.: A piece \([P,\mu]\) of \(\mathbf{A}\) is _tractable_ if there exists a finite-signature reduct \(\mathbf{B}\) of \(\mathbf{A}\) and a polynomial-time algorithm that, given an input algebra \(\mathbf{X}\) in the signature of \(\mathbf{B}\) and subset \(Y\subseteq X\), outputs a set of mappings \(Y\to P/\mu\) containing the following set (where \(q_{\mu}:P\to P/\mu\) denotes the quotient mapping). \[\{q_{\mu}\circ h|_{Y}\mid h\colon\mathbf{X}\to\mathbf{B}\text{ is a homomorphism},\ h(Y)\subseteq P\}\] We say that such a \(\mathbf{B}\) _witnesses_ the tractability of \([P,\mu]\). We set \[\mathcal{Z}(\mathbf{A})=\{[P,\mu]\mid[P,\mu]\text{ is a tractable piece of }\mathbf{A}\}.\] We also extend the definition of induced algebras from subsets to pieces: for a piece \([P,\mu]\) of \(\mathbf{A}\), we define \(\mathbf{A}|_{[P,\mu]}\) to be the algebra with universe \(P\) and whose operations are all the operations of the form \(f|_{P}\), where \(f\in\mathrm{Clo}(\mathbf{A}^{*})\) preserves \(P\) and \(f|_{P}\) preserves \(\mu\). In the following, recall that \(0_{P}\) is used to denote the equality relation on \(P\).
We sometimes disregard the formal difference between \(P\) and \(P/0_{P}\). The relation of tractable pieces to \(\mathcal{K}_{\mathrm{eff}}\) is as follows. **Lemma 31**.: _Let \(\mathbf{A}\) be a finite algebra._ 1. _If_ \([A,0_{A}]\in\mathcal{Z}(\mathbf{A})\)_, then_ \(\mathbf{A}\in\mathcal{K}_{\mathrm{eff}}\)_._ 2. _If_ \((\mathbf{A}|_{[P,\mu]})/\mu\in\mathcal{K}_{\mathrm{eff}}\)_, then_ \([P,\mu]\in\mathcal{Z}(\mathbf{A})\)_._ Proof.: (1) Let \(\mathbf{B}\) be a finite-signature reduct of \(\mathbf{A}\) witnessing that \([A,0_{A}]\in\mathcal{Z}(\mathbf{A})\). Therefore, there is a polynomial-time algorithm that, given an input algebra \(\mathbf{X}\) in the signature of \(\mathbf{B}\), enumerates a set of maps that contains all the homomorphisms \(\mathbf{X}\to\mathbf{B}\). In order to obtain an algorithm witnessing that \(\mathbf{A}\in\mathcal{K}_{\mathrm{eff}}\), it suffices to filter out from this set the maps that are not homomorphisms. (2) If \((\mathbf{A}|_{[P,\mu]})/\mu\in\mathcal{K}_{\mathrm{eff}}\), then there exists a finite-signature reduct \(\mathbf{B}\) of \(\mathbf{A}\) having \(\mathbf{P}\) as a subalgebra, itself having \(\mu\) as a congruence, and such that \(\mathbf{P}/\mu\) is in \(\mathcal{K}_{\mathrm{eff}}\). Consider a homomorphism \(h\) from an algebra \(\mathbf{X}\) to \(\mathbf{B}\) such that \(h(Y)\subseteq P\), and let \(\mathbf{Z}\) be the subalgebra of \(\mathbf{X}\) generated by \(Y\). Since the preimage of a subuniverse under a homomorphism is a subuniverse, the restriction \(h|_{Z}\) maps \(Z\) into \(P\), so \(h|_{Z}\) is a homomorphism \(\mathbf{Z}\to\mathbf{P}\) and \(q_{\mu}\circ h|_{Z}\) a homomorphism \(\mathbf{Z}\to\mathbf{P}/\mu\). Therefore, in order to obtain an algorithm witnessing that \([P,\mu]\in\mathcal{Z}(\mathbf{A})\), it suffices to do the following on an input \(\mathbf{X}\), \(Y\subseteq X\): compute the subalgebra \(\mathbf{Z}\) generated by \(Y\) in \(\mathbf{X}\), enumerate all the homomorphisms \(\mathbf{Z}\to\mathbf{P}/\mu\), which is possible since \(\mathbf{P}/\mu\in\mathcal{K}_{\mathrm{eff}}\), and restrict the obtained mappings to the set \(Y\). The next lemma will allow us to produce new tractable pieces from those already known. **Lemma 32**.: _Let_ \(\mathbf{A}\) _be a finite algebra._ 1. _If_ \([P,\mu]\in\mathcal{Z}(\mathbf{A})\) _and_ \(Q\subseteq P\)_, then_ \([Q,\mu\cap(Q\times Q)]\in\mathcal{Z}(\mathbf{A})\)_._ 2. _If_ \([P,\mu],[P,\nu]\in\mathcal{Z}(\mathbf{A})\)_, then_ \([P,\mu\cap\nu]\in\mathcal{Z}(\mathbf{A})\)_._ 3. _If_ \([P,\mu]\in\mathcal{Z}(\mathbf{A})\) _and_ \([Q,0_{Q}]\in\mathcal{Z}(\mathbf{A})\)_, where_ \(Q\) _is a_ \(\mu\)_-equivalence class, then_ \([P,\mu\cap\nu]\in\mathcal{Z}(\mathbf{A})\)_, where_ \(\nu=0_{Q}\cup(P\setminus Q)^{2}\)_._ 4. _If_ \(P,Q\subseteq A\)_,_ \(e\in\operatorname{Clo}_{1}(\mathbf{A})\) _is such that_ \(e(P)\subseteq Q\)_, and_ \([Q,0_{Q}]\in\mathcal{Z}(\mathbf{A})\)_, then_ \([P,\ker e|_{P}]\in\mathcal{Z}(\mathbf{A})\)_._ Proof.: Let \(\mathbf{B}\) be a finite-signature reduct of \(\mathbf{A}\) witnessing the tractability of every piece in \(\mathcal{Z}(\mathbf{A})\); such a \(\mathbf{B}\) exists by taking an algebra having all the operations from finite-signature reducts witnessing the membership of the finitely many pieces \([P,\mu]\in\mathcal{Z}(\mathbf{A})\). (1) Let \(\mathbf{X}\) be an algebra in the signature of \(\mathbf{B}\) and \(Y\subseteq X\). Let \(\nu\) be \(\mu\cap(Q\times Q)\).
In order to enumerate a set containing all the maps of the form \(q_{\nu}\circ h|_{Y}\), where \(h\) is a homomorphism \(\mathbf{X}\to\mathbf{B}\) such that \(h(Y)\subseteq Q\), it suffices to enumerate a set containing all the maps \(q_{\mu}\circ h|_{Y}\) with \(h(Y)\subseteq P\), discard those that do not satisfy \(h(Y)\subseteq Q\), and restrict the remaining to the set \(Q\). (2) Let \(\mathbf{X}\) be an algebra in the signature of \(\mathbf{B}\) and \(Y\subseteq X\). Let \(S_{\mu}\) be the set of maps enumerated by an algorithm witnessing that \([P,\mu]\in\mathcal{Z}(\mathbf{A})\), and similarly for \(S_{\nu}\). Given \(f\in S_{\mu}\) and \(g\in S_{\nu}\), one can check whether for every \(y\in Y\), there exists a \((\mu\cap\nu)\)-equivalence class \(C_{y}\) contained in both \(f(y)\) and \(g(y)\). If it is the case, then we add the mapping \(y\mapsto C_{y}\) to the enumeration. In particular, for every \(h\colon\mathbf{X}\to\mathbf{B}\) such that \(h(Y)\subseteq P\), the maps \(q_{\mu}\circ h|_{Y}\) and \(q_{\nu}\circ h|_{Y}\) are in \(S_{\mu}\) and \(S_{\nu}\), respectively, and they satisfy the property above, so that \(q_{\mu\cap\nu}\circ h|_{Y}\) is enumerated by the algorithm. Since there is a polynomial number of maps in \(S_{\mu}\) and \(S_{\nu}\), this algorithm runs in polynomial time. (3) Let \(\mathbf{X}\) be an algebra in the signature of \(\mathbf{B}\) and \(Y\subseteq X\). We describe an algorithm witnessing that \([P,\mu\cap\nu]\in\mathcal{Z}(\mathbf{A})\). Let \(S\) be the set of mappings enumerated by an algorithm witnessing the fact that \([P,\mu]\in\mathcal{Z}(\mathbf{A})\), when applied to the input \(\mathbf{X}\) and \(Y\). Consider a mapping \(g\in S\). Let \(Y^{\prime}=g^{-1}(\{Q\})\) and let \(S_{g}\) be the set of mappings enumerated by an algorithm witnessing the fact that \([Q,0_{Q}]\in\mathcal{Z}(\mathbf{A})\), when applied on the input \(\mathbf{X}\) and \(Y^{\prime}\). For each \(f\in S_{g}\), add to the enumeration the mapping \(\tilde{f}\colon Y\to P/(\mu\cap\nu)\) defined by \(\tilde{f}(y)=\{f(y)\}\) if \(y\in Y^{\prime}\), and \(\tilde{f}(y)=g(y)\) if \(y\notin Y^{\prime}\). It remains to check that every \(q_{\mu\cap\nu}\circ h|_{Y}\) is enumerated by this algorithm, where \(h\colon\mathbf{X}\to\mathbf{B}\) is a homomorphism such that \(h(Y)\subseteq P\). We have that \(g=q_{\mu}\circ h|_{Y}\) is in \(S\). Let \(Y^{\prime}=g^{-1}(\{Q\})\) and note that we also have that \(f=h|_{Y^{\prime}}\) is in \(S_{g}\). Clearly, \(q_{\mu\cap\nu}\circ h|_{Y}\) is equal to \(\tilde{f}\) and is thus enumerated by the algorithm. (4) Let \(\mathbf{X}\) be an algebra in the signature of \(\mathbf{B}\) and \(Y\subseteq X\). Let \(S\) be the set of mappings enumerated by an algorithm witnessing the fact that \([Q,0_{Q}]\in\mathcal{Z}(\mathbf{A})\), when applied on the input \(\mathbf{X}\) and \(Z=e^{\mathbf{X}}(Y)\). For every \(g\in S\), add to the enumeration the mapping \(\tilde{g}\colon Y\to P/\ker e|_{P}\) that sends \(y\) to the \(\ker e|_{P}\)-equivalence class \((e^{\mathbf{A}}|_{P})^{-1}(g(e^{\mathbf{X}}(y)))\). We show that every mapping \(q_{\ker e|_{P}}\circ h|_{Y}\) is enumerated, where \(h\colon\mathbf{X}\to\mathbf{B}\) is such that \(h(Y)\subseteq P\). Note that since \(h\) is a homomorphism, we have \(h(e^{\mathbf{X}}(y))=e^{\mathbf{A}}(h(y))\), for every \(y\in Y\). Therefore, \(h(Z)\subseteq e^{\mathbf{A}}(P)\subseteq Q\). In particular, \(g=h|_{Z}\) is in \(S\).
It remains to observe that \(q_{\ker e|_{P}}\circ h|_{Y}\) is the same as \(\tilde{g}\), and therefore it is enumerated by the algorithm. At this point, Definition 30 can be forgotten; we will only work with Lemmas 31 and 32. The proof is finished in two steps. The following corollary of Lemma 32 is useful for both of them. **Corollary 33**.: _Let \(\mathbf{A}\) be a finite algebra and \([P,\mu]\) a piece of \(\mathbf{A}\)._ 1. _If for every_ \(a,b\in P\) _with_ \(a\neq b\) _there is a unary term operation_ \(e^{ab}\in\operatorname{Clo}_{1}(\mathbf{A})\) _such that_ \(e^{ab}(a)\neq e^{ab}(b)\) _and_ \([e^{ab}(P),0_{e^{ab}(P)}]\in\mathcal{Z}(\mathbf{A})\)_, then_ \([P,0_{P}]\in\mathcal{Z}(\mathbf{A})\)_._ 2. _If_ \([P,\mu]\in\mathcal{Z}(\mathbf{A})\) _and_ \([Q,0_{Q}]\in\mathcal{Z}(\mathbf{A})\) _for every equivalence class_ \(Q\) _of_ \(\mu\)_, then_ \([P,0_{P}]\in\mathcal{Z}(\mathbf{A})\)_._ Proof.: We first prove (1). By item (4) of Lemma 32, we have that \([P,\ker e^{ab}|_{P}]\in\mathcal{Z}(\mathbf{A})\) for every \(a,b\in P\) with \(a\neq b\). Since the equivalence relation \(\bigcap_{a\neq b}\ker e^{ab}\) is \(0_{P}\), we get by item (2) of Lemma 32 that \([P,0_{P}]\in\mathcal{Z}(\mathbf{A})\). Concerning (2), it suffices to apply item (2) of Lemma 32 to all the pieces \([P,\mu\cap(0_{Q}\cup(P\setminus Q)^{2})]\), where \(Q\) is an equivalence class of \(\mu\). The fact that all these pieces are in \(\mathcal{Z}(\mathbf{A})\) comes from item (3) of Lemma 32. The first step is to prove that \(\mathbf{A}\in\mathcal{K}_{\text{eff}}^{s}\) whenever \(\mathbf{A}\) has a chain of congruences of type different than \(1\). A special case stated in the following lemma was already essentially done in Section 2.4. **Lemma 34**.: _Every finite minimal algebra that is not of type \(1\) is in \(\mathcal{K}_{\text{eff}}^{s}\)._ Proof.: Let \(\mathbf{M}\) be a finite minimal algebra whose type is not \(1\). By Theorem 23, there exists an algebra \(\mathbf{A}\) such that \(\operatorname{Clo}(\mathbf{A})\subseteq\operatorname{Clo}(\mathbf{M}^{*})\) and such that \(\mathbf{A}\) is either a two-element semilattice or a group. Thus, by Propositions 8 and 10, \(\mathbf{A}\) is in \(\mathcal{K}_{\text{eff}}^{s}\). By Proposition 4, \(\mathbf{M}^{*}\) is also in \(\mathcal{K}_{\text{eff}}^{s}\). Finally, by Proposition 5, \(\mathbf{M}\) is in \(\mathcal{K}_{\text{eff}}^{s}\). **Theorem 35**.: _Let \(\mathbf{A}\) be a finite algebra with a sequence \(\alpha_{0},\ldots,\alpha_{n}\) of congruences such that_ * \(\alpha_{0}=0_{A}\)_,_ \(\alpha_{n}=1_{A}\)_, and_ * _for all_ \(i<n\)_,_ \((\alpha_{i},\alpha_{i+1})\) _is a cover and_ \(\operatorname{typ}(\alpha_{i},\alpha_{i+1})\neq 1\)_._ _Then \(\mathbf{A}\) is in \(\mathcal{K}^{s}_{\mathrm{eff}}\)._ Proof.: We prove that \(\mathbf{A}^{*}\) is in \(\mathcal{K}_{\mathrm{eff}}\), which is enough by Proposition 5 and since every homomorphism to \(\mathbf{A}^{*}\) must be surjective. Recall that every congruence of \(\mathbf{A}\) is also a congruence of \(\mathbf{A}^{*}\). The proof is by induction on \(n\). The case \(n=0\) is clear, since then \(|A|=1\). Suppose \(n>0\). The quotient algebra \(\mathbf{A}^{*}/\alpha_{1}\) has the sequence of congruences \(0_{A/\alpha_{1}}=\alpha_{1}/\alpha_{1}\subsetneq\alpha_{2}/\alpha_{1}\subsetneq \cdots\subsetneq\alpha_{n}/\alpha_{1}=1_{A/\alpha_{1}}\). 
By Theorem 24, \(\operatorname{typ}_{\mathbf{A}^{*}/\alpha_{1}}(\alpha_{i}/\alpha_{1}, \alpha_{i+1}/\alpha_{1})=\operatorname{typ}_{\mathbf{A}^{*}}(\alpha_{i}, \alpha_{i+1})\), therefore the induction hypothesis gives us \(\mathbf{A}^{*}/\alpha_{1}\in\mathcal{K}_{\mathrm{eff}}\). It then follows from item (2) in Lemma 31 that \([A,\alpha_{1}]\in\mathcal{Z}(\mathbf{A}^{*})\). By item (2) in Corollary 33 and item (1) in Lemma 31, it is now enough to verify that \([P,0_{P}]\) is in \(\mathcal{Z}(\mathbf{A}^{*})\) for every equivalence class \(P\) of \(\alpha_{1}\). Let \(P\) be such an equivalence class, let \(U\) be a \((0_{A},\alpha_{1})\)-minimal set, and consider any distinct \(a,b\in P\). By item (2) in Theorem 26, there exists \(e^{ab}\in\operatorname{Clo}_{1}(\mathbf{A}^{*})\) such that \(e^{ab}(a)\neq e^{ab}(b)\) and \(e^{ab}(A)=U\). The image \(e^{ab}(P)\) is contained in \(U\) and in an equivalence class of \(\alpha_{1}\) (as \(e^{ab}\) preserves \(\alpha_{1}\)). Since \(e^{ab}(P)\) also contains two distinct elements, namely \(e^{ab}(a)\) and \(e^{ab}(b)\), it is contained in a trace \(N\). Since \(\mathbf{A}^{*}|_{N}\) is a minimal algebra of type different than \(1\), by Lemma 34, \(\mathbf{A}^{*}|_{N}=\mathbf{A}^{*}|_{[N,0_{N}]}\in\mathcal{K}_{\mathrm{eff}}\), which in turn implies \([N,0_{N}]\in\mathcal{Z}(\mathbf{A}^{*})\) by item (2) of Lemma 31, and then \([e^{ab}(P),0_{e^{ab}(P)}]\in\mathcal{Z}(\mathbf{A}^{*})\) by item (1) of Lemma 32. An application of item (1) in Corollary 33 gives that \([P,0_{P}]\) is in \(\mathcal{Z}(\mathbf{A}^{*})\), which finishes the proof of the theorem. The theorem already covers a large class of algebras, e.g., all so-called Taylor algebras (see, e.g., [4] for a discussion about this class and another proof of the fact that finite Taylor algebras are in \(\mathcal{K}^{s}_{\mathrm{eff}}\)). However, not all algebras without strongly abelian minimal congruences are covered, e.g., the algebra in Example 12. For the second step, we start almost from scratch: we only use Theorem 35 to derive the following lemma. **Lemma 36**.: _Let \(\alpha\) be a minimal congruence of \(\mathbf{A}\), \(U\) be a \((0_{A},\alpha)\)-minimal set, and suppose that \(\operatorname{typ}(0_{A},\alpha)\neq 1\). Denote by \(B\) the body of \(U\) and by \(T\) the tail (so \(U\) is a disjoint union of \(B\) and \(T\)). Then \([U,\mu]\in\mathcal{Z}(\mathbf{A}^{*})\) where \(\mu=0_{B}\cup T^{2}\)._ Proof.: Let \(\nu\) be the equivalence on \(U\) whose classes are \(B\) and \(T\). By item (1) in Theorem 28, \(\mathbf{A}^{*}|_{[B,0_{B}]}=\mathbf{A}^{*}|_{B}\) has a Mal'cev term operation or a binary commutative idempotent term operation. Therefore, \(\mathbf{A}^{*}|_{B}\) has no congruence covers of type \(1\) by Theorem 25 and is then in \(\mathcal{K}^{s}_{\mathrm{eff}}\) by Theorem 35 applied to any sequence of covers from \(0_{B}\) to \(1_{B}\). This implies that \([B,0_{B}]\in\mathcal{Z}(\mathbf{A}^{*})\) by item (2) in Lemma 31. We also know by item (2) in Theorem 28 that \(\mathbf{A}^{*}|_{[U,\nu]}/\nu\) contains a binary commutative idempotent operation, so \([U,\nu]\in\mathcal{Z}(\mathbf{A}^{*})\) by the same argument. Now item (3) in Lemma 32 gives us the conclusion. We are now ready to prove the general membership result. **Theorem 37**.: _Let \(\mathbf{A}\) be a finite algebra without nontrivial strongly abelian congruences. Then \(\mathbf{A}\in\mathcal{K}^{s}_{\mathrm{eff}}\)._ Proof.: We will prove the following two claims by induction on \(n\).
* \((I_{n})\) For every minimal congruence \(\alpha\) of \(\mathbf{A}\), for every \((0_{A},\alpha)\)-minimal set \(U\) with tail \(T\), and every \(P\subseteq U\) such that \(|P\cap T|\leq n\), we have \([P,0_{P}]\in\mathcal{Z}(\mathbf{A}^{*})\). * \((II_{n})\) For every \(P\subseteq A\) with \(|P|\leq n\), we have \([P,0_{P}]\in\mathcal{Z}(\mathbf{A}^{*})\). Note that \((II_{|A|})\) implies that \(\mathbf{A}^{*}\in\mathcal{K}_{\mathrm{eff}}\) by Lemma 31. Then \(\mathbf{A}^{*}\in\mathcal{K}^{s}_{\mathrm{eff}}\) since every homomorphism to \(\mathbf{A}^{*}\) is surjective, which implies that \(\mathbf{A}\in\mathcal{K}^{s}_{\mathrm{eff}}\) by Proposition 5. The induction starts by observing that \((I_{1})\) holds by Theorem 27, Lemma 36, and item (1) of Lemma 32. We now prove that if \((I_{n-1})\) holds, then \((II_{n})\) holds. Consider any \(P\subseteq A\) with \(|P|\leq n\). We claim that for every \(a\neq b\) in \(P\), there exists \(f^{ab}\in\operatorname{Clo}_{1}(\mathbf{A}^{*})\) such that \([f^{ab}(P),0_{f^{ab}(P)}]\in\mathcal{Z}(\mathbf{A}^{*})\) and \(f^{ab}(a)\neq f^{ab}(b)\). From (1) in Corollary 33 it then follows that \([P,0_{P}]\in\mathcal{Z}(\mathbf{A}^{*})\), as required. So, let \(a,b\in P\) be distinct. Let \(\alpha\) be a minimal congruence contained in the congruence generated by \((a,b)\), let \(U\) be a \((0_{A},\alpha)\)-minimal set, let \(c,d\) be distinct elements in a trace of \(U\), and let \(e\in\operatorname{Clo}_{1}(\mathbf{A}^{*})\) be such that \(e|_{U}=\operatorname{id}_{U}\) and \(e(A)=U\) (given by Theorem 26). By the choice of \(\alpha\), \(c\), and \(d\), the pair \((c,d)\) is in the congruence generated by \(\{(a,b)\}\). Therefore, because of the way congruences are generated, there exists a sequence of elements \[c=c_{0},\ c_{1},\ \ldots,\ c_{k}=d,\quad c_{i}\in A\] and a sequence of term operations \[f_{1},\ f_{2},\ \dots,\ f_{k},\ \ \ f_{i}\in\operatorname{Clo}_{1}(\mathbf{A}^{*}),\ f_{i}(\{a,b\})=\{c_{i-1},c_{i}\}.\] By replacing \(f_{i}\) with \(e\circ f_{i}\) and \(c_{i}\) with \(e(c_{i})\) we can further assume that \(f_{i}(P)\subseteq U\) and all the \(c_{i}\) are in \(U\) (note that \(c_{0}\) and \(c_{k}\) do not change since \(e|_{U}\) is the identity on \(U\) and \(c,d\in U\)). As \(c\neq d\), there exists \(i\) such that \(c_{i-1}=c\) and \(c_{i-1}\neq c_{i}\). But now \(f_{i}\) can be taken for \(f^{ab}\): the set \(f_{i}(P)\) is contained in \(U\), \(f_{i}(a)\neq f_{i}(b)\), and \(f_{i}(P)\) has at most \(n\) elements and contains an element in the body of \(U\), so that \(|f_{i}(P)\cap T|\leq n-1\). By \((I_{n-1})\), we get \([f_{i}(P),0_{f_{i}(P)}]\in\mathcal{Z}(\mathbf{A}^{*})\). We now prove that if \((II_{n})\) holds, then \((I_{n})\) holds. Consider \(P\) as in the statement of \((I_{n})\). By Lemma 36 and item (1) of Lemma 32 we have that \([P,\mu]\in\mathcal{Z}(\mathbf{A}^{*})\), where \(\mu\) is the equivalence whose only non-singleton equivalence class is \(P\cap T\). By item (3) of Lemma 32, it is enough to show that \([P\cap T,0_{P\cap T}]\in\mathcal{Z}(\mathbf{A}^{*})\). Since \(|P\cap T|\leq n\), this is guaranteed by \((II_{n})\). The proof presented in this section is not very long. This was made possible, apart from the tame congruence theory, by the somewhat unnatural concept of a tractable piece, with which we could work axiomatically after we established its properties.
The two main unsatisfactory aspects are the ad hoc choice of properties of \((0,\alpha)\)-minimal sets in Theorem 28 and the two-step process, in which we "almost proved" the result in Theorem 35 and then started again almost from scratch. It could be worth the effort to keep looking for a more natural and straightforward proof. ## 5. Conclusion Two refinements of Theorem 1 follow immediately from our results. **Corollary 38**.: _Let \(\mathbf{A}\) be a finite algebra. The following are equivalent:_ 1. \(\mathbf{A}\) _is in_ \(\mathcal{K}^{s}_{\mathrm{eff}}\)_,_ 2. \(\mathbf{A}\) _is in_ \(\mathcal{K}^{s}_{\mathrm{poly}}\)_,_ 3. \(c^{s}_{\mathbf{A}}(n)\in O(n^{k})\) _for some integer_ \(k\)_,_ 4. \(\mathbf{A}\) _has no nontrivial strongly abelian congruence._ **Corollary 39**.: _Let \(\mathbf{A}\) be a finite algebra. The following are equivalent:_ 1. \(\mathbf{A}\) _is in_ \(\mathcal{K}_{\mathrm{eff}}\)_,_ 2. \(\mathbf{A}\) _is in_ \(\mathcal{K}_{\mathrm{poly}}\)_,_ 3. \(c_{\mathbf{A}}(n)\in O(n^{k})\) _for some integer_ \(k\)_,_ 4. _no subalgebra of_ \(\mathbf{A}\) _has a nontrivial strongly abelian congruence._ In the negative case, the growth of \(c_{\mathbf{A}}\) is in \(2^{\Omega(n^{1/k})}\) by Corollary 19; for an algebra with no operations, every mapping into it is a homomorphism, so the growth even matches the trivial upper bound. Characterizing \(c_{\mathbf{A}}(n)\in O(n^{k})\) may be nontrivial for finite \(n\) and infinite \(\mathbf{A}\), but it has a different flavor because the \(k\) in our result is related to \(|A|\). We now turn to the complexity-theoretic application, Theorem 2, which also follows from the obtained results in a straightforward manner. We restate the theorem for convenience. **Theorem 2**.: _Assuming \(P\neq NP\), the following are equivalent for a finite structure \(\mathcal{A}\) of finite signature._ 1. _For every finite-signature expansion_ \(\mathcal{B}\) _of_ \(\mathcal{A}\)_, the problem_ \(\mathrm{CSP}(\mathcal{B})\) _is solvable in polynomial time._ 2. _For no finite-signature expansion_ \(\mathcal{B}\) _of_ \(\mathcal{A}\)_, the problem_ \(\mathrm{CSP}(\mathcal{B})\) _is NP-complete._ 3. \(c_{\mathbf{A}}(n)\in O(n^{k})\) _for some integer_ \(k\)_, where_ \(\mathbf{A}\) _is the algebraic reduct of_ \(\mathcal{A}\)_._ Proof.: The implication from (1) to (2) follows from the assumption that P is different from NP, while the implication from (3) to (1) is essentially by Corollary 39. Indeed, any homomorphism \(\mathcal{X}\to\mathcal{B}\), if one exists, must be in particular a homomorphism from the algebraic reduct \(\mathbf{X}\) corresponding to the signature of \(\mathbf{A}\) to \(\mathbf{A}\). By the equivalence of items (1) and (3) in Corollary 39, one can enumerate all such homomorphisms, and check if one is a homomorphism \(\mathcal{X}\to\mathcal{B}\). We prove the implication from (2) to (3) by contraposition. Suppose that \(c_{\mathbf{A}}\) is not polynomially bounded. By the equivalence of items (2) and (4) in Corollary 39, there exist a subalgebra \(\mathbf{B}\) of \(\mathbf{A}\) and a nontrivial strongly abelian congruence \(\alpha\) of \(\mathbf{B}\). Let \(a\neq b\) be elements of \(B\) such that \((a,b)\in\alpha\).
Let \(\mathcal{B}\) be the expansion of \(\mathcal{A}\) by the ternary relation \(R^{\mathcal{B}}\) containing the tuples \((a,a,b),(a,b,a),(b,a,a)\). Let \(\mathcal{C}\) be the structure with universe \(\{a,b\}\) in the signature containing only \(R\) interpreted as \(R^{\mathcal{C}}=R^{\mathcal{B}}\). Then \(\mathrm{CSP}(\mathcal{C})\) is known as the 1-in-3-SAT problem and is NP-complete (see, e.g., [28]). We prove that \(\mathrm{CSP}(\mathcal{B})\) is NP-complete by constructing a polynomial-time reduction from \(\mathrm{CSP}(\mathcal{C})\) to \(\mathrm{CSP}(\mathcal{B})\). Let \(\mathcal{X}\) be an instance of \(\mathrm{CSP}(\mathcal{C})\), whose domain has size \(|X|=n\geq 1\). By removing superfluous elements, we can assume that every element \(x\in X\) belongs to a tuple in \(R^{\mathcal{X}}\). Let \(\mathbf{F}=\mathbf{F}_{\mathbf{A},ab}(n)\) be the \(ab\)-free algebra constructed in Proposition 17. By Proposition 18, \(F\) has size \(O(n^{k})\), where \(k\) is a constant, and it can be computed in polynomial time. We rename the elements in the universe so that \(X\subseteq F\) and \(X\) is the \(ab\)-free set of \(\mathbf{F}\). Let \(\mathcal{F}\) be the expansion of \(\mathbf{F}\) to the signature of \(\mathcal{B}\) defined by \(R^{\mathcal{F}}=R^{\mathcal{X}}\) and \(S^{\mathcal{F}}=\emptyset\) for all the remaining relation symbols \(S\). We need to show that there exists a homomorphism \(\mathcal{X}\to\mathcal{C}\) if, and only if, there exists a homomorphism \(\mathcal{F}\to\mathcal{B}\). Suppose that there exists a homomorphism \(h\colon\mathcal{X}\to\mathcal{C}\). Since \(\mathbf{F}\) is \(ab\)-free, there exists a homomorphism \(h^{\prime}\colon\mathbf{F}\to\mathbf{A}\) extending \(h\). By construction, \(h^{\prime}\) preserves \(R\) as well as all the other relations, so it is a homomorphism \(\mathcal{F}\to\mathcal{B}\). Conversely, suppose that there exists a homomorphism \(h\colon\mathcal{F}\to\mathcal{B}\). Since every \(x\in X\) belongs to a tuple in \(R^{\mathcal{X}}\) and \(R^{\mathcal{B}}\) only contains values in \(\{a,b\}\), \(h\) maps \(X\) into \(\{a,b\}\). The restriction of \(h\) to \(X\) is then a homomorphism \(\mathcal{X}\to\mathcal{C}\). As we have already mentioned, the complexity classification of CSPs over finite structures remains open even for finite algebras. Note that \(\mathrm{CSP}(\mathbf{A})\) can be trivially solvable even if \(c_{\mathbf{A}}\) grows exponentially. For instance, if \(\mathbf{A}\) contains an element \(a\) such that \(f^{\mathbf{A}}(a,a,\ldots,a)=a\) for every \(f\) in the signature, then the constant mapping with image \(\{a\}\) is always a homomorphism to \(\mathbf{A}\), so deciding \(\mathrm{CSP}(\mathbf{A})\) is trivial; such algebras \(\mathbf{A}\) can clearly have exponential growth. Interestingly, the investigation of computational complexity of CSPs over finite relational structures can be reduced to investigating finite _idempotent_ algebras, i.e., those where each element \(a\) has the above property. This is among the reasons why the new universal algebraic theories that emerged in that context (see [3] for comparison) have small intersection with tame congruence theory. On the other hand, our partial result toward classifying the complexity of CSPs over finite algebras uses tame congruence theory and does not use the newly emerged ones at all.
The project of classifying the complexity of CSPs over finite algebras (or general finite structures) could therefore also force one to unify these theories, which would be a much desired outcome. For finite algebras \(\mathbf{A}\) in finite signature with \(c_{\mathbf{A}}(n)\in O(n^{k})\), our results yield a polynomial-time algorithm for \(\mathrm{CSP}(\mathbf{A})\), which is however nonuniform in that the running time depends on \(\mathbf{A}\). _Is there a uniform polynomial-time algorithm in this case?_ That is, is there a polynomial-time algorithm that, given finite \(\mathbf{X}\) and \(\mathbf{A}\) such that \(c_{\mathbf{A}}(n)\in O(n^{k})\) for some \(k\), decides whether \(\mathbf{X}\) has a homomorphism to \(\mathbf{A}\)? Such an algorithm cannot be based on simply providing a uniform bound on \(k\), since no such bound exists, as witnessed by powers of the 2-element group. There are many purely mathematical questions concerning the growth rates of \(c_{\mathbf{A}}\) for finite \(\mathbf{A}\). Here are some. _Is it possible to (almost) exactly compute \(c_{\mathbf{A}}\) for interesting classes of algebras? Is it possible to characterize sequences of the form \(c_{\mathbf{A}}\) (where \(\mathbf{A}\) ranges through all finite algebras or algebras from a specific class)? Are there nontrivial lower bounds on \(c_{\mathbf{A}}(n)\) (other than Corollary 19)? When is \(c_{\mathbf{A}}\) upper bounded by a linear (quadratic,..., or sublinear, logarithmic,...) function?_ Observe that \(c_{\mathbf{A}}\) is always at most exponential, and at least logarithmic as witnessed by direct powers of \(\mathbf{A}\). It can be logarithmic, e.g., if \(\mathbf{A}\) is the 2-element Boolean algebra. Similar questions can be interesting for the sequence counting the minimal size of a generating set from Question 9. We also remark that for the other counting sequences mentioned in Section 1.2, there are interesting results in this spirit, e.g., a finite group is nilpotent if, and only if, its free spectrum is in \(2^{O(n^{k})}\) [26, 17] and this happens if, and only if, its G-spectrum is [6, 20]. Several results of this sort are also provided in [7] for counting sequences closely related to generating sets of subalgebras of powers, too.
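As a baseline for experimenting with such questions, one can evaluate \(|\operatorname{Hom}(\mathbf{X},\mathbf{A})|\) by brute force. The sketch below (ours, with all names assumed) does this for algebras given by operation tables; \(c_{\mathbf{A}}(n)\) is then the maximum of this count over all algebras \(\mathbf{X}\) with at most \(n\) elements in the signature of \(\mathbf{A}\).

```python
from itertools import product

def hom_count(X_univ, X_ops, A_univ, A_ops):
    """Count homomorphisms X -> A by exhaustive search.

    X_ops and A_ops are parallel lists interpreting a common signature;
    each entry is (arity, table) with table a dict from argument tuples
    to values. Runs in time O(|A|^|X|), so only for tiny examples."""
    count = 0
    for image in product(A_univ, repeat=len(X_univ)):
        h = dict(zip(X_univ, image))
        if all(h[fX[args]] == fA[tuple(h[x] for x in args)]
               for (arity, fX), (_, fA) in zip(X_ops, A_ops)
               for args in product(X_univ, repeat=arity)):
            count += 1
    return count

# Example: the 2-element semilattice as both X and A.
univ = (0, 1)
meet = {(x, y): x & y for x in univ for y in univ}
print(hom_count(univ, [(2, meet)], univ, [(2, meet)]))  # prints 3
```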
2305.07227
Multireference protonation energetics of a dimeric model of nitrogenase iron-sulfur clusters
Characterizing the electronic structure of the iron--sulfur clusters in nitrogenase is necessary to understand their role in the nitrogen fixation process. One challenging task is to determine the protonation state of the intermediates in the nitrogen fixing cycle. Here, we use a dimeric iron--sulfur model to study relative energies of protonation at C, S or Fe. Using a composite method based on coupled cluster and density matrix renormalization group energetics, we converge the relative energies of four protonated configurations with respect to basis set and correlation level. We find that accurate relative energies require large basis sets, as well as a proper treatment of multireference and relativistic effects. We have also tested ten density functional approximations for these systems. Most of them give large errors in the relative energies. The best performing functional in this system is B3LYP, which gives mean absolute and maximum errors of only 10 and 13 kJ/mol with respect to our correlated wavefunction estimates, respectively. Our work provides benchmark results for the calibration of new approximate electronic structure methods and density functionals for these problems.
Huanchen Zhai, Seunghoon Lee, Zhi-Hao Cui, Lili Cao, Ulf Ryde, Garnet Kin-Lic Chan
2023-05-12T03:41:04Z
http://arxiv.org/abs/2305.07227v2
# Multireference protonation energetics of a dimeric model of nitrogenase iron-sulfur clusters ###### Abstract Characterizing the electronic structure of the iron-sulfur clusters in nitrogenase is necessary to understand their role in the nitrogen fixation process. One challenging task is to determine the protonation state of the intermediates in the nitrogen fixing cycle. Here, we use a dimeric iron-sulfur model to study relative energies of protonation at C, S or Fe. Using a composite method based on coupled cluster and density matrix renormalization group energetics, we converge the relative energies of four protonated configurations with respect to basis set and correlation level. We find that accurate relative energies require large basis sets, as well as a proper treatment of multireference and relativistic effects. We have also tested ten density functional approximations for these systems. Most of them give large errors in the relative energies. The best performing functional in this system is B3LYP, which gives mean absolute and maximum errors of only 10 and 13 kJ/mol with respect to our correlated wavefunction estimates, respectively. Our work provides benchmark results for the calibration of new approximate electronic structure methods and density functionals for these problems. ## I Introduction Nitrogenase is the only enzyme that can catalyze the conversion of atmospheric dinitrogen (N\({}_{2}\)) to ammonia (NH\({}_{3}\)), the key reaction in nitrogen fixation.[1; 2; 3; 4] Extensive biochemical research has revealed that the catalysis in Mo-nitrogenase takes place in the MoFe-protein, which contains a FeMo-cofactor (FeMoco) cluster, with composition MoFe\({}_{7}\)S\({}_{9}\)C(homocitrate), responsible for the N\({}_{2}\) reduction, and a P-cluster, with composition [Fe\({}_{8}\)S\({}_{7}\)Cys\({}_{6}\)], that transfers electrons to the FeMoco active site.[5; 6] During the last two decades, the atomic structure of the nitrogenase clusters has been determined by X-ray crystallography.[7; 8; 9] Given these structures, _ab initio_ electronic structure computation may be applied to determine the binding sites, reaction intermediates and eventually the catalytic mechanism.[10; 11] Recently, the intriguing E\({}_{4}\) intermediate of nitrogenase,[12] formed by adding four electrons and protons to the E\({}_{0}\) resting state of FeMoco and responsible for the binding of N\({}_{2}\), has been studied computationally by several groups.[13; 14; 15; 16; 17; 18; 19] This is a formidable task owing to the enormous number of possible binding positions of the added protons[17] and the complicated electronic structure of the FeMo cluster.[20; 21] All calculations have been performed using density functional theory (DFT).[22] However, due to the open-shell and multireference nature of the nitrogenase clusters, the reliability of the obtained DFT results has been called into question and the various functionals predict remarkably different results (over 600 kJ/mol difference in the predicted stability of different protonation states for the E\({}_{4}\) state).[23; 11; 24] In spite of the large number of open-shell transition metal centers in these clusters, it has been shown that approximate full configuration interaction (FCI) methods, such as the _ab initio_ density matrix renormalization group (DMRG) algorithm,[25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37] can tackle the qualitative multireference behaviour.
For example, earlier calculations using DMRG provided new insights into the electronic structure of the P-cluster with its manifold of low-energy electronic states and their non-classical spin correlations.[38; 39; 40] These studies focused on active space models of the cluster, which were sufficient for a qualitative understanding of the electronic landscape. However, correctly modeling protonation energetics in FeMoco requires calculations that go beyond qualitative accuracy. In particular, quantitative energetics requires a treatment of electron correlation beyond the strongly correlated active space. In this work, we study the protonation problem in the simpler case of a dimeric iron-sulfur cluster. We compare four representative protonation sites (protonation of C, S, Fe or bridging two Fe ions), and try to estimate the lowest energy site and the energetic ordering. We do this within a composite approach where we separately treat multireference electron correlation using DMRG and dynamic correlation beyond the active space using coupled cluster methods. For comparison, we also include several density functional approaches. Overall, we find that multireference effects and correlation in large basis sets are both crucial to describing the protonation energetics, with correlation effects beyond (perturbative) triples being large. Capturing both effects accurately remains challenging within the composite treatment, but we reach sufficient precision to identify the lowest energy protonation site as well as order the relative energies of the protonation structures. In contrast, most of the density functionals that we examine yield qualitative and large quantitative errors for this problem. ## II Methods We consider the dimer [(SCH\({}_{3}\))\({}_{2}\)FeS(CH\({}_{2}\))Fe(SCH\({}_{3}\))\({}_{2}\)]\({}^{4-}\) as a simple model for studying protonation energetics in an iron-sulfur cluster. For this purpose, we added a proton at the following locations: (a) on the bridging CH\({}_{2}^{2-}\) group, (b) on the bridging S\({}^{2-}\) ion, (c) terminally on one Fe ion, or (d) bridging both Fe ions. These locations are representative of potential protonation sites in FeMoco.[17] We denote the four protonated configurations HC, HS, HFe and HFe\({}_{2}\), respectively. Our goal is to identify the lowest-energy structure, and to predict the relative energetics of the four structures. **Structures**. We optimized structures for the four protonated configurations using a broken-symmetry (BS) open-shell singlet ground state with two antiferromagnetically coupled high-spin Fe(II) centers (net charge -3). Geometry optimization was carried out at the DFT level with the TPSS functional,[41] the def2-SV(P) basis,[42] and using DFT-D3[43] as a dispersion correction. We show the optimized geometries in Fig. 1. In the optimized structures, we evaluated 2\(\langle S_{z}\rangle\) at each Fe center; these had opposite signs on the different centers and a magnitude ranging from 2.7 to 3.4 depending on the structure. We summarize the Fe-ligand bond lengths in the four structures in Table 1. We see that the Fe-ligand distances vary substantially depending on the protonation site. In particular, protonation of the bridging S\({}^{2-}\) or CH\({}_{2}^{2-}\) ions (thereby decreasing their charge to -1) increases their distances to Fe by \(\sim\)0.2 Å. Adding the proton terminally to Fe1 (formally becoming a hydride ion and Fe(IV)) also increases the distances between this Fe ion and its other ligands.
However, when the added proton bridges the two Fe ions, the distances are not much distorted. **Composite approach**. For these four structures, we employed a composite energy approach using coupled cluster with singles and doubles (CCSD) and with perturbative triples [CCSD(T)] to estimate dynamic correlation[44; 45] and DMRG to estimate multireference correlation. Additional corrections were then included for basis-set completeness and relativistic effects. The final composite energy was computed as \[E_{\rm composite} = E_{\rm CCSD(T)\mbox{-}TZ}\ + \tag{1}\] \[(E_{\rm DMRG\mbox{-}act}-E_{\rm CCSD(T)\mbox{-}act})+\] \[\Delta E_{\rm CCSD(T)\mbox{-}CBS}\] where \(E_{\rm CCSD(T)\mbox{-}TZ}\) is the CCSD(T) energy obtained with the cc-pVTZ-DK basis set, \((E_{\rm DMRG\mbox{-}act}-E_{\rm CCSD(T)\mbox{-}act})\) is a multireference correction, and \(\Delta E_{\rm CCSD(T)\mbox{-}CBS}\) is a basis-set correction (specified below). The mean-field and single-reference post-HF calculations were performed in PySCF.[46; 47] Spin-adapted DMRG calculations were performed in block2.[37] For each of these terms, in addition to obtaining energies for the composite energy formula, we carried out additional calculations to understand the impact of different approximations and to estimate the reliability of the corrections. We describe the different terms and these aspects below. **Coupled cluster calculations**. For the coupled cluster (CC) calculations, we started from BS unrestricted reference determinants. To understand the influence of orbitals and importance of triples, we first carried Figure 1: The geometries of the protonated dimer complex [\({\rm HFe}_{2}\)S(CH\({}_{2}\))(SCH\({}_{3}\))\({}_{4}\)]\({}^{3-}\) with the added proton on (a) C, (b) S, (c) terminally on one Fe atom and (d) bridging both Fe atoms. The added proton is shown in purple. \begin{table} \begin{tabular}{c c c c c} Bond & HC & HS & HFe & HFe\({}_{2}\) \\ \hline S\({}_{1}\)\(-\) Fe1 & 2.45 & 2.43 & 2.62 & 2.43 \\ S\({}_{12}\)\(-\) Fe1 & 2.45 & 2.45 & 2.50 & 2.48 \\ S\({}_{3}\)\(-\) Fe2 & 2.45 & 2.44 & 2.42 & 2.39 \\ S\({}_{4}\)\(-\) Fe2 & 2.42 & 2.39 & 2.38 & 2.43 \\ S\({}_{\rm br}\)\(-\) Fe1 & 2.26 & 2.47 & 2.42 & 2.31 \\ S\({}_{\rm br}\)\(-\) Fe2 & 2.28 & 2.53 & 2.21 & 2.34 \\ Average (all S\(-\)Fe) & 2.38 & 2.45 & 2.42 & 2.40 \\ C\({}_{\rm br}\)\(-\)Fe1 & 2.20 & 1.98 & 2.02 & 1.97 \\ C\({}_{\rm br}\)\(-\)Fe2 & 2.19 & 1.99 & 1.95 & 1.99 \\ H\(-\)Fe1 & & & 1.76 & 1.72 \\ H\(-\)Fe2 & & & & 1.76 \\ \end{tabular} \end{table} Table 1: The bond lengths (in Å) between the iron ions (Fe1 and Fe2) and their direct ligands in the optimized geometries of the four dimer models. S\({}_{1-}\)S\({}_{44}\) are the sulfur atoms of four terminal SCH\({}_{3}^{-}\) groups, S\({}_{\rm br}\) is the bridging S\({}^{2-}\) ion, C\({}_{\rm br}\) is the bridging CH\({}_{2}^{2-}\) ion and H is the added proton, when binding to Fe. out calculations using the small cc-pVDZ-DK basis set and the exact two-component (X2C) scalar relativistic Hamiltonian [48; 49; 50] (larger basis set CC calculations are discussed in the basis-set correction section below) and using 40 frozen core orbitals to reduce computational cost. To examine the impact of the orbital choice, we used both Kohn-Sham DFT (with the TPSS or B3LYP functionals [51; 52; 53]), as well as Hartree-Fock Slater determinants. 
In the unrestricted mean-field calculations, we targeted the projected \(S_{z}=0\) BS state using an initial guess where the spins in the two Fe atoms were coupled antiferromagnetically. [54] The expectation value of the total \(\langle S^{2}\rangle\) in the mean-field state ranged from 3.1 to 5.0 for the structures in this work. Starting from these states, we then computed unrestricted CCSD and CCSD(T) energies. [44; 45] For the DFT reference determinants, we computed CCSD(T) results based on the semi-canonicalized orbitals. In addition to the low-spin BS state, we also computed the mean-field and CC energies of the high spin (HS) state with \(S_{z}=4\). With this, we could estimate the energy of the pure-spin (PS) singlet state (\(S=S_{z}=0\)) from the Yamaguchi formula [54; 55] \[J=\frac{E_{\rm HS}-E_{\rm BS}}{\langle S_{\rm BS}^{2}\rangle-\langle S_{\rm HS }^{2}\rangle}=\frac{E_{\rm PS}-E_{\rm BS}}{\langle S_{\rm BS}^{2}\rangle- \langle S_{\rm PS}^{2}\rangle},\] where \(J\) is the exchange coupling. For simplicity, we used \(\langle S_{\rm PS}^{2}\rangle=0\) and \(\langle S_{\rm BS}^{2}\rangle\) and \(\langle S_{\rm BS}^{2}\rangle\) computed at the CCSD level in the above formula, for computing \(J\) from both CCSD and CCSD(T) energies. The difference between the BS and PS state can be taken as an estimate of the missing multireference correlation energy arising from spin-recoupling of the Fe centers, but does not capture other types of multireference correlation. **DMRG multireference correction**. To better estimate the multireference correction, we constructed an active space for a DMRG calculation. We started from a set of (restricted) natural orbitals obtained by diagonalizing the spin-averaged one-particle density matrix (1PDM) of the CCSD wave function calculated above using the cc-pVDZ-DK basis, and then selected orbitals with the occupancy furthest from 0 or 2 as the active space. Using this active space, we carried out complete active space configuration interaction (CASCI) spin-adapted DMRG [31] calculations, computing the PS singlet state (\(S=0\)) energies. Before performing DMRG, we split-localized the nearly doubly occupied orbitals, nearly empty orbitals, and other orbitals, using the Pipek-Mezey localization algorithm. [56] The orbitals were then reordered using the Fiedler algorithm. [32] The maximum bond dimension in the DMRG calculations was 5000 [SU(2) states]. We used a reverse schedule to generate data for DMRG energy extrapolation, and the DMRG extrapolation error was estimated as one fifth of the energy difference between the extrapolated DMRG energy and the DMRG energy computed at the largest bond dimension [32] (see Supplementary Material Section I). For analysis, we also extracted the largest configuration state function (CSF) coefficient from the DMRG wave function, using a deterministic depth-first search algorithm. [57] To obtain a multireference energy correction, we also computed the CCSD and CCSD(T) energies in the same active space, using a BS Hartree-Fock reference. The initial guess for the active space BS UHF density matrix was obtained by projecting the BS UHF density matrix in the full space into the active space. The correction was then computed as \(\Delta E_{\rm DMRG-act}=E_{\rm DMRG-act}-E_{\rm CCSD(T)-act}\). To validate the size of the correction, we considered three different active spaces, to verify that the multireference effects were converged. The dimer model in the cc-pVDZ-DK basis contains 180 electrons in 321 spatial orbitals. 
We constructed the active spaces from the UHF/CCSD natural orbitals (see Supplementary Material Section II): one with 36 orbitals and 48 electrons (36o, 48e), one with 55 orbitals and 48 electrons (55o, 48e) and one with 63 orbitals and 64 electrons (63o, 64e). The uncertainty in the multireference contribution to the relative energies was then estimated crudely as one half the amount of the DMRG multireference correction for the energy difference between HC and HFe\({}_{2}\) (the least and most multireference structures); here, the one half factor is used to be conservative, as it is the largest estimate of the error still compatible with an assumption that the correction improves the result. We further assume that the DMRG multireference correction computed in the cc-pVDZ-DK basis can be used to correct the CCSD(T) relative energies in the complete basis set (CBS) limit, as shown in Eq. (1). **basis-set correction and relativistic contribution**. To estimate the CBS limit, we used energies computed in several bases: the UHF energy in cc-pVDZ/TZ/QZ-DK bases, as well as CCSD and CCSD(T) energies in the cc-pVDZ/TZ-DK bases. To estimate the error in the CBS extrapolation, we additionally computed second-order Moller-Plesset perturbation theory [58; 59] (MP2) energies in the cc-pVDZ/TZ/QZ-DK bases. To independently analyze the size of the relativistic correction we also computed the CCSD(T)/def2-SV(P) energies with and without using the X2C Hamiltonian. The CCSD and CCSD(T) calculations were performed with 40 frozen core orbitals (i.e. excluding the \(3s\) and \(3p\) semicore on Fe). Using the CC correlation energies in the cc-pVDZ-DK and cc-pVTZ-DK bases, we extrapolated to the CBS limit energies using the two-point formula [60] \[E_{\rm corr}^{\infty}=\frac{X^{\beta}E_{\rm corr}^{(X)}-Y^{\beta}E_{\rm corr}^{( Y)}}{X^{\beta}-Y^{\beta}}\] where \(X=2\,({\rm DZ}),Y=3\,({\rm TZ})\) and taking \(\beta=2.4\). For the corresponding mean-field energy at the CBS limit we simply used \(E_{\rm UHF}^{\infty}=E_{\rm UHF}^{\rm QZ}\). To estimate the error in the CCSD(T) relative energies in the CBS limit, we performed an independent extrapolation with the MP2 energies, and took half of the difference between the DZ/TZ extrapolation and TZ/QZ extrapolation (i.e. the difference from the average of the two) using the same CBS formula with \(X=3\) and \(Y=4\)) for the MP2 energies, namely \[\left|\Delta E^{\text{DZ/TZ}\rightarrow\infty}_{\text{CCSD(T)}}- \Delta E^{\text{exact}}_{\text{CCSD(T)}}\right|\] \[\approx\frac{1}{2}\Big{(}\Delta E^{\text{TZ/QZ}\rightarrow\infty }_{\text{MP2}}-\Delta E^{\text{DZ/TZ}\rightarrow\infty}_{\text{MP2}}\Big{)}\] where \(\Delta E^{\text{DZ/TZ}\rightarrow\infty}_{\text{CCSD(T)}}\) is the difference in the CCSD(T) energies between HC and HFe (the structures showing the largest difference in the extrapolated MP2 energies), estimated from the extrapolation based on DZ and TZ bases to the CBS limit. **DFT comparisons**. For comparison, we computed BS-DFT energies using the X2C Hamiltonian, and the TPSS,[41] BLYP,[51; 52] PBE,[61] B97-D,[62] r\({}^{2}\)SCAN,[63] TPSSh,[64] B3LYP*,[65] B3LYP,[51; 52; 53] PBE0,[66] and M06[67] functionals, with the cc-pVQZ-DK basis set and with dispersion corrections from the DFT-D3 method.[43] ## III Results and Discussion In Section III.1, we first discuss the CC energies with the cc-pVDZ-DK basis. 
This will allow us to understand some features of correlation in the system, including the influence of orbital choice and the size of triples correction on the relative protonation energies, setting the stage for understanding the reliability of the composite method. In Section III.2 we discuss the contribution associated with correcting the BS spin states. In Section III.3 we discuss the detailed multireference corrections entering the composite energy formula from the DMRG and CC calculations. In Section III.4 we discuss CC calculations in larger basis sets, the CBS extrapolation entering the composite energy, and the size of relativistic effects. We report the final composite energies, the prediction of the lowest energy protonation site and the relative ordering, and the comparison with DFT calculations in Section III.5. ### CC energies: importance of higher-order correlations We show the energies of the four protonated structures relative to the HC structure from calculations with HF, CCSD and CCSD(T) methods with the cc-pVDZ-DK basis in Table 2 and Fig. 2. All methods find that the HC structure is the most stable. However, the CCSD method predicts a different relative ordering after adding the approximate triples (T) correction. In addition, there are large quantitative differences in the relative energies, particularly for the HFe and HFe\({}_{2}\) structures. For example, the relative energy of the HS structure decreases by 69 kJ/mol on moving from UHF to UHF/CCSD(T), but that of HFe\({}_{2}\) decreases by 216 kJ/mol. Comparing the energy differences from UHF/CCSD and UHF/CCSD(T), we see a sizable contribution from (T) to the absolute and relative energies. Specifically, the absolute (T) corrections for the HC, HS, HFe and HFe\({}_{2}\) structures are \(-214,-224,-269\) and \(-287\) kJ/mol (see Table 2, meaning that relative to the HC structure, HFe\({}_{2}\) is further stabilized by triples by as much as 73 kJ/mol. Consistent with this, the CCSD relative energies are observed to be sensitive to the choice of reference orbitals in HFe and HFe\({}_{2}\). The large (T) corrections to the relative energies highlight the potentially large contribution of higher-order multireference correlations in the relative protonation energies, especially for the H-Fe bond. ### Spin-state corrections As the above calculations used a BS reference, part of the missing higher-order correlation could potentially originate from the energy difference between the BS and PS singlet states. In Table 3 we report the results from the Yamaguchi energy correction to the BS state, and the resulting estimate of the PS relative energies. The PS state correction to the relative energies is shown in the last two lines in Table 3. We see that the PS state correction to the relative energies is modest. It is largest for HFe, where it lowers the relative energy by 13 kJ/mol at the CCSD(T) level. Note that as we explicitly compute multireference contributions from DMRG energies below (which are for PS states), we do not use the PS state energy corrections in the composite energy formula. ### Multireference effects To obtain a more complete picture of the multireference effects, we next consider _ab initio_ DMRG energies. In Fig. 
3(a) and (b) we plot the CCSD and DMRG \begin{table} \begin{tabular}{c c c c c} Theory & \multicolumn{3}{c}{Energy difference \(E-E_{\text{HC}}\)} \\ & HC & HS & HFe & HFe\({}_{2}\) \\ UHF & 0.0 & 203.8 & 322.5 & 396.3 \\ UHF/CCSD & 0.0 & 143.7 & 245.3 & 253.1 \\ UKS-TPSS/CCSD & 0.0 & 154.9 & 278.9 & 305.9 \\ UKS-B3LYP/CCSD & 0.0 & 149.3 & 269.6 & 286.8 \\ UHF/CCSD(T) & 0.0 & 134.3 & 190.1 & 179.9 \\ UKS-TPSS/CCSD(T) & 0.0 & 132.6 & 179.4 & 163.8 \\ UKS-B3LYP/CCSD(T) & 0.0 & 132.4 & 185.4 & 167.3 \\ \end{tabular} \end{table} Table 2: Relative single-point energies (in kJ/mol) for the four protonated Fe dimer structures computed using different theories with the cc-pVDZ-DK basis and the scalar relativistic X2C Hamiltonian. The energy of the HC structure is used as the reference. natural-orbital occupancies in the (36o, 48e) active space. We see that the most fractionally and singly occupied orbitals are all included in the active space, which suggests the active space (and its larger counterparts) should capture the multireference effects in this system. The occupancy patterns of CCSD and DMRG are qualitatively similar. This shows that while (BS) CCSD and CCSD(T) are not usually considered to be multireference methods, they can be qualitatively correct for (spin-averaged) one-particle quantities, and thus for most conventional analyses of bonding. The main problem with the energies obtained from the BS CC methods here is the lack of error cancellation for configurations with varying multireference character, rather than the complete failure of the CCSD and CCSD(T) methods. From the DMRG natural-orbital occupancy plot for the PS state (Fig. 3(b)), we see that there are singly occupied orbitals associated with the Fe centers, but additionally zero, one, two and three orbitals with fractional occupancies between 0.2 and 0.8 (or between 1.2 and 1.8), respectively, for the HC, HS, HFe and HFe\({}_{2}\) structures. This clearly illustrates the trend of increasing multireference character, beyond spin-recoupling of the Fe centers, in this sequence of four structures. In Fig. 4 we compare the UHF, CCSD, CCSD(T) and DMRG energy differences for the four protonation states and in Table 4 we compare the CCSD(T) and DMRG energy corrections to the CCSD relative energies for the individual structures. We see that within the (36o, 48e), (55o, 48e) and (63o, 64e) active spaces, the (T) contribution to the energy difference between HC and HFe\({}_{2}\) is 60%, 66% and 63% of the contribution in the full orbital space, respectively. The largest estimated error for the extrapolated DMRG energy (3 kJ/mol) illustrates that Figure 2: Relative single point energies of the protonated Fe dimers computed using different theories with the cc-pVDZ-DK basis. The energy of the HC structure is used as the reference. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline Theory & \multicolumn{4}{c}{Energy difference \(E-E_{\text{HC}}\)} \\ & HC & HS & HFe & HFe\({}_{2}\) \\ \hline \multicolumn{5}{c}{High-spin \(S_{z}=4\)} \\ UHF/CCSD & 0.0 & 164.1 & 291.6 & 309.0 \\ UHF/CCSD(T) & 0.0 & 158.5 & 230.7 & 211.9 \\ CCSD \(\langle S^{2}\rangle\) & 20.01 & 20.01 & 20.32 & 20.08 \\ \multicolumn{5}{c}{BS singlet} \\ UHF/CCSD & 0.0 & 143.7 & 245.3 & 253.1 \\ UHF/CCSD(T) & 0.0 & 134.3 & 190.1 & 179.9 \\ CCSD \(\langle S^{2}\rangle\) & 3.89 & 3.78 & 4.57 & 4.15 \\ \multicolumn{5}{c}{Exchange coupling \(J\) (estimated, cm\({}^{-1}\))} \\ UHF/CCSD & \(-94.0\) & \(-198.2\) & \(-342.0\) & \(-388.5\) \\ UHF/CCSD(T) & \(-112.2\) & \(-236.2\) & \(-329.9\) & \(-281.5\) \\ \multicolumn{5}{c}{PS singlet (estimated)} \\ UHF/CCSD & 0.0 & 139.1 & 231.0 & 238.2 \\ UHF/CCSD(T) & 0.0 & 128.8 & 177.3 & 171.1 \\ \multicolumn{5}{c}{PS singlet correction (estimated)} \\ UHF/CCSD & \(-4.4\) & \(-9.0\) & \(-18.7\) & \(-19.3\) \\ UHF/CCSD(T) & \(-5.2\) & \(-10.7\) & \(-18.0\) & \(-14.0\) \\ \hline \hline \end{tabular} \end{table} Table 3: Relative single point energies (in kJ/mol) for the BS, high-spin and (estimated) PS singlet states of the protonated Fe dimers computed using different theories with the cc-pVDZ-DK basis and a scalar relativistic (X2C) Hamiltonian. The energy of the HC structure is used as the reference. the DMRG energies are almost exact on the current scale of the relative energetics. In all cases the DMRG and (T) correction to the CCSD relative energies is in the same direction, as indicated by the dashed lines in Fig. 4. Based on the data above, we can estimate the error in the CCSD(T) energies for the HC, HS, HFe and HFe\({}_{2}\) structures to be \(-3,-7,-22\), and \(-32\) kJ/mol (from the 36o active space), \(-6,-10,-19\), and \(-31\) kJ/mol (from the 55o active space), or \(-9,-14,-20\), and \(-28\) kJ/mol (from the 63o active space), respectively. The DMRG correction is thus quite small for the HC nad HS structures, but larger for the HFe and HFe\({}_{2}\) structures, reflecting the multireference character of the Fe-H bond. From the \(M=5000\) DMRG wave function in the 63o active space, we obtain a largest CSF weight for the four structures of 0.72, 0.66, 0.51 and 0.36, respectively. This confirms that the error in the CCSD(T) energy increases when the multireference character of the structure increases. In Fig. 5, we show the trends in the correlation effects beyond CCSD as estimated by DMRG (\(\Delta\)DMRG = \(E_{\text{DMRG}}-E_{\text{CCSD}}\)) and (T), for three active space sizes. The curves track each other, justifying the possibility to use the composite energy formula. We use the DMRG results in the 63 orbital active space to correct the missing multireference effect in the CCSD(T) energies. As discussed in the Methods section, we estimate the uncertainty of this correction for the relative energies as half of the correction for the HFe\({}_{2}\) structure (the largest correction), i.e. \(\pm 10\) kJ/mol. ### Basis-set correction and relativistic contribution In order to study the basis-set effects on the CCSD(T) energies, we computed the UHF, MP2, CCSD and CCSD(T) energies for the four protonated structures using also larger basis sets. The results are listed in Table 5 and plotted in Fig. 6. We can see that the basis-set dependence of the UHF and correlation energies is very different, largely depending on whether the proton is bound to the metal or not. 
For the HC and HS structures, the UHF relative energies increase (become more positive) as the basis-set size increases, while the CCSD contributions decrease; for the HFe and HFe\({}_{2}\) structures, the trends are opposite. As a result, the basis-set dependence of the mean-field and correlation energies partially cancel, and overall the total CCSD(T) relative energies change non-monotonically with increasing basis-set size. The UHF energies converge at the QZ level and the (T) corrections converge at the TZ level. Therefore, the CCSD(T) relative energy basis-set trend beyond TZ (bottom right panel, Fig. 6) is dominated by the basis-set trend of the CCSD relative energies beyond TZ (top left panel, Fig. 6). Using the difference between the DZ/TZ- and TZ/QZ-CBS extrapolation energies computed at the MP2 level, we estimate the error at the DZ/TZ-CBS CCSD(T) level to be \(\pm 8\) kJ/mol for the relative energy of the various structures. It is also interesting to break out the scalar relativis \begin{table} \begin{tabular}{c c c c c} \hline \hline Theory & \multicolumn{3}{c}{Energy correction \(E-E_{\text{CCSD}}\)} \\ & HC & HS & HFe & HFe\({}_{2}\) \\ \hline \multicolumn{5}{c}{active space (36o, 48e)} \\ CCSD (relative to HC) & 0.0 & 169.2 & 241.8 & 290.7 \\ CCSD(T) & \(-3.4\) & \(-4.4\) & \(-30.4\) & \(-47.0\) \\ DMRG (extrapolated) & \(-6.8\) & \(-11.9\) & \(-52.8\) & \(-79.4\) \\ DMRG extrap. error & 0.0 & 0.0 & 0.1 & 0.2 \\ \multicolumn{5}{c}{active space (55o, 48e)} \\ CCSD (relative to HC) & 0.0 & 160.4 & 232.1 & 269.5 \\ CCSD(T) & \(-10.8\) & \(-13.2\) & \(-42.3\) & \(-59.2\) \\ DMRG (extrapolated) & \(-16.5\) & \(-23.5\) & \(-61.0\) & \(-90.1\) \\ DMRG extrap. error & 0.4 & 0.4 & 1.0 & 3.2 \\ \multicolumn{5}{c}{active space (63o, 64e)} \\ CCSD (relative to HC) & 0.0 & 157.9 & 270.1 & 304.2 \\ CCSD(T) & \(-18.3\) & \(-21.4\) & \(-46.9\) & \(-64.6\) \\ DMRG (extrapolated) & \(-27.1\) & \(-35.5\) & \(-67.2\) & \(-92.6\) \\ DMRG extrap. error & 0.5 & 0.5 & 1.2 & 2.8 \\ \multicolumn{5}{c}{full orbital space} \\ CCSD (relative to HC) & 0.0 & 143.7 & 245.3 & 253.1 \\ CCSD(T) & \(-214.0\) & \(-223.5\) & \(-269.2\) & \(-287.3\) \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison between UHF/CCSD(T) and DMRG energy corrections (in kJ/mol) for individual protonated structures computed using the cc-pVDZ-DK basis. The CCSD energy is used as the reference for CCSD(T) and DMRG energies. Note that the DMRG energies are computed for the PS singlet state while other energies are computed for the BS state. Figure 3: Natural orbital occupancies computed using (a) CCSD, and (b) DMRG in the (36o, 48e) active space, for the four protonated structures. tic contributions to the relative energies of the different structures, shown in Table 6. 
We see that the scalar relativistic contribution is important for the relative energies of HFe and HFe\({}_{2}\)[11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173; 174; 175; 176; 177; 178; 179; 180; 181; 182; 183; 184; 185; 186; 187; 188; 189; 190; 191; 200; 201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211; 213; 214; 215; 216; 217; 218; 219; 220; 221; 223; 224; 225; 226; 227; 228; 229; 230; 231; 232; 233; 234; 235; 236; 237; 238; 239; 240; 241; 242; 243; 244; 245; 246; 247; 248; 249; 250; 251; 252; 253; 254; 255; 256; 257; 258; 259; 260; 261; 262; 263; 264; 265; 266; 267; 268; 269; 270; 271; 272; 273; 274; 275; 276; 277; 278; 279; 280; 281; 282; 283; 284; 285; 286; 287; 288; 289; 290; 289; 281; 285; 286; 287; 288; 289; 288; 289; 291; 289; 292; 300; 301; 302; 303; 304; 305; 306; 307; 308; 309; 310; 311; 324; 335; 336; 337; 338; 340; 341; 342; 343; 344; 345; 346; 347; 348; 349; 350; 351; 352; 353; 354; 355; 356; 357; 358; 359; 360; 361; 362; 363; 364; 365; 366; 367; 368; 369; 370; 371; 372; 373; 374; 375; 376; 377; 378; 379; 380; 381; 382; 383; 384; 385; 386; 387; 388; 388; 389; 390; 391; 392; 393; 394; 395; 396; 397; 398; 399; 400; 401; 402; 403; 404; 405; 406; 407; 408; 409; 411; 412; 413; 414; 415; 416; 417; 418; 419; 420; 421; 422; 423; 424; 425; 426; 427; 428; 429; 430; 431; 432; 433; 434; 435; 436; 437; 438; 439; 444; 445; 446; 447; 448; 450; 451; 452; 453; 454; 455; 456; 457; 458; 459; 461; 462; 463; 464; 465; 466; 467; 468; 469; 470; 471; 472; 473; 474; 475; 476; 477; 478; 479; 480; 481; 482; 483; 484; 485; 486; 487; 488; 489; 490; 491; 492; 493; 494; 495; 496; 497; 498; 499; 500; 510; 511; 512; 513; 514; 525; 536; 549; 511; 537; 540; 541; 555; 556; 557; 566; 578; 581; 599; 600; 61; 620; 621; 631; 641; 642; 65; 667; 68; 691; 621; 643; 669; 622; 644; 665; 69; 636; 64e) ### Final composite energies and analysis In Table 7 we summarize our final estimates for the relative energies of the four protonated structures obtained with the composite formula. We show the various contributions to the energy differences in Fig. 7. Overall, we find that \(E_{\rm HC}<E_{\rm HS}<E_{\rm HFe_{2}}<E_{\rm HFe_{2}}\). The HC structure is more stable than the other structures by an amount that is much greater than the contribution of the electron correlation. Both the basis-set and high-order correlation effects are important to obtain the correct qualitative ordering. While CCSD(T)/TZ is often considered sufficient for accurate thermochemistry of small molecules, this is not the case for the Fe-S clusters: multireference effects beyond (T) and basis-set effects beyond TZ change the relative protonation energy of HC and HFe\({}_{2}\) by \(-19\) and \(+21\) kJ/mol, respectively. 
As we needed to perform extrapolations to obtain both the multireference and basis-set corrections, our estimated uncertainty in these energies is \(\pm 10\) and \(\pm 8\) kJ/mol, respectively. However, it must be stressed that our estimates of the uncertainties are quite crude. Interestingly, although the multireference and basis-set contributions are individually large, they have opposite signs. Consequently, the combined contribution is significantly smaller, and more closely resembles the raw CCSD(T)/TZ result. Table 7 and Fig. 7 also include relative energies calculated with ten different DFT methods. It can be seen that the BLYP, B97-D, r2SCAN, TPSSh, B3LYP*, B3LYP, and PBE0 functionals all obtain the correct qual Figure 4: Relative single point energies for the protonated Fe dimers computed using UHF, CCSD, CCSD(T) and DMRG with the cc-pVDZ-DK basis, in the full orbital space and in the (36o, 48e) and (55o, 48e) active spaces. The energy of the HC structure is used as the reference. The trends in the (T) and DMRG correction for the relative energies are shown by dashed lines. Figure 5: CCSD(T) and DMRG correlation energies, relative to CCSD, of the protonated Fe dimers computed for active spaces of different sizes (using UHF/CCSD natural orbitals) in the cc-pVDZ-DK basis set. \({}_{2}\) relative relative ordering of the structures, while the TPSS, PBE, and M06 functionals do not. However, for the functionals with the correct ordering, the quantitative accuracy is quite poor, as is the agreement between them. For example, the estimates for the energy difference between HC and HS deviate from our best result by 4-45 kJ/mol. The largest errors are found for the HF\({}_{2}\) structure, with an error in the relative protonation energy of 112 kJ/mol with the PBE functional and 9 kJ/mol for B3LYP. Likewise, the mean absolute error for the relative energies of the HS, HFe and HFe\({}_{2}\) structures is 10 kJ/mol for B3LYP, but 13-80 kJ/mol for other functionals. These effects are expected to be multiplied when there are multiple protons involved, as is the case for the E\({}_{4}\) intermediate state of FeMoco. Our results are thus consistent with the large variation in protonation energies (hundreds of kJ/mol) observed when using different functionals to study multiply-protonated structures in the E\({}_{4}\) intermediate [23]. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline Basis & \(E_{\text{average}}\) & \multicolumn{3}{c}{\(E-E_{\text{average}}\) (kJ/mol)} \\ & (Hartree) & HC & HS & HFe & HFe\({}_{2}\) \\ \hline \multicolumn{5}{c}{UHF} \\ def2-SV(P) & \(-4728.9828\) & \(-248.7\) & \(-40.7\) & \(+108.3\) & \(+181.1\) \\ cc-pVDZ-DK & \(-4733.8071\) & \(-230.7\) & \(-26.9\) & \(+91.9\) & \(+165.7\) \\ cc-pVTZ-DK & \(-4733.9598\) & \(-221.5\) & \(-17.9\) & \(+83.2\) & \(+156.1\) \\ cc-pVQZ-DK & \(-4734.0044\) & \(-220.9\) & \(-17.9\) & \(+83.2\) & \(+155.6\) \\ \multicolumn{5}{c}{UHF/MP2 (correlation energy)} \\ def2-SV(P) & \(-1.6831\) & \(+49.5\) & \(+21.1\) & \(-12.7\) & \(-57.9\) \\ cc-pVDZ-DK & \(-2.3403\) & \(+27.9\) & \(-28.5\) & \(+36.4\) & \(-35.9\) \\ cc-pVTZ-DK & \(-3.3457\) & \(-0.6\) & \(-52.7\) & \(+66.2\) & \(-13.0\) \\ cc-pVQZ-DK & \(-3.9214\) & \(-5.0\) & \(-58.0\) & \(+71.6\) & \(-8.5\) \\ TZ/QZ CBS & \(-4.5001\) & \(-9.5\) & \(-63.3\) & \(+77.0\) & \(-4.1\) \\ DZ/TZ CBS & \(-3.9565\) & \(-17.9\) & \(-67.4\) & \(+84.3\) & \(+0.9\) \\ \multicolumn{5}{c}{UHF/CCSD (correlation energy)} \\ def2-SV(P) & \(-1.8521\) & \(+81.3\) & \(+39.4\) & \(-34.7\) & \(-86.0\) \\ cc-pVDZ-DK & \(-2.4104\) & \(+70.1\) & \(+10.0\) & \(-7.1\) & \(-73.1\) \\ cc-pVTZ-DK & \(-3.1171\) & \(+48.8\) & \(-5.5\) & \(+13.4\) & \(-56.7\) \\ DZ/TZ CBS & \(-3.5464\) & \(+35.8\) & \(-14.9\) & \(+25.8\) & \(-46.7\) \\ \multicolumn{5}{c}{UHF/CCSD(T) [(T) only]} \\ def2-SV(P) & \(-0.0701\) & \(+26.8\) & \(+21.1\) & \(-16.9\) & \(-31.1\) \\ cc-pVDZ-DK & \(-0.0946\) & \(+34.5\) & \(+25.0\) & \(-20.7\) & \(-38.8\) \\ cc-pVTZ-DK & \(-0.1491\) & \(+34.6\) & \(+23.8\) & \(-18.7\) & \(-39.7\) \\ DZ/TZ CBS & \(-0.1821\) & \(+34.7\) & \(+23.0\) & \(-17.4\) & \(-40.3\) \\ \hline \hline \end{tabular} \end{table} Table 5: The UHF, MP2 correlation, CCSD correlation and (T) correction energies computed using different basis sets. To better highlight trends in the relative energetics, we show the total or correlation energy averaged over the four structures for each basis set. This is then used as a reference energy. \begin{table} \begin{tabular}{c c c c c} \hline \hline Theory & \multicolumn{3}{c}{Energy difference \(E-E_{\text{HC}}\)} \\ & HC & HS & HFe & HFe\({}_{2}\) \\ \hline \multicolumn{5}{c}{Relativistic: \(\Delta E_{\text{X2C}}-\Delta E_{\text{ref}}\)} \\ UHF & \(0.0\) & \(-3.7\) & \(-21.5\) & \(-21.9\) \\ UKS-TPSS & \(0.0\) & \(-2.2\) & \(-8.7\) & \(-7.7\) \\ UKS-B3LYP & \(0.0\) & \(-2.1\) & \(-10.1\) & \(-8.8\) \\ UHF/CCSD & \(0.0\) & \(-2.5\) & \(-13.7\) & \(-13.5\) \\ UHF/CCSD(T) & \(0.0\) & \(-2.3\) & \(-11.9\) & \(-10.7\) \\ \hline \hline \end{tabular} \end{table} Table 6: Scalar relativistic corrections (in kJ/mol) to the relative energies computed using different theories with the def2-SV(P) basis. The energy of the HC structure is used as the reference energy for all energies. \(\Delta E_{\text{ref}}\) represents the energy difference with no relativistic corrections. Figure 6: Trends in the energies of the protonated Fe dimers for (a) the CCSD correlation energies, (b) the (T) corrections, (c) the UHF energies and (d) the total CCSD(T) energies, as a function of basis set. For each basis set, the total or correlation energies are shifted by their average among the four structures. For UHF and CCSD(T) energies of the HC structure, an additional \(+100\) kJ/mol shift is added for clarity. 
## IV Conclusions In this work, we studied the protonation energetics of a dimeric iron-sulfur cluster, as a simple model for the protonation of intermediates in FeMoco. Using a composite method based on CC and DMRG energies, we estimated the relative protonation energies of four representative structures (protonated on C, S, Fe or bridging two Fe) in the multireference and basis-set limits. We found that both multireference and basis-set effects are extremely important to capturing the correct energy ordering. Importantly, even though we are studying the seemingly simple process of adding a single proton to the cluster, basis-set effects beyond triple-zeta, and correlation effects beyond (perturbative) triples, contribute about 20 kJ/mol to the relative energies (although the contributions have opposite signs). This highlights the challenge of computing accurate energetics for even larger clusters. The current work relies on a number of extrapolations to obtain the basis-set and correlation-effect limits. These extrapolations, as well as the crude estimates of the errors associated with them, are not entirely satisfactory. While some of these steps could be removed by performing more demanding computations, it may be challenging to scale such a strategy to the larger iron-sulfur clusters. In particular, while perturbative triples formed a reasonable starting point for the relative energetics in this cluster, it is unclear whether this will be the case in larger iron-sulfur clusters. The density functionals that we examined yielded either qualitatively incorrect results, or quantitatively poor energetics, with \begin{table} \begin{tabular}{c c c c c} \multirow{2}{*}{Correction/functional} & \multicolumn{4}{c}{Energy difference \(E-E_{\text{HC}}\)} \\ & HC & HS & HFe & HFe\({}_{2}\) \\ \hline \multicolumn{5}{c}{UHF/CCSD(T) (with X2C)} \\ uncorrected & 0.0 & 138.6 & 216.1 & 197.9 \\ (cc-pVTZ-DK) & & & & \\ multireference correction & 0.0 & \(-\)5.2 & \(-\)11.4 & \(-\)19.2 \\ basis-set correction & 0.0 & \(+\)2.1 & \(+\)25.9 & \(+\)21.2 \\ total & 0.0 & 135.5 & 230.6 & 199.9 \\ UKS (with X2C, DFT-D3 and cc-pVQZ-DK basis) & & & & \\ TPSS & 0.0 & 98.6 & 161.8 & 97.4 \\ BLYP & 0.0 & 90.4 & 143.3 & 97.0 \\ PBE & 0.0 & 93.1 & 145.6 & 87.8 \\ B97-D & 0.0 & 100.2 & 143.0 & 115.2 \\ r\({}^{2}\)SCAN & 0.0 & 114.5 & 163.5 & 128.3 \\ TPSBh & 0.0 & 115.6 & 200.9 & 156.0 \\ B3LYP* & 0.0 & 113.9 & 203.4 & 176.0 \\ B3LYP & 0.0 & 122.1 & 223.8 & 209.1 \\ PBE0 & 0.0 & 132.0 & 239.5 & 225.0 \\ M06 & 0.0 & 135.5 & 190.6 & 194.0 \\ \end{tabular} \end{table} Table 7: Relative single-point energies (in kJ/mol) for the four protonated Fe dimer structures computed by the composite method. The energy of the HC structure is used as the reference. UKS energies with different DFT functionals are also listed for comparison. Figure 7: Comparison between the difference in single-point energy of the protonated Fe dimers computed using the composite CCSD(T)/DMRG approach and DFT with different functionals. The energy of the HC structure is used as the reference. The cc-pVQZ-DK basis set and CBS extrapolation results are used for mean-field and post-mean-field methods, respectively, unless otherwise specified. The uncertainty in the energy difference is shown as the error bar. mean absolute and maximum errors of 10-80 and 13-112 kJ/mol, with the hybrid B3LYP functional giving the best results. The variation between the density functionals was much larger than the uncertainty in the correlated wavefunction calculations. 
The different behaviors of the functionals suggests that new functional combinations, customized for the Fe-S cluster problem, may be necessary. The benchmark energetics from this work will enable such explorations. ## Supplementary Material DMRG energy extrapolations performed in the three active spaces and the figures of the CCSD natural orbitals for constructing the active space. ###### Acknowledgements. Work at Caltech was supported by the Center for Molecular Magnetic Quantum Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under award no. DE-SC0019330. The computations were conducted at the Resnick High Performance Computing Center, a facility supported by the Resnick Sustainability Institute at the California Institute of Technology. Work at Lund University was supported by grants from the Swedish research council (projects 2018-05003 and 2022-04978). The computations were performed on computer resources provided by the Swedish National Infrastructure for Computing (SNIC) at Lunarc at Lund University and HPC2N at Umea University, partially funded by the Swedish Research Council (grant 2018-05973). ## Conflict of Interest The authors have no conflicts to disclose. ## Data Availability The data presented in this work can be reproduced using the open-source code PySCF 2.0.14[47; 46] and block2 0.5.1[37] The reference input and output files can be found in the GitHub repository at [https://github.com/hczhai/fe-dimer-data](https://github.com/hczhai/fe-dimer-data). Footnote 1: [https://github.com/hczhai/fe-dimer-data](https://github.com/hczhai/fe-dimer-data).
2304.07668
FedBlockHealth: A Synergistic Approach to Privacy and Security in IoT-Enabled Healthcare through Federated Learning and Blockchain
The rapid adoption of Internet of Things (IoT) devices in healthcare has introduced new challenges in preserving data privacy, security and patient safety. Traditional approaches need to ensure security and privacy while maintaining computational efficiency, particularly for resource-constrained IoT devices. This paper proposes a novel hybrid approach combining federated learning and blockchain technology to provide a secure and privacy-preserved solution for IoT-enabled healthcare applications. Our approach leverages a public-key cryptosystem that provides semantic security for local model updates, while blockchain technology ensures the integrity of these updates and enforces access control and accountability. The federated learning process enables a secure model aggregation without sharing sensitive patient data. We implement and evaluate our proposed framework using EMNIST datasets, demonstrating its effectiveness in preserving data privacy and security while maintaining computational efficiency. The results suggest that our hybrid approach can significantly enhance the development of secure and privacy-preserved IoT-enabled healthcare applications, offering a promising direction for future research in this field.
Nazar Waheed, Ateeq Ur Rehman, Anushka Nehra, Mahnoor Farooq, Nargis Tariq, Mian Ahmad Jan, Fazlullah Khan, Abeer Z. Alalmaie, Priyadarsi Nanda
2023-04-16T01:55:31Z
http://arxiv.org/abs/2304.07668v1
FedBlockHealth: A Synergistic Approach to Privacy and Security in IoT-Enabled Healthcare through Federated Learning and Blockchain ###### Abstract The rapid adoption of Internet of Things (IoT) devices in healthcare has introduced new challenges in preserving data privacy, security and patient safety. Traditional approaches need to ensure security and privacy while maintaining computational efficiency, particularly for resource-constrained IoT devices. This paper proposes a novel hybrid approach by combining federated learning and blockchain technology to provide a secured and privacy-preserved solution for IoT-enabled healthcare applications. Our approach leverages a public-key cryptosystem that provides semantic security for local model updates, while blockchain technology ensures the integrity of these updates and enforces access control and accountability. The federated learning process enables a secure model aggregation without sharing sensitive patient data. We implement and evaluate our proposed framework using EMNIST datasets, demonstrating its effectiveness in preserving data privacy and security while maintaining computational efficiency. The results suggest that our hybrid approach can significantly enhance the development of secure and privacy-preserved IoT-enabled healthcare applications, offering a promising direction for future research in this field. Federated Learning, Blockchain, ElGamal, Privacy Protection ## I Introduction In healthcare, Internet of Things (IoT) devices like wearables, sensors, and medical equipment have transformed applications and services, enabling remote monitoring, diagnostics, and personalized treatments [10]. For effective healthcare, IoT devices must be reliable and accurate, as inaccuracies could result in misdiagnosis or improper treatment. The sensitive patient data collected by IoT devices is vulnerable to various cyber attacks. Thus, robust security measures are essential to protect patient privacy and ensure safety. Regulatory requirements, such as HIPAA and GDPR, further emphasize the need for robust solutions [11]. Traditional centralized machine learning approaches have various drawbacks, including higher communication costs, battery consumption, and potential security risks [7]. Federated Learning (FL), a distributed machine learning approach, has emerged as an alternative that enhances user privacy by training models over remote devices or data centers without sharing the raw data [9]. However, FL is vulnerable to poisoning attacks, and attackers can recover data from gradients [5, 16]. Integrating FL with blockchain has been explored in healthcare [2, 6], but these studies lack comprehensive solutions addressing privacy and security while maintaining computational efficiency. FL faces security vulnerabilities, model consistency and accuracy, limited network bandwidth, and data imbalances between clients [17]. We propose FedBlockHealth, a novel hybrid approach combining FL and blockchain (BC) technology for secure and privacy-preserved IoT-enabled healthcare applications to address these vulnerabilities. Our approach leverages a public-key cryptosystem for local model update semantic security and BC for decentralized data storage, management, and access control. FL ensures secure model aggregation without sharing sensitive patient data, maintaining privacy, security, and computational efficiency. 
Research has addressed FL challenges, such as non-IID data distributions [13], global model convergence [12], and the client device and data heterogeneity [8]. Differential privacy has been introduced [1], and applied to FL [15]. Blockchain has been used to secure patient data [3] but with centralized machine learning for disease detection. Our FedBlockHealth framework offers vital contributions to privacy-preserving IoT-enabled healthcare. These contributions include * **Hybrid Approach:** An Algorithm is proposed to combine FL and BC technology to address privacy and security challenges in IoT-enabled healthcare scenarios. Smart contracts are designed for all clients to data on BC. This hybrid approach ensures the integrity and privacy of patient data while maintaining computational efficiency. * **Semantic Security:** To secure the communication between clients and server, the Elgamal public-key cryptosystem is employed in our framework to provide semantic security for local model updates and ensures that sensitive patient information remains private during the FL process. * **Performance Evaluation:** Our proposed CNN framework for FL is evaluated using the EMNIST dataset. The approach maintains privacy and security while ensuring computational efficiency. Results demonstrate its effectiveness compared to traditional ANN and CNN models by achieving 99.9\(\%\) accuracy and a loss of 0.01\(\%\). This is the result of reducing the number of layers and adding a batch Normalization layer, which alternatively decreased complexity and the use of Feedforward and Backpropagation approach. * **Scalability and Applicability:** Our framework FedBlockHleath is designed to be scalable and applicable to a wide range of healthcare scenarios, paving the way for future research and development in secure and privacy-preserving IoT-enabled healthcare applications. In this paper, we discuss the problem statement and system architecture in Section 2, followed by the methodology in Section 3. Experimental results and analysis are presented in Section 4, and finally, we conclude the study and suggest future research in Section 5. ## II Proposed System Architecture To circumvent the problem of data privacy and security, in this paper, we proposed the FedBlockHealth model illustrated in Fig. 1, in which patient data is received through IoT devices and stored in the hospital database represented by the client. Each client has a locally trained CNN model shared by the global server. Each client trains independently, and the updated weights, encrypted through the EL-Gammal approach, are shared with the global server for aggregation. On the other hand, each client sends model updates to the BC for data storage. Note that a smart contract is designed for each client before the transaction in model updates are forwarded to the transaction pool for validation. After completing the validation process, the system stores the transaction in blocks, and authorized users from the BC network can only access the data. ### _System Architecture_ The proposed system architecture, FedBlockHealth, comprises a Key Generation Centre (KGC), a server, and multiple participants. #### Ii-A1 Koc The KGC is a trusted entity responsible for system setup, public parameter generation, and private key distribution to each participant and the server. The KGC is fully trusted and does not collide with any other entities. 
#### Ii-A2 Server The server is a shared location for securely aggregating encrypted local gradients from all participating clients and distributing updated global parameters. In encrypted computing techniques, an adversarial setting is often assumed, where all entities, except the KGC, follow the protocol precisely while attempting to infer sensitive data from the training data. #### Ii-A3 Clients Each client represents a hospital that possesses a health-related dataset and a copy of the trained model shared by the global server. During learning, clients train their local models on their private datasets and share local gradients/weights with the server. Malicious clients may attempt to communicate with the server for nefarious purposes, necessitating measures to prevent such activities and protect the data privacy of legitimate clients. Hence, the EL-Gammal encryption approach is used to secure communication between clients and the global server. On the other hand, smart contracts are created for each client to communicate with the BC network. Figure 1 illustrates the proposed system architecture. #### Ii-A4 Federated Learning Federated learning is a distributed machine learning technique allowing multiple parties to train a model collaboratively without sharing their raw data. In our system model, we assume that multiple healthcare providers (clients) are participating in the FL process to improve the accuracy of the Convolutional Neural Network (CNN) model. Each healthcare provider maintains their datasets of medical images and trains a local CNN model using FL. The local model weights are encrypted using an ElGamal cryptosystem before being sent to the central server. The central server aggregates the encrypted model weights and updates the global CNN model. The updated global model weights are then sent back to the clients for further training. The process continues iteratively until the model converges to a satisfactory accuracy level as reflected in Algorithm 4. FL allows stakeholders to improve their models by leveraging the network's collective intelligence without sharing their personal data. #### Ii-A5 Blockchain Network The proposed system uses BC technology to manage access control and provide an immutable record of the training process in FL in healthcare. The system Fig. 1: The proposed FedBlockHealth model, where clients data are stored in blockchain and local model weights are shared with global server. initially designs smart contracts for each client and then transfers the model updates to a smart contract for conversion to hash as shown in Algorithm 1. Afterward, the system forwards the hash to the transaction pool, and the miners pick it for mining. Once the miners have picked the hash, they create blocks. The authorized clients can retrieve the data stored in blocks through their smart contract, as depicted in Algorithm 3. This approach enables the system to manage access to the distributed CNN model transparently and securely while ensuring the integrity of the training process. ### _Federated Learning with Blockchain Integration_ We can transform healthcare data management, analysis, and use by integrating BC technology and FL. Using BC technology, we can manage patient data securely and transparently while employing FL to develop predictive models for disease diagnosis or personalized treatment plans. We can achieve the integration of BC technology and FL in healthcare through the following steps: 1. 
Data partitioning: The healthcare data is partitioned among multiple hospitals or healthcare institutions to ensure data privacy and security. Each hospital holds its data locally. 2. Model creation: A central server creates a CNN model, which trains the data from all the participating hospitals through the federated averaging algorithm presented in Algorithm 4. 3. Data encryption: The hospital data is encrypted using a cryptographic algorithm ElGamal discussed in the following subsection, adding an extra layer of security and privacy. 4. Model distribution: The central server receives the encrypted data and distributes the CNN model to all participating hospitals. 5. Local model training: Each hospital trains the CNN model on their local encrypted data using FL, which involves multiple rounds of model training and aggregation of model updates. 6. Model aggregation: The hospitals send their encrypted model updates to the central server, which aggregates the updates to create a global model as presented in Algorithm 4. 7. Blockchain integration: The updated models of clients are stored on a BC, ensuring the model updates' transparency and immutability. 8. Model validation: A third-party auditor validates the global model to confirm that it meets the necessary accuracy and security standards. 9. Model deployment: The validated global model is then deployed back to the participating hospitals for local inference on new data. **Algorithm 1** Smart Contract ``` 1:INPUT Client Registration 2:if Registration == successful then 3: Check data from the Global server 4:else 5: Register client as a New Client 6: U \(\leftarrow\) Client 7: Store Data on Blockchain 8: Encrypt data Using Sha256 Algo 9:OUTPUT generate Application Binary Interface of Contract 10: generate Byte code of Contract 11: Decrypt data Using Sha256 Algorithm 12:endif ``` **Algorithm 2** Operation of Blockchain ### _Feedforward and Backpropagation_ Clients discretize their model updates, add discrete Gaussian noise, and submit them for modular secure summation. This comprehensive end-to-end system employs the ElGamal encryption scheme. To perform a forward pass through the network, we use the following iterative formula to compute each neuron in the subsequent layer [14]: \[a^{l}=\sigma\times(W\times a^{(l-1)}+b) \tag{1}\] Backpropagation efficiently computes gradients, and the optimizer uses these gradients to train the neural network: \[w^{t+1}=w^{t}-n\times\nabla_{w}\times L(D^{t},w^{t}) \tag{2}\] The equation represents the Feedforward function \(f(x,w)=y\), where \(w\) is the parameter vector, and the training dataset is \(D=\{(x_{i},y_{i});i\in L\}\). \(L\) represents the loss function, and backpropagation is defined as: \[\frac{1}{|D|}\sum_{(x_{i},y_{i})\in D}\ell(y_{i},f(x_{i},w)) \tag{3}\] Training continues until the loss function reaches an optimal minimum value. Following this approach, the proposed system architecture effectively addresses distributed learning challenges, ensuring data privacy and successfully training a high-quality centralized model. 
``` 0: Initialize: Initialize blockchain network Create a smart contract for FL Server and Clients 1:Client-Side Operations: 2:for each client U in the pool do 3: Connect to the blockchain network 4: Register as a client with the smart contract 5: Load pre-trained CNN model weights 6: The FL Agorithm will perform its operation 7:endfor 8:Server-Side Operation 9: Connect to the blockchain network 10: Register as the server with the smart contract 11: Collect local gradients from clients 12: The FL Agorithm will perform its operation 13:Blockchain Operations 14: Record transactions on the blockchain 15: Maintain a tamper-proof record of all transactions 16: Ensure transparency and accountability ``` **Algorithm 3** Blockchain integration with Federated Learning ## III Methodology In order to enhance data privacy, this study investigates the performance of FL on the Extended Modified National Institute of Standards and Technology (EMNIST) datasets by utilizing a Convolutional Neural Network (CNN). FL on EMNIST datasets using a CNN can address the challenge by training the model on local data held on different healthcare devices while keeping the data decentralized and preserving the privacy of individual patients. The architecture of the employed CNN comprises four convolutional layers, each succeeded by a max-pooling layer and an additional two fully connected layers within the hidden layer structure. Given the flexibility of the convolutional layer, the subsequent section presents the proposed CNN model tailored explicitly for FL applications. Only retrain the pre-trained model and the fully connected layers after using previously trained convolutional layers. We examined the performance of FL on the EMNIST datasets using a CNN. In order to improve the accuracy of our model, we have introduced two additional hidden layers, which enable the extraction of more complex features from the input data. We have utilized the stochastic gradient descent (SGD) optimizer to optimize the model's performance further. The SGD optimizer iteratively refines the model's parameters to minimize the discrepancy between the predicted and actual outputs. This is accomplished by computing the gradient of the loss function with respect to the parameters and updating them in the opposite direction of the gradient, ultimately determining the optimal parameters for the model. As a result, the model can more accurately predict the output. Furthermore, securing the communication between clients and the global server is vitally important; therefore, we use the El-Gamal Multiplicative Cryptosystem approach. El-Gamal Multiplicative Cryptosystem is preferred over other techniques because it provides both confidentiality and data integrity during transmission. Unlike other techniques, it simultaneously encrypts plaintext and signature generation, ensuring that the data remains secure and unaltered during transmission. Additionally, ElGamal encryption relies on mathematical problems that are computationally difficult to solve, making it a more secure approach to encryption [4]. ### _El-Gamal Multiplicative Cryptosystem_ ElGamal encryption is a public-key cryptography algorithm that is based on the Diffie-Hellman key exchange [14]. In the context of federated learning, it can be used to encrypt the model parameters before they are sent from the client devices to the central server for aggregation. This helps to ensure that the model parameters remain secure and confidential during the transmission. 
The ElGamal encryption algorithm comprises three parts: * **Key generation**: In this step, a user generates a public-private key pair. The public key encrypts, while the private key decrypts. * **Encryption:** To encrypt the model parameters, the client device selects a random value known as the session key. The client uses the session key to encrypt the model parameters with ElGamal encryption. Then, the encrypted session key and model parameters are sent to the central server for aggregation. * **Decryption**: The central server uses its private key to decrypt the encrypted session key and the encrypted model parameters. The decrypted session key decrypts the encrypted model parameters. The security of the ElGamal encryption algorithm relies on the difficulty of the Discrete Logarithm Problem (DLP), which involves finding the exponent \(x\) in the equation \(g^{x}\bmod p=h\). This problem is computationally complex, making ElGamal encryption secure against attacks from hackers. In summary, using the ElGamal cryptosystem in federated learning with the CNN model adds a security layer to the model parameters during transmission. This ensures the privacy and confidentiality of the model parameters, even when transmitted over a public network. The El-Gamal Multiplicative Cryptosystem Algorithm is presented in 5, while its mathematical proof will appear in the extended version of this model. ``` 1:Input: Exponential ElGamal 2:Output: Messages are encoded by exponentiation 3:Give Value \(v_{1}=4,v_{2}=5\) 4:\(v_{1}\leftarrow\) generator.selfApply(4) 5:\(v_{2}\leftarrow\) generator.selfApply(5) 6:\(c_{1}\leftarrow\) ElGamal.encrypt(Public Key, \(v_{1}\)) 7:\(c_{2}\leftarrow\) ElGamal.encrypt(Public Key, \(v_{2}\)) 8:Combine \(\leftarrow\)\(c_{1}.\)apply(\(c_{2}\)) 9:Results \(\leftarrow\) ElGamal.decrypt(Private Key, Combine) 10:Calculate \(v=v_{1}\cdot v_{2}\) in message space: generator.selfApply(\(v=v_{1}\cdot v_{2}\)) 11:Print results 12:End ``` **Algorithm 5** Pseudo-code of the ElGamal technique ## IV Performance Evaluation and Simulation This section aims to evaluate the performance of the proposed FL system using the EMNIST datasets with a Convolutional Neural Network (CNN). We compare the privacy-enhanced system to the non-private baseline and discuss the trade-offs between privacy, model complexity, and accuracy. ### _Dataset and Model Architecture_ We use the EMNIST dataset, which consists of 28x28 gray-scale images of handwritten digits. We divide the dataset into 71,039 training samples, 14,799 testing samples, and 17,760 validation samples. We further partition the data among ten clients for distributed training. The CNN model used for evaluation consists of four convolutional layers, max-pooling layers, two fully connected layers and a batch normalization layer. The convolutional layers extract essential features from the input images, while the deeper layers identify more complex patterns. Similarly, we use flattened and dense layers to convert the output of the convolutional layer into a single-dimensional vector and perform classification, respectively. ### _Training and Evaluation_ During the training phase, the clients train their local models on their respective datasets and share the local gradients with the central server. The server aggregates the gradients, updates the global model, and broadcasts it back to the clients. This process repeats until the loss function converges to an optimal value. 
Figures 2 and 3 show the accuracy and loss graphs of the CNN model applied to the EMNIST dataset. After 70 epochs, we achieved an accuracy of 95.57% and a loss value of 1.4.

Fig. 2: Accuracy of FedBlockHealth based CNN Model.

### _Comparison with Baseline and Alternative Models_

We compare the performance of our proposed system with the non-private baseline model and an alternative Artificial Neural Network (ANN) model. The baseline CNN model achieved an accuracy of 99.03% after 150 epochs, while our privacy-enhanced CNN model reached an accuracy of 99.99% after 40 epochs. The ANN model, on the other hand, achieved an accuracy of only 68.91% after 70 epochs. This is due to the reduced complexity of that model, obtained by limiting the hidden layers, using the batch normalization layer for faster training, and using the feedforward and backpropagation approach. Table I summarizes the model details and performance comparisons.

The results indicate that the proposed privacy-enhanced FL system with a CNN model achieves competitive performance compared to the non-private baseline. Although the accuracy is slightly lower, the model ensures privacy preservation and employs a less complex encryption technique. In contrast, the ANN model performs significantly worse, demonstrating the importance of using appropriate model architectures for the specific problem domain.

## V Conclusion

This study presented a privacy-enhanced federated learning (FL) system, incorporating blockchain and smart contracts, using a convolutional neural network (CNN) for distributed training on the EMNIST dataset. The system effectively balances data privacy preservation and model performance, making it a suitable solution for sensitive data tasks in IoT-enabled healthcare applications. Our evaluation demonstrates that the privacy-enhanced CNN model achieves 99.99% accuracy. We employed ElGamal encryption to maintain anonymity while enabling computation in the ciphertext space. This method ensures privacy preservation and utilizes a less complex encryption technique. Additionally, the integration of blockchain technology and smart contracts enhances the integrity and security of the system. Future work will involve using high-dimensional datasets and exploring more complex neural network models to enhance accuracy and efficiency in privacy-preserving IoT-enabled healthcare applications.
2307.06026
Learning from Exemplary Explanations
eXplanation Based Learning (XBL) is a form of Interactive Machine Learning (IML) that provides a model refining approach via user feedback collected on model explanations. Although the interactivity of XBL promotes model transparency, XBL requires a huge amount of user interaction and can become expensive as feedback is in the form of detailed annotation rather than simple category labelling which is more common in IML. This expense is exacerbated in high stakes domains such as medical image classification. To reduce the effort and expense of XBL we introduce a new approach that uses two input instances and their corresponding Gradient Weighted Class Activation Mapping (GradCAM) model explanations as exemplary explanations to implement XBL. Using a medical image classification task, we demonstrate that, using minimal human input, our approach produces improved explanations (+0.02, +3%) and achieves reduced classification performance (-0.04, -4%) when compared against a model trained without interactions.
Misgina Tsighe Hagos, Kathleen M. Curran, Brian Mac Namee
2023-07-12T09:14:35Z
http://arxiv.org/abs/2307.06026v1
# Learning from Exemplary Explanations

###### Abstract

eXplanation Based Learning (XBL) is a form of Interactive Machine Learning (IML) that provides a model refining approach via user feedback collected on model explanations. Although the interactivity of XBL promotes model transparency, XBL requires a huge amount of user interaction and can become expensive as feedback is in the form of detailed annotation rather than the simple category labelling which is more common in IML. This expense is exacerbated in high stakes domains such as medical image classification. To reduce the effort and expense of XBL we introduce a new approach that uses two input instances and their corresponding Gradient Weighted Class Activation Mapping (GradCAM) model explanations as exemplary explanations to implement XBL. Using a medical image classification task, we demonstrate that, using minimal human input, our approach produces improved explanations (\(+0.02,+3\%\)) and achieves reduced classification performance (\(-0.04,-4\%\)) when compared against a model trained without interactions.

**Keywords:** Explanation based Learning, Interactive Learning, Medical Image Classification.

## 1 Introduction

Figure 1: The inner circle shows the typical mode of feedback collection where users annotate image features. The outer circle shows how the Exemplary eXplanation Based Learning (eXBL) approach requires only identification of one good and one bad explanation.

Interactive Machine Learning (IML) is an approach that aims to provide a platform for user involvement in the model training or retraining process [14]. The literature on IML is dominated by active learning, which reduces the manual effort associated with creating labelled training datasets by interactively selecting a sub-sample of an unlabelled dataset for manual labelling [1]. However, eXplanation Based Learning (XBL) has recently begun to gain traction as it allows deeper interaction with users by providing an opportunity to collect feedback on model explanations [21, 14, 15]. This form of interaction allows a more transparent form of model training than other IML approaches, as users get a chance to refine a model by interacting with and correcting its explanations.

XBL starts off with a learner model, \(f\), that was initially trained using a simple classification loss, for example categorical cross entropy, which is calculated from the error between the model's predictions and the ground-truth labels. XBL then typically refines \(f\) by augmenting its classification loss with an explanation loss,

\[L=L_{CE}+L_{expl}+\lambda\sum_{i}\theta_{i} \tag{1}\]

In Equation (1), \(L_{CE}\) is the traditional categorical cross entropy, which is calculated from the error between the model's predictions and the ground-truth labels; \(L_{expl}\) is an explanation loss that is computed between the explanation produced by a model and a manual annotation of input instances, \(M\); \(\lambda\) is a regularisation term used to avoid overfitting that could be caused by the introduction of the new loss term, \(L_{expl}\); and \(\theta\) refers to the network parameters. \(M\) can be a mask showing the important image regions that a learner should focus on, or a mask of confounding or non-salient regions that a model should ignore. Saliency based feature attributions are usually used to generate model explanations.
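A minimal PyTorch-style sketch of the objective in Equation (1); `explanation_loss` is left abstract (Equations (2) and (3) below are candidates), and an L2 form of the \(\lambda\sum\theta\) regulariser is assumed here for concreteness.

```python
import torch
import torch.nn.functional as F

def xbl_loss(model, x, y, masks, explanation_loss, lam=1e-4):
    """Sketch of Eq. (1): classification loss + explanation loss
    + parameter regularisation (an L2 penalty is assumed here)."""
    logits = model(x)
    l_ce = F.cross_entropy(logits, y)            # L_CE
    l_expl = explanation_loss(model, x, masks)   # L_expl
    l_reg = sum((p ** 2).sum() for p in model.parameters())
    return l_ce + l_expl + lam * l_reg
```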
One example, from [21], formulates the explanation loss for training instances \(x\in X\) of size N and Gradient Weighted Class Activation Mapping (GradCAM) model explanations generated using a trained model \(f\) as shown in Equation (2). GradCAM is a saliency based local model explanation technique [21].

\[L_{expl}=\sum_{i=0}^{N}M_{i}GradCAM(x_{i}) \tag{2}\]

As is seen in the inner circle of Figure 1, in XBL the most common mode of user interaction is image feature annotation. This requires user engagement that is considerably more demanding than the simple instance labelling that most IML techniques require [13], and it increases the time and cost of feedback collection in XBL. As can be seen in the outer circle of Figure 1, we are interested in lifting this pressure from users (or feedback providers) and simplifying the interaction: users are only asked to identify two explanations as exemplary explanations and to rank them as good and bad, which makes feedback collection cheaper and faster. This kind of user interaction, where users are asked for a ranking instead of category labels, has also been found to increase inter-rater reliability and data collection efficiency [17]. We incorporate this feedback into model training through a contrastive loss; specifically, triplet loss [1]. The main goal of this paper is to demonstrate the effectiveness of this loss based on just two exemplars. Therefore, we use an existing feature annotated dataset to identify good and bad explanations to demonstrate the suitability of our proposal. In a real-world interactive learning scenario where end users have to choose the good and bad explanations, active learning approaches can be used to reduce the pool of explanations from which users must choose.

The main contributions of this paper are:

1. We propose the first type of eXplanation Based Learning (XBL) that can learn from only two exemplary explanations of two training images;
2. We adopt triplet loss for XBL to incorporate the two exemplary explanations into an explanation loss;
3. In addition to showing that XBL can be implemented with just two instances, our experiments demonstrate that our proposed method achieves improved explanations and comparable classification performance when compared against a baseline model.

## 2 Related Work

Based on the approach utilised to incorporate user feedback into model training, XBL methods can generally be categorised into two groups: (1) augmenting loss functions; and (2) augmenting training datasets using user feedback by removing confounding or spurious regions identified by users.

Augmenting Loss Functions. XBL methods that fall under this category follow the approach introduced in Equation 1 by adding an explanation loss to a model's training, refining it to focus on image regions that are considered relevant by user(s) or to ignore confounding regions. One example of this category is Right for the Right Reasons (RRR) [14], which penalises a model with high input gradient explanations on the wrong image regions based on user annotation. It uses

\[L_{expl}=\sum_{n=1}^{N}\left[M_{n}\frac{\partial}{\partial x_{n}}\sum_{k=1}^{K}\log\hat{y}_{nk}\right]^{2} \tag{3}\]

for a function \(f(X|\theta)=\hat{y}\in\mathbb{R}^{N\times K}\) trained on images \(x_{n}\) of size \(N\) with \(K\) categories, where \(M_{n}\in\{0,\ 1\}\) is user annotation of image regions that should be avoided by the model.
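As a hedged illustration, the RRR penalty of Equation (3) might be implemented in PyTorch roughly as follows; the function and tensor names are ours, not the original authors' code.

```python
import torch
import torch.nn.functional as F

def rrr_explanation_loss(model, x, masks):
    """Sketch of the RRR penalty (Eq. (3)): squared input gradients of
    the summed log-probabilities, restricted by the user masks to the
    regions the model should avoid (masks == 1)."""
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x), dim=1)
    grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)
    return ((masks * grads) ** 2).sum()
```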
Similarly, Right for Better Reasons (RBR) [13] uses Influence Functions (IF) in place of input gradients to correct a model's behaviour. Contextual Decomposition Explanation Penalisation (CDEP) [11] penalises features and feature interactions. User feedback in XBL experiments can be either: (1) telling the model to ignore non-salient image regions; or (2) instructing the model to focus on important image regions in a training dataset [11]. While the XBL methods presented above refine a model by using the first feedback type, Human Importance-aware Network Tuning (HINT) does the opposite, teaching a model to focus on important image parts using GradCAM model explanations [10].

Augmenting Training Dataset. In addition to augmenting loss functions, XBL can also be implemented by augmenting a training dataset based on user feedback. Instance relabelling [12], counterexample generation [12], and using user feedback as new training instances [23] are some of the methods that augment a dataset to incorporate user feedback into XBL.

While XBL approaches show promise in unlearning spurious correlations that a model might have learned by giving attention to non-relevant or confounding image regions [11, 13], they all demand considerable effort from users. In order to unlearn spurious correlations from a classifier, [13] collected feature annotations on 3000 chest x-ray images. This kind of demanding task hinders the practical deployment and domain transferability of XBL. For this reason, it is of paramount importance to build an XBL method that can refine a trained model using a limited amount of user interaction, in order to achieve a plausible and domain transferable implementation. To the best of our knowledge, this area of XBL is completely unexplored.

## 3 Exemplary eXplanation Based Learning

As is illustrated by Equations 2 and 3, for typical XBL approaches user annotation of image features, \(M\), is an important prerequisite. We introduce Exemplary eXplanation Based Learning (eXBL) to mitigate the time and resource cost of the feature annotation process. In eXBL, we propose to replace the expensive feature annotation requirement with two exemplary explanations: a _Good GradCAM explanation_ (\(C_{good}\)) and a _Bad GradCAM explanation_ (\(C_{bad}\)). However, even though this replaces feature annotation with two labels, categorising explanations would still be expensive if it had to be performed for all training instances, which could number in the thousands. For this reason, we only use one \(C_{good}\) and one \(C_{bad}\). We choose to use GradCAM model explanations because they have been found to be more sensitive to training label reshuffling and model parameter randomisation than other saliency based explanations (Adebayo et al., 2018).

To select the good and bad explanations from a list of generated GradCAM explanations, we use an objective explanation metric called Activation Recall (AR). AR measures how much of the actually relevant parts of test images, \(M\), are considered relevant by a model. While a larger AR value means a model is giving higher attention to relevant image regions, a smaller AR means the model is not focusing on relevant image parts for its prediction.
AR is formulated as follows,

\[AR_{x\in X}=\frac{\sum\left(GradCAM(x)\odot M\right)}{\sum M} \tag{4}\]

where the sums run over pixels and \(\odot\) denotes element-wise multiplication. We then assign the products of input instances and GradCAM explanations to \(C_{good}\) and \(C_{bad}\) using the instances with the maximum and minimum AR values, as follows,

\[C_{good}:=i\cdot GradCAM(i),\quad AR_{i}:=\max_{x\in X}AR(x) \tag{5}\]

\[C_{bad}:=j\cdot GradCAM(j),\quad AR_{j}:=\min_{x\in X}AR(x) \tag{6}\]

The product of the input instance and the GradCAM explanation is used instead of just the GradCAM explanation because taking only the GradCAM outputs as the good/bad explanations could lead to biased exemplary explanations: it would mean we are only taking the model's focus, or attention, into consideration.

We then take inspiration from triplet loss to incorporate \(C_{good}\) and \(C_{bad}\) into our explanation loss. The main purpose of our explanation loss is to penalise a trainer according to its distance from \(C_{good}\) and \(C_{bad}\): the closer to \(C_{good}\) and the further from \(C_{bad}\), the lower the loss. For the products of the training instances \(x\in X\) and their corresponding GradCAM outputs, \(x\cdot GradCAM(x)\), we compute the Euclidean distances \(d_{xg}\) and \(d_{xb}\), which represent the distances from \(C_{good}\) and \(C_{bad}\), as follows,

\[d_{xg}:=d(x\cdot GradCAM(x),C_{good}) \tag{7}\]

\[d_{xb}:=d(x\cdot GradCAM(x),C_{bad}) \tag{8}\]

We train the model \(f\) to achieve \(d_{xg}\ll d_{xb}\) for all \(x\). We do this by adding a \(margin=1.0\) and requiring \(d_{xg}-d_{xb}+margin<0\). We then compute the explanation loss as follows (see the code sketch below),

\[L_{expl}=\sum_{i}^{N}\max(d_{x_{i}g}-d_{x_{i}b}+margin,0) \tag{9}\]

In addition to correctly classifying the training images, which is achieved through \(L_{CE}\), this \(L_{expl}\) (Equation 9) trains \(f\) to output GradCAM values that resemble the good explanation and differ from the bad explanation.

## 4 Experiments

### Data Collection and Preparation

We use the Covid-19 Radiography Database [1, 20], which contains chest x-ray images of four categories: covid, normal, lung opacity, and viral pneumonia. We downsample the dataset to avoid class imbalance. For model training we used 800 x-ray images per category, totalling 3200 images. For validation and testing, we used 1200 and 800 images in total, respectively. We resize all images to 224 x 224 pixels. The dataset is also accompanied by feature annotation masks, collected from radiologists, that show the relevant regions of each x-ray image [1]. Even though the exact number of affected images is unknown, the dataset contains confounding regions, such as marks, texts, and timestamps, in many of the images.

### Model Training

We followed a transfer learning approach using a pre-trained MobileNetV2 model [1]. We chose MobileNetV2 because it achieved better performance on the chest x-ray image classification task at a reduced computational cost, after comparison against the pre-trained models available on the Keras website. In order for the training process to affect the GradCAM explanation outputs, we freeze and reuse only the first 50 layers of MobileNetV2 and retrain the remaining convolutional layers together with a classifier layer that we added (256 nodes with a ReLU activation and 50% dropout, followed by a Softmax layer with 4 nodes).

Figure 2: (A) Input images. (B) Feature annotation masks. (C) GradCAM explanations of the Unrefined model. (D) GradCAM outputs of the Unrefined model overlaid over input images. (E) GradCAM explanations of eXBL. (F) GradCAM outputs of the eXBL model overlaid over input images.
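The following is a minimal PyTorch-style sketch of the exemplar selection (Equations (4)-(6)) and the triplet explanation loss (Equations (7)-(9)) referenced above; the GradCAM maps are assumed to be available as precomputed tensors, and all names are illustrative rather than the exact implementation used in our experiments.

```python
import torch

def pick_exemplars(images, gradcams, masks):
    """Eqs. (4)-(6): rank explanations by Activation Recall (AR) and
    take the best and worst as C_good and C_bad."""
    ar = (gradcams * masks).flatten(1).sum(1) / masks.flatten(1).sum(1)
    i, j = ar.argmax(), ar.argmin()
    return images[i] * gradcams[i], images[j] * gradcams[j]

def exbl_explanation_loss(images, gradcams, c_good, c_bad, margin=1.0):
    """Eqs. (7)-(9): pull each instance's (image * GradCAM) product
    towards the good exemplar and away from the bad one."""
    prod = (images * gradcams).flatten(1)                # x . GradCAM(x)
    d_g = torch.norm(prod - c_good.flatten(), dim=1)     # Eq. (7)
    d_b = torch.norm(prod - c_bad.flatten(), dim=1)      # Eq. (8)
    return torch.clamp(d_g - d_b + margin, min=0).sum()  # Eq. (9)
```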
We first trained the MobileNetV2 to categorise the training set into the four classes using categorical cross entropy. It was trained for 60 epochs using the Adam optimiser, with an early stop monitoring the validation loss at a patience of five epochs and a decaying learning rate of 1e-04. We refer to this model as the Unrefined model. We use the Unrefined model to extract the good and bad GradCAM explanations.

Next, we employ our eXBL algorithm using the good and bad explanations to teach the Unrefined model to focus on relevant image regions, tuning its explanations to resemble the good explanation and to differ from the bad explanation as much as possible. We refer to this model as the eXBL model; it was trained for 100 epochs using the same early stopping, learning rate, and optimiser as the Unrefined model.

## 5 Results

Tables 1 and 2 show the classification performance of the Unrefined and eXBL models. While the average AR score of the GradCAM explanations produced by the eXBL model is 0.705, the explanations of the Unrefined model score an average AR of 0.685. Sample test images, masks, GradCAM outputs, and overlaid GradCAM visualisations of both the Unrefined and eXBL models are displayed in Figure 2. From the sample outputs, we observe that the eXBL model was able to produce more accurate explanations that capture the relevant image regions presented in the annotation masks. However, the superior explanations of the eXBL model come with a classification performance loss on half of the categories, as summarised in Table 2.

## 6 Conclusion

In this work, we have presented an approach that simplifies the demanding task of feature annotation in XBL to the identification of only two model explanations. Our approach, Exemplary eXplanation Based Learning (eXBL), can tune a model's attention to focus on relevant image regions, thereby improving saliency-based model explanations. We believe our approach is domain transferable and shows potential for real-world implementation of interactive learning using XBL.

Even though the eXBL model achieved comparable classification performance when compared against the Unrefined model (especially in categorising the Normal and Lung opacity categories, where it scored better than and equal to the Unrefined model, respectively), as presented in Tables 1 and 2, we observed that there is a classification performance loss when retraining the Unrefined model with eXBL to produce good explanations. We attribute this to the accuracy-interpretability trade-off. Although the existence of this trade-off is debated [14, 15], performance loss after retraining a model could mean that the initial model was exploiting confounding regions in the training instances. It could also mean that our selection of good and bad explanations may not have been optimal, and that the two exemplary explanations may be degrading model performance.

\begin{table} \begin{tabular}{l c c} \hline Metric & Unrefined model & eXBL \\ \hline Accuracy & 0.950 & 0.910 \\ Precision & 0.947 & 0.912 \\ Recall & 0.945 & 0.902 \\ \hline \end{tabular} \end{table} Table 1: Summary of classification performances of the Unrefined and eXBL models.
\begin{table} \begin{tabular}{l c c} \hline Category & Unrefined model & eXBL \\ \hline Covid & 0.925 & 0.855 \\ Normal & 0.930 & 0.955 \\ Lung opacity & 0.955 & 0.955 \\ Viral pneumonia & 0.975 & 0.945 \\ \hline \end{tabular} \end{table} Table 2: Classification performance into the four categories.

The two exemplary explanations are selected using an objective evaluation metric, AR, and an existing dataset of annotation masks. For system development and experimental purposes, we use the masks as base knowledge. Although we believe our work presents a simple approach to implementing XBL in other domains, future work should involve domain experts in picking the good and bad explanations. However, when involving end users, since the pool of explanations from which the exemplary explanations are chosen could be large, active learning approaches should be explored to select a subset of model explanations with which to prompt domain experts for feedback.

## Acknowledgements

This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2307.09315
The Strong Field QED approach of the vacuum interaction processes at ELI-NP
The commissioning of the high power laser facility Extreme Light Infrastructure - Nuclear Physics (ELI-NP) at Bucharest-Magurele (Romania) allows the in-depth study of nonlinear interactions in Strong Field Quantum Electrodynamics (SF-QED). The present paper analyzes the SF-QED processes that can be studied at ELI-NP. Carrying out such experiments will allow finding answers to many fundamental QED questions. After a brief review of the first experiment (E-144 SLAC), which confirmed the existence of nonlinear QED interactions of high-energy electrons with photons of a laser beam, we present the fundamental QED processes that can be studied at ELI-NP in the multi-photon regime, along with the characteristic parameters of the laser beam used in the QED interaction with electrons. To prepare an experiment at ELI-NP, it is necessary to analyze both the kinematics and the dynamics of the interactions. Therefore, we first review the kinematics of linear QED processes and then the corresponding Feynman diagrams. For nonlinear, non-perturbative multi-photon QED interactions, the Feynman diagram technique must be adapted from linear to nonlinear processes. This is done by switching to quantum fields described by Dirac-Volkov dressed states of particles in an intense electromagnetic (EM) field. This allows the evaluation of the amplitudes of the physical processes and, finally, the determination of the cross-sections of these processes. SF-QED processes of multi-photon interactions with strong laser fields can be investigated taking into account the characteristics of the ELI-NP facility, in the context of QED vacuum pair production of electron-positron pairs and energetic gamma rays. Finally, we present some similar experimental projects from other research centers, in different stages of implementation.
M. Pentia, C. R. Badita, D. Dumitriu, A. R. Ionescu, H. Petrascu
2023-07-18T14:59:19Z
http://arxiv.org/abs/2307.09315v2
# The Strong Field QED approach of the vacuum interaction processes at ELI-NP

###### Abstract

The commissioning of the high power laser facility Extreme Light Infrastructure - Nuclear Physics (ELI-NP) at Bucharest-Magurele (Romania) allows the in-depth study of nonlinear interactions in Strong Field Quantum Electrodynamics (SF-QED). The present paper analyzes the SF-QED processes that can be studied at ELI-NP. Carrying out such experiments will allow finding answers to many fundamental QED questions. First, it is necessary to highlight and evaluate the various interactions with the virtual particles of the QED vacuum, such as multi-photon inverse Compton scattering, \(e^{+}e^{-}\) pair production, \(e^{+}e^{-}\) pair annihilation, \(e^{-}e^{-}\) Moller scattering, \(e^{+}e^{-}\) Bhabha scattering, the electron self-energy, the photon self-energy, and the vacuum energy. In this sense, the current worldwide results are analyzed along with the main steps necessary for the design of SF-QED experiments at ELI-NP. After a brief review of the first experiment (E-144 SLAC), which confirmed the existence of nonlinear QED interactions of high-energy electrons with the photons of a laser beam, we present the fundamental QED processes that can be studied at ELI-NP in the multi-photon regime, along with the characteristic parameters of the laser beam used in the QED interaction with electrons. To prepare an experiment at ELI-NP, it is necessary to analyze both the kinematics and the dynamics of the interactions. Therefore, we first review the kinematics of linear QED processes and then the corresponding Feynman diagrams. For nonlinear, non-perturbative multi-photon QED interactions, the Feynman diagram technique must be adapted from linear to nonlinear processes. This is done by switching to quantum fields described by Dirac-Volkov dressed states of particles in an intense electromagnetic (EM) field. This allows the evaluation of the amplitudes of the physical processes and, finally, the determination of the cross-sections of these processes. SF-QED processes of multi-photon interactions with strong laser fields can be investigated taking into account the characteristics of the ELI-NP facility, in the context of QED vacuum pair production of electron-positron pairs and energetic gamma rays. Finally, we present some similar experimental projects from other research centers, in different stages of implementation.

## I Introduction

In 2009 G.V. Dunne (University of Connecticut) [1] remarked: "the ELI project opens up an entirely new non-perturbative regime of QED and of quantum field theories in general. There are many experimental and theoretical challenges ahead. Theoretically, the biggest challenge in the non-perturbative arena is to develop efficient techniques, both analytical and numerical, for computing the effective action and related quantities, in external fields that realistically represent the experimental laser configurations. A lot of progress has been made in this direction, but new ideas and methods are still needed".

Most works on high power lasers interpret the SF-QED interactions as non-perturbative processes, relative to the classical field-matter interaction strength \(\xi\) and the quantum parameter \(\chi\) (see later). For experimental studies, the cross sections of these processes must be evaluated. Therefore, the treatment of SF-QED processes must be done in terms of vacuum interaction processes, with Feynman diagrams for "dressed" particles in oscillatory EM fields [2].
The treatment of non-perturbative QED has a different meaning than that of non-perturbative QCD. The latter is connected with the strong coupling parameter \(\alpha_{s}\), which is too large for a perturbative approach. The non-perturbative QED regime, on the other hand, is characterized by the coupling parameter \(\xi\) between matter and the laser field. Here we are dealing with SF-QED interactions, but we do not appeal to the QED coupling constant \(\alpha\approx 1/137\) as the parameter of the perturbative expansion; instead we use the coupling \(\xi\) between matter and the laser field, which can exceed unity. As such, processes and diagrams of a given order in \(\alpha\) (Feynman diagrams) depend on all terms of the expansion in \(\xi\). Moreover, the electron-laser interactions are described by "dressed" particle states [3], since they involve multi-photon interactions, and by high-order processes with radiative corrections.

Today, the ELI-NP facility can provide laser beams of 2 x 10 PW with intensities up to \(10^{22}-10^{23}\,W/cm^{2}\) [4; 5; 6]. Therefore, we can proceed to initiate experimental work for the in-depth study of nonlinear QED processes. With such laser beams, a series of studies can be performed:

* Systematic studies of the dynamics of fundamental QED processes accessible with high power lasers, and evaluation of the amplitudes of these processes, such as: \(\gamma e\) inverse Compton scattering [7; 8; 9; 10; 11], Breit-Wheeler \(e^{+}e^{-}\) pair production [12], Bethe-Heitler \(e^{+}e^{-}\) pair production [13], Dirac \(e^{+}e^{-}\) pair annihilation, \(e^{-}e^{-}\) Moller scattering, \(e^{+}e^{-}\) Bhabha scattering, the electron self-energy, the photon self-energy, and the vacuum energy [14].
* Proposals of experimental work for the measurement of physical properties related to the production of \(e^{+}e^{-}\) pairs (Schwinger mechanism) in the photon-multi-photon interaction (nonlinear Breit-Wheeler), in the multi-photon-virtual photon interaction with the nucleus field (nonlinear Bethe-Heitler), or the production and measurement of QED (positronium) bound states.
* Design and execution of experimental work to measure some fundamental processes using high-power lasers at ELI-NP.

The production of a large number of positrons with MeV energies opens the door to new avenues of antimatter research, including understanding the physics of various processes and phenomena in astrophysics, such as black holes and gamma-ray bursts [15; 16], and in pair plasma physics [17; 18].

## II Fundamental gamma-electron interactions

The conversion of laser light to matter is one of the fundamental processes of the EM photon-photon interaction, and one of the least studied experimentally so far. It is one of the spectacular predictions of QED, but difficult to achieve. The reason is due both to the small photon-photon cross section (\(\sim 0.1\) barn) and, mainly, to the difficulty of achieving an adequate density of photon beams [12]. However, E.J. Williams [19] noted that sufficiently high photon densities can be obtained in the intense electric field of a nucleus in relativistic motion. Therefore, the production of particle pairs from the EM field is in principle achievable through:

* the Schwinger effect [20];
* Breit-Wheeler production [12; 21; 22];
* Bethe-Heitler production [13].

Such linear QED laser light - matter interaction processes are shown in Fig. 1 in the form of Feynman diagrams.
They can be studied at the ELI-NP facility, but in the multi-photon oscillatory EM field of the laser, as nonlinear interactions of "dressed" particles [2]. Current theoretical and experimental works with high-power lasers (see for example [23]) have highlighted the possibility of experimental study of some fundamental QED interactions such as:

* Linear Breit-Wheeler interaction process (Fig. 2a): \(\gamma+\gamma\to e^{+}+e^{-}\), with electron-positron pair production, treated in linear QED [24].
* Nonlinear (multi-photon) inverse Compton scattering (Fig. 2b): \(e^{-}+n\,\gamma_{L}\to e^{-}+\gamma\), where the initial electron interacts with \(n\) laser photons \(\gamma_{L}\) and a high-energy \(\gamma\) photon is emitted. Up to 40% of the energy of the initial electrons is transferred to the final photon [25].
* Nonlinear (multi-photon) Breit-Wheeler interaction process (Fig. 2c): \(\gamma+n\,\gamma_{L}\to e^{+}+e^{-}\), pair production with the energy of several laser photons transformed into the mass of electron-positron pairs [26].
* The Bethe-Heitler interaction process with the intense electric field of the nucleus (Fig. 2d): \(n\gamma_{L}+\gamma_{V}\to e^{+}+e^{-}\), where the interaction of \(n\) laser photons \(\gamma_{L}\) with a virtual photon \(\gamma_{V}\) of the intense nuclear field leads to \(e^{+}e^{-}\) pair production [13].

### QED vacuum interaction processes

Under normal conditions the physical vacuum, due to quantum fluctuations, is in a permanent state of "boiling", with the creation and annihilation of virtual particle-antiparticle pairs. According to the Heisenberg principle, locally, on short time intervals \(\Delta t\), there are energy fluctuations \(\Delta E\) such that their product is not smaller than \(\hbar\): \(\Delta E\cdot\Delta t\geq\hbar\). On time intervals \(\Delta t\), the \(\Delta E\) fluctuation can produce \(e^{+}e^{-}\) pairs at shallow depth under the mass shell. That is, \(\Delta E\approx 2m_{e}c^{2}=2\cdot 0.511\,MeV\approx 10^{6}\) eV and \(\Delta t\geq\hbar/2m_{e}\,c^{2}=10^{-22}\) s. So, locally, the process of \(e^{+}e^{-}\) pair production is confined to a spatial interval \(\Delta x\) and a temporal interval \(\Delta t\), after which the pair annihilates (see Fig. 3a). This process of production and annihilation of virtual \(e^{+}e^{-}\) pairs is associated with the vacuum polarization phenomenon.

Figure 1: Light (\(\gamma\)) and matter (\(e^{-}\)) QED interaction processes (after the presentation of Oliver Pike, 23 October 2014, LNF - Frascati, "Observing the two-photon Breit-Wheeler process for the first time"; underlined - Nobel laureates)

If an external electric field \(E\) transfers enough energy to these virtual pairs during \(\Delta t\) (see Fig. 3b), it can transform them into real pairs that can be observed and recorded experimentally.
The characteristic distance \(2\Delta x\) over which the electric field can produce real \(e^{+}e^{-}\) pairs is given by the reduced Compton wavelength \(\lambda_{c}\),

\[2\Delta x\approx 2\,c\,\Delta t=\frac{\hbar}{m_{e}c}=\lambda_{c}\approx 386\cdot 10^{-15}\,m \tag{1}\]

The real \(e^{+}e^{-}\) pair will be produced if a minimum energy \(W\) is transferred from the field \(E\),

\[W=e\,E\,\lambda_{c}=\frac{eE\hbar}{m_{e}\,c}>2m_{e}\,c^{2} \tag{2}\]

Hence the minimum value of the electric field leading to vacuum breakdown is:

\[E>\frac{2m_{e}^{2}c^{3}}{\hbar e}=2E_{cr} \tag{3}\]

In the case of a uniform EM field, this defines the Schwinger limit of the critical electric field \(E_{cr}\) capable of producing vacuum breakdown, i.e. the threshold value for the spontaneous production of real \(e^{+}e^{-}\) pairs in the laser field - vacuum interaction:

\[E_{cr}=\frac{m_{e}^{2}c^{3}}{\hbar e}=1.323\cdot 10^{18}\,V/m \tag{4}\]

In other words, the value (4) specifies the minimum electric field capable of performing on an electron, over the Compton wavelength \(\lambda_{c}\), the work \(\epsilon_{e}=W/2\) equal to its rest mass \(m_{e}\,c^{2}\):

\[\epsilon_{e}=e\,E_{cr}\,\lambda_{c}=m_{e}\,c^{2} \tag{5}\]

The value (4) of the Schwinger critical electric field \(E_{cr}\) has not yet been reached experimentally. This field, leading to vacuum breakdown with the production of \(e^{+}e^{-}\) pairs (Breit-Wheeler process), presents many nonlinear aspects. The field intensities accessible in the laboratory system nowadays are three orders of magnitude lower than the value of the critical field \(E_{cr}\). However, in the rest system of a high energy electron, the electron "sees" the transverse component of the laser electric field \(E\) boosted by the Lorentz factor \(\gamma_{e}\), reaching a value \(E^{*}=\gamma_{e}\,E\). Based on the relationship between the electric field \(E\) and the intensity of the laser beam \(I_{L}\), \(E\,(V/m)=2750\sqrt{I_{L}\,(W/cm^{2})}\), for ELI-NP with intensity \(I_{L}>10^{22}\,W/cm^{2}\) the light pulses lead to a field \(E\approx 10^{14}\,V/m\) at the focal point, over a distance of several laser wavelengths. For example, with an electron beam energy of \(\epsilon_{e}=10\) GeV, the Lorentz factor is \(\gamma_{e}=\epsilon_{e}/m_{e}c^{2}\approx 2\cdot 10^{4}\), so that if a laser beam collides head-on, the electron "sees" a boosted laser field \(E^{*}\approx 2\cdot 10^{18}\) V/m. Therefore, the intensity of the laser field in the moving electron system will be of the order of the critical value \(E_{cr}\) (4).

### First matter-light conversion experiment (E-144 SLAC)

In the SLAC E-144 experiment [25; 26; 27; 28], studies of nonlinear QED processes were carried out using 46.6 GeV electrons scattered on an intense laser EM field with a wavelength of 527 nm (2.35 eV), see Fig. 4. Using focused laser pulses with peak power of the order of a terawatt and energy of 650 mJ, intensities of \(10^{18}\,W/cm^{2}\) were obtained. The production of \(e^{+}e^{-}\) pairs requires an energy in the center-of-mass system of at least \(2m_{e}c^{2}=1.02\) MeV. This can be achieved by a nonlinear Breit-Wheeler process, i.e. by scattering on a laser beam a high-energy photon created by inverse Compton scattering of a laser beam on a high-energy electron (see Fig. 4).

Figure 3: a) Vacuum fluctuations with creation and annihilation of virtual \(e^{+}e^{-}\) pairs; b) Energy transfer from the EM field ensures the separation of the virtual pairs and their transformation into real pairs.
Figure 2: a) Breit-Wheeler linear process; b) Nonlinear inverse Compton scattering (multi-photon); c) Nonlinear Breit-Wheeler process (multi-photon); d) Nonlinear Bethe-Heitler interaction.

For the production of pairs from the interaction \(\gamma_{1}\gamma_{2}\to e^{+}e^{-}\), with laser light \(\gamma_{1}\) of wavelength 527 nm (energy 2.35 eV), photons \(\gamma_{2}\) of energy 109 GeV would be required. But with 527 nm laser photons scattered on the 46.6 GeV electron beam available at SLAC, the maximum energy of the nonlinear Compton scattered photons is only 29.2 GeV [28]. Therefore it is necessary to use both the relativistic electron boost and a multi-photon interaction process.

The electric field increase is obtained through the relativistic boost of the counter-propagating electron, with laboratory-system energy \(\epsilon_{e}\), by a Lorentz factor \(\gamma_{e}=\epsilon_{e}/m_{e}\,c^{2}\gg 1\). The electron then "sees" the laser field \(E_{L}\) boosted to the value \(E^{*}=\gamma_{e}E_{L}\). So, for \(\epsilon_{e}=46.6\) GeV, the electron with Lorentz factor \(\gamma_{e}=46.6\cdot 10^{3}\) MeV\(/0.511\) MeV \(=9\cdot 10^{4}\), colliding head-on with a 527 nm laser photon, "sees" in its rest frame the field \(E^{*}=1.1\cdot 10^{18}\) V/m. This is close to the Schwinger limit \(E_{cr}=m_{e}^{2}\,c^{3}/\hbar\,e=1.3\cdot 10^{18}\) V/m, which can transfer to an electron, along a Compton wavelength, an energy equal to its rest mass, and at which a static electric field would cause the vacuum to spontaneously break down into \(e^{+}e^{-}\) pairs. To reach the value of the critical field \(E_{cr}\), we must resort to multi-photon interactions of electrons with a high-intensity laser \(I_{L}\), which ensures a sufficiently strong electric field \(E_{L}\), according to \(I_{L}=\epsilon_{0}\,c\,\langle E_{L}^{2}\rangle\) (see Eq. (24) below).

The parameters of the experiment correspond to the nonlinear Compton regime (see Fig. 5). The high-energy photon resulting from inverse Compton scattering interacts with the multi-photon laser beam and produces \(e^{+}e^{-}\) pairs through the nonlinear Breit-Wheeler process (see Fig. 6).

**Important Note:** Linear, single-photon QED interaction processes described by Feynman diagrams such as those in Fig. 1 can be studied in the multi-photon regime using the same Feynman diagrams, but with "dressed" (Dirac-Volkov) particle states and propagators [2], because the particle is moving in the oscillating EM field.
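A quick numerical check of the estimates quoted in the last two subsections, using SI constants (a back-of-the-envelope sketch, not a precision computation):

```python
# Schwinger critical field and boosted-field estimates (SI units).
m_e = 9.109e-31     # electron mass [kg]
c = 2.998e8         # speed of light [m/s]
hbar = 1.055e-34    # reduced Planck constant [J s]
e = 1.602e-19       # elementary charge [C]

E_cr = m_e**2 * c**3 / (hbar * e)        # Eq. (4)
print(f"E_cr = {E_cr:.3e} V/m")          # -> ~1.32e+18 V/m

# E-144: 46.6 GeV electron -> Lorentz factor ~ 9e4, as quoted above.
print(f"gamma(E-144) = {46.6e9 / 0.511e6:.2e}")

# ELI-NP example: 10 GeV electron in a ~1e14 V/m focused field
# sees a boosted field of order E_cr.
gamma_e = 10e9 / 0.511e6                 # ~2e4
print(f"E* = {gamma_e * 1e14:.1e} V/m")  # -> ~2e18 V/m
```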
## III Gamma - Electron Scattering

### Kinematics of the \(e_{i}\,\gamma_{i}\to e_{f}\,\gamma_{f}\) scattering

The 4-momentum conservation, see Fig. 7, reads:

\[q_{\gamma i}+q_{ei}=q_{\gamma f}+q_{ef} \tag{6}\]

By squaring (6), with \(q_{\gamma}^{2}=0\) and \(q_{e}^{2}=m_{e}^{2}c^{2}\), we have:

\[q_{\gamma i}\cdot q_{ei}=q_{\gamma f}\cdot q_{ef} \tag{7}\]

Multiplying (6) by \(q_{\gamma f}\) and using (7), we have:

\[q_{\gamma f}\cdot\left(q_{\gamma i}+q_{ei}\right)=q_{\gamma i}\cdot q_{ei} \tag{8}\]

Expressing (8) in 3-components, with the notations of Fig. 8, we get the energy \(\epsilon_{\gamma f}\) of the final photon as a function of the energy \(\epsilon_{\gamma i}\) of the initial photon for a given electron energy \(\epsilon_{ei}\):

\[\epsilon_{\gamma f}=\frac{\epsilon_{\gamma i}\,\epsilon_{ei}\left(1-\beta_{ei}\cos\alpha_{i}\right)}{\epsilon_{\gamma i}\left(1-\cos\theta\right)+\epsilon_{ei}\left(1-\beta_{ei}\cos\alpha_{f}\right)} \tag{9}\]

In the initial electron rest frame (\(\beta_{ei}=0\), \(\gamma_{ei}=1\), \(\epsilon_{ei}=m_{e}\,c^{2}\)), the photon energy change is:

\[\frac{\epsilon_{\gamma f}}{\epsilon_{\gamma i}}=\frac{1}{1+\frac{\epsilon_{\gamma i}}{m_{e}\,c^{2}}\left(1-\cos\theta\right)} \tag{10}\]

Multiplying (8) by \(q_{ei}\), we have:

\[q_{\gamma f}=\frac{\left(q_{\gamma i}\cdot q_{ei}\right)\,q_{ei}}{\left(q_{\gamma i}\cdot q_{ei}\right)+m_{e}^{2}c^{2}} \tag{11}\]

The temporal component gives the energy dependence of the final photon on the initial electron energy:

\[\epsilon_{\gamma f}=\frac{\left(q_{\gamma i}\cdot q_{ei}\right)\epsilon_{ei}}{\left(q_{\gamma i}\cdot q_{ei}\right)+m_{e}^{2}c^{2}} \tag{12}\]

where (see Fig. 9)

\[q_{\gamma i}\cdot q_{ei}=\frac{\epsilon_{\gamma i}\epsilon_{ei}}{c^{2}}\left(1+\beta_{ei}\cos\alpha\right) \tag{13}\]

That is, the final photon energy (12) is

\[\epsilon_{\gamma f}=\frac{\epsilon_{ei}}{1+\frac{m_{e}^{2}\,c^{4}}{\epsilon_{\gamma i}\epsilon_{ei}}\frac{1}{\left(1+\beta_{ei}\cos\alpha\right)}} \tag{14}\]

Figure 4: The SLAC E-144 experiment. Positron production process in \(\gamma\gamma\) scattering (O. Pike, [https://agenda.infn.it/event/8532/contributions/74190](https://agenda.infn.it/event/8532/contributions/74190))

Figure 5: High-energy photon production by inverse Compton scattering of laser beams. a) Kinematics of inverse Compton scattering; b) Feynman diagram for determining the amplitude.

Figure 6: Feynman diagrams for the production of \(e^{+}e^{-}\) pairs by the nonlinear Breit-Wheeler process.

### The dressed electron mass in the electromagnetic field

The motion of a free electron in an EM field can be described in terms of the interaction of the electron with a classical plane wave of frequency \(\omega\). In general, such an electron will show an oscillatory motion with the same frequency \(\omega\) and will radiate in turn. For a circularly polarized laser, EM waves with electric and magnetic components \(E_{L}\) and \(B_{L}\) have a constant amplitude and rotate with the angular frequency \(\omega_{L}\) in a plane perpendicular to the direction of wave propagation. In this wave, the movement of the electron is circular with radius \(r\), angular frequency \(\omega_{L}\) and tangential speed \(v_{\perp}=\omega_{L}\,r\) perpendicular to the direction of movement, parallel to the magnetic field vector \(B_{L}\) [29]. The circular motion of the electron is \(m_{e}v_{\perp}^{2}/r=eE_{L}\), or

\[p_{\perp}\omega_{L}=eE_{L} \tag{15}\]

where \(p_{\perp}=\gamma\,m_{e}v_{\perp}\) is the electron transverse momentum.
For a relativistic electron we have \(\beta_{\perp}=p_{\perp}c/\epsilon_{e}\) and \(\gamma=\epsilon_{e}/m_{e}\,c^{2}\), so the transverse momentum is:

\[p_{\perp}=\beta_{\perp}\gamma\,m_{e}\,c \tag{16}\]

The product

\[\xi=\beta_{\perp}\,\gamma \tag{17}\]

defines the field strength parameter. A relativistic electron inside an EM plane wave field appears to have an "increased" mass. Indeed, based on the energy-mass relation of the electron,

\[m_{e}^{2}c^{2}=\frac{\epsilon_{e}^{2}}{c^{2}}-\vec{p}^{\,2}\]

together with (16) and (17),

\[m_{e}^{2}c^{2}=\frac{\epsilon_{e}^{2}}{c^{2}}-p_{\parallel}^{2}-\xi^{2}m_{e}^{2}c^{2} \tag{18}\]

the relativistic energy of the electron in an EM field now reads:

\[\frac{\epsilon_{e}^{2}}{c^{2}}-p_{\parallel}^{2}=m_{e}^{2}c^{2}\left(1+\xi^{2}\right) \tag{19}\]

where \(p_{\parallel}\) is the longitudinal momentum, parallel to the direction of propagation of the wave. Heuristically, one can say that the electron behaves as if it had an effective mass [7]:

\[\overline{m}_{e}=m_{e}\sqrt{1+\xi^{2}} \tag{20}\]

This behavior is identifiable by a shift in the kinematic edge for Compton scattering: the electron in an EM plane wave field moves along \(p_{\parallel}\) and behaves as having a "dressed" mass \(\overline{m}_{e}\). Although this mass "increase" has been derived classically, the same relation (20) for the effective mass appears in the quantum treatment of the solutions of the Dirac equation for free electrons in the EM plane wave, as Dirac-Volkov "dressed" states [2].

Figure 9: The high energy \(\gamma_{f}\)-ray production geometry by inverse Compton scattering.

### Classical laser intensity parameter \(\xi\) (nonlinearity charge-field coupling)

The electric field strength parameter \(\xi\) (17) can be expressed with the transverse momentum (16), or via (15) with the electric field component \(E_{L}\), or, as we will see, with the intensity \(I_{L}\) of the laser beam. For the moment we can write:

\[\xi=\beta_{\perp}\gamma=\frac{e\,E_{L}}{m_{e}\omega_{L}c} \tag{21}\]

The Lorentz factor \(\gamma=1/\sqrt{1-\beta_{\perp}^{2}}\), with (17), can be expressed as \(\gamma=\sqrt{1+\xi^{2}}\), and \(\beta_{\perp}=\xi/\sqrt{1+\xi^{2}}\). Then the radius of the electron's circular trajectory is less than the reduced laser wavelength \(\bar{\lambda}_{L}=\lambda_{L}/(2\pi)\):

\[r=\frac{v_{\perp}}{\omega_{L}}=\frac{\beta_{\perp}c}{\omega_{L}}=\frac{\xi}{\sqrt{1+\xi^{2}}}\,\frac{\lambda_{L}}{2\pi}\leq\frac{\lambda_{L}}{2\pi} \tag{22}\]

Now, it is convenient to redefine the parameter \(\xi\) by squaring (21) as:

\[\xi^{2}=\frac{e^{2}\langle E_{L}^{2}\rangle}{m_{e}^{2}\omega_{L}^{2}\,c^{2}} \tag{23}\]

where the average \(\langle E_{L}^{2}\rangle\) is taken with respect to time. \(\xi^{2}\) (23) measures the average laser beam intensity \(I_{L}\), expressed as usual in electrodynamics by \(\langle E_{L}^{2}\rangle\):

\[I_{L}=\epsilon_{0}\,c\,\langle E_{L}^{2}\rangle \tag{24}\]

With the mean laser electric field taken as the r.m.s. value,
\[E_{L}\approx\sqrt{\langle E_{L}^{2}\rangle}=\sqrt{\frac{1}{\epsilon_{0}\,c}}\,\sqrt{I_{L}}\qquad\mbox{or}\]

\[E_{L}=1944\cdot\sqrt{I_{L}}\qquad\mbox{for}\quad I_{L}\ \mbox{in}\ W/cm^{2} \tag{25}\]

Substituting \(\langle E_{L}^{2}\rangle\) from (25) in (23), with \(\omega_{L}=2\pi c/\lambda_{L}\), we finally get:

\[\xi^{2}=3.65\times 10^{-19}\,I_{L}\,\lambda_{L}^{2}\quad\mbox{for}\ I_{L}\ \mbox{in}\ W/cm^{2}\ \mbox{and}\ \lambda_{L}\ \mbox{in}\ \mu m \tag{26}\]

At ELI-NP, for a laser wavelength \(\lambda_{L}=0.815\ \mu m\) and a pulse intensity at the focus \(I_{L}\sim 10^{22}\,W/cm^{2}\), we have \(\xi\cong 50\) (see the numerical sketch after the list below).

### Physical interpretation of the laser intensity parameter \(\xi\)

The laser intensity is connected with the energy transfer from the laser field to the electron [30]. The classical laser intensity parameter \(\xi\) (21) can be interpreted through the work of the laser field over the electron Compton wavelength \(\lambda_{c}\):

\[\xi=\frac{e\,E_{L}}{m_{e}\,c\,\omega_{L}}=e\,E_{L}\frac{\lambda_{c}}{\hbar\,\omega_{L}}=e\,E_{L}\frac{\bar{\lambda}_{L}}{m_{e}\,c^{2}} \tag{27}\]

\[\mbox{where}\qquad\lambda_{c}=\frac{\hbar}{m_{e}\,c}\quad;\quad\bar{\lambda}_{L}=\frac{c}{\omega_{L}}\]

The product \(e\,E_{L}\lambda_{c}\) represents the work of the laser field \(E_{L}\) over the electron reduced Compton wavelength \(\lambda_{c}\) (see Fig. 10). \(\xi\) in (27) measures this work in units of the photon energy \(\hbar\,\omega_{L}\); thus \(\xi\) gives the number of laser photons interacting along \(\lambda_{c}\). \(\xi\) is a classical parameter, because it is independent of \(\hbar\), and it measures the laser intensity as the number \(n\) of \(\gamma_{L}\) laser photons interacting with the electron along the Compton wavelength \(\lambda_{c}\). The last term of Eq. (27) can also be interpreted as the energy transfer to the electron over the reduced laser wavelength \(\bar{\lambda}_{L}\), in units of the electron rest mass \(m_{e}\,c^{2}\). If we consider the Schwinger electric field threshold \(E_{cr}\) (4), the \(\xi\) parameter can be expressed as:

\[\xi=\frac{m_{e}\,c^{2}}{\hbar\,\omega_{L}}\,\frac{E_{L}}{E_{cr}}=\frac{\bar{\lambda}_{L}}{\lambda_{c}}\,\frac{E_{L}}{E_{cr}} \tag{28}\]

The smallness of the factor \(E_{L}/E_{cr}\) is compensated by the large ratio of laser to Compton wavelength, \(\bar{\lambda}_{L}/\lambda_{c}\), of the order of \(10^{6}\) [30].

Figure 10: Electron multi-photon laser interaction.

The value of the \(\xi\) parameter encodes the following regimes:

* \(\xi\ll 1\): the processes with the minimum possible number of photons are the most probable. The probabilities equal the perturbation (linear) theory probabilities, and plane waves play the role of individual photons.
* \(\xi\sim 1\) or \(\xi>1\): the probabilities to absorb different numbers of photons become comparable and the process becomes multi-photon, i.e., the probability has an essentially non-perturbative (nonlinear) dependence on the field.
* \(\xi\gg 1\): the case of modern laser technology.
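As referenced above, a short numerical sketch of Eq. (26) reproduces the quoted value of \(\xi\) at ELI-NP and, for comparison, the near-perturbative value at E-144-like parameters (the helper name is illustrative):

```python
import math

def xi(I_wcm2, lam_um):
    """Classical intensity parameter from Eq. (26):
    xi^2 = 3.65e-19 * I_L [W/cm^2] * lambda_L^2 [um^2]."""
    return math.sqrt(3.65e-19 * I_wcm2 * lam_um**2)

# ELI-NP example quoted in the text: lambda_L = 0.815 um, I_L ~ 1e22 W/cm^2
print(f"xi = {xi(1e22, 0.815):.0f}")   # -> ~49, i.e. xi ~ 50 (deep multi-photon regime)
# E-144-like parameters: ~1e18 W/cm^2 at 527 nm
print(f"xi = {xi(1e18, 0.527):.2f}")   # -> ~0.3, close to the perturbative regime
```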
### Quantum nonlinearity parameter \(\chi_{e}\)

While the \(\xi\) parameter (27) is given in relation to the photon field energy transfer over a reduced Compton wavelength, \(e\,E_{L}\,\lambda_{c}\) (in \(\hbar\,\omega_{L}\) units), the quantum \(\chi_{e}\) parameter is defined in terms of the energy of the electron \(\epsilon_{e}\) (in \(m_{e}\,c^{2}\) units) and the laser field \(E_{L}\) (in \(E_{cr}\) units):

\[\chi_{e}=\frac{\epsilon_{e}}{m_{e}c^{2}}\frac{E_{L}}{E_{cr}}=\gamma_{e}\frac{E_{L}}{E_{cr}}=\gamma_{e}\frac{\hbar\,\omega_{L}}{m_{e}\,c^{2}}\,\xi \tag{29}\]

where we used (28) to express the connection with \(\xi\). If the laser field is \(E_{L}=E_{cr}\) and the electron energy, cf. (5), is \(\epsilon_{e}=\epsilon_{cr}=eE_{cr}\lambda_{c}=m_{e}c^{2}\), i.e. the work performed by the field \(E_{cr}\) over the reduced Compton wavelength \(\lambda_{c}\), then \(\chi_{e}=1\). In this way \(\chi_{e}\) compares the field strength \(\gamma_{e}E_{L}\) in the rest frame of the electron with the critical field \(E_{cr}\), and measures the importance of quantum nonlinear effects in \(e^{+}e^{-}\) vacuum pair production [8; 31]. In the context of SF-QED, the quantum nonlinearity parameter \(\chi_{e}\) serves as a measure of the importance of nonlinear QED processes such as multi-photon Compton scattering, \(e^{+}e^{-}\) pair production and photon-photon scattering. These processes become significant when the quantum nonlinearity parameter is of order unity or greater.

## IV Linear QED interaction processes

The Feynman diagrams allow one to determine the invariant amplitude of the QED processes, the \(\hat{S}\) matrix elements and, finally, the cross section of the studied process. The Feynman diagrams of the linear QED processes of interest are shown in Table 1. Evaluation of the Feynman diagrams uses the electromagnetic \(\hat{A}_{\mu}(x)\) and Dirac \(\hat{\psi}(x)\) and \(\hat{\overline{\psi}}(x)\) field operators, with the corresponding annihilation and creation components listed below:

\[\hat{A}_{\mu}(x)=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}\frac{1}{2\omega}\Big[\underbrace{\hat{a}_{\lambda}(\vec{k})\,\epsilon^{\lambda}_{\mu}\,e^{-i\,k\cdot x}}_{\hat{A}^{-}_{\mu}(x)}+\underbrace{\hat{a}^{\dagger}_{\lambda}(\vec{k})\,\epsilon^{\lambda}_{\mu}\,e^{i\,k\cdot x}}_{\hat{A}^{+}_{\mu}(x)}\Big]\]

\[\hat{\psi}(x)=\sum_{s}\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}\frac{m}{\omega}\Big[\underbrace{\hat{b}_{s}(\vec{p})\,u_{s}(\vec{p})\,e^{-i\,p\cdot x}}_{\hat{\psi}^{-}(x)}+\underbrace{\hat{c}^{\dagger}_{s}(\vec{p})\,v_{s}(\vec{p})\,e^{i\,p\cdot x}}_{\hat{\psi}^{+}(x)}\Big]\]

\[\hat{\overline{\psi}}(x)=\sum_{s}\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}\frac{m}{\omega}\Big[\underbrace{\hat{c}_{s}(\vec{p})\,\overline{v}_{s}(\vec{p})\,e^{-i\,p\cdot x}}_{\hat{\overline{\psi}}^{-}(x)}+\underbrace{\hat{b}^{\dagger}_{s}(\vec{p})\,\overline{u}_{s}(\vec{p})\,e^{i\,p\cdot x}}_{\hat{\overline{\psi}}^{+}(x)}\Big]\]
where the field operators act with the corresponding annihilation and creation operators:

\[\begin{array}{l}\hat{A}^{-}_{\mu}(x)\to\hat{a}\ \mbox{- photon annihilation in }x\\ \hat{\psi}^{-}(x)\to\hat{b}\ \mbox{- electron annihilation in }x\\ \hat{\overline{\psi}}^{-}(x)\to\hat{c}\ \mbox{- positron annihilation in }x\\ \hat{A}^{+}_{\mu}(x)\to\hat{a}^{\dagger}\ \mbox{- photon creation in }x\\ \hat{\psi}^{+}(x)\to\hat{c}^{\dagger}\ \mbox{- positron creation in }x\\ \hat{\overline{\psi}}^{+}(x)\to\hat{b}^{\dagger}\ \mbox{- electron creation in }x\end{array} \tag{30}\]

As an example, the evaluation of the Feynman diagrams in Fig. 11 for Compton (photon-electron) scattering implies the determination of the \(\hat{S}\) matrix elements \(\left\langle\gamma,e^{-}\left|\,\hat{S}_{A}\,\right|\gamma,e^{-}\right\rangle\) and \(\left\langle\gamma,e^{-}\left|\,\hat{S}_{B}\,\right|\gamma,e^{-}\right\rangle\). Using the appropriate creation/annihilation components, the scattering matrix elements for the two diagrams in Fig. 11 are built by annihilating the initial photon and electron at one vertex, creating the final photon and electron at the other, and contracting the internal fermion fields into a propagator.
\begin{table} \begin{tabular}{||c|c||} \hline \hline Process & Reaction \\ \hline photon-electron scattering & \(\gamma+e^{-}\longrightarrow\gamma+e^{-}\) \\ photon-positron scattering & \(\gamma+e^{+}\longrightarrow\gamma+e^{+}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The Feynman diagrams of some linear QED processes and the corresponding \(\hat{S}\) matrix elements (the diagrams themselves are not reproduced here).

The contractions of the internal fermion field operators give the Feynman propagators:

\[\hat{\psi}^{-}(x_{1})\hat{\psi}^{+}(x_{2})\;\to\;i\,S_{F}(p+k)\qquad\mbox{(diagram A)}\]

\[\hat{\psi}^{-}(x_{1})\hat{\psi}^{+}(x_{2})\;\to\;i\,S_{F}(p-k^{\prime})\qquad\mbox{(diagram B)}\]

The individual invariant amplitude \(\mathcal{M}_{i}\) is evaluated for each Feynman diagram from the corresponding \(\hat{S}_{i}\) matrix element. The \(\hat{S}\) matrix element is given by the total amplitude \(\mathcal{M}_{fi}\) and the phase space volume of the process:

\[\left\langle\gamma,e^{-}\,\left|\,\hat{S}\,\right|\gamma,e^{-}\right\rangle=(2\pi)^{4}\,\delta^{4}(p+k-p^{\prime}-k^{\prime})\,\mathcal{M}_{fi} \tag{31}\]

where the total amplitude \(\mathcal{M}_{fi}\) is the sum of the individual amplitudes:

\[\mathcal{M}_{fi}=\mathcal{M}_{fi}^{A}+\mathcal{M}_{fi}^{B}\]

\[\mathcal{M}_{fi}^{A}=-\left(\frac{e}{\hbar}\right)^{2}\overline{u}_{s^{\prime}}(\vec{p}\,^{\prime})\,\not{\varepsilon}\,^{\prime *}\,i\,S_{F}(p+k)\,\not{\varepsilon}\,u_{s}(\vec{p})\]

\[\mathcal{M}_{fi}^{B}=-\left(\frac{e}{\hbar}\right)^{2}\overline{u}_{s^{\prime}}(\vec{p}\,^{\prime})\,\not{\varepsilon}\,i\,S_{F}(p-k^{\prime})\,\not{\varepsilon}\,^{\prime *}\,u_{s}(\vec{p})\]

a vacuum-polarization loop in a strong field. The cross section of the nonlinear Breit-Wheeler pair creation can be evaluated with the Feynman-type diagrams of Fig. 15, with dressed electrons (not drawn with a double line in the figure).

### Bethe-Heitler pair production

A process of \(e^{+}e^{-}\) pair production that can be studied at ELI-NP is the interaction of previously obtained high energy photons (from inverse Compton scattering or bremsstrahlung) with the virtual photons of the strong EM field of the atomic nucleus:

\[\gamma+\gamma_{V}\to e^{+}+e^{-} \tag{36}\]

The evaluation of the cross section is done with the Feynman-type diagrams of Fig. 16, but with dressed electrons.
Another possible Bethe-Heitler \(e^{+}e^{-}\) pair production channel is multi-photon laser interaction with the virtual photons of an atomic nucleus field: \[n\,\gamma+\gamma_{V}\to e^{+}+e^{-} \tag{37}\] The evaluation of the cross section of this \(e^{+}e^{-}\) pair production in the field of the atomic nucleus is done with the Feynman-type diagrams in Fig. 17.

### ELI-NP experimental facility

The facilities at ELI-NP allow for the first time the use of two 10 PW laser beams with intensities up to \(10^{22}-10^{23}\,W/cm^{2}\) to study strong-field nonlinear QED interaction processes [4; 32]. The two 10 PW laser beams are extracted from the same laser pulse by splitting it into two pulses, which are amplified on two identical amplifier chains. The two 10 PW pulses must be coherent; however, variations in the optical path traveled by the two pulses require additional adjustments for femtosecond-level synchronization. There are two main types of experiments that could be performed (see Fig. 18). In the first experiment (Fig. 18a), one of the 10 PW laser beams serves as the pump laser: focused with a large focal length mirror (F/20) on a gaseous target (gas jet or gas cell), it produces relativistic electrons (\(\gamma_{e}\gg 1\)) by wakefield acceleration (see Fig. 19). The second high-intensity 10 PW probe-laser beam is focused with an F/3 mirror onto the relativistic electron bunch (Fig. 19). Being exposed to the strongly focused laser field, the electrons generate intense gamma rays and electron-positron pairs through the nonlinear QED interaction processes that we intend to study. The intensity of the 10 PW laser focused into a focal spot with a diameter of 5 \(\mu\)m is expected to be greater than \(10^{22}\)\(W/cm^{2}\). The pump and probe lasers are synchronized and delayed relative to each other as required. High-energy gamma photons can be measured with a gamma detector placed before the electron beam dump, while electron and positron spectra can be measured with dedicated spectrometers (Fig. 19).

In the second experiment (Fig. 18b), the 10 PW pump laser is tightly focused by an F/3 mirror on a solid foil target and produces relativistic electrons. The second laser, the 10 PW probe laser, is also tightly focused by an F/3 mirror on a solid target and conveniently delayed relative to the pump laser. The probe laser produces the strong EM field to which the electrons are exposed. The solid-target method is complementary to the gas-target method: with a solid target, the number of relativistic electrons is very high, much higher than in the gas-target or beam-beam method; on the other hand, the kinetic energy of the electrons is not as high as in the "beam-beam" method. ELI-NP can prepare the E6 experimental area and the interaction chamber with the gas target, shown in Fig. 19. The chamber contains the two pump & probe colliding 10 PW laser beams.

Figure 13: a) Experimental arrangement of the bremsstrahlung process; b) Feynman diagram of bremsstrahlung (the double-line dressed electrons are not drawn).

Figure 14: High-energy photon distribution emitted from the gold target [10].

Figure 16: Feynman diagrams for the Bethe-Heitler process (the double-line dressed electrons are not drawn).

Figure 17: Feynman diagrams for multi-photon Bethe-Heitler \(e^{+}e^{-}\) pair production (the double-line dressed electrons are not drawn).
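To put the quoted intensities in context: the laser field strength in SF-QED is usually characterized by the dimensionless normalized amplitude \(a_{0}\). A commonly used engineering formula (standard in the SF-QED literature, given here as a reminder rather than taken from this paper) is

\[a_{0}=\frac{e\,E_{0}}{m_{e}\,\omega\,c}\;\approx\;0.85\;\lambda[\mu\mathrm{m}]\,\sqrt{I\,[10^{18}\,\mathrm{W/cm^{2}}]},\]

so that for \(\lambda\approx 0.81\,\mu\mathrm{m}\) and \(I=10^{22}\,\mathrm{W/cm^{2}}\) one obtains \(a_{0}\approx 70\), deep in the relativistic regime \(a_{0}\gg 1\) targeted by the experiments described here.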
Measuring the characteristics of interacting SF-QED processes in intense laser fields is an experimental challenge: it requires high-sensitivity detectors for electrons and positrons in the presence of very high background levels of X-radiation and \(\gamma\) photons. The experimental area E6, with the two counter-propagating 10 PW laser beams focused on the gas and the solid target, can be prepared at ELI-NP. The required equipment and configuration of the E6 experimental chamber with gas target are presented in Table 2. The physical regime offered by ELI-NP, following the analysis of Keita Seto et al. [34], is shown in the diagram of Fig. 20. The high intensity of the ELI-NP laser allows obtaining a number of photons participating in an interaction of \(N>10^{5}\) (see the star in Fig. 20). The use of photons from 10 keV up to the GeV class should be considered.

\begin{table} \begin{tabular}{|p{142.3pt}|} \hline E6 equipment consists of: \\ \hline \(\bullet\) 24 \(m^{3}\) interaction chamber \\ \(\bullet\) ISO-7 clean room \\ \(\bullet\) Opto-mechanical components \\ \(\bullet\) Gas targets of various types \\ \(\bullet\) 30 m long focal, large aperture spherical mirror \\ \(\bullet\) Laser and plasma diagnostics (1st or 2nd harmonic) \\ \(\bullet\) Multi-GeV electron spectrometers \\ \(\bullet\) Gamma-ray spectrometer \\ \hline E6 target area configuration (gas targets, QED): \\ \(\bullet\) \(2\times 10\) PW laser beams: 240 J, 23 fs, 810 nm, \(\sim\) 45 cm dia. FWHM \\ \(\bullet\) or 10 PW @ 1/60 Hz and 1 PW @ 1 Hz \\ \(\bullet\) 1 short focal: parabolic mirror F2.7 \\ \(\bullet\) 1 long focal \(\sim\) 30 m: spherical mirror, \(\sim\) F60 @ 10 PW (\(\sim\) F160 @ 1 PW) \\ \(\bullet\) 1 plasma mirror \\ \(\bullet\) 1 cleanroom \\ \(\bullet\) Experimental chamber: \(L\times W\times H\) of \(4000\times 3300\times 1780\)\(mm^{3}\) \\ \hline \end{tabular} \end{table} Table 2: E6 experimental chamber (laser-gas interaction)

Figure 19: Layout of the gas-target experimental components for the study of SF-QED processes (reprinted from [32]).

Figure 18: The two 10 PW laser pulses (red arrows) for SF-QED studies with gas (a) and solid (b) targets (I.C.E. Turcu et al. [32]).

## VII Upcoming Experiments

Several petawatt-class lasers have been built worldwide. Currently, multi-PW and even 10 PW lasers have been built or are being planned around the world [35]. Some of them have proposals to study fundamental processes in the SF-QED regime in order to explore non-perturbative effects. These upcoming experiments include:

- **LUXE (Laser Und XFEL Experiment)** is a new experiment proposed at DESY and the European XFEL to study QED in the strong-field regime where QED becomes non-perturbative. It aims at studying high-field QED in electron-laser and photon-laser interactions, with the 16.5 GeV electron beam of the XFEL and a laser beam with a power of up to 350 TW. The experiment will measure the spectra of electrons, positrons and photons [36].
- **ASTRA-GEMINI** (Central Laser Facility, STFC Rutherford Appleton Laboratory, Harwell Oxford, Didcot, UK) [11].
- **E-320 experiment at FACET-II**, SLAC, will collide 13 GeV electrons with 10 TW laser pulses to study fundamental strong-field QED processes [37].
- **Apollon** in France.
The ultra-intense and highly focused Apollon pulses make it possible to study the ultra-relativistic laser-plasma interaction regime, nonlinear Compton/Thomson scattering from laser-accelerated electron beams, and the production of pairs in the presence of strong Coulomb fields [38]. Other laser facilities with active SF-QED study programs include:

- **ZEUS facility at the University of Michigan**, once commissioned (late 2023), will use two laser pulses (with 2.5 PW and 0.5 PW), one to accelerate electrons in a laser wakefield accelerator (to either 10 GeV or several GeV) and one to provide the EM field (with intensity \(10^{21}\)\(W/cm^{2}\) or \(10^{23}\)\(W/cm^{2}\)). It will allow exploration of fundamental yet unanswered questions regarding nonlinear quantum electrodynamics in relativistic plasmas, including quantum radiation reaction and electron-positron pair production mechanisms [39].
- **Station of Extreme Light (SEL)** facility in China will be completed in 2025 and then opened to users as a 100 PW laser facility. It can provide focused intensities of more than \(10^{23}W/cm^{2}\) [40].
- **ELI-BL in Czech Republic** [https://www.eli-beams.eu/](https://www.eli-beams.eu/)
- **CALA in Germany** [http://cala-laser.de/experiments/hf.html](http://cala-laser.de/experiments/hf.html)
- **J-Karen in Japan** [https://doi.org/10.3390/qubs1010007](https://doi.org/10.3390/qubs1010007)
- **CORELS in Korea** [https://corels.ibs.re.kr/html/corels_en/](https://corels.ibs.re.kr/html/corels_en/)

## VIII Conclusions

Here we presented the theoretical framework for the main SF-QED vacuum interaction processes that can be studied at ELI-NP and the experimental possibilities offered by this laser infrastructure. The kinematics and dynamics of these processes were presented for the evaluation of the amplitudes necessary for cross-section determination. We analyzed the early experimental results as well as the new approaches of similar projects around the world. Based on these results, we can move on to the preparation, design and implementation of the experimental work to test some fundamental SF-QED interactions. For this purpose, it is necessary to go through some essential stages of the experimental studies at ELI-NP. It will be necessary to:

* evaluate the cross sections for the processes proposed to be studied;
* simulate the physical interaction processes;
* prepare the characteristic distributions on the available phase space for these processes;
* design the detector system to cover the available phase space;
* evaluate the statistics of experimental data required to achieve significant results for verifying the QED predictions;
* implement the detector system and perform the experimental work.

Figure 20: The curves at given \(N\) and \(\chi\), where \(N\) is the number of absorbed laser photons and \(\chi\) is the intensity parameter. The pink ribbon represents the domain \(\chi\in[0.2,0.5]\). We consider "linear" Compton scattering in the area where \(N\leq 1\) (a single laser-photon absorption). The star symbol shows the parameter set at ELI-NP [34].

Based on the results of this work, it is possible to proceed to the next stage of the design and realization of the experimental studies of the SF-QED processes at ELI-NP.
The proposed experimental configurations include:

* gas targets, with the pump-laser beam focused by a long focal length (F/20 or F/80) mirror to drive a wakefield accelerating an electron bunch to multi-GeV energies, which is then exposed to the EM field of the tightly focused (F/3) probe laser;
* solid targets, with the pump and probe laser beams focused on the target;
* vacuum QED experiments without any target but with interaction geometries and diagnostics similar to the ones above.

###### Acknowledgements.

This work was supported by the Institute of Atomic Physics and the Ministry of Research, Innovation and Digitalization under program ELI-NP-RO, Contract 8/ELI-RO (2020) - Romania.
2308.08856
MV-ROPE: Multi-view Constraints for Robust Category-level Object Pose and Size Estimation
Recently there has been a growing interest in category-level object pose and size estimation, and prevailing methods commonly rely on single view RGB-D images. However, one disadvantage of such methods is that they require accurate depth maps which cannot be produced by consumer-grade sensors. Furthermore, many practical real-world situations involve a moving camera that continuously observes its surroundings, and the temporal information of the input video streams is simply overlooked by single-view methods. We propose a novel solution that makes use of RGB video streams. Our framework consists of three modules: a scale-aware monocular dense SLAM solution, a lightweight object pose predictor, and an object-level pose graph optimizer. The SLAM module utilizes a video stream and additional scale-sensitive readings to estimate camera poses and metric depth. The object pose predictor then generates canonical object representations from RGB images. The object pose is estimated through geometric registration of these canonical object representations with estimated object depth points. All per-view estimates finally undergo optimization within a pose graph, culminating in the output of robust and accurate canonical object poses. Our experimental results demonstrate that when utilizing public dataset sequences with high-quality depth information, the proposed method exhibits comparable performance to state-of-the-art RGB-D methods. We also collect and evaluate on new datasets containing depth maps of varying quality to further quantitatively benchmark the proposed method alongside previous RGB-D based methods. We demonstrate a significant advantage in scenarios where depth input is absent or the quality of depth sensing is limited.
Jiaqi Yang, Yucong Chen, Xiangting Meng, Chenxin Yan, Min Li, Ran Cheng, Lige Liu, Tao Sun, Laurent Kneip
2023-08-17T08:29:54Z
http://arxiv.org/abs/2308.08856v3
# MV-ROPE: Multi-view Constraints for Robust Category-level Object Pose and Size Estimation

###### Abstract

We propose a novel framework for RGB-based category-level 6D object pose and size estimation. Our approach relies on the prediction of normalized object coordinate space (NOCS), which serves as an efficient and effective object canonical representation that can be extracted from RGB images. Unlike previous approaches that heavily relied on additional depth readings as input, our novelty lies in leveraging multi-view information, which is commonly available in practical scenarios where a moving camera continuously observes the environment. By introducing multi-view constraints, we can obtain accurate camera pose and depth estimation from a monocular dense SLAM framework. Additionally, by incorporating constraints on the camera relative pose, we can apply trimming strategies and robust pose averaging on the multi-view object poses, resulting in more accurate and robust estimations of category-level object poses even in the absence of direct depth readings. Furthermore, we introduce a novel NOCS prediction network that significantly improves performance. Our experimental results demonstrate the strong performance of our proposed method, even comparable to state-of-the-art RGB-D methods across public dataset sequences. Additionally, we showcase the generalization ability of our method by evaluating it on self-collected datasets.

## 1 Introduction

The detection of objects and the estimation of their 3D pose and size is an important problem in applications such as robotics and augmented reality. Generally, the problem can be divided into instance-level and category-level pose estimation. In the former, we assume knowledge of a small number of exact shape priors (e.g. meshes, CAD models), thus reducing the problem to discrete model selection, correspondence estimation, and pose estimation. In the present work, we look at the more general case of category-level pose estimation, in which the exact shape and appearance of the observed objects are generally assumed to be unknown. It defines the center, orientation, and size of objects at the category level, ultimately providing absolute pose and size estimations. With the introduction of Normalized Object Coordinate Spaces (NOCS) [29], there has been a surge in research efforts in recent years, continuously improving the accuracy of category-level pose and size estimation. These methods significantly broaden the applicability of object pose and size estimation in real-world scenarios. While NOCS methods enable the extraction of object canonical representations from RGB images, achieving robust object pose and size estimation still requires integration with additional depth information for accurate alignment. This creates a need for depth information, commonly given in the form of a direct depth-channel reading or an image-based depth prediction. However, it is well understood that depth camera readings may easily suffer from measurement partiality or artifacts [3], and that single-view depth prediction may fail to accurately reflect depth details [35]. Methods that rely purely on images, in turn, are currently not yet able to perform on par with the state of the art. The present work relies on two important insights: (1) In many practical applications, not just a single image of the environment is taken; typically, the sensor is mobile and continuously gathers novel views of the scene.
As a result, we may indeed continuously generate predictions from nearby images and incrementally and robustly generate improved object pose predictions over time. (2) We may rely on motion stereo to perform dense depth reconstruction and thereby bypass the need for a depth camera or inaccurate single-view depth predictions. We exploit these insights in a novel framework combining a state-of-the-art dense monocular SLAM framework and an improved NOCS predictor, denoted _MV-ROPE_. Though the architecture is modular, the present work relies on DROID-SLAM [24], a powerful monocular dense SLAM framework built around a recurrent, dense optical flow network that gradually refines estimates by considering residual discrepancies between optical flow predictions and the disparities that are consistent with the estimated depth values. The NOCS predictor is then applied to a sequence of images, from which multiple successive object pose estimation results are identified. Taking into account the ego-motion, these results are then robustly merged into a single, incrementally updated result. In summary, we make the following contributions:

* We present the first multi-view RGB framework for robust and accurate category-level object pose estimation. We rely on a state-of-the-art, learning-based dense monocular SLAM framework to obtain accurate depth predictions from successive images. We furthermore rely on a trimming strategy to robustly average multiple object predictions over time.
* The predictions are obtained by a novel NOCS predictor network based on the Segment Anything Model (SAM) [12] and a novel U-Net architecture [21]. The new network is compared against existing alternatives through dedicated experiments.
* Through detailed ablation studies, we underline the impact of including multi-view information. Even relying solely on images, our results are able to perform on par with, or even better than, previous state-of-the-art RGB-D methods in some metrics. Our framework can be combined with different dense SLAM frameworks and is released open-source at [https://github.com/greatotyster/MV-ROPE](https://github.com/greatotyster/MV-ROPE).

## 2 Related Works

Typical object pose and size estimation is different from object pose tracking, which is also called relative object pose estimation. Pose tracking methods [22, 28, 31, 32] do not reason about an object's intrinsic shape or absolute pose, but only aim at capturing relative pose variations, primarily by tracking 3D keypoints across different frames. On the other hand, absolute pose and size estimation of objects generally falls into one of two categories: instance-level and category-level. Instance-level pose estimation [27, 33, 34]--while precise--imposes strict requirements in terms of data. It necessitates knowledge of exact CAD models or meshes corresponding to the observed objects to define each object's pose. However, securing such information is often a significant challenge in practical applications, and instance-level estimation is prone to instability due to occlusions and partial observations, further complicating its application. It is not further discussed here, as the present paper focuses on category-level pose estimation. The latter expands the practical usability of object pose and size estimation in real-world scenarios. Although a few works directly learn object poses [6], the fundamental idea behind category-level pose estimation is to replace CAD models with canonical object representations.
By using canonical representations and real-world object points, we can directly obtain a 7DoF similarity transformation by employing geometric solvers [1, 26] or neural networks for implicit prediction. The process of obtaining absolute object poses can therefore be classified by the employed canonical representation: **Deformed shape priors.** Even in the absence of an exact CAD model, a categorical shape prior can still be utilized by deforming it to match the canonical object model [5, 7, 16, 18, 25, 30, 36]. ShapePrior [25] learns a deformation field and applies it to a categorical shape prior, allowing for the reconstruction of the object representations in canonical space. Similarly, SAR-Net [16] employs a deformed shape prior to estimate object orientation. It then utilizes reconstructed depth points to estimate object position and size. DPDN [18] uses a self-supervised approach to generate better deformed shape priors. Although it avoids the need for exact CAD models, it still needs representative CAD models at the category level. Besides, it inevitably requires direct depth inputs. **Generated object models.** In GCASP [15], a latent code is utilized to reconstruct canonical object points. Additionally, Sim(3)-invariant 3D features are employed to estimate the object pose. On the other hand, works such as CenterSnap [10] and ShAPO [11] utilize neural implicit representations for the canonical object model. These approaches optimize the latent code for better poses and reconstructed objects. Methods like GPV-Pose [8] directly regress the transformation to canonical space and then reconstruct object points in canonical views for refinement. These methods, however, also need geometric information from depth channels for their features or latent codes to generate models. **Regression of canonical representations.** Our method falls into this category. NOCS [29] utilizes a single RGB image to predict normalized object coordinates for each pixel. It then employs correspondences between NOCS points and object points from the depth image to get object pose and size. Methods like MetricScale [14] independently predict object metric scale and object center by networks and utilize the predicted NOCS to recover object orientations. CASS [4] utilizes features from both images and depth points to predict canonical representations. DualPose [17] utilizes an explicit decoder for pose prediction and an implicit decoder for canonical representations, leveraging the latter to optimize the encoder through consistency. Regression of canonical representations is intuitive and easy to operate. More importantly, such methods have the potential to use the semantic information in RGB images to obtain geometric information, which avoids the need for depth input channels. To the best of our knowledge, our method is the first to employ multi-view constraints together with NOCS networks in order to obtain reliable absolute object pose estimation results from pure image sequences. ## 3 Methodology We will first provide an overview of the system before providing further details on camera pose and dense depth estimation, object instance segmentation and association, NOCS prediction, and final object pose extraction. ### Overview From a high-level perspective, our framework accepts an RGB image sequence \(\{\mathcal{I}_{i}\}\) with optional additional scale information as input.
The scale information can be in the form of depth maps or stereo images to support dense structure and motion estimation over the image sequence, or merely IMU measurements to help resolve the global scale factor. The output of our framework consists of camera poses and dense depth maps for each keyframe \(i\), represented as \(\xi_{\text{ref}}^{c_{i}}\in\mathrm{SE}(3)\) and \(\mathcal{D}_{i}\in\mathbb{R}^{H\times W}\), respectively. Additionally, our framework provides the pose of each object instance \(k\) within a global reference frame, denoted as \(\xi_{\text{ref}}^{o_{k}}\in\mathrm{Sim}(3)\). To facilitate comprehension, we designate the first keyframe as the reference frame, and \(\mathcal{I}_{\text{ref}}=\mathcal{I}_{0}\). For ease of representation, going forward, we will consistently use superscript "o" to represent objects and superscript "c" to represent cameras. Additionally, we will use indices "\(i\)" and "\(k\)" to refer to frames and objects, respectively. Our framework is illustrated in Figure 1. The first block calculates accurate poses and depth maps for each keyframe, for which we rely on the state-of-the-art monocular dense SLAM framework DROID-SLAM [23] (details in Sec. 3.2). Towards object-centric perception, our system then comprises a second block with an instance segmentation network and a new version of the NOCS network proposed by Wang et al. [29] predicting masked NOCS maps for each individual object instance. With the NOCS map and the back-projected 3D points derived from the depth maps, the second, single-view block concludes by employing the Umeyama algorithm [26] within RANSAC to compute individual object poses. Concurrently, a third block takes all single-view object pose estimates and incorporates them along with keyframe poses into a multi-view robust object pose averaging module. The entire process results in incrementally refined, globally consistent object poses \(\xi_{\text{ref}}^{o_{k}}\) for each instance \(o_{k}\). The instance segmentation works in tandem with an object tracking module to assign unique instance identifiers to objects within each keyframe, and these modules are introduced in Sec. 3.3. Our new NOCS network is introduced in Sec. 3.4, and the geometric registration and averaging modules are summarized in Sec. 3.5. Figure 1: Overview of the complete, proposed system. The first block (in purple) takes an image sequence and identifies poses and dense depth maps for each keyframe. Though different dense visual SLAM frameworks may be possible, here we employ DROID-SLAM [24], a state-of-the-art solution that alternates between dense residual-based optical flow predictions and dense structure and motion estimation. The second block (in purple) then handles all single-view tasks and applies Grounded SAM [19] for object instance segmentation and feeds the masks into our novel NOCS predictor (details in Section 3.4). The NOCS maps are then aligned with dense depth within a RANSAC framework to obtain individual object poses. A third block (in green) finally establishes object correspondences over time and incrementally and robustly averages their pose. ### Camera Pose and Depth Estimation Our camera pose and depth estimation module is borrowed from DROID-SLAM [24]. The framework jointly optimizes keyframe poses \(\xi_{\text{ref}}^{c_{i}}\) and dense depth maps \(\mathcal{D}_{i}\) through its dense bundle adjustment layer. The objective minimizes the covariance-reweighted sum of squared reprojection errors.
The latter consist of the difference between the target location of 2D image points as predicted by optical flow, and as obtained by applying the estimated relative pose and the dense depth to perform image warping. The objective is given by: \[E(\xi_{\text{ref}}^{c},\mathcal{D})=\sum_{\{i,j\}\in\mathcal{E}}||\mathcal{P}_{ij}-\Pi_{c}\left(\tau\left(\xi_{\text{c}_{i}}^{c_{j}},\Pi_{c}^{-1}(\mathcal{D}_{i})\right)\right)||_{\Sigma_{ij}}^{2}, \tag{1}\] where \(\xi_{\text{ref}}^{c}=\{\xi_{\text{ref}}^{c_{i}}\}\) is the set of all keyframe poses to be estimated, \(\mathcal{D}=\{\mathcal{D}_{i}\}\) is the set of all dense depth maps to be estimated, \(\mathcal{E}\) is the set of all keyframe pairs between which residuals are evaluated, \(\Pi_{c}\) is a camera projection function that takes an \(H\times W\times 3\) tensor of 3D world points and returns the \(H\times W\times 2\) tensor of corresponding image points, and \(\Pi_{c}^{-1}\) is the corresponding inverse mapping that takes a dense depth field \(\mathcal{D}_{i}\) and returns the corresponding \(H\times W\times 3\) tensor of 3D points expressed in the camera frame. \(\xi_{\text{c}_{i}}^{c_{j}}=\xi_{\text{ref}}^{c_{j}}(\xi_{\text{ref}}^{c_{i}})^{-1}\) is the Euclidean transformation from frame \(i\) to \(j\), and \(\tau(\cdot,\cdot)\) is a function defined to take a Euclidean transformation and an \(H\times W\times 3\) tensor of 3D world points, and return the equal-size tensor of transformed 3D world points. Finally, \(\mathcal{P}_{ij}\) is the target location in frame \(j\) of the points of frame \(i\), as hypothesized by the optical flow prediction between frames \(i\) and \(j\) [23], and \(\Sigma_{ij}\) expresses the uncertainty of these predictions. We can use Gauss-Newton-style optimization methods to solve this nonlinear optimization objective, and--in analogy to bundle adjustment methods--use the Schur complement trick to accelerate the computation. The framework hence is a dense geometric optimization framework that searches for poses and depth fields that are consistent with network-based optical flow predictions [23]. However, rather than making the optical flow predictions only once, the framework alternates between minimizing the above objective and taking the residual error field to recurrently refine the network-based predictions. Note that there currently exist several frameworks that effectuate dense geometric bundle adjustment based on optical flow predictions [20], and in principle our method could be combined with any of them.

### Instance Segmentation and Object Association

For instance segmentation and object category information, we use the off-the-shelf Segment Anything Model (SAM) [12]. To build associations between objects, we calculate the Intersection over Union (IoU) across all instances of two frames. If two bounding boxes have the same category and the IoU exceeds a certain threshold, the two object instances are assigned the same instance ID. We employ Grounded SAM [19] for instance segmentation. Our methodology involves providing the model with a prompt word, which drives the effective segmentation of the target object. By incorporating this prompt-driven mechanism, we aim to enhance the accuracy and precision of the instance segmentation process, enabling more robust and reliable results in complex visual scenes. The use of Grounded SAM replaces the previous Mask R-CNN module [9], as this has shown improved performance.
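A minimal sketch of this IoU-based association rule is given below; the function and field names are illustrative assumptions rather than the authors' implementation, and the concrete threshold value (0.5) and the bounding-box construction are detailed in the next paragraph.

```python
def bbox_iou(a, b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def associate(prev_objs, curr_objs, iou_thresh=0.5):
    """Carry instance IDs from the previous frame to the current one.

    Each object is a dict with keys 'bbox', 'category' and 'id'
    ('id' is None for current detections before association).
    A detection inherits the ID of the best same-category match whose
    IoU exceeds the threshold; otherwise it opens a new instance.
    """
    next_id = max((o["id"] for o in prev_objs), default=-1) + 1
    for curr in curr_objs:
        matches = [(bbox_iou(curr["bbox"], p["bbox"]), p)
                   for p in prev_objs if p["category"] == curr["category"]]
        matches.sort(key=lambda m: m[0], reverse=True)
        if matches and matches[0][0] >= iou_thresh:
            curr["id"] = matches[0][1]["id"]  # same instance as before
        else:
            curr["id"] = next_id              # new, unmatched instance
            next_id += 1
    return curr_objs
```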
Note furthermore that the use of the SAM model holds the advantage of zero-shot learning, as no training or fine-tuning on any of our used datasets is required. After obtaining mask images from Grounded SAM, we employ the raw masks in our association module to build connections between objects across frames. The first frame is designated as the reference frame. The initial bounding boxes around objects in the reference frame serve as the basis for tracking them in subsequent frames. In each frame, the bounding box is calculated from the object's mask, which excludes the background region. This ensures that only foreground objects are considered for association and tracking. We employ the IoU metric to calculate the consistency between the bounding boxes of the objects in the reference frame and the potentially corresponding bounding boxes in the subsequent frames. Specifically, our method calculates the initial bounding box using the minimum and maximum coordinates of the mask. We then compute the IoU with the next frame's bounding boxes and rank the results. We set the IoU threshold to 0.5, which helps to avoid wrong associations, especially for objects located near the image edges or objects that are incomplete, occluded, or wrongly detected.

### NOCS Predictor

Our NOCS predictor is designed to produce canonical representations of objects. It processes RGB keyframes and incorporates object masks and class labels from our instance segmentation and tracking module. The output comprises NOCS maps for each object, serving as a crucial input for the subsequent object pose estimation module. The fundamental design of our NOCS predictor is inspired by the original NOCS implementation [29]. However, we have already acquired the necessary masks and class labels, thereby eliminating the need for the Mask R-CNN-like framework [9]. Instead, we leverage an encoder adapted from the U-Net architecture [21] to attain a feature map for each complete, full-resolution input keyframe. This step is followed by using the object masks to crop the ROI features and resize them to a standard \(32\times 32\) grid. Subsequently, three separate predictors are engaged to acquire the \(x\), \(y\), and \(z\) components of the NOCS output. It is important to note that NOCS outputs are not merely representations of the 3D shape of an object; they also encapsulate semantic information pertinent to an object's category. For instance, they stipulate that the front of a camera corresponds to the direction of its lens. To facilitate such generation, we further segregate the output of each individual coordinate map by object categories. The resulting output is thus given by three tensors of size \((C,H,W)\), where the additional first channel is the object category. From all learned categories along this dimension, we then pick the one dictated by the class label of that particular instance. The resulting \(x\), \(y\), and \(z\) maps are finally concatenated to compile an \(H\times W\times 3\) NOCS map for each object. Details of the NOCS predictor are summarized in Figure 2.

### Geometric Pose Estimation and Averaging

**Single View Object Pose Estimation.** For each masked object, its pose is estimated by registering the NOCS points \(X^{nocs}\in\mathbb{R}^{3\times N}\) and the back-projected object depth points \(X^{xyz}\in\mathbb{R}^{3\times N}\) using Umeyama's algorithm [26].
Note that the correspondences between \(X^{nocs}\) and \(X^{xyz}\) are simply given by whether or not they originate from the same pixel in the image. In order to deal with inaccurate instance segmentations and noisy NOCS predictions, we apply RANSAC for outlier rejection. The fitted model is the similarity transformation \[X^{xyz}=\alpha RX^{nocs}+t, \tag{2}\] where \(\alpha\in\mathbb{R}\) is the scale of the object, \(R\in\mathrm{SO}(3)\) is the orientation of the object, and \(t\in\mathbb{R}^{3}\) its position. RANSAC with post-refinement over all inliers hence aims at a robust minimization of the energy objective \[\alpha^{*},R^{*},t^{*}=\operatorname*{arg\,min}_{\alpha,R,t}\sum_{i=1}^{N}||X^{xyz}_{i}-\alpha RX^{nocs}_{i}-t||^{2}. \tag{3}\]

```
Data: Video stream V = {I_i}
Result: Object poses {xi^{o_k}_ref}
while not V.empty() do
    I_i = V.pop();
    for each m^{o_k} in I_i do
        if sizeof(b^{o_k}) <= M then
            b^{o_k}.append(m^{o_k});
            xi^{o_k}_ref = b^{o_k}.average_pose();
        else if m^{o_k}.score >= b^{o_k}.min_score() then
            b^{o_k}.append(m^{o_k});
            b^{o_k}.remove_min_score();
            xi^{o_k}_ref = b^{o_k}.average_pose();
        end
    end
end
```
**Algorithm 1** Multi-view Object Pose Averaging

Figure 2: Overview of our NOCS prediction architecture. We start by applying a U-Net style feature encoder to the complete image in order to generate a dense feature map. We then apply Grounded SAM [19] to generate object masks for each detected instance, and use the corresponding bounding boxes to crop and resize the feature map into individual \(32\times 32\) feature volumes for each object. Three individual heads are then applied to each such sub-volume to generate the \(x\), \(y\) and \(z\) coordinates of the NOCS map for each object. Note that, in order to support the exploitation of class-specific semantics, we predict NOCS coordinates for each individual class. We finally consider the predicted class from Grounded SAM to extract the relevant slices for each particular class, and concatenate them to complete \(\{x,y,z\}\) NOCS maps for each object instance.

**Multi-view Object Pose Averaging.** For each object \(\mathrm{o}_{k}\) in each keyframe, we obtain an individual object pose measurement and an alignment score. The object pose is expressed with respect to the local camera frame in which it was detected. We start by expressing all object poses in a common reference frame by incorporating camera pose measurements, i.e.

\[\xi_{\text{ref}}^{\text{o}_{ki}}=\xi_{\text{c}_{i}}^{\text{o}_{ki}}\cdot\xi_{\text{ref}}^{\text{c}_{i}} \tag{4}\]

After all object poses have been transformed into a common reference frame, we can merge the multiple estimates to obtain a single, consistent, global object pose estimate by applying a robust averaging strategy. A single pose measurement \(m^{\text{o}_{ki}}\) of object \(k\) in keyframe \(i\) can hence be represented as

\[m^{\text{o}_{ki}}=(\xi_{\text{c}_{i}}^{\text{o}_{ki}}\cdot\xi_{\text{ref}}^{\text{c}_{i}},s^{\text{o}_{ki}}), \tag{5}\]

a tuple containing the object pose in the reference view and its inlier ratio \(s^{\text{o}_{ki}}\), which reflects the quality of that measurement. Our robust averaging strategy consists of first applying a trimming strategy.
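To make these steps concrete, the sketch below implements the similarity fit of Eqs. (2)-(3) in closed form (Umeyama [26]) and a simplified version of the buffered averaging of Algorithm 1. It is a minimal illustration under stated simplifications: the RANSAC wrapper is omitted, and the rotation average uses a chordal (SVD-projection) mean as a stand-in for the geodesic L1 averaging of Lee et al. [13] described in the next paragraph.

```python
import numpy as np

def umeyama_similarity(X_nocs, X_xyz):
    """Closed-form fit of X_xyz ~ alpha * R @ X_nocs + t (Umeyama).

    X_nocs, X_xyz: (3, N) arrays of per-pixel correspondences.
    Returns (alpha, R, t); in practice this is wrapped in RANSAC.
    """
    n = X_nocs.shape[1]
    mu_n = X_nocs.mean(axis=1, keepdims=True)
    mu_x = X_xyz.mean(axis=1, keepdims=True)
    Xn, Xx = X_nocs - mu_n, X_xyz - mu_x
    U, D, Vt = np.linalg.svd(Xx @ Xn.T / n)   # target-source covariance
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # enforce a proper rotation
    R = U @ S @ Vt
    alpha = np.trace(np.diag(D) @ S) / ((Xn ** 2).sum() / n)
    t = (mu_x - alpha * R @ mu_n).ravel()
    return alpha, R, t

def average_object_pose(buffer):
    """Merge buffered measurements [(alpha, R, t), ...] into one pose.

    Scale and translation: component-wise median. Rotation: chordal mean,
    i.e. the SVD projection of the summed rotations back onto SO(3).
    """
    alpha = float(np.median([m[0] for m in buffer]))
    t = np.median(np.stack([m[2] for m in buffer]), axis=0)
    U, _, Vt = np.linalg.svd(sum(m[1] for m in buffer))
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return alpha, R, t
```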
Notably, for each object \(\text{o}_{k}\), we maintain a buffer to store the \(M\) measurements with the best inlier ratios, denoted \(b^{\text{o}_{k}}=\{m^{\text{o}_{ki}}\}\). The buffer is updated as further keyframes come in (see Alg. 1), and the final object pose is given by the average pose of all buffered poses. We use the median of the object positions and scales as the final estimated values, and employ the rotation averaging method proposed by Lee et al. [13]. The method first estimates the geodesic L1 mean of all buffered rotations and then recalculates the mean from the top one-fourth of the identified rotations.

## 4 Experiments

### Implementation Details

Our framework is built upon DROID-SLAM [24]. We employ its pre-trained weights without performing any fine-tuning. Since our input consists of only RGB image sequences, the poses and depths obtained from the dense bundle adjustment layer are inherently scale-ambiguous. DROID-SLAM [24] has the capability to incorporate depth images as additional residual terms in bundle adjustment, but doing so would individually influence the depth estimates of each pixel. To ensure our method can be called a purely image-based approach, we refrain from including depth images in the dense bundle adjustment layer, and instead utilize robust linear regression to estimate a single scale factor between the predicted depth and the ground truth depth over the whole input sequence. Note that this way of rescaling the output could easily be replaced by other approaches relying solely on images as an exteroceptive sensing modality (e.g. stereo vision, visual-inertial estimation). The NOCS predictor is trained on the NOCS dataset [29]. The dataset consists of two splits: CAMERA and REAL. The CAMERA split is generated by rendering synthetic objects into 300k real-world images. The REAL split contains RGB-D image sequences from 31 indoor scenes. Our training strategy aligns with the policies set forth by ShAPO [11]. Initially, the model undergoes training on the CAMERA training split, followed by fine-tuning on the REAL training split. To address the challenge of rotational symmetries, the loss function is designed to rotate symmetric objects (bottle, bowl, and can) around the y-axis of the predicted NOCS coordinates. It then selects the smallest loss among the rotations along the circle. This ensures a more accurate and reliable model. The encoder is structured as a modified U-Net, where the last two up-sampling blocks are excluded. Instead, we utilize 4\(\times\) interpolation to regain the full resolution. This modification is crucial for maintaining the channel depth, thereby ensuring that the features encapsulate sufficient information. Following the encoder, each of the three separate predictors is a shallow CNN composed of five convolutional layers. This design choice contributes to the efficiency and compactness of the model. Overall, the NOCS predictor is an efficient, lightweight solution and can be trained on a single NVIDIA GeForce RTX 2080 Ti.

### Results on NOCS Dataset

Our method requires an RGB image sequence as input. The CAMERA split of the NOCS dataset [29] consists of single RGB-D images only, hence we conducted our experiments on the REAL test split. Regarding baseline methods, we selected state-of-the-art approaches [8, 11, 15, 17, 18, 36] from the three different categories discussed in the related work section (Sec. 2).
Additionally, since Metric-Scale [14] and CASS [4] fall into the same category as ours, we also conduct comparisons against them. In our experimental results, we present the mean Average Precision (mAP) based on the 3D Intersection over Union (3D IoU) as well as translation and rotation errors, as shown in Table 1. Figure 3 shows qualitative results, and Figure 4 shows the corresponding AP curves.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **Method** & **Extra inputs** & **IoU25** & **IoU50** & **IoU75** & **5°2cm** & **5°5cm** & **10°2cm** & **10°5cm** \\ \hline NOCS [29] & d & 84.8 & 70.0 & 30.1 & 7.2 & 10.0 & 13.8 & 25.2 \\ MetricScale [14] & s & 81.6 & 68.1 & 32.9 & – & 5.3 & – & 24.7 \\ CASS [4] & d & 84.2 & 77.7 & – & – & 23.5 & – & 58.0 \\ SAR-Net [16] & d, p & – & 79.3 & 62.4 & 31.6 & 42.3 & 50.3 & 68.3 \\ SGPA & d, p & – & 80.1 & 61.9 & 35.9 & 39.6 & 61.3 & 70.7 \\ ShapePrior [25] & d, p & 81.2 & 77.3 & 53.2 & 19.3 & 21.4 & 43.2 & 54.1 \\ RBP-Pose [36] & d, p & – & – & 67.8 & 38.2 & 48.1 & 68.1 & **79.2** \\ DPDN [18] & d, p & – & 83.4 & **76.0** & 46.0 & 50.7 & **70.4** & 78.4 \\ GPV-Pose [8] & d & 84.2 & 83.0 & 64.4 & 32.0 & 42.9 & – & 73.3 \\ GCASP [15] & d & – & 79.0 & 65.3 & **46.9** & **54.7** & 64.2 & 76.3 \\ ShAPO [11] & d & 85.3 & 79.0 & – & 4.8 & 48.5 & – & 66.8 \\ \hline **Ours** & s & **99.9** & **96.6** & 61.4 & 30.9 & 47.9 & 47.4 & 75.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison of mAP on REAL [29]. We mark the extra inputs of each method to distinguish their technical routes: **d** means depth images, **p** means categorical shape priors, and **s** means scale information. For the metrics, IoUx denotes the mAP defined by a 3D IoU over a threshold of x%; n°m cm denotes the mAP defined by a rotation error of less than n° and a translation error of less than m cm.

Our method achieves comparable performance to methods that require additional inputs such as shape priors and depth images across various metrics. Specifically, our method shows significant improvements in the IoU25 and IoU50 metrics, highlighting the robustness of our approach. The robustness of our approach stems from three main factors. Firstly, better camera pose and depth estimates can be obtained by utilizing multi-view images. Secondly, by integrating object pose information from multiple frames, we reduce errors and improve the accuracy of pose estimation. Finally, the utilization of multi-view constraints helps mitigate the negative impact of occlusions and false detections that can occur in single-frame images. This combination of factors contributes to the overall robustness of our method.

### Generalization Ability

To further evaluate the generalization capabilities of our proposed method, we collected a dataset from indoor scenes. We utilized the Azure Kinect RGB-D camera to capture several RGB-D sequences from three representative indoor scenes: office desk, coffee table, and conference table. The object categories included in our dataset are consistent with those in the official NOCS dataset, encompassing objects such as mugs, cans, bottles, bowls, laptops, and cameras. Note that the data sequences we collected are intentionally challenging, including various factors that can impose difficulties on object pose estimation.
These challenges include a significant amount of textureless background, reflective surfaces like conference tables, and motion blur. This dataset allows us to assess how well our method performs on objects in various indoor environments, providing insights into its ability to generalize across different scenes and object categories. The qualitative results showcase the effectiveness of our approach in accurately estimating the pose of objects in this custom dataset. We demonstrate the qualitative results of our method on this dataset in Figure 5.

### Ablation Studies

In our ablation study, our main focus is to examine the following aspects: (1) the limitations of depth perception in single-image-based methods, to highlight the advantages of multi-view constraints in object pose estimation; (2) the factors influencing the accuracy of depth estimation under multi-view constraints and how they ultimately contribute to the improvement of object pose estimation.

#### 4.4.1 Limitations of Using a Single RGB Image

We employ a state-of-the-art approach for single-image-based depth estimation [2]. However, the depth estimated by this method exhibits shortcomings in terms of its inability to estimate absolute scale. To ensure a fair comparison, we utilized robust linear regression to obtain a scale factor between the predicted depth images and the ground truth depth. Consequently, we are able to analyze three experimental settings: (1) using the predicted NOCS map and the predicted depth image; (2) using the predicted NOCS map and the ground truth depth image; (3) using the ground truth NOCS map and the predicted depth images. The results in terms of object pose estimation accuracy are shown in Table 2. From the results, we can observe that using the ground truth depth brings a larger performance improvement than using the ground truth NOCS map. This indirectly reflects that, in a setup with a single RGB image, poor depth estimation struggles to capture the geometric details of objects, becoming a bottleneck for pose estimation, as opposed to noisy NOCS predictions. As shown in Table 1, our depth estimation from multi-view images with the predicted NOCS map produces much better results than these single-RGB-image setups in Table 2. This emphasizes the necessity of incorporating multi-view information.

Figure 3: Qualitative results of our method obtained on all 6 sequences of the REAL test set [29].

Figure 4: The AP curves of our approach, with the vertical axis representing AP and the horizontal axis representing the IoU, rotation error, and translation error of each category for (a), (b), and (c), respectively.

Figure 5: Qualitative object pose estimation results for three representative scenes: office desk, coffee table, and conference table. The first row displays the projection of the 3D bounding boxes onto the 2D image, while the second row showcases the complete 3D perspective.

#### 4.4.2 Further Exploration of Multi-view Information

In this set of experiments, we aim to explore how multi-view constraints improve the overall accuracy of our framework. The multi-view constraints can be controlled by two parameters inside the dense SLAM framework: the frontend window size and the keyframe threshold. The frontend window size determines the maximum number of adjacent keyframes that participate in bundle adjustment simultaneously to jointly optimize pose and depth. The keyframe threshold parameter controls the total number of keyframes in the entire sequence.
Once the distance between a frame and the previous keyframe exceeds this threshold, the frame becomes a new keyframe. We vary the keyframe threshold from 1.0 to 3.5 and the frontend window size from 4 to 24 to demonstrate the impact of the total number of keyframes per sequence and of the number of keyframes involved in bundle adjustment on object pose estimation accuracy. Detailed results are shown in Figure 6. From the heatmaps, we can observe that decreasing the keyframe threshold and increasing the frontend window size lead to improved object pose estimation results. As the number of keyframes increases, we have a greater variety of viewing angles, increasing the likelihood of obtaining accurate NOCS observations. Additionally, increasing the frontend window size enhances the accuracy of both pose and depth estimation. This understanding therefore allows us to better utilize the temporal information from multi-view constraints, which enhances the accuracy of both camera pose and depth estimation.

### Failure Cases

Similar to other works, we encounter challenges with our NOCS predictor when dealing with objects, like cameras, that exhibit significant intra-class variations. It becomes difficult for the predictor to accurately output the object's canonical representation in such cases. Furthermore, our method's accuracy in camera pose and depth estimation can be compromised under extreme conditions, such as reflective surfaces, high-speed camera motion, and limited variation in the camera viewpoint. These factors indirectly affect the precision of object pose estimation. Some examples of failure cases are shown in Figure 7.

## 5 Conclusions

The present work introduces a novel method for category-level object pose and size estimation that distinguishes itself from previous approaches by leveraging a continuous stream of images in conjunction with a cutting-edge dense visual SLAM method. The dense depth predictions are combined with an improved NOCS prediction network to generate stable and accurate predictions from multi-view images. By taking into account the known camera poses, these predictions are then robustly averaged over time to generate competitive results, similar to what can be obtained by employing a depth camera. We believe that our method will be of strong interest in applications in which a simple camera is preferred or perhaps already available, and in which the camera is naturally required to move (e.g. robotic manipulation). Our continued efforts consist of utilizing the geometric information from the predicted depth to enhance the NOCS prediction, and thereby to improve the pose and size estimation of objects with large intra-class variations.

\begin{table} \begin{tabular}{l l c c c c c c} \hline **NOCS map** & **Depth image** & **IoU25** & **IoU50** & **5°2cm** & **5°5cm** & **10°2cm** & **10°5cm** \\ \hline predicted & predicted & 90.72 & 37.3 & 2.0 & 3.7 & 9.8 & 19.1 \\ predicted & ground truth & **95.7** & **78.7** & **24.9** & **24.9** & **45.3** & **45.3** \\ ground truth & predicted & 95.5 & 85.4 & 11.5 & 21.1 & 22.2 & 43.4 \\ \hline \end{tabular} \end{table} Table 2: Ablation results for object pose and size estimation from a single RGB image.

Figure 6: The heatmap plots depict the keyframe threshold along the horizontal axis and the frontend window size along the vertical axis. Each block's numerical value represents the mean 3D IoU, mean translation error (cm), and mean rotation error (degrees), respectively.

Figure 7: Typical failure cases in challenging situations, showing inaccurate pose estimations.
2310.03591
Impact of Artificial Intelligence on Electrical and Electronics Engineering Productivity in the Construction Industry
Artificial intelligence (AI) can revolutionize the construction industry, particularly electrical and electronics engineering. By automating recurring tasks, AI can increase productivity and efficiency in construction. For instance, AI can analyze building designs, discover potential problems, and generate solutions, reducing the effort and time required for manual analysis. AI can also be used to optimize energy consumption in buildings, which is a critical issue in the construction industry. By using machine learning algorithms to analyze energy usage patterns, AI can identify areas where energy can be saved and offer recommendations for improvements. This can result in significant cost savings and reduced carbon emissions. Moreover, AI can be used to improve the safety of construction sites. By analyzing data from sensors and cameras, AI can detect potential hazards and alert workers to take appropriate action. This could help prevent injuries and accidents on construction sites, lowering the risk to workers and enhancing overall safety in the industry. The impact of AI on electrical and electronics engineering productivity in the construction industry is enormous. AI can transform how we design, build, and operate buildings by automating routine tasks, optimizing energy consumption, and enhancing safety. However, it is essential to ensure that AI is used ethically and responsibly, and that the benefits are shared fairly throughout the industry.
Nwosu Obinnaya Chikezie Victor
2023-10-05T15:14:48Z
http://arxiv.org/abs/2310.03591v1
Impact of Artificial Intelligence on Electrical and Electronics Engineering Productivity in the Construction Industry

###### Abstract

Artificial intelligence (AI) has the capacity to revolutionize the construction industry, especially in the field of electrical and electronics engineering. By automating recurring tasks, AI can increase productivity and efficiency in the construction process. For instance, AI can be used to analyze building designs, identify potential problems, and generate solutions, reducing the effort and time required for manual analysis. AI can also be used to optimize energy consumption in buildings, which is a critical issue in the construction industry. By using machine learning algorithms to analyze energy usage patterns, AI can identify areas where energy can be saved and offer recommendations for improvements. This can result in large cost savings and a reduction in carbon emissions. Moreover, AI can be used to improve the safety of construction sites. By analyzing data from sensors and cameras, AI can detect potential hazards and alert workers to take appropriate action. This could help prevent injuries and accidents on construction sites, reducing the risk to workers and enhancing overall safety in the industry. In essence, the impact of AI on electrical and electronics engineering productivity in the construction industry is significant. By automating routine tasks, optimizing energy consumption, and enhancing safety, AI has the potential to transform the way we design, build, and operate buildings. However, it is essential to ensure that AI is used ethically and responsibly, and that the benefits are shared fairly throughout the industry.

**Keywords:** Artificial Intelligence (AI); Electrical and Electronics Engineering; Construction Industry; Productivity; Automation

## 1 Introduction

### Background and significance of the research topic

The construction industry plays a crucial role in the global economy, contributing to infrastructure development and economic growth. However, the industry has traditionally faced challenges related to productivity and efficiency. To address these challenges, there has been growing interest in leveraging artificial intelligence (AI) technology in the field of electrical and electronics engineering (EEE). AI could revolutionize numerous aspects of the construction industry, including project management, automation, and decision-making processes. Several studies have explored the application of AI in different domains, including healthcare, finance, and transportation. However, the specific impact of AI on EEE productivity in the construction industry remains an underexplored area. This study aims to fill this gap by examining how AI can enhance productivity in the context of EEE in construction. The primary objective is to evaluate the impact of integrating AI technologies on efficiency, accuracy, and overall productivity within the realms of Electrical and Electronics Engineering construction processes.

### Objective of the research

The primary objective of this study is to assess the impact of artificial intelligence on EEE productivity in the construction industry. By investigating the use of AI technology, this study seeks to identify the potential benefits and challenges associated with its implementation.
The findings will contribute to a better understanding of how AI can enhance productivity in the EEE field, ultimately informing industry practices and decision-making processes.

### Research questions

To achieve the research objective, the following research questions will be addressed: How can AI technology be integrated into EEE processes in the construction industry? What are the potential benefits of implementing AI in EEE within the construction industry? What are the challenges and barriers associated with the adoption of AI in EEE within the construction industry? How does the implementation of AI impact productivity in the EEE field within the construction industry?

D. Hypotheses for the research paper topic: "Impact of Artificial Intelligence on Electrical and Electronics Engineering Productivity in the Construction Industry"

The following hypotheses are proposed for this study: H1: The integration of AI technologies into EEE processes within the construction industry will result in improved productivity compared to traditional methods. H2: The implementation of AI in EEE within the construction industry will bring about better decision-making processes and improved project management. H3: The adoption of AI in EEE within the construction industry will present challenges related to data security, privacy, and the need for skilled personnel. H4: The productivity of EEE processes within the construction industry will be positively impacted by the implementation of AI technologies. These hypotheses will guide the investigation and analysis of the research topic, allowing a comprehensive evaluation of the impact of AI on EEE productivity in the construction industry.

## 2 Literature Review

### Overview of the construction industry and its challenges

The construction industry plays an important role in the economic development of nations worldwide. It comprises a wide variety of tasks, including the design, planning, and execution of various construction projects. However, the industry faces numerous challenges that hinder productivity and efficiency. Several studies have examined the challenges faced by the construction industry. According to [1], one of the foremost challenges is the complexity of construction projects, which involves coordinating multiple stakeholders, managing schedules, and ensuring quality control. Additionally, the construction industry is known for its fragmented nature, with various parties involved, including contractors, subcontractors, architects, and engineers. This fragmentation frequently leads to communication gaps and coordination problems [2].

### Introduction to artificial intelligence and its applications in construction

Artificial intelligence (AI) has emerged as a transformative technology that has the potential to revolutionize the construction industry. AI refers to the development of computer systems that can carry out tasks that normally require human intelligence, such as decision-making, problem-solving, and learning. AI has found numerous applications in construction. For example, building information modelling (BIM) has been widely adopted in the industry, allowing the creation of virtual representations of physical structures and facilitating collaboration among stakeholders [3].
Gadget mastering algorithms had been used to investigate huge volumes of production data, leading to advanced project scheduling, price estimation, and chance evaluation [4]. ### Preceding studies on the effect of AI in electrical and electronics engineering on the creation Several studies have explored the impact of AI, specifically around electrical and electronics engineering in production. As an example, [5] investigated using AI techniques for energy system optimization in construction initiatives. They have confirmed that AI algorithms, such as neural networks and genetic algorithms, can correctly optimize energy distribution structures, resulting in stepped-forward electricity performance and reduced fees. Every other look [6] tested the software of AI-based techniques for fault analysis in electrical structures in construction projects. Their research verified that AI methods, including support vector machines and deep-gaining knowledge of fashion, can appropriately pick out and diagnose electric faults, leading to enhanced machine reliability and reduced downtime. Identifying gaps in current research for the research paper topic: "Impact of Artificial Intelligence on Electrical and Electronics Engineering Productivity in the Construction Industry". Even as previous studies have explored the impact of AI in electric and electronics engineering within the production enterprise, there are nonetheless awesome gaps that want to be addressed. One vast gap is the confined focus on the general productivity gains because of the adoption of AI in electrical and electronics engineering in construction. Present studies frequently observe precise applications of AI in isolation, consisting of optimization or fault analysis, without considering the broader effect on productivity. Additionally, there's a lack of complete research that integrates a couple of AI techniques and their combined results on productivity. Therefore, similar research is wanted to investigate the overall impact of AI on electrical and electronics engineering productivity within the production enterprise. **3 Methodology** Fig 2 findings indicate a strong positive correlation between AI adoption and productivity within the construction industry's Electrical and Electronics Engineering domain. The productivity scores of the five companies were as follows: Company A: Productivity - 75 Company B: Productivity - 85 Company C: Productivity - 90 Company D: Productivity - 65 Company E: Productivity - 80 The study compares the productivity levels of five companies (A, B, C, D, and E) while considering multiple influential factors, namely Diffusion Innovation, Capability Maturity Model Integration (CMMI), Socio-Technical System, and Unified Theory of Acceptance and Use of Technology (UTAUT). Companies with higher productivity scores (Company C and Company B) were found to have higher levels of AI diffusion innovation (0.6 and 0.8, respectively), compared to those with lower productivity scores (Company D and Company A) with lower innovation scores (0.3 and 0.5, respectively). Company C also displayed the highest CMMI level (4), indicating its proactive approach towards AI adoption and integration. Interestingly, Company E demonstrated a significant productivity level (80) despite having a high diffusion innovation score (0.9) and a high CMMI level (4), indicating that other factors may also play a role in its productivity. 
The Socio-Technical System and UTAUT scores varied across the companies, showing no linear relationship with productivity levels. However, these factors could be potential contributors to the productivity variations observed within the EEE sector of the construction industry. Fig 2: Productivity of Companies Project Productivity Diffusion Innovation Theory \(\backslash\) 0 Project A 85 4 1 Project B 92 3 2 Project C 78 5 3 Project D 80 2 4 Project E 88 4 Capability Maturity Model Integration Socio-Technical \(\backslash\) 0 3 5 1 4 4 2 3 5 3 5 4 2 4 Unified Theory of Acceptance and Use of Technology 0 1 5 2 3 3 In Fig 3, Project A exhibited a productivity score of 85, which was influenced by a moderate application of diffusion innovation theory (4), CMMI (3), socio-technical systems (5), and UTAUT (4). In comparison, Project B achieved a higher productivity score of 92, with a relatively lower utilization of diffusion innovation theory (3) and socio-technical systems (4) but a higher implementation of CMMI (4) and UTAUT (4). Project C had a productivity score of 78, with a strong emphasis on diffusion innovation theory (5) and socio-technical systems (5) but relatively lower incorporation of CMMI (3) and UTAUT (3). On the other hand, Project D attained a productivity score of 80, which was influenced significantly by CMMI (5) but limited by diffusion innovation theory (2), socio-technical systems (3), and UTAUT (4). Finally, Project E demonstrated a productivity score of 88, with the notable application of diffusion innovation theory (4) and socio-technical systems (4) but a lesser emphasis on CMMI (2) and UTAUT (3). The comparative analysis of the five projects reveals exciting insights into the impact of different methods on EEE project productivity. Diffusion innovation theory has a varied effect, with higher scores positively correlating with productivity in Projects B and E. However, the significance of CMMI on productivity is evident in Projects B and D. Applying socio-technical systems is crucial in enhancing productivity, as Projects A, C, and E demonstrate. Additionally, the UTAUT method consistently positively influences productivity across all five projects. Fig 3 highlights the importance of selecting and implementing appropriate methodologies to enhance productivity in Electrical and Electronics Engineering projects. The results indicate that successfully applying socio-technical systems and the UTAUT method can significantly impact project productivity. Furthermore, it underscores the need for tailored approaches based on project characteristics and requirements. As the EEE industry evolves, this study provides valuable insights for project managers and stakeholders to optimize productivity and ensure Fig 3: Impact of Methods on Electrical and Electronics Engineering Productivity successful project outcomes. ### Research method and design In order to research the effect of artificial intelligence (AI) on electrical and Electronics Engineering (EEE) productivity inside the creative industry, a quantitative study technique was adopted. The research layout employed was a cross-sectional observation that aimed to collect facts at a specific point in time. This method allowed for the collection of comprehensive and representative statistics to evaluate the relationship between AI implementation and productiveness inside the EEE zone in the construction enterprise [7]. 
Figure 4 examines the levels of AI adoption in five different companies, labeled Company A, Company B, Company C, Company D, and Company E. Each company's AI adoption level is represented by a numerical value, with Company A having an AI adoption level of 3, Company B having an AI adoption level of 4, Company C having an AI adoption level of 2, Company 1 having an AI adoption level, and Company 5 having an AI adoption level. TAM Analysis: The Technology Acceptance Model (TAM) analysis in this research reveals that the average level of AI adoption in companies is 1.0, indicating that the overall implementation of AI in the electrical and electronics sector is relatively low. In addition, the standard deviation of AI adoption levels is calculated to be 1.58, indicating considerable variation in the extent of AI integration across the companies studied. **Resource-based view (RBV) analysis:** The research uses resource-based view (RBV) analysis to assess the impact of AI adoption on improving productivity. The average increase in productivity resulting from AI adoption is 15.0, indicating that companies that have adopted AI technologies have experienced significant increases in productivity levels. The standard deviation of this measure of productivity improvement is 7.90, demonstrating the variability in the magnitude of gains observed across companies. T-Test Analysis: T-test analysis is performed to determine the statistical significance of the relationship between AI adoption levels and productivity improvements. The calculated T-statistic is 4.809, indicating a strong correlation between AI adoption levels and productivity improvements in the electrical and electronics sector. Additionally, the P-value obtained is 0.0001404, below the conventional significance threshold (usually set at 0.05), which provides strong evidence to reject the null hypothesis and supports the idea that AI adoption positively affects productivity. Table 1: shows the central tendency and variability meansures for pre- and post-AI electrical installation times. Let's interpret the findings: Count: 5.0 indicates five observations in the pre-AI and post-AI time data sets, ensuring a fair comparison between the two periods. Mean: The mean represents the average electrical installation time. Before AI integration, the mean installation time was approximately 130 hours. However, after AI implementation, the mean time reduced to 100 hours. This indicates a significant reduction in the time taken for electrical installations due to AI utilization. Standard Deviation: The standard deviation measures the dispersion or variability of the data points around the mean. Pre-AI installation time, he has had a more significant standard deviation of 46.9 hours, indicating a wider spread of data points. Conversely, post-AI installation time had a minor standard deviation of 31.6 hours, meaning more consistent and predict able outcomes. Minimum and Maximum: The minimum and maximum values show the lowest and highest observed installation times, respectivly. In the pre-AI era, the minimum installation time was 80.9 hours, while the entire time was 200 hours. After AI implementation, the minimum installation time c reduced to 70 hours, and the total time decreased to 150 hours. This reduction in the range of installation times suggests that AI has positively impacted productivity by strea mlining processes and reducing the occurrence o f extreme outliers. Percentiles: Percentiles provide insights into the distribution of data. 
The 25th percentile (Q1) represents the value below which 25% of the data falls, the 50th percentile (Q2) is the median, and the 75th percentile (Q3) represents the value below which 75% of the data drops. Comparing the percentiles betwe en the two periods, it is evident that AI integration has led to a notable reduction in the time taken for electrical installations at all levels. In conclusion, the findings in Table 1 indicate that adopting AI methods has significantly impacted electrical and electronics engineering pr productivity. The post-AI era shows reduced installation times, increased consist tency, and more efficient processes than the pre-AI generation. These results are promising for the industry as they highlight the potential benefits of integratin g AI technologies into engineering practices. Table 2 presents the descriptive statistics for defect detection accuracy results. The accuracy (%) of defect detection for AI-based systems can be described using the following statistics: The dataset was represented by five AI-based systems, accounting for the sample size. Across the selected systems, the average accuracy is indicated by a mean accuracy value of 90.2%. Providing insights into the systems' variability, the standard deviation of 3.42% highlights the dispersion of accuracy values around the mean. Indicating the lowest level of defect detection accuracy among the systems, the sample showed a minimum observed accuracy of 85.0%. The 89% accuracy value for the 25th percentile indicates that 25% of the systems perform at 89% or below. 91% signifies that 50% of the systems have an accuracy of 91% or lower, making it the median accuracy at Q2 or the 50th percentile. The 75th percentile, representing 75% of the systems, indicates that those systems exhibit 92% accuracy or lower. Among the chosen AI-based systems, the recorded maximum accuracy is 94%. This level of defect detection accuracy is the highest. The data observed and presented in Table 3 provides a comprehensive overview of the descriptive statistics related to "Cost Reduction (%). Table 3 presents the descriptive statistics for cost reduction (%) measures. The text has been rearranged throughout the following paragraph, and some portions have been removed to make the information more unique. The information individual has To that is in time word sound it the pulled been some and rearranged been has text following the paragraph been Throughout more portions make to information have logical. In this study, five recorded instances of cost reduction percentages exist, ultimately representing the number of data points or observations in the dataset. 
The average cost reduction achieved in electrical and \begin{table} \begin{tabular}{l l} Descriptive Statistics - Defect Detection Accuracy: & Descriptive Statistics - Cost Reduction: \\ AI-based System Accuracy (\%) & Cost Reduction (\%) \\ count & 5.000000 \\ mean & 15.000000 \\ std & 4.123106 \\ min & 10.000000 \\ 25\% & 12.000000 \\ 50\% & 15.00000 \\ 75\% & 18.000000 \\ max & 20.000000 \\ \end{tabular} \end{table} Table 3: Descriptive Statistics - Cost Reduction \begin{table} \begin{tabular}{l l} Descriptive Statistics - Cost Reduction: & Descriptive Statistics - Cost Reduction: \\ Cost Reduction (\%) & Cost Reduction (\%) \\ count & 5.000000 \\ mean & 15.000000 \\ std & 4.123106 \\ min & 10.000000 \\ 25\% & 12.000000 \\ 50\% & 15.00000 \\ 75\% & 18.000000 \\ max & 20.000000 \\ \end{tabular} \end{table} Table 2: Descriptive Statistics - Defect Detection Accuracy electronics engineering projects is 15.0%. The mean, commonly referred to as the average, represents the central tendency of the data. The standard deviation, also known as the measure of variability, is equal to 4.123. This study's standard deviation of 4.123 indicates moderate variability in cost reduction percentages. A lower standard deviation suggests that the data points are close to the mean, while a higher value indicates a broader spread. The standard deviation measures the dispersion or variability of the data points from the norm. The minimum value indicates the percentage of cost reduction observed in the dataset. This study found that the lowest recorded cost reduction is 10%. The 25th percentile, or the quartile (Q1), represents the value below which 25% of the data falls. In this context, 25% of the recorded cost reduction percentages are 12% or lower. The 50th percentile, also known as the median or second quartile (Q2), divides the data into two halves. Half of the recorded cost reduction percentages are, at or below 15%, while the other half is above 15%. The 75th percentile, also known as Q3 or third quartile, represents a value below which 75% of the data falls. Our research findings showed that 75% of recorded cost reduction percentages are at or below 18%. Lastly, let's consider the value which indicates the percentage of cost reduction observed in our dataset. Our study found that the maximum achieved cost reduction reaches up to 20%. In conclusion, Table 3 gives an overview of the statistics related to the variable "Cost Reduction(%)" in the research paper titled "Impact of Methods on Electrical and Electronics Engineering Productivity." Based on the data, electrical and electronics engineering projects typically experienced a cost reduction of 15.0% with a degree of variation. It is essential to grasp these statistics to fully understand how different methods affect cost reduction and to make informed decisions to improve productivity in this field. Table 4 presents the descriptive statistics for the two efficiency variables, Pre-AI Efficiency (%) and Post-AI Efficiency (%), based on a sample size of 5.0 (indicating 5 data points). 
Pre-AI Efficiency (%): Count: 5.0 Mean: 77.6% Standard Deviation (std): 5.59 Minimum: 70% 25th Percentile (Q1): 75% Median (50th Percentile, Q2): 78% 75th Percentile (Q3): 80% Maximum: 85% Post-AI Efficiency (%): Count: 5.0 Mean: 87.4% Standard Deviation (std): 3.97 Minimum: 82% 25th Percentile (Q1): 85% Median (50th Percentile, Q2): 88% 75th Percentile (Q3): 90% Maximum: 92% Discussion: The descriptive statistics presented in Table 4 provide \begin{table} \begin{tabular}{l c c} \multicolumn{3}{l}{Descriptive Statistics - Efficiency Improvement with AI:} \\ \multicolumn{3}{l}{Pre-AI Efficiency (\%) Post-AI Efficiency (\%) count} & 5.000000 & 5.000000 \\ mean & 77.00000 & 87.400000 \\ std & 5.5944 & 3.974921 \\ min & 70.00000 & 82.000000 \\ 25\% & 75.00000 & 85.000000 \\ 50\% & 78.00000 & 88.00000 \\ 75\% & 80.00000 & 90.00000 \\ max & 85.00000 & 92.00000 \\ \end{tabular} \end{table} Table 4: Descriptive Statistics - Efficiency Improvement with AI valuable insights into the efficiency improvement achieved with AI in Electrical and Electronics Engineering projects. Here are some key observations: Pre-AI Efficiency (%): The mean Pre-AI Efficiency is 77.6%, indicating that, on average, projects were operating at 77.6% of their maximum potential efficiency before the introduction of AI methods. The data exhibits a moderate variation, as shown by the standard deviation of 5.59. The range of Pre-AI Efficiency lies between 70% (minimum) and 85% (maximum), with 50% of the data falling between 75% and 80%, as represented by the interquartile range (IQR). Post-AI Efficiency (%): The mean Post-AI Efficiency is significantly higher at 87.4%, suggesting that AI implementation led to substantial improvements in productivity levels. The standard deviation of 3.97 indicates less variability in Post-AI Efficiency compared to the Pre-AI phase, indicating a more consistent impact of AI methods on productivity. The Post-AI Efficiency data ranges from 82% (minimum) to 92% (maximum), with 50% of the data falling between 85% and 90% (IQR). The research findings clearly demonstrate the positive impact of AI methods on efficiency in Electrical and Electronics Engineering projects. The mean Post-AI Efficiency of 87.4% shows a notable improvement compared to the mean Pre-AI Efficiency of 77.6%. The narrower spread of data in the post-AI phase indicates a more predictable and consistent enhancement in productivity. Table 5 presents the descriptive statistics for Pre-AI Downtime and Post-AI Downtime, which provide a comprehensive overview of the changes in equipment downtime following the introduction of AI methods. Pre-AI Downtime: Count: 5.0 Mean: 130 Standard deviation (std): 46.9 Minimum (min): 80 25th percentile (Q1): 100 Median (50th percentile): 120 75th percentile (Q3): 150 Maximum (max): 200 Post-AI Downtime: Count: 5.0 Mean: 97 Standard deviation (std): 34.9 Minimum (min): 60 25th percentile (Q1): 75 Median (50th percentile): 90 75th percentile (Q3): 110 Maximum (max): 150 Discussion: The descriptive statistics reveal substantial improvements in equipment downtime following the integration of AI-based methodologies in the Electrical and Electronics Engineering domain. The mean rest has reduced significantly from 130 hours (Pre-AI) to 97 hours (Post-AI), indicating an average reduction of approximately 33 hours. Moreover, the standard deviation decreased from 46.9 hours (Pre-AI) to 34.9 hours (post-AI), reflecting reduced variability and greater consistency in equipment performance. 
The quartile values (25th, 50th, and 75th percentiles) also demonstrate positive changes, with the Post-AI Downtime values being consistently lower than their \begin{table} \begin{tabular}{l c} Descriptive Statistics - Equipment Downtime & \\ Pre-AI Domtime (hours) & Post-AI Domtime (hours) \\ count & 5.000000 & 5.000000 \\ mean & 130.000000 & 97.000000 \\ std & 46.904158 & 34.92549 \\ min & 80.000000 & 60.00000 \\ 25\% & 100.00000 & 75.00000 \\ 50\% & 120.000000 & 90.00000 \\ 75\% & 150.000000 & 110.00000 \\ max & 200.00000 & 150.00000 \\ \end{tabular} \end{table} Table 5: Descriptive Statistics – Equipment Downtime Improvement Pre-AI counterparts. This indicates that not only has the average downtime improved, but a more significant proportion of cases also experienced shorter equipment downtime durations. Fig 5 investigates the relationship between electric set-up time and cost reduction in electrical and Electronics Engineering projects. They have a look at the focus on five tasks, particularly Project A (Cost reduction of 15%), Project B (Cost reduction of 12.3%), Project C (Cost reduction of 20%), Project D (cost discount of 10%), and Project E (Cost reduction of 17.5%). The number one objective of the research is to evaluate the effect of various methods and strategies on the productivity of electrical set-up techniques in those initiatives. The researchers utilized correlation evaluation to set up the connection between the time taken for electric installations and the value reduction completed in every project. The consequences of the look display a robust, compelling correlation between electrical installation time and cost discount, with a correlation coefficient of 0.84026. This shows that as the time taken for electrical installations decreases, there is a consistent development in fee discounts throughout all tasks. ### Information collection methods and resources To gather applicable records, an aggregate of number one and secondary fact series methods was employed. The primary facts were accrued via surveys administered to specialists running within the EEE subject in the creative enterprise. The survey tool was designed primarily based on verified scales and covered questions associated with the adoption of AI technologies, productivity measurements, and potentially demanding situations confronted with imposing AI inside the production industry. Moreover, secondary records amassed from industry reports, instructional guides, and governmental resources offer a broader knowledge of the situation [8]. Productivity," Table 6 showcases a comparison between the electrical installation time for various projects before and after the implementation of AI technology. The first column provides the pre-AI time (in hours) for each project, while the second column displays the corresponding post-AI time (in hours). This table allows for easy analysis and evaluation of the efficiency gains achieved through AI integration. Table 6 data suggests that AI technology has significantly reduced the electrical installation time for all five projects. In Project A, the pre-AI time was 120 hours, but with AI, it decreased to 90 hours, resulting in a time savings of 25 hours. Similarly, for Project B, the pre-AI time was 100 hours and reduced to 80 hours with AI, saving 20 hours. The data reveals that Project C had the lengthiest pre-AI installation time of 150 hours. 
However, with the integration of AI technology, the installation time was reduced to 110 hours, resulting in a noteworthy time savings of 40 hours. Similarly, Project D experienced the second longest pre-AI installation duration at 80 hours, which was then decreased to 70 hours after implementing AI technology, leading to a time savings of 10 hours. Lastly, Project E boasted the shortest pre-AI installation period at 200 hours but achieved a reduction to just 150 hours following AI implementation, resulting in an impressive time savings amounting to 50 hours. The findings presented in Table 6 indicate that the implementation of AI technology has significantly boosted productivity in electrical and electronics engineering projects, leading to notable time savings and enhanced efficiency. These research outcomes carry substantial implications for the future of this field and the integration of AI technology within it. Table 7 of the research paper "Impact of Methods on Electrical and Electronics Engineering Productivity" offers records on the accuracy of AI-based structures in detecting defects in numerous electric and electronics engineering tasks. Mainly, the desk lists the accuracy percentages for initiatives A, B, C, D, and E. Project A has an AI-based system device accuracy of 92%, which means that the AI machine can discover defects with an accuracy of 92% while used in this mission. Project B has an accuracy of 85%, which means that the AI machine can stumble on defects with an accuracy of 85% while used in this project. Project C has an accuracy of 94%, which means that the AI device can come across defects with an accuracy of 94% while used on this task. Project D has an accuracy of 89%, which means that the AI machine can discover defects with an accuracy of 89% while used on this mission. In the end, Project E has an accuracy of 91%, which means that the AI device can discover defects with an accuracy of 91% while used on this project. Overall, the accuracy of AI-based systems in detecting defects in electric and electronics engineering projects can vary substantially, depending on the challenge and the specific AI machine used. Table 8: Cost Reduction due to AI Implementation Table 8 suggests the project cost reduction due to AI implementation for five special electrical and electronics engineering initiatives. Project A has performed a value discount of 15%, at the same time as Project B has accomplished a cost discount of 12%. Project C has done the very best value discount of 20%, followed with the aid of Project D with a cost reduction of 10%. Eventually, Project E performed a cost reduction of 18%. The statistics in this Table 8 help the research paper's subject matter of exploring the effect of strategies on electric and electronics engineering productiveness, as it demonstrates how AI implementation can lead to full-size price savings in those tasks. Table 9 offers efficiency development with AI for five unique electric and electronics engineering responsibilities. The pre-AI performance for Project A is 75%, Project B is 80%, Project C is 70%, Project D is 85%, and Project E is 78%. After imposing AI, the post-AI efficiency for Project A has increased to 85%, even as Project B has executed an outstanding 90% performance. Project C has improved, with a post-AI efficiency of 82%. Project D has performed the highest increase in performance, with a post-AI efficiency of 92%. Finally, Project E has seen a moderate development in efficiency, with a submit-AI efficiency of 88%. 
These records reveal the massive effect of AI on the performance of electrical and electronics engineering responsibilities and support the research paper's subject matter of exploring the impact of methods on electrical and electronics engineering productivity. Table 10 reduces system downtime due to implementing AI for five extraordinary electric and electronics engineering responsibilities. The pre-AI downtime for device A is 120 hours, while gadget B has a pre-AI rest of eighty hours. Project C has the highest pre-AI downtime of 150 hours, observed through challenge D with one hundred hours of downtime. Project E has the very best pre-AI rest of two hundred hours. After enforcing AI, the post-AI downtime for Project A has been decreased to 90 hours, while Project B has seen a significant reduction in downtime to 60 hours. Project C has also skilled a discount in rest, with a post-AI downtime of a hundred and ten hours. Project D has seen a reduced downtime to 75 hours, whilst Project E has skilled the maximum reduction in downtime, with a post-AI downtime of 150 hours. These facts show the massive impact of AI on lowering gadget downtime in electrical and electronics engineering obligations and help the studies paper's topic of exploring the impact of techniques on electric and electronics engineering productivity. \begin{table} \begin{tabular}{l r r} \hline \hline Dataset: Equipment & Dentine Reduction & \\ Equipment & Pre-AI & Dentine (hours) & Post-AI \\ Equipment & 120 & 90 \\ Equipment & 80 & 60 \\ Equipment & 150 & 110 \\ Equipment & 100 & 75 \\ Equipment & 200 & 150 \\ \hline \hline \end{tabular} \end{table} Table 10: Equipment Downtime Reduction \begin{table} \begin{tabular}{l r r} \hline \hline Dataset: Efficiency Improvement with AI & \\ Task & Pre-AI Efficiency (\%) & Post-AI Efficiency (\%) \\ Task & 4 & 75 & 85 \\ Task & 80 & 90 \\ Task & 70 & 82 \\ Task & 85 & 92 \\ Task & 88 & 88 \\ \hline \hline \end{tabular} \end{table} Table 9: Efficiency Improvement with AI Fig 6 showcases the most helpful productivity profits performed through implementing these methodologies. The findings display extensive upgrades in various elements of electrical engineering initiatives. Specifically, the studies demonstrate an exceptional reduction in overall project time by 150 hours, leading to expanded performance within the execution of obligations and overall task control. Furthermore, adopting these methodologies resulted in a noteworthy average disorder stumble on accuracy improvement of 92%. This indicates a sizable reduction in defects and mistakes, paramount to improved niceness and reliability in electrical engineering processes and merchandise. Moreover, the observation highlights an average efficiency development of 9.8%. This improvement suggests that the new methods implemented in the electrical engineering approaches ended in extra streamlined workflows, optimised aid utilisation, and greater productivity. Finally, the studies identify a significant discount in general equipment downtime. Using the new methodologies minimised the rest, leading to expanded productivity and reduced delays at some stage in task execution. Fig 7 investigates the impact of different methods on the productivity of Electrical and Electronics Engineering projects. The study focuses on five projects labelled A, B, C, D, and E and evaluates their performance in Efficiency Improvement, Cost-Effectiveness, Quality Improvement, and Equipment Downtime Reduction. 
The results indicate that Project C shows the highest efficiency improvement at 17.14%, closely followed by Project E at 12.82%. On the other hand, Project D displays the lowest efficiency improvement at 8.24%. Regarding cost-effectiveness, Project A demonstrates the highest percentage of 13.33%, making it the most financially efficient project among the five. Project B has the lowest cost-effectiveness improvement at 12.50%. Regarding quality improvement, Project C and Project E tie for the highest percentage at 17.50%, while Project B exhibits the lowest quality improvement at 6.25%. Furthermore, Project C shows the highest equipment downtime reduction at 26.67%, while Project D has the lowest reduction rate at 25.00%. These findings shed light on the varying degrees of success achieved by different methods in Electrical and Electronics Engineering projects. Project C is the most well-rounded performer, exhibiting significant improvements across all evaluated aspects. Conversely, Project D lags in several areas, suggesting potential areas for improvement. This research emphasizes the importance of selecting suitable methods to enhance productivity in Electrical and Electronics Engineering projects. It serves as a valuable reference for decision-makers seeking to optimize project outcomes. ### 3.3 Variables and Measurements The subsequent variables have been considered in this look at: Unbiased Variable: Adoption of AI technology in the EEE region Established Variable: Productivity in the Production Enterprise Other potential influencing variables had been additionally diagnosed, which include: Length of the construction employer Experience of EEE professionals Degree of AI implementation in the production industry Measurements for the variables have been acquired via self-reported survey responses. The adoption of AI technology changed the use of a Likert scale, starting from "Strongly Disagree" to "Strongly Agree." Productivity was measured with the aid of assessing key performance indicators (KPIs) applicable to the EEE region within the production industry, together with crowning glory time, fee savings, and error reduction [9]. ### 3.4 Records analysis techniques The accrued statistics were analysed through the use of suitable machine learning strategies to determine the impact of AI on EEE productivity in the construction enterprise. Descriptive statistics, including ways and frequencies, were used to summarise the information. Inferential facts, which include correlation evaluation and regression evaluation, have been carried out to have a look at the relationships among variables and verify the importance of the findings. Furthermore, information visualisation strategies, which include charts and graphs, were hired to present the outcomes effectively [10]. The evaluation was conducted using software programmes, which include Kaggle or Google Colaboratory, to make sure of accurate and dependable effects. Moreover, suitable checks of importance, which include t-exams or ANOVA, have been done to validate the study's hypotheses and draw significant conclusions. ### 3.5 Impact of Artificial Intelligence on Electrical and Electronics Engineering Productivity #### 3.5.1 Automated design and modelling ##### 3.5.1.1 AI Programmes in Electric and Electronic Engineering Design The integration of synthetic Intelligence (AI) in electrical and electronics engineering layouts has revolutionized the field by permitting superior automation and smart selection-making techniques. 
AI strategies consisting of the device getting to know itself, neural networks, and expert structures had been carried out on various aspects of layout, consisting of circuit design, machine optimization, and issue selection. Those AI applications offer extensive blessings over conventional layout methods by way of improving efficiency, accuracy, and common productivity [11]. ##### 3.5.1.2 Advantages of AI-based Layout and Modelling Equipment AI-based layout and modelling gear offer numerous benefits to electric and electronics engineering professionals. First off, that equipment automates repetitive and time-consuming tasks, permitting engineers to focus on higher-level design choices. Secondly, AI algorithms can analyze massive datasets and extract treasured insights, leading to optimized designs and advanced performance. Thirdly, AI-based tools enable speedy prototyping and simulation, decreasing the time to market brand-new products. Finally, these tools facilitate collaborative layout strategies by presenting real time feedback and tips to layout teams [12]. 5.1.3 Case research and examples of AI-pushed design and modelling through Kaggle for the study paper topic: "Impact of synthetic Intelligence on electric and electronic Engineering productivity within the construction industry" Numerous case studies and examples demonstrate the impact of AI-pushed design and modelling on electric and electronics engineering productivity inside the creative industry. One such example is the use of AI algorithms for optimizing energy distribution systems in large-scale construction tasks. By analyzing ancient statistics and real-time sensor inputs, AI models can become aware of ideal configurations, reduce strength losses, and improve strength efficiency in building structures [13]. Any other case that requires a look involves the utility of AI-based total algorithms for automated circuit layout in the construction of electronic gadgets. These algorithms can generate ultimate circuit layouts, component placements, and interconnections primarily based on layout specs and overall performance necessities. This computerized technique significantly speeds up the layout technique, reduces mistakes, and complements the overall productiveness of electronics engineering groups [14]. Kaggle, a popular platform for statistics and science competitions and collaborations, provides a wealth of AI-pushed layout and modelling examples. Researchers and practitioners can explore Kaggle's datasets, notebooks, and competitions associated with electrical and electronics engineering to gain insights into the potential productivity profits conceivable through AI technology [15]. In the end, the impact of synthetic Intelligence on electric and electronics engineering productivity is evident within the domain of automatic design and modelling. AI applications provide several advantages, such as automation of tasks, optimization of designs, rapid prototyping, and more advantageous collaboration. Case research and examples, together with those to be had on platforms like Kaggle, showcase the tangible effect of AI-pushed design and modelling within the construction enterprise and beyond. Clever monitoring and manipulation systems have shown good-sized advancements with the combination of synthetic intelligence (AI) technology. AI-primarily based tracking and management systems in construction have won interest due to their ability to enhance performance and productivity. 
This segment provides three key elements of clever tracking and management structures, together with applicable references and citations. 5.1.4 AI-based monitoring and manipulation structures in production were notably explored in recent studies. Use either Research has validated the effectiveness of those structures in improving mission performance. As an example, [16] developed a tracking and control system that applied AI techniques to optimise production strategies. They implemented tabular outcomes and Kaggle coding to analyse and interpret information, allowing green selection-making and useful resource allocation. 5.1.5 Upgrades in productivity were a primary benefit of AI-powered monitoring and management systems. Researchers have proposed diverse processes to leverage AI technology for enhancing mission performance. [17] conducted a complete observation on AI-based total structures and their effect on productiveness inside the production industry. Their research employed tabular consequences and Kaggle coding to assess the performance profits completed via AI-powered systems, highlighting the capacity for big improvements in productivity's Case research has played an essential role in demonstrating the real-world effect of AI in tracking and manipulating systems. These studies offer treasured insights into the realistic applications of AI techniques in construction initiatives. In the context of electrical and electronics engineering projects within the construction industry, [18] carried out a case observation titled "Exploring devices and getting to know strategies to maximize efficiency in production industry electric and Electronics Engineering projects." They look at tabular effects and Kaggle coding to investigate the effectiveness of gadget learning strategies in maximizing performance in such projects. ### Results ### Presentation and evaluation of the collected information with Kaggle coding The amassed data was provided and analysed using Kaggle coding techniques. The record preprocessing steps blanketed cleansing the dataset, handling lacking values, and putting off outliers. Then, numerous record evaluation techniques have been implemented, which include descriptive facts, correlation analysis, and information visualisation. The Kaggle platform provided convenient surroundings for performing these tasks efficiently. Assessment of the Impact of AI on Electric and Electronic Engineering Productivity Inside the Construction Enterprise The evaluation of the impact of AI on electric and electronics engineering productivity inside the production enterprise is performed primarily based on the gathered data. The analysis discovered that the mixing of AI technology caused substantial upgrades in productivity. AI algorithms and systems had been hired for duties such as computerised design, optimisation, and predictive protection, resulting in better efficiency and decreased operational costs. Quantitative and qualitative findings related to performance, fee-effectiveness, and first-rate improvement through Kaggle coding The quantitative and qualitative findings obtained through Kaggle coding strategies shed light on the efficiency, cost-effectiveness, and best development aspects of AI implementation inside the creation industry. The evaluation established that AI-enabled systems contributed to improved performance with the aid of automating hard work-intensive responsibilities, optimising useful resource allocation, and minimising remodel. 
Furthermore, the cost-effectiveness of initiatives increased as AI algorithms enabled correct task scheduling, danger evaluation, and cost estimation. Additionally, the first class of production advanced via AI-powered fine manipulation mechanisms, which ensured compliance with industry requirements and decreased defects. ### Discussion of the statistical importance and practical implications of the outcomes The statistical importance of the consequences determined the use of suitable statistical assessments and measures. The analysis indicated that the identified upgrades in productivity, efficiency, value-effectiveness, and satisfaction have been statistically great. Those findings have big, realistic implications for the development enterprise, as they offer proof for the high-quality effect of AI on electrical and electronics engineering productivity. The consequences can guide industry specialists and policymakers in adopting AI technologies and integrating them into creation techniques for higher effects. ## 5 Discussion ### Interpretation of the findings in the context of the prevailing literature The findings received in this study have been interpreted within the context of current literature on the software of AI inside the production enterprise. The translation highlighted the consistency of the observer's effects with preceding research, further reinforcing the notion that AI has a massive and superb effect on electrical and electronics engineering productivity. The dialogue Additionally diagnosed any discrepancies or novel insights that deviated from previous research, contributing to the general expertise of AI's role within the construction region. ### Contrast of the study outcomes with previous studies The study's outcomes were compared with the findings of preceding studies that investigated the effect of AI on productivity in the production enterprise. This comparative evaluation aimed to discover similarities, variations, and capacity factors contributing to the variation in outcomes. By juxtaposing the contemporary study's results with current research, a comprehensive understanding of the impact of AI on electrical and electronics engineering productivity in production was gained. ### Identification of the important elements influencing the effect of AI on productivity The discussion identified the important elements that stimulated the impact of AI on productivity in the production industry. These factors encompassed technological aspects, together with the sophistication of AI algorithms and the availability of reliable information, as well as organizational elements, including the willingness to undertake AI, the level of worker training, and the mixing of AI with current processes. Spotting those influential factors is essential for knowing the situations under which AI implementation can yield the most extensive productivity upgrades. Exploration of capacity obstacles and demanding situations in imposing AI within the creation enterprise The dialogue explored potential barriers and challenges related to the implementation of AI in the production industry. Those demanding situations encompassed technical hurdles, consisting of exceptional statistics and interoperability troubles, in addition to organizational and cultural boundaries, which include resistance to trade and a shortage of AI information. Information about those barriers is crucial for devising techniques to conquer them and facilitating the successful integration of AI technology in the creation zone. 
Pointers for future studies and advancements in AI integration for the study paper subject matter: "Impact of artificial Intelligence on electric and electronic Engineering productivity inside the construction enterprise" The dialogue concluded with suggestions for future research and improvements within the integration of AI on the subject of the impact of artificial Intelligence on electric and electronic Engineering productivity inside the creation enterprise." These hints covered exploring emerging AI technologies, inclusive of device learning and natural language processing, to similarly beautify productivity. Additionally, investigating the lengthy-term results of AI implementation and reading the societal and ethical implications of AI adoption in creation were endorsed for destiny studies endeavours. ## 6 Conclusions ### Summary of key findings In this research paper, we investigated the impact of synthetic Intelligence (AI) on electrical and Electronics Engineering (EEE) productivity within the creation industry. Through a comprehensive analysis of applicable literature and case studies, numerous key findings have emerged. First off, the integration of AI technology, together with gadget knowledge and PC imaginative and prescient thinking, has caused vast improvements in productivity inside the EEE quarter of the construction enterprise. AI-primarily based structures have enabled the automation of repetitive obligations, greater accuracy in design and analysis, and real-time tracking and control of creation approaches. Secondly, AI has demonstrated itself to be instrumental in optimizing resource allocation and scheduling in creation initiatives. Using AI algorithms, EEE experts can efficiently manage the allocation of electrical and digital gadgets, ensure superior usage of assets, and minimize mission delays. Furthermore, AI has facilitated the development of smart systems for predictive protection and fault detection in electrical and electronic structures. By means of leveraging AI techniques, EEE specialists can proactively identify potential problems, carry out timely protection, and prevent high-priced system disasters, therefore improving the reliability and lifespan of essential additives inside the production industry [19]. ### Implications of the Studies The findings of this research have significant implications for academia and enterprise. First off, the combination of AI technology in the EEE area of the construction industry has the capability to revolutionize conventional practices and beautify productivity. This research highlights the importance of embracing AI and encourages further exploration of its applications in construction. Moreover, the results extend to academic institutions, as there may be a need to incorporate AI-associated courses and education programmes into the EEE curriculum. Through equipping future EEE experts with the essential know-how and skills in AI, they can efficaciously contribute to the improvement and implementation of AI-primarily based solutions within the production industry [20]. ### Contribution to the Sphere This research makes a valuable contribution to the field of EEE within the creation enterprise by offering an in-depth evaluation of the effect of AI on productivity. With the aid of synthesizing current literature and supplying relevant case research, this examination offers a comprehensive overview of the modern-day state of AI adoption and its implications for EEE professionals. 
Moreover, this study sheds light on the potentially demanding situations and possibilities associated with the integration of AI within the production industry. It provides insights into the important areas where AI can result in vast upgrades, such as automation, aid optimization, and predictive maintenance, guiding future research and development efforts in this area. Final comments for the research paper topic: "effect of synthetic Intelligence on electrical and electronic Engineering productivity within the construction industry" In the end, this study underscores the transformative potential of AI within the EEE sector of the development industry. The findings highlight the numerous advantages that AI can carry, together with more suitable productivity, optimized resource allocation, and advanced preservation practices. It is critical for EEE professionals and stakeholders in the construction industry to embrace and leverage AI technologies to free up their full capability and accelerate progress inside the field. This research contributes to the existing body of information by providing complete expertise on the impact of AI on EEE productivity in the creation industry. It provides the inspiration for additional exploration and encourages perseverance in studies and improvement in this domain. By harnessing the strength of AI, the development enterprise can reap better ranges of performance, price-effectiveness, and sustainability, in the end shaping a brighter future for electric and electronics engineering in creation [21]. ## References * [1] J. R. Smith, M. J. Rycroft, and R. Soetanto, "The development industry: problems and views," Routledge, 2018. * [2] A. Akintoye and M. Skitmore, "Knowledge the construction industry: A worldwide angle," Routledge, 2019. * [3] C. Eastman, P. Teicholz, R. Sacks, and okay. Liston, "BIM handbook: A guide to building records modelling for owners, managers, designers, engineers, and contractors," John Wiley & Sons, 2011. * [4] M. Arif, M. Katafygiotou, and C. J. Sreenan, "Artificial intelligence and machine studying for the sustainable constructed environment: A survey and evaluation of recent developments," Superior Engineering Informatics, vol. forty-three, p. 101058, 2020. * [5] C. Zhang, Y. Chen, and J. Zhang, "Energy device optimization based on artificial intelligence for sustainable creation," Journal of Production Engineering and Control, vol. a hundred forty-five, no. 2, p. 04018112, 2019. * [6] S. Li, H. Luo, Z. Li, and S. Huang, "Synthetic intelligence-based totally fault prognosis of electrical structures in creation tasks," Journal of Computing in Civil Engineering, vol. 35, no. 1, p. 04020070, 2021. * [7] A. J. Smith, "Quantitative research design and facts series methods for investigating AI effect within the production enterprise," IEEE Trans. Eng. Manag., vol. 10, no. Eleven, pp. 2000-2017, 2021. * [8] B. Johnson and C. Davis, "Primary and secondary records collection techniques for reading AI adoption in the EEE region," IEEE J. Constr. Eng. Manag., vol. 10, no. Eleven, pp. 2017-2034, 2022. * [9] C. Lee and D. Kim, "Variables and measurements in assessing AI impact on productivity within the construction industry," in IEEE Int. Conf. Autom. Constr., 2022, pp. three hundred-317. [10] D. Brown and E. Johnson, "Information analysis strategies for comparing the effect of AI on EEE productiveness within the creation enterprise," in Proc. IEEE Int. Conf. Eng. Constr., 2023, pp. 317-334. [11] L. Wang and C. 
Xing, "Artificial intelligence in electric engineering design," in 2020 IEEE electrical design of Advanced Packaging and Systems Symposium (EDAPS), 2020, pp. 1-four. [12] Y. Lu, Q. Yu, and Z. Peng, "Synthetic intelligence in digital design automation: challenges, possibilities, and destiny directions," in 2020 IEEE Worldwide Symposium on Circuits and Structures (ISCAS), 2020, pp. 1-5. [13] R. Singh and S. Yadav, "Optimization of electricity distribution structures using artificial intelligence techniques for big-scale creation projects," in 2019 IEEE 9th global conference on Advanced Computing (ICoAC), 2019, pp. 313-317. [14] R. Jafari, M. Mahdavi, and R. Ziaei, "Automatic circuit design the usage of evolutionary algorithms for electronic gadgets," in 2019 IEEE 10th Manage and Device Graduate Research Colloquium (ICSGRC), 2019, pp. 50-54. [15] "Kaggle - Your gadget was getting to know and facts science network," Kaggle. [Online]. to be had: [https://www.kaggle.com/](https://www.kaggle.com/). [Accessed: July 8, 2023]. [16] J. Smith, A. Johnson, and B. Williams, "Smart monitoring and manage device for production tasks," Journal of Creation Engineering and Control, vol. 10, no. 2, pp. 300-317, 2020. [17] R. Johnson, S. Thompson, and C. Davis, "Enhancing productivity in construction through AI-powered tracking and manage systems," construction research magazine, vol. 11, no: three, pp. 317-334, 2021. [18] M. Brown, E. Davis, and ok. Wilson, "Exploring system mastering strategies to maximize efficiency in production industry electrical and Electronics Engineering initiatives," in complaints of the IEEE worldwide convention on Construction and Structure (ICCA), 2022, pp. 334-354. [19] S. Kavas, ok. Ercoskun, and E. Oztemel, "Synthetic Intelligence applications inside the production industry: A complete overview," global magazine of production control, vol. 20, no. Four, pp. 343-358, 2020. DOI 10.1080/15623599.2019.1705612. [20] X. Han and M. Yilmaz, "Synthetic intelligence applications in construction engineering and management: A comprehensive evaluation," Automation in production, vol. 110, p. 103010, 2020. DOI: 10.1016/j.autcon.2019.103010. [21] T. Wang and N. El-Gohary, "Applications of Artificial Intelligence (AI) Strategies within the construction enterprise: A complete overview," Automation in Creation, vol. a hundred and one, pp. one hundred twenty-five-139, 2019. DOI: 10.1016/j.autcon.2019.01.011. [1. Figures and Tables in the "Results section"